#+title: Ullman Literature Review
#+author: Robert McIntyre
#+email: rlm@mit.edu
#+description: Review of some of the AI works of Professor Shimon Ullman.
#+keywords: Shimon, Ullman, computer vision, artificial intelligence, literature review
#+SETUPFILE: ../../aurellem/org/setup.org
#+INCLUDE: ../../aurellem/org/level-0.org
#+babel: :mkdirp yes :noweb yes :exports both


* Ullman

Actual code reuse!

precision = fraction of retrieved instances that are relevant
(true-positives/(true-positives+false-positives))

recall = fraction of relevant instances that are retrieved
(true-positives/total-in-class)
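The two definitions above can be written out directly as a sanity check (a minimal sketch; the counts below are made-up numbers, not from any paper):

```python
def precision(tp, fp):
    # fraction of retrieved instances that are relevant
    return tp / (tp + fp)

def recall(tp, total_in_class):
    # fraction of relevant instances that are retrieved
    return tp / total_in_class

# hypothetical counts: 8 true positives, 2 false positives, 10 in the class
print(precision(8, 2))  # 0.8
print(recall(8, 10))    # 0.8
```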

cross-validation = evaluate the model on held-out partitions of the
data, so that overfitting shows up as a gap between training and
held-out performance.
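A minimal k-fold sketch of that idea (illustrative only; `k_fold_splits` is a hypothetical helper, stdlib Python):

```python
def k_fold_splits(data, k):
    """Yield k (train, test) partitions; each fold serves once as the
    held-out test set."""
    folds = [data[i::k] for i in range(k)]
    for i in range(k):
        test = folds[i]
        train = [x for j, fold in enumerate(folds) if j != i for x in fold]
        yield train, test

data = list(range(10))
for train, test in k_fold_splits(data, 5):
    # every point is used exactly once per split; test never leaks into train
    assert sorted(train + test) == data
    assert not set(train) & set(test)
```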

nifty, relevant, realistic ideas
He doesn't rely on implausible assumptions.

** Our Reading

*** 2002 Visual features of intermediate complexity and their use in classification

** Getting around the dumb "fixed training set" methods

*** 2006 Learning to classify by ongoing feature selection

Brings in the most informative features of a class, based on
mutual information between that feature and all the examples
encountered so far. To bound the running time, he uses only a
fixed number of the most recent examples. He uses a replacement
strategy to tell whether a new feature is better than one of the
current features.
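One way to sketch that replacement strategy (my reading of the summary above, not Ullman's code; `OngoingSelector` and its windowed mutual-information scoring are hypothetical):

```python
import math
from collections import Counter, deque

def mutual_information(pairs):
    """Estimate I(F; C) from (feature-value, class-label) pairs."""
    n = len(pairs)
    joint = Counter(pairs)
    pf = Counter(f for f, _ in pairs)
    pc = Counter(c for _, c in pairs)
    return sum(
        (nfc / n) * math.log2((nfc / n) / ((pf[f] / n) * (pc[c] / n)))
        for (f, c), nfc in joint.items()
    )

class OngoingSelector:
    """Keep the k best features, scored by mutual information with the
    class over a bounded window of the most recent examples."""
    def __init__(self, k, window):
        self.k = k
        self.examples = deque(maxlen=window)  # bounds the running time
        self.features = {}                    # name -> detector function

    def observe(self, example, label):
        self.examples.append((example, label))

    def score(self, detector):
        return mutual_information([(detector(x), c) for x, c in self.examples])

    def propose(self, name, detector):
        # Replacement strategy: adopt the candidate only if it beats the
        # weakest currently-held feature (or if there is still room).
        if len(self.features) < self.k:
            self.features[name] = detector
            return True
        worst = min(self.features, key=lambda m: self.score(self.features[m]))
        if self.score(detector) > self.score(self.features[worst]):
            del self.features[worst]
            self.features[name] = detector
            return True
        return False

sel = OngoingSelector(k=1, window=100)
for x in range(20):
    sel.observe(x, x % 2)
sel.propose("constant", lambda x: 0)    # accepted: there is room
sel.propose("parity", lambda x: x % 2)  # replaces "constant": MI 1.0 > 0.0
```

The deque's `maxlen` is what keeps re-scoring cheap as examples stream in.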

*** 2009 Learning model complexity in an online environment

Sort of like the hierarchical Bayesian models of Tenenbaum, this
system makes the model more and more complicated as it gets more
and more training data. It does this by running two systems in
parallel; whenever the more complex one seems to be supported by
the data, the less complex one is thrown out and an even more
complex model is initialized in its place.

He uses an SVM with polynomial kernels of varying complexity. He
gets good performance on a handwriting classification task across
a large range of training-set sizes, since his model changes
complexity depending on the number of training samples. The
simpler models do better with few training points, and the more
complex ones do better with many training points.

The final model had intermediate complexity between published
extremes.

The more complex models must be able to be initialized efficiently
from the less complex models which they replace!
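The promote-when-supported loop can be simulated with a toy error model (this stands in for the SVM experiments; `validation_error` is a made-up bias/variance proxy, not data from the paper):

```python
def validation_error(degree, n_samples):
    # Toy stand-in for held-out error: higher-degree models have lower
    # bias (the 1/degree term) but need more data (the degree/n_samples
    # variance term).
    return 1.0 / degree + degree / n_samples

def online_complexity(max_samples, step=50):
    """Run the current model 'in parallel' with a one-step-more-complex
    candidate; promote the candidate whenever the data supports it."""
    degree, history = 1, []
    for n in range(step, max_samples + 1, step):
        if validation_error(degree + 1, n) < validation_error(degree, n):
            # in the full system, the new candidate would be warm-started
            # from the model it replaces
            degree += 1
        history.append((n, degree))
    return history

history = online_complexity(2000)
degrees = [d for _, d in history]
assert degrees == sorted(degrees)  # complexity only grows with more data
```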

** Non Parametric Models

[[../images/viola-parzen-1.png]]
[[../images/viola-parzen-2.png]]

*** 2010 The chains model for detecting parts by their context

Like the constellation method for rigid objects, but extended to
non-rigid objects as well.

Allows you to build a hand detector from a face detector. This is
useful because hands might be only a few pixels, and very
ambiguous in an image, but if you are expecting them at the end of
an arm, then they become easier to find.

They make chains by using spatial proximity of features. That way,
a hand can be identified by chaining back from the head. If there
is a good chain to the head, then it is more likely that there is
a hand than if there isn't. Since there is some give in the
proximity detection, the system can accommodate new poses that it
has never seen before.

Does not use any motion information.
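The chaining-by-proximity intuition can be sketched like this (illustrative only, not the paper's probabilistic model; points, radii, and `chain_score` are all hypothetical):

```python
from math import dist  # Python 3.8+

def chain_score(anchor, candidate, features, max_step):
    """Return True if the candidate part can be reached from the anchor
    (e.g. a reliably detected face) through a chain of nearby features,
    each link no longer than max_step."""
    frontier, reached = [anchor], {anchor}
    while frontier:
        p = frontier.pop()
        if dist(p, candidate) <= max_step:
            return True
        for f in features:
            if f not in reached and dist(p, f) <= max_step:
                reached.add(f)
                frontier.append(f)
    return False

# intermediate features along an "arm" from a face at (0, 0)
arm = [(1, 0), (2, 0), (3, 0)]
assert chain_score((0, 0), (4, 0), arm, max_step=1.5)      # hand at arm's end
assert not chain_score((0, 0), (4, 4), arm, max_step=1.5)  # no supporting chain
```

The slack in `max_step` is what lets a chain bend into poses never seen in training.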

*** 2005 A Hierarchical Non-Parametric Method for Capturing Non-Rigid Deformations

(relative dynamic programming [RDP])

Goal is to match images, as in SIFT, but this time the images can
be subject to non-rigid transformations. They do this by finding
small patches that look the same, then building up bigger
patches. They get a tree of patches that describes each image, and
find the edit distance between each tree. Editing operations
involve a coherent shift of features, so they can accommodate local
shifts of patches in any direction. They get some cool results
over just straight correlation. Basically, they made an image
comparator that is resistant to multiple independent deformations.

!important small regions are treated the same as unimportant
small regions

!no conception of shape

quote:
The dynamic programming procedure looks for an optimal
transformation that aligns the patches of both images. This
transformation is not a global transformation, but a composition
of many local transformations of sub-patches at various sizes,
performed one on top of the other.
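The compose-local-shifts idea can be caricatured with a toy recursive patch-tree match (my sketch in the spirit of RDP, not the paper's algorithm; descriptors are scalars and the alignment is not forced to be one-to-one):

```python
def patch_cost(a, b):
    # toy appearance difference between two patches (scalar descriptors)
    return abs(a - b)

def tree_match(t1, t2, shift_penalty=0.5):
    """A tree is (descriptor, [children]). Children of one node may align
    to shifted children of the other; the total cost composes these local
    alignments recursively, one level on top of the other."""
    (d1, kids1), (d2, kids2) = t1, t2
    cost = patch_cost(d1, d2)
    for i, k1 in enumerate(kids1):
        # align k1 to the cheapest child of t2, penalizing positional shift
        cost += min(
            (tree_match(k1, k2, shift_penalty) + shift_penalty * abs(i - j)
             for j, k2 in enumerate(kids2)),
            default=0.0,  # toy choice: unmatched children are free
        )
    return cost

t1 = (1.0, [(2.0, []), (3.0, [])])
t2 = (1.0, [(3.0, []), (2.0, [])])  # same sub-patches, locally shifted
assert tree_match(t1, t1) == 0.0
assert tree_match(t1, t2) == 1.0    # pays only the two shift penalties
```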
rlm@379 117
rlm@379 118 *** 2006 Satellite Features for the Classification of Visually Similar Classes
rlm@379 119
rlm@379 120 Finds features that can distinguish subclasses of a class, by
rlm@379 121 first finding a rigid set of anghor features that are common to
rlm@379 122 both subclasses, then finding distinguishing features relative to
rlm@379 123 those subfeatures. They keep things rigid because the satellite
rlm@379 124 features don't have much information in and of themselves, and are
rlm@379 125 only informative relative to other features.
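A toy version of anchor-plus-satellite voting (illustrative only; the dict-based image, offsets, and `classify` are all hypothetical, not the paper's detector):

```python
def classify(image, anchor_at, satellites):
    """image: dict mapping position -> local feature value; anchor_at:
    where the shared anchor was detected; satellites: (offset, expected)
    pairs that vote for subclass "A" when found relative to the anchor."""
    votes = 0
    for (dx, dy), expected in satellites:
        pos = (anchor_at[0] + dx, anchor_at[1] + dy)
        # a satellite is only meaningful at its position relative to the anchor
        votes += 1 if image.get(pos) == expected else -1
    return "A" if votes > 0 else "B"

sats = [((1, 0), "dot"), ((0, 1), "bar")]
assert classify({(6, 5): "dot", (5, 6): "bar"}, (5, 5), sats) == "A"
assert classify({}, (5, 5), sats) == "B"
```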

*** 2005 Learning a novel class from a single example by cross-generalization.

Lets you use a vast visual experience to generate a classifier
for a novel class, by generating synthetic examples: replacing
features from the single example with features from similar
classes.

quote: feature F is likely to be useful for class C if a similar
feature F proved effective for a similar class C in the past.

Allows you to transfer the "gestalt" of a similar class to a new
class, by adapting all the features of the learned class that have
a correspondence to the new class.
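The synthetic-example generation reads roughly like this (a sketch of my reading, not the paper's method; the feature names and substitute lists are invented):

```python
def cross_generalize(example, analogies):
    """From one example (a dict of named features) and per-feature
    substitutes borrowed from similar, already-learned classes, synthesize
    new training examples by swapping in one substitute at a time."""
    synthetic = []
    for name, substitutes in analogies.items():
        for sub in substitutes:
            variant = dict(example)
            variant[name] = sub  # replace this feature, keep the rest
            synthetic.append(variant)
    return synthetic

# hypothetical: one horse example, leg features borrowed from known classes
ex = {"front": "horse-front", "legs": "horse-legs"}
out = cross_generalize(ex, {"legs": ["cow-legs", "deer-legs"]})
assert len(out) == 2
```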
rlm@379 140
rlm@379 141 *** 2007 Semantic Hierarchies for Recognizing Objects and Parts
rlm@379 142
rlm@379 143 Better learning of complex objects like faces by learning each
rlm@379 144 piece (like nose, mouth, eye, etc) separately, then making sure
rlm@379 145 that the features are in plausable positions.
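The detect-parts-then-check-layout step might look like this (a minimal sketch, not the paper's model; the part names, regions, and `face_score` are invented for illustration):

```python
def face_score(parts, layout):
    """parts: detected part name -> (x, y); layout: part name ->
    plausible ((xmin, xmax), (ymin, ymax)) region in the face frame.
    Score = fraction of parts found in a plausible position."""
    ok = 0
    for name, ((xmin, xmax), (ymin, ymax)) in layout.items():
        if name in parts:
            x, y = parts[name]
            if xmin <= x <= xmax and ymin <= y <= ymax:
                ok += 1
    return ok / len(layout)

layout = {"left-eye": ((2, 4), (1, 3)),
          "right-eye": ((6, 8), (1, 3)),
          "mouth": ((4, 6), (6, 8))}
assert face_score({"left-eye": (3, 2), "right-eye": (7, 2),
                   "mouth": (5, 7)}, layout) == 1.0
assert face_score({"mouth": (0, 0)}, layout) == 0.0
```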