changeset 382:9b3487a515a7

merge
author Robert McIntyre <rlm@mit.edu>
date Tue, 16 Apr 2013 03:51:41 +0000
parents 9ac42f1fdf0a (current diff) 2d0afb231081 (diff)
children 31814b600935
files
diffstat 3 files changed, 148 insertions(+), 0 deletions(-) [+]
line wrap: on
line diff
     1.1 Binary file images/viola-parzen-1.png has changed
     2.1 Binary file images/viola-parzen-2.png has changed
     3.1 --- /dev/null	Thu Jan 01 00:00:00 1970 +0000
     3.2 +++ b/org/ullman.org	Tue Apr 16 03:51:41 2013 +0000
     3.3 @@ -0,0 +1,148 @@
     3.4 +#+title: Ullman Literature Review
     3.5 +#+author: Robert McIntyre
     3.6 +#+email: rlm@mit.edu
     3.7 +#+description: Review of some of the AI works of Professor Shimon Ullman.
     3.8 +#+keywords: Shimon, Ullman, computer vision, artificial intelligence, literature review
     3.9 +#+SETUPFILE: ../../aurellem/org/setup.org
    3.10 +#+INCLUDE: ../../aurellem/org/level-0.org
    3.11 +#+babel: :mkdirp yes :noweb yes :exports both
    3.12 +
    3.13 +
    3.14 +* Ullman 
    3.15 +
    3.16 +Actual code reuse!
    3.17 +
    3.18 +precision = fraction of retrieved instances that are relevant
    3.19 +  (true-positives/(true-positives+false-positives))
    3.20 +
    3.21 +recall    =  fraction of relevant instances that are retrieved
     3.22 +  (true-positives/(true-positives+false-negatives))
    3.23 +
     3.24 +cross-validation = train and validate the model on complementary splits of
     3.25 +the data to detect overfitting, and confirm that you have enough training samples.
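The two measures above, as a quick sketch (retrieved and relevant are sets of item ids; the function name is mine):

```python
def precision_recall(retrieved, relevant):
    """precision = tp / |retrieved|; recall = tp / |relevant| (= total-in-class)."""
    tp = len(retrieved & relevant)                    # true positives
    precision = tp / len(retrieved) if retrieved else 1.0
    recall = tp / len(relevant) if relevant else 1.0
    return precision, recall

# e.g. retrieving {1, 2, 3, 4} when {1, 2, 5} are relevant gives
# precision = 2/4 and recall = 2/3
```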
    3.26 +
    3.27 +nifty, relevant, realistic ideas
     3.28 +He doesn't rely on implausible assumptions
    3.29 +
    3.30 +** Our Reading
    3.31 +
    3.32 +*** 2002 Visual features of intermediate complexity and their use in classification
    3.33 +
    3.34 +    
    3.35 +
    3.36 +
    3.37 +    Viola's PhD thesis has a good introduction to entropy and mutual
    3.38 +    information 
    3.39 +
    3.40 +** Getting around the dumb "fixed training set" methods
    3.41 +
    3.42 +*** 2006 Learning to classify by ongoing feature selection
    3.43 +    
    3.44 +    Brings in the most informative features of a class, based on
    3.45 +    mutual information between that feature and all the examples
    3.46 +    encountered so far. To bound the running time, he uses only a
    3.47 +    fixed number of the most recent examples. He uses a replacement
    3.48 +    strategy to tell whether a new feature is better than one of the
    3.49 +    current features.
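A toy version of the mutual-information scoring and the replacement strategy (binary feature responses over the recent window of examples; all names here are my own illustration, not Ullman's code):

```python
from math import log2

def mutual_information(feature, labels):
    """I(F;C) in bits between binary feature responses and binary class
    labels, estimated from counts over the recent examples."""
    n = len(labels)
    mi = 0.0
    for f in (0, 1):
        for c in (0, 1):
            p_joint = sum(fv == f and cv == c for fv, cv in zip(feature, labels)) / n
            p_f = sum(fv == f for fv in feature) / n
            p_c = sum(cv == c for cv in labels) / n
            if p_joint > 0:
                mi += p_joint * log2(p_joint / (p_f * p_c))
    return mi

def maybe_replace(features, candidate_name, candidate, labels):
    """Fixed-size feature set: the candidate enters only if it is more
    informative on the recent window than the current worst feature."""
    scores = {name: mutual_information(resp, labels) for name, resp in features.items()}
    worst = min(scores, key=scores.get)
    if mutual_information(candidate, labels) > scores[worst]:
        del features[worst]                # throw out the least informative
        features[candidate_name] = candidate
    return features
```

A feature that perfectly predicts the class carries 1 bit here, so it displaces any uninformative feature in the set.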
    3.50 +
    3.51 +*** 2009 Learning model complexity in an online environment
    3.52 +    
     3.53 +    Sort of like the hierarchical Bayesian models of Tenenbaum, this
    3.54 +    system makes the model more and more complicated as it gets more
    3.55 +    and more training data. It does this by using two systems in
    3.56 +    parallel and then whenever the more complex one seems to be
    3.57 +    needed by the data, the less complex one is thrown out, and an
    3.58 +    even more complex model is initialized in its place.
    3.59 +
     3.60 +    He uses an SVM with polynomial kernels of varying complexity. He
     3.61 +    gets good performance on a handwriting classification task across a
     3.62 +    large range of training-set sizes, since his model changes complexity
     3.63 +    depending on the number of training samples. The simpler models do
    3.64 +    better with few training points, and the more complex ones do
    3.65 +    better with many training points.
    3.66 +
    3.67 +    The final model had intermediate complexity between published
    3.68 +    extremes. 
    3.69 +
    3.70 +    The more complex models must be able to be initialized efficiently
    3.71 +    from the less complex models which they replace!
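The promote-and-replace loop can be caricatured in a few lines (a constant and a linear least-squares model standing in for his polynomial-kernel SVMs; the margin and thresholds are invented):

```python
def fit_mean(xs, ys):
    """Simplest model: predict the mean of everything seen so far."""
    m = sum(ys) / len(ys)
    return lambda x: m

def fit_line(xs, ys):
    """Next complexity level: closed-form least-squares line.  It can be
    initialized cheaply from the simpler model: slope 0, intercept = mean."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    var = sum((x - mx) ** 2 for x in xs)
    a = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / var if var else 0.0
    b = my - a * mx
    return lambda x: a * x + b

def online_complexity(stream, margin=0.9):
    """Run the current and next-complexity models in parallel; when the
    complex one is clearly better on the data so far, it replaces the
    simple one and the complexity level steps up."""
    xs, ys, fits, level = [], [], [fit_mean, fit_line], 0
    for x, y in stream:
        xs.append(x); ys.append(y)
        if level + 1 < len(fits) and len(xs) >= 3:
            def err(fit):
                f = fit(xs, ys)
                return sum((f(xi) - yi) ** 2 for xi, yi in zip(xs, ys))
            if err(fits[level + 1]) < margin * err(fits[level]):
                level += 1            # the less complex model is thrown out
    return fits[level](xs, ys), level
```

On a perfectly linear stream the system promotes itself to the linear model; on constant data it never does, so complexity tracks the data.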
    3.72 +
    3.73 +
    3.74 +** Non Parametric Models
    3.75 +
    3.76 +[[../images/viola-parzen-1.png]]
    3.77 +[[../images/viola-parzen-2.png]]
    3.78 +
    3.79 +*** 2010 The chains model for detecting parts by their context
    3.80 +
    3.81 +    Like the constellation method for rigid objects, but extended to
    3.82 +    non-rigid objects as well.
    3.83 +
    3.84 +    Allows you to build a hand detector from a face detector. This is
    3.85 +    useful because hands might be only a few pixels, and very
    3.86 +    ambiguous in an image, but if you are expecting them at the end of
    3.87 +    an arm, then they become easier to find.
    3.88 +
    3.89 +    They make chains by using spatial proximity of features. That way,
    3.90 +    a hand can be identified by chaining back from the head. If there
    3.91 +    is a good chain to the head, then it is more likely that there is
    3.92 +    a hand than if there isn't. Since there is some give in the
    3.93 +    proximity detection, the system can accommodate new poses that it
    3.94 +    has never seen before.
    3.95 +
    3.96 +    Does not use any motion information.
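A minimal sketch of the chaining idea (features as labeled 2-D points, chains grown by spatial proximity via breadth-first search; the anatomy and the step threshold are made up for illustration):

```python
from collections import deque
from math import dist

def chained(features, start, goal, max_step=1.5):
    """features: name -> (x, y).  True if a chain of spatially proximate
    features links start to goal, with some give in each step."""
    seen, frontier = {start}, deque([start])
    while frontier:
        cur = frontier.popleft()
        if cur == goal:
            return True
        for name, pos in features.items():
            if name not in seen and dist(features[cur], pos) <= max_step:
                seen.add(name)
                frontier.append(name)
    return False

# an ambiguous blob at the end of an arm chain is plausibly a hand;
# the same blob floating free in the image is not
body = {"face": (0, 0), "shoulder": (1, 0), "elbow": (2, 0),
        "hand": (3, 0), "blob": (9, 9)}
```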
    3.97 +
    3.98 +*** 2005 A Hierarchical Non-Parametric Method for Capturing Non-Rigid Deformations
    3.99 +    
   3.100 +    (relative dynamic programming [RDP])
   3.101 +
   3.102 +    Goal is to match images, as in SIFT, but this time the images can
    3.103 +    be subject to non-rigid transformations. They do this by finding
   3.104 +    small patches that look the same, then building up bigger
   3.105 +    patches. They get a tree of patches that describes each image, and
   3.106 +    find the edit distance between each tree. Editing operations
   3.107 +    involve a coherent shift of features, so they can accommodate local
   3.108 +    shifts of patches in any direction. They get some cool results
   3.109 +    over just straight correlation. Basically, they made an image
   3.110 +    comparator that is resistant to multiple independent deformations.
   3.111 +    
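The "composition of many local transformations" can be sketched as a recursive patch comparison in which every sub-patch may shift a little, independently, on top of its parent's alignment (power-of-two sizes; this is my toy, not their algorithm):

```python
def patch_diff(a, b, ax, ay, bx, by, size, max_shift=1):
    """Difference between the size x size patches of images a and b at
    (ax,ay) and (bx,by), letting every sub-patch align locally."""
    if size == 1:
        return abs(a[ay][ax] - b[by][bx])      # base case: single pixels
    h = size // 2
    total = 0
    for dy in (0, h):                          # the four sub-patches
        for dx in (0, h):
            best = []
            for sy in range(-max_shift, max_shift + 1):
                for sx in range(-max_shift, max_shift + 1):
                    nx, ny = bx + dx + sx, by + dy + sy
                    if 0 <= nx and nx + h <= len(b[0]) and 0 <= ny and ny + h <= len(b):
                        best.append(patch_diff(a, b, ax + dx, ay + dy,
                                               nx, ny, h, max_shift))
            total += min(best)                 # best local alignment wins
    return total

# a bright spot shifted by one pixel: rigid comparison sees a big
# difference, the deformable comparison aligns it away
img1 = [[0] * 4 for _ in range(4)]; img1[1][1] = 9
img2 = [[0] * 4 for _ in range(4)]; img2[1][2] = 9
```

Note the sketch also exhibits the weakness flagged below: a small but important region can be shifted past just as easily as an unimportant one.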
   3.112 +    !important small regions are treated the same as unimportant
   3.113 +     small regions
   3.114 +     
   3.115 +    !no conception of shape
   3.116 +    
   3.117 +    quote:
   3.118 +    The dynamic programming procedure looks for an optimal
   3.119 +    transformation that aligns the patches of both images. This
   3.120 +    transformation is not a global transformation, but a composition
   3.121 +    of many local transformations of sub-patches at various sizes,
   3.122 +    performed one on top of the other.
   3.123 +
   3.124 +*** 2006 Satellite Features for the Classification of Visually Similar Classes
   3.125 +    
   3.126 +    Finds features that can distinguish subclasses of a class, by
   3.127 +    first finding a rigid set of anchor features that are common to
   3.128 +    both subclasses, then finding distinguishing features relative to
    3.129 +    those anchor features. They keep things rigid because the satellite
   3.130 +    features don't have much information in and of themselves, and are
   3.131 +    only informative relative to other features.
   3.132 +
   3.133 +*** 2005 Learning a novel class from a single example by cross-generalization.
   3.134 +
    3.135 +    Lets you use a vast visual experience to generate a classifier
    3.136 +    for a novel class: synthetic examples are generated by replacing
   3.137 +    features from the single example with features from similar
   3.138 +    classes.
   3.139 +
   3.140 +    quote: feature F is likely to be useful for class C if a similar
    3.141 +    feature F' proved effective for a similar class C' in the past.
   3.142 +
   3.143 +    Allows you to transfer the "gestalt" of a similar class to a new
   3.144 +    class, by adapting all the features of the learned class that have
   3.145 +    correspondence to the new class.
   3.146 +
   3.147 +*** 2007 Semantic Hierarchies for Recognizing Objects and Parts
   3.148 +
   3.149 +    Better learning of complex objects like faces by learning each
    3.150 +    piece (like nose, mouth, eye, etc.) separately, then making sure
   3.151 +    that the features are in plausible positions.