# Source: Mercurial repository "cortex", file org/ullman.org, revision 428:d53a31969a51
# author: Robert McIntyre <rlm@mit.edu>, Fri, 21 Mar 2014 15:43:15 -0400
#+title: Ullman Literature Review
#+author: Robert McIntyre
#+email: rlm@mit.edu
#+description: Review of some of the AI works of Professor Shimon Ullman.
#+keywords: Shimon, Ullman, computer vision, artificial intelligence, literature review
#+SETUPFILE: ../../aurellem/org/setup.org
#+INCLUDE: ../../aurellem/org/level-0.org
#+babel: :mkdirp yes :noweb yes :exports both

* Ullman

Actual code reuse!

precision = fraction of retrieved instances that are relevant
(true-positives/(true-positives+false-positives))

recall = fraction of relevant instances that are retrieved
(true-positives/total-in-class)

cross-validation = train the model on two different sets to prevent
overfitting, and to confirm that you have enough training samples.

Nifty, relevant, realistic ideas.
He doesn't confine himself to implausible assumptions.

** Our Reading

*** 2002 Visual features of intermediate complexity and their use in classification

Viola's PhD thesis has a good introduction to entropy and mutual
information.

** Getting around the dumb "fixed training set" methods

*** 2006 Learning to classify by ongoing feature selection

Brings in the most informative features of a class, based on
mutual information between that feature and all the examples
encountered so far. To bound the running time, he uses only a
fixed number of the most recent examples. He uses a replacement
strategy to tell whether a new feature is better than one of the
current features.

*** 2009 Learning model complexity in an online environment

Sort of like the hierarchical Bayesian models of Tenenbaum, this
system makes the model more and more complicated as it gets more
and more training data.
It does this by using two systems in
parallel; whenever the more complex one seems to be
needed by the data, the less complex one is thrown out, and an
even more complex model is initialized in its place.

He uses an SVM with polynomial kernels of varying complexity. He
gets good performance on a handwriting classification task over a
large range of training-set sizes, since his model changes complexity
depending on the number of training samples. The simpler models do
better with few training points, and the more complex ones do
better with many training points.

The final model had intermediate complexity between published
extremes.

The more complex models must be able to be initialized efficiently
from the less complex models which they replace!

** Non-Parametric Models

[[../images/viola-parzen-1.png]]
[[../images/viola-parzen-2.png]]

*** 2010 The chains model for detecting parts by their context

Like the constellation method for rigid objects, but extended to
non-rigid objects as well.

Allows you to build a hand detector from a face detector. This is
useful because hands might be only a few pixels and very
ambiguous in an image, but if you are expecting them at the end of
an arm, then they become easier to find.

They make chains by using spatial proximity of features. That way,
a hand can be identified by chaining back from the head. If there
is a good chain to the head, then it is more likely that there is
a hand than if there isn't. Since there is some give in the
proximity detection, the system can accommodate new poses that it
has never seen before.

Does not use any motion information.

*** 2005 A Hierarchical Non-Parametric Method for Capturing Non-Rigid Deformations

(relative dynamic programming [RDP])

Goal is to match images, as in SIFT, but this time the images can
be subject to non-rigid transformations.
They do this by finding
small patches that look the same, then building up bigger
patches. They get a tree of patches that describes each image, and
find the edit distance between each tree. Editing operations
involve a coherent shift of features, so they can accommodate local
shifts of patches in any direction. They get some cool results
over just straight correlation. Basically, they made an image
comparator that is resistant to multiple independent deformations.

!important small regions are treated the same as unimportant
small regions

!no conception of shape

quote:
The dynamic programming procedure looks for an optimal
transformation that aligns the patches of both images. This
transformation is not a global transformation, but a composition
of many local transformations of sub-patches at various sizes,
performed one on top of the other.

*** 2006 Satellite Features for the Classification of Visually Similar Classes

Finds features that can distinguish subclasses of a class, by
first finding a rigid set of anchor features that are common to
both subclasses, then finding distinguishing features relative to
those subfeatures.
They keep things rigid because the satellite
features don't have much information in and of themselves, and are
only informative relative to other features.

*** 2005 Learning a novel class from a single example by cross-generalization

Lets you use a vast visual experience to generate a classifier
for a novel class by generating synthetic examples, replacing
features from the single example with features from similar
classes.

quote: feature F is likely to be useful for class C if a similar
feature F proved effective for a similar class C in the past.

Allows you to transfer the "gestalt" of a similar class to a new
class, by adapting all the features of the learned class that have
correspondence to the new class.

*** 2007 Semantic Hierarchies for Recognizing Objects and Parts

Better learning of complex objects like faces by learning each
piece (like nose, mouth, eye, etc.) separately, then making sure
that the features are in plausible positions.
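
** Appendix: precision and recall, sketched

The precision and recall definitions at the top of these notes can be
written out directly. This is a minimal illustrative sketch; the
function names are mine, not from any of the papers reviewed here.

#+begin_src python
# Counts come from a classifier's output on a labeled test set.
# total-in-class = true-positives + false-negatives.

def precision(true_positives, false_positives):
    """Fraction of retrieved instances that are relevant."""
    return true_positives / (true_positives + false_positives)

def recall(true_positives, false_negatives):
    """Fraction of relevant instances that are retrieved."""
    return true_positives / (true_positives + false_negatives)

# Example: 8 of 10 retrieved instances are relevant, and the
# class contains 16 instances in total.
print(precision(8, 2))   # -> 0.8
print(recall(8, 8))      # -> 0.5
#+end_src

A classifier can trade one off against the other (retrieve everything
and recall is perfect while precision collapses), which is why both
numbers are reported together.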