Mercurial > cortex
changeset 379:f1b8727360fb
add images.
author | rlm |
---|---|
date | Wed, 10 Apr 2013 16:38:52 -0400 |
parents | 8e62bf52be59 |
children | 2d0afb231081 |
files | images/viola-parzen-1.png images/viola-parzen-2.png org/ullman.org |
diffstat | 3 files changed, 145 insertions(+), 0 deletions(-) [+] |
Binary file images/viola-parzen-1.png has changed
Binary file images/viola-parzen-2.png has changed
--- /dev/null	Thu Jan 01 00:00:00 1970 +0000
+++ b/org/ullman.org	Wed Apr 10 16:38:52 2013 -0400
@@ -0,0 +1,145 @@
#+title: Ullman Literature Review
#+author: Robert McIntyre
#+email: rlm@mit.edu
#+description: Review of some of the AI works of Professor Shimon Ullman.
#+keywords: Shimon, Ullman, computer vision, artificial intelligence, literature review
#+SETUPFILE: ../../aurellem/org/setup.org
#+INCLUDE: ../../aurellem/org/level-0.org
#+babel: :mkdirp yes :noweb yes :exports both

* Ullman

Actual code reuse!

precision = fraction of retrieved instances that are relevant
    (true-positives/(true-positives + false-positives))

recall = fraction of relevant instances that are retrieved
    (true-positives/total-in-class)

cross-validation = train the model on two different sets to prevent
overfitting.

nifty, relevant, realistic ideas
He doesn't confine himself to implausible assumptions

** Our Reading

*** 2002 Visual features of intermediate complexity and their use in classification

** Getting around the dumb "fixed training set" methods

*** 2006 Learning to classify by ongoing feature selection

    Brings in the most informative features of a class, based on the
    mutual information between each feature and all the examples
    encountered so far. To bound the running time, he uses only a
    fixed number of the most recent examples. He uses a replacement
    strategy to tell whether a new feature is better than one of the
    current features.

*** 2009 Learning model complexity in an online environment

    Sort of like the hierarchical Bayesian models of Tenenbaum, this
    system makes the model more and more complicated as it gets more
    and more training data.
    It does this by using two systems in parallel; whenever the more
    complex one seems to be needed by the data, the less complex one
    is thrown out and an even more complex model is initialized in
    its place.

    He uses an SVM with polynomial kernels of varying complexity. He
    gets good performance on a handwriting-classification task over
    a large range of training-set sizes, since his model changes
    complexity depending on the number of training samples. The
    simpler models do better with few training points, and the more
    complex ones do better with many.

    The final model had intermediate complexity between published
    extremes.

    The more complex models must be able to be initialized
    efficiently from the less complex models which they replace!

** Non Parametric Models

[[../images/viola-parzen-1.png]]
[[../images/viola-parzen-2.png]]

*** 2010 The chains model for detecting parts by their context

    Like the constellation method for rigid objects, but extended to
    non-rigid objects as well.

    Allows you to build a hand detector from a face detector. This
    is useful because hands might be only a few pixels and very
    ambiguous in an image, but if you are expecting them at the end
    of an arm, they become easier to find.

    They make chains by using spatial proximity of features. That
    way, a hand can be identified by chaining back from the head. If
    there is a good chain to the head, then it is more likely that
    there is a hand than if there isn't. Since there is some give in
    the proximity detection, the system can accommodate new poses
    that it has never seen before.

    Does not use any motion information.
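The chaining idea above can be sketched in a few lines. This is a hypothetical toy version, not the paper's algorithm: features are (x, y, confidence) triples, "proximity" is a fixed radius, and the chain is built greedily; the function name and all parameters are illustrative assumptions.

```python
# Toy sketch of the chains idea: accept an ambiguous candidate (a hand)
# only if a chain of spatially proximate features links it back to a
# confidently detected anchor (the head). Illustrative only.

def best_chain_score(anchor, candidate, features, radius=2.0):
    """Greedily chain from `anchor` toward `candidate`.

    Each step must move to a feature within `radius` of the current
    position while getting closer to the candidate. The chain score is
    the product of the chained features' confidences, so one weak link
    penalizes the whole chain; no chain at all means no support.
    """
    def dist(a, b):
        return ((a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2) ** 0.5

    current, score = anchor, 1.0
    remaining = list(features)
    while dist(current[:2], candidate[:2]) > radius:
        # candidate steps: proximate features that make progress
        steps = [f for f in remaining
                 if dist(current[:2], f[:2]) <= radius
                 and dist(f[:2], candidate[:2]) < dist(current[:2], candidate[:2])]
        if not steps:
            return 0.0          # no chain: the candidate is unsupported
        nxt = max(steps, key=lambda f: f[2])   # most confident step
        score *= nxt[2]
        remaining.remove(nxt)
        current = nxt
    return score

# features are (x, y, confidence); shoulder and elbow bridge head -> hand
head, hand = (0.0, 0.0, 0.9), (4.0, 0.0, 0.3)
arm = [(1.5, 0.2, 0.8), (3.0, 0.1, 0.7)]
print(best_chain_score(head, hand, arm) > 0)   # chained back to head: supported
print(best_chain_score(head, hand, []) == 0)   # no chain: rejected
```

The property this illustrates is the one the paper describes: a low-confidence hand candidate becomes findable when intermediate features connect it to a reliable head detection, and the slack in the proximity radius is what lets unseen poses still chain up.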
*** 2005 A Hierarchical Non-Parametric Method for Capturing Non-Rigid Deformations

    (relative dynamic programming [RDP])

    Goal is to match images, as in SIFT, but this time the images
    can be subject to non-rigid transformations. They do this by
    finding small patches that look the same, then building up
    bigger patches. They get a tree of patches that describes each
    image, and find the edit distance between each tree. Editing
    operations involve a coherent shift of features, so they can
    accommodate local shifts of patches in any direction. They get
    some cool results over just straight correlation. Basically,
    they made an image comparator that is resistant to multiple
    independent deformations.

    !important small regions are treated the same as unimportant
    small regions

    !no conception of shape

    quote:
    The dynamic programming procedure looks for an optimal
    transformation that aligns the patches of both images. This
    transformation is not a global transformation, but a composition
    of many local transformations of sub-patches at various sizes,
    performed one on top of the other.

*** 2006 Satellite Features for the Classification of Visually Similar Classes

    Finds features that can distinguish subclasses of a class by
    first finding a rigid set of anchor features that are common to
    both subclasses, then finding distinguishing features relative
    to those anchors. They keep things rigid because the satellite
    features don't have much information in and of themselves, and
    are only informative relative to other features.

*** 2005 Learning a novel class from a single example by cross-generalization.
    Lets you use a vast visual experience to generate a classifier
    for a novel class, by generating synthetic examples: features
    from the single example are replaced with features from similar
    classes.

    quote: feature F is likely to be useful for class C if a similar
    feature F proved effective for a similar class C in the past.

    Allows you to transfer the "gestalt" of a similar class to a new
    class, by adapting all the features of the learned class that
    have correspondence to the new class.

*** 2007 Semantic Hierarchies for Recognizing Objects and Parts

    Better learning of complex objects like faces by learning each
    piece (like nose, mouth, eye, etc.) separately, then making sure
    that the features are in plausible positions.
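The part-based idea in the 2007 entry can be sketched as a toy verifier: detect each part separately, then only accept the whole object when every part sits in a plausible position. Everything below is an illustrative assumption rather than the paper's method: the part names, the face-relative expected positions, and the tolerance are all made up.

```python
# Toy sketch of part-based recognition with a plausibility check:
# per-part detector scores only count when the part lies near its
# expected position in face-relative coordinates. Illustrative only.

PLAUSIBLE = {                     # expected (x, y), face-relative [0, 1]
    "left_eye":  (0.3, 0.3),
    "right_eye": (0.7, 0.3),
    "nose":      (0.5, 0.5),
    "mouth":     (0.5, 0.75),
}

def face_score(detections, tolerance=0.15):
    """detections maps part name -> (x, y, detector_confidence).

    Every part must be detected AND lie within `tolerance` of its
    plausible position; otherwise the whole face candidate is rejected.
    The score is the mean confidence of the well-placed parts.
    """
    total = 0.0
    for part, (ex, ey) in PLAUSIBLE.items():
        if part not in detections:
            return 0.0            # missing part: not a face
        x, y, conf = detections[part]
        if abs(x - ex) > tolerance or abs(y - ey) > tolerance:
            return 0.0            # implausible position: reject
        total += conf
    return total / len(PLAUSIBLE)

good = {"left_eye": (0.31, 0.29, 0.9), "right_eye": (0.69, 0.31, 0.85),
        "nose": (0.52, 0.48, 0.8), "mouth": (0.5, 0.77, 0.7)}
bad = dict(good, mouth=(0.5, 0.2, 0.95))   # mouth above the eyes
print(face_score(good) > 0)   # plausible layout: accepted
print(face_score(bad) == 0)   # implausible layout: rejected
```

The design point this captures is the division of labor in the paper's summary above: the per-part detectors handle appearance, while the position check supplies the global constraint that individual part detectors lack.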