comparison org/ullman.org @ 380:2d0afb231081
spellcheck
author:  rlm
date:    Wed, 10 Apr 2013 16:49:05 -0400
parents: f1b8727360fb
* Ullman

Actual code reuse!

precision = fraction of retrieved instances that are relevant
(true-positives/(true-positives+false-positives))

recall = fraction of relevant instances that are retrieved
(true-positives/total-in-class)
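
The two definitions above translate directly into code; a minimal sketch with made-up counts for illustration:

```python
def precision(true_positives, false_positives):
    # fraction of retrieved instances that are relevant
    return true_positives / (true_positives + false_positives)

def recall(true_positives, total_in_class):
    # fraction of relevant instances that are retrieved
    return true_positives / total_in_class

# hypothetical retrieval run: 8 true positives, 2 false positives,
# 16 relevant instances in the whole class
print(precision(8, 2))   # 0.8
print(recall(8, 16))     # 0.5
```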

cross-validation = train the model on two different sets to prevent
overfitting, and confirm that you have enough training samples.
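
A common concrete form of this is k-fold cross-validation: hold out each fold once and train on the rest. A minimal sketch of the index bookkeeping (the splitting scheme is an illustration, not taken from the papers below):

```python
def k_fold_indices(n, k):
    # split indices 0..n-1 into k contiguous folds; each fold will
    # serve once as the held-out validation set
    folds = []
    size, extra = divmod(n, k)
    start = 0
    for i in range(k):
        stop = start + size + (1 if i < extra else 0)
        folds.append(list(range(start, stop)))
        start = stop
    return folds

def cross_validate(n, k):
    # yield (train_indices, validation_indices) pairs
    folds = k_fold_indices(n, k)
    for i, held_out in enumerate(folds):
        train = [j for fold in folds[:i] + folds[i + 1:] for j in fold]
        yield train, held_out
```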

nifty, relevant, realistic ideas
He doesn't confine himself to implausible assumptions

** Our Reading

*** 2002 Visual features of intermediate complexity and their use in classification

Viola's PhD thesis has a good introduction to entropy and mutual
information

** Getting around the dumb "fixed training set" methods

*** 2006 Learning to classify by ongoing feature selection

Brings in the most informative features of a class, based on the
mutual information between each feature and the class, estimated
over all the examples encountered so far. To bound the running
time, he uses only a fixed number of the most recent examples. He
uses a replacement strategy to tell whether a new feature is
better than one of the current features.
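
A rough sketch of that idea, assuming binary features and labels. `OngoingFeatureSelector` and its interface are hypothetical names, and keeping the top-m features by mutual information over a bounded window of recent examples stands in for the paper's replacement strategy:

```python
from collections import Counter, deque
from math import log2

def mutual_information(pairs):
    # estimate I(F; C) from (feature_value, class_label) samples
    n = len(pairs)
    joint = Counter(pairs)
    pf = Counter(f for f, _ in pairs)
    pc = Counter(c for _, c in pairs)
    mi = 0.0
    for (f, c), k in joint.items():
        p_fc = k / n
        mi += p_fc * log2(p_fc / ((pf[f] / n) * (pc[c] / n)))
    return mi

class OngoingFeatureSelector:
    # keep the m features most informative about the class, scored
    # against a bounded window of the most recent examples
    def __init__(self, m, window):
        self.m = m
        self.recent = deque(maxlen=window)  # (feature_values, label)

    def observe(self, feature_values, label):
        self.recent.append((feature_values, label))

    def select(self, candidate_names):
        scored = []
        for name in candidate_names:
            pairs = [(fv[name], c) for fv, c in self.recent]
            scored.append((mutual_information(pairs), name))
        scored.sort(reverse=True)
        return [name for _, name in scored[:self.m]]
```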

*** 2009 Learning model complexity in an online environment

Sort of like the hierarchical Bayesian models of Tenenbaum, this
system makes the model more and more complicated as it gets more
and more training data. It does this by using two systems in
parallel, and then whenever the more complex one seems to be
needed by the data, the less complex one is thrown out and an
even more complex model is initialized in its place.

He uses an SVM with polynomial kernels of varying complexity. He
gets good performance on handwriting classification over a large
range of training-set sizes, since his model changes complexity
depending on the number of training samples. The simpler models do
better with few training points, and the more complex ones do
better with many training points.
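
The parallel-models idea above can be sketched generically. `ComplexityLadder` and the model-factory interface are hypothetical; the degree parameter stands in for the polynomial-kernel complexity:

```python
class ComplexityLadder:
    # Run two models of adjacent complexity in parallel. When the more
    # complex model outperforms the simpler one on held-out data, the
    # simpler one is thrown out and an even more complex model takes
    # its place. `make_model(degree)` is an assumed factory, e.g. an
    # SVM with a polynomial kernel of that degree.
    def __init__(self, make_model, start_degree=1):
        self.make_model = make_model
        self.degree = start_degree
        self.simple = make_model(start_degree)
        self.complex = make_model(start_degree + 1)

    def update(self, train, validate):
        self.simple.fit(train)
        self.complex.fit(train)
        if self.complex.score(validate) > self.simple.score(validate):
            # extra complexity is warranted: promote, add a new rung
            self.degree += 1
            self.simple = self.complex
            self.complex = self.make_model(self.degree + 1)
        return self.degree
```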

[[../images/viola-parzen-1.png]]
[[../images/viola-parzen-2.png]]

*** 2010 The chains model for detecting parts by their context

Like the constellation method for rigid objects, but extended to
non-rigid objects as well.

Allows you to build a hand detector from a face detector. This is
useful because hands might be only a few pixels, and very
ambiguous in an image, but if you are expecting them at the end of
an arm, then they become easier to find.

They make chains by using spatial proximity of features. That way,
a hand can be identified by chaining back from the head. If there
is a good chain to the head, then it is more likely that there is
a hand than if there isn't. Since there is some give in the
proximity detection, the system can accommodate new poses that it
has never seen before.

Does not use any motion information.
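
One way to picture the chaining: score a candidate hand location by the best proximity-chain leading back to a confident anchor detection (the face). This is an illustrative reconstruction, not the paper's actual formulation:

```python
from math import dist

def best_chain_score(anchor, candidates, target, radius):
    # anchor: (x, y) of a confident detection (e.g. a face)
    # candidates: {(x, y): score} of weaker feature detections
    # target: (x, y) of the putative hand
    # A chain steps between points no farther apart than `radius`;
    # its score is the product of the feature scores along it.
    points = dict(candidates)
    points[target] = points.get(target, 1.0)
    best = {anchor: 1.0}        # best chain score reaching each point
    frontier = [anchor]
    while frontier:             # relax links until no improvement
        p = frontier.pop()
        for q, s in points.items():
            if dist(p, q) <= radius:
                chained = best[p] * s
                if chained > best.get(q, 0.0):
                    best[q] = chained
                    frontier.append(q)
    return best.get(target, 0.0)  # 0.0 means no chain reaches the hand
```

With some slack in `radius`, unseen poses still score well as long as some chain of nearby features connects the hand back to the head.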

*** 2005 A Hierarchical Non-Parametric Method for Capturing Non-Rigid Deformations

Goal is to match images, as in SIFT, but this time the images can
be subject to non-rigid transformations. They do this by finding
small patches that look the same, then building up bigger
patches. They get a tree of patches that describes each image, and
find the edit distance between each tree. Editing operations
involve a coherent shift of features, so they can accommodate local
shifts of patches in any direction. They get some cool results
over just straight correlation. Basically, they made an image
comparator that is resistant to multiple independent deformations.
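
The matching step leans on tree edit distance. A naive memoised version for ordered trees as nested tuples, with unit-cost relabel/delete/insert (illustration only; the paper's coherent-shift editing operations differ, and real systems use faster algorithms such as Zhang-Shasha):

```python
from functools import lru_cache

# a tree is a tuple: (label, child, child, ...); a forest is a
# tuple of trees

def size(tree):
    return 1 + sum(size(c) for c in tree[1:])

@lru_cache(maxsize=None)
def forest_distance(f1, f2):
    if not f1 and not f2:
        return 0
    if not f1:                       # must insert everything in f2
        return sum(size(t) for t in f2)
    if not f2:                       # must delete everything in f1
        return sum(size(t) for t in f1)
    t1, rest1 = f1[0], f1[1:]
    t2, rest2 = f2[0], f2[1:]
    delete = 1 + forest_distance(t1[1:] + rest1, f2)   # drop root of t1
    insert = 1 + forest_distance(f1, t2[1:] + rest2)   # add root of t2
    match = ((t1[0] != t2[0])                          # relabel if needed
             + forest_distance(t1[1:], t2[1:])
             + forest_distance(rest1, rest2))
    return min(delete, insert, match)

def tree_distance(a, b):
    return forest_distance((a,), (b,))
```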

!important small regions are treated the same as unimportant
small regions

!no conception of shape

quote:
performed one on top of the other.

*** 2006 Satellite Features for the Classification of Visually Similar Classes

Finds features that can distinguish subclasses of a class, by
first finding a rigid set of anchor features that are common to
both subclasses, then finding distinguishing features relative to
those subfeatures. They keep things rigid because the satellite
features don't have much information in and of themselves, and are
only informative relative to other features.
127 *** 2005 Learning a novel class from a single example by cross-generalization. | 130 *** 2005 Learning a novel class from a single example by cross-generalization. |
128 | 131 |
129 Let's you use a vast visual experience to generate a classifier | 132 Let's you use a vast visual experience to generate a classifier |
130 for a novel class by generating synthetic examples by replaceing | 133 for a novel class by generating synthetic examples by replacing |
131 features from the single example with features from similiar | 134 features from the single example with features from similar |
132 classes. | 135 classes. |
133 | 136 |
134 quote: feature F is likely to be useful for class C if a similar | 137 quote: feature F is likely to be useful for class C if a similar |
135 feature F proved effective for a similar class C in the past. | 138 feature F proved effective for a similar class C in the past. |
136 | 139 |
137 Allows you to trasfer the "gestalt" of a similiar class to a new | 140 Allows you to transfer the "gestalt" of a similar class to a new |
138 class, by adapting all the features of the learned class that have | 141 class, by adapting all the features of the learned class that have |
139 correspondance to the new class. | 142 correspondence to the new class. |
140 | 143 |
141 *** 2007 Semantic Hierarchies for Recognizing Objects and Parts | 144 *** 2007 Semantic Hierarchies for Recognizing Objects and Parts |
142 | 145 |
143 Better learning of complex objects like faces by learning each | 146 Better learning of complex objects like faces by learning each |
144 piece (like nose, mouth, eye, etc) separately, then making sure | 147 piece (like nose, mouth, eye, etc) separately, then making sure |
145 that the features are in plausable positions. | 148 that the features are in plausible positions. |