changeset 380:2d0afb231081

spellcheck
author rlm
date Wed, 10 Apr 2013 16:49:05 -0400
parents f1b8727360fb
children 9b3487a515a7
files org/ullman.org
diffstat 1 files changed, 24 insertions(+), 21 deletions(-)
     1.1 --- a/org/ullman.org	Wed Apr 10 16:38:52 2013 -0400
     1.2 +++ b/org/ullman.org	Wed Apr 10 16:49:05 2013 -0400
     1.3 @@ -13,16 +13,16 @@
     1.4  Actual code reuse!
     1.5  
     1.6  precision = fraction of retrieved instances that are relevant
     1.7 -  (true-postives/(true-positives+false-positives))
     1.8 +  (true-positives/(true-positives+false-positives))
     1.9  
    1.10  recall    =  fraction of relevant instances that are retrieved
    1.11    (true-positives/total-in-class)
    1.12  
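
(A quick worked example of the two formulas above, with made-up counts
for a single class; Python, just for illustration.)

    # Worked example of the precision/recall formulas above.
    true_positives = 8     # retrieved and actually relevant
    false_positives = 2    # retrieved but not relevant
    total_in_class = 20    # all relevant instances that exist

    precision = true_positives / (true_positives + false_positives)  # 8/10 = 0.8
    recall = true_positives / total_in_class                         # 8/20 = 0.4
    print(f"precision = {precision:.2f}, recall = {recall:.2f}")
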
    1.13  cross-validation = train the model on two different sets to prevent
    1.14 -overfitting. 
    1.15 +overfitting, and confirm that you have enough training samples.
    1.16  
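
(A minimal sketch of the usual k-fold flavor of this, in plain Python;
the "model" is omitted and the data are placeholders, not anything from
the notes.)

    # k-fold cross-validation: train on k-1 folds, evaluate on the
    # held-out fold, repeat, so every sample serves both roles.
    def k_fold_indices(n, k):
        folds = [list(range(i, n, k)) for i in range(k)]
        for held_out in range(k):
            test = folds[held_out]
            train = [i for f in range(k) if f != held_out for i in folds[f]]
            yield train, test

    data = list(range(100))   # placeholder samples
    for train_idx, test_idx in k_fold_indices(len(data), k=5):
        # fit on data[train_idx], score on data[test_idx] (model omitted)
        print(len(train_idx), "train /", len(test_idx), "test")
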
    1.17  nifty, relevant, realistic ideas
    1.18 -He doesn't confine himself to unplasaubile assumptions
    1.19 +He doesn't confine himself to implausible assumptions
    1.20  
    1.21  ** Our Reading
    1.22  
    1.23 @@ -31,6 +31,9 @@
    1.24      
    1.25  
    1.26  
    1.27 +    Viola's PhD thesis has a good introduction to entropy and mutual
    1.28 +    information 
    1.29 +
    1.30  ** Getting around the dumb "fixed training set" methods
    1.31  
    1.32  *** 2006 Learning to classify by ongoing feature selection
    1.33 @@ -40,19 +43,19 @@
    1.34      encountered so far. To bound the running time, he uses only a
    1.35      fixed number of the most recent examples. He uses a replacement
    1.36      strategy to tell whether a new feature is better than one of the
    1.37 -    corrent features.
    1.38 +    current features.
    1.39  
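
(A rough sketch of that bookkeeping as I read it; the buffer, the
scoring function, and the class itself are my own placeholders, not
details from the paper.)

    from collections import deque

    # Online feature selection with a bounded example buffer: keep only
    # the most recent examples, and replace a current feature when a
    # candidate scores better on that buffer.
    class OngoingFeatureSelector:
        def __init__(self, n_features, buffer_size, score):
            self.features = []                          # currently selected features
            self.n_features = n_features
            self.examples = deque(maxlen=buffer_size)   # most recent examples only
            self.score = score                          # score(feature, examples) -> float

        def observe(self, example, candidate):
            self.examples.append(example)
            if len(self.features) < self.n_features:
                self.features.append(candidate)
                return
            # replacement strategy: swap out the weakest current feature
            # if the candidate beats it on the recent examples.
            scores = [self.score(f, self.examples) for f in self.features]
            worst = min(range(len(scores)), key=scores.__getitem__)
            if self.score(candidate, self.examples) > scores[worst]:
                self.features[worst] = candidate
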
    1.40  *** 2009 Learning model complexity in an online environment
    1.41      
    1.42 -    Sort of like the heirichal baysean models of Tennanbaum, this
     1.43 +    Sort of like the hierarchical Bayesian models of Tenenbaum, this
    1.44      system makes the model more and more complicated as it gets more
    1.45      and more training data. It does this by using two systems in
    1.46 -    parallell and then whenever the more complex one seems to be
    1.47 +    parallel and then whenever the more complex one seems to be
    1.48      needed by the data, the less complex one is thrown out, and an
    1.49      even more complex model is initialized in its place.
    1.50  
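
(One way to picture that swap, with the polynomial-kernel SVMs
mentioned just below standing in for the models; make_model, the fit
interface, and the trigger condition are assumptions of mine, not the
paper's actual criterion.)

    # Run a simpler and a more complex model in parallel; when the data
    # seem to need the complex one, promote it and initialize an even
    # more complex model in its place.
    def grow_complexity(make_model, needs_more_complexity, max_degree=8):
        simple_deg, complex_deg = 1, 2
        simple, complex_ = make_model(simple_deg), make_model(complex_deg)

        def update(new_data):
            nonlocal simple, complex_, simple_deg, complex_deg
            simple.fit(new_data)        # model interface assumed: fit / predict
            complex_.fit(new_data)
            if needs_more_complexity(simple, complex_, new_data):
                simple, simple_deg = complex_, complex_deg     # promote
                complex_deg = min(complex_deg + 1, max_degree)
                complex_ = make_model(complex_deg)             # new, richer model
            return simple               # the currently trusted model

        return update
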
    1.51 -    He uses a SVM with polynominal kernels of varying complexity. He
    1.52 -    gets good perfoemance on a handwriting classfication using a large
     1.53 +    He uses an SVM with polynomial kernels of varying complexity. He
     1.54 +    gets good performance on a handwriting classification task using a large
    1.55      range of training samples, since his model changes complexity
    1.56      depending on the number of training samples. The simpler models do
    1.57      better with few training points, and the more complex ones do
    1.58 @@ -72,19 +75,19 @@
    1.59  
    1.60  *** 2010 The chains model for detecting parts by their context
    1.61  
    1.62 -    Like the constelation method for rigid objects, but extended to
    1.63 +    Like the constellation method for rigid objects, but extended to
    1.64      non-rigid objects as well.
    1.65  
    1.66      Allows you to build a hand detector from a face detector. This is
    1.67 -    usefull because hands might be only a few pixels, and very
    1.68 +    useful because hands might be only a few pixels, and very
    1.69      ambiguous in an image, but if you are expecting them at the end of
    1.70      an arm, then they become easier to find.
    1.71  
    1.72      They make chains by using spatial proximity of features. That way,
    1.73 -    a hand can be idntified by chaining back from the head. If there
    1.74 +    a hand can be identified by chaining back from the head. If there
    1.75      is a good chain to the head, then it is more likely that there is
    1.76      a hand than if there isn't. Since there is some give in the
    1.77 -    proximity detection, the system can accomodate new poses that it
    1.78 +    proximity detection, the system can accommodate new poses that it
    1.79      has never seen before.
    1.80  
    1.81      Does not use any motion information.
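
(A toy version of the chaining idea, just to make "chain back from the
head" concrete; the feature positions and proximity radius are invented
for illustration.)

    import math

    # Greedily link spatially close features starting from the face; a
    # hand candidate reachable by such a chain is a much better bet than
    # an isolated, ambiguous blob of pixels.
    def dist(a, b):
        return math.hypot(a[0] - b[0], a[1] - b[1])

    def chain_exists(start, target, features, radius):
        frontier, seen = [start], {start}
        while frontier:
            p = frontier.pop()
            if dist(p, target) <= radius:
                return True
            for q in features:
                if q not in seen and dist(p, q) <= radius:
                    seen.add(q)
                    frontier.append(q)
        return False

    face = (0, 0)
    hand_candidate = (9, 1)
    arm_features = [(2, 0), (4, 1), (6, 1), (8, 1)]   # shoulder/elbow/wrist-ish
    print(chain_exists(face, hand_candidate, arm_features, radius=2.5))  # True
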
    1.82 @@ -98,12 +101,12 @@
    1.83      small patches that look the same, then building up bigger
    1.84      patches. They get a tree of patches that describes each image, and
     1.85     find the edit distance between each pair of trees. Editing operations
    1.86 -    involve a coherent shift of features, so they can accomodate local
    1.87 +    involve a coherent shift of features, so they can accommodate local
    1.88      shifts of patches in any direction. They get some cool results
    1.89      over just straight correlation. Basically, they made an image
    1.90 -    comparor that is resistant to multiple independent deformations.
    1.91 +    comparator that is resistant to multiple independent deformations.
    1.92      
    1.93 -    !important small regions are treated the same as nonimportant
    1.94 +    !important small regions are treated the same as unimportant
    1.95       small regions
    1.96       
    1.97      !no conception of shape
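
(A very reduced sketch of the comparison step: each image becomes a
nested tree of patch descriptors, and differences are charged as edit
costs. The naive in-order child matching here is only meant to show the
shape of the idea; the paper's edit operations, including the coherent
shifts, are more involved.)

    # Each node is (descriptor, [children]); relabeling costs the
    # descriptor difference, inserting/deleting a subtree costs its size.
    def tree_size(t):
        return 1 + sum(tree_size(c) for c in t[1])

    def edit_cost(t1, t2):
        cost = abs(t1[0] - t2[0])                  # relabel the patch
        c1, c2 = t1[1], t2[1]
        for a, b in zip(c1, c2):                   # naive in-order alignment
            cost += edit_cost(a, b)
        for extra in c1[len(c2):] + c2[len(c1):]:  # unmatched subtrees
            cost += tree_size(extra)
        return cost

    img_a = (5, [(3, []), (7, [(2, [])])])
    img_b = (5, [(4, []), (7, [])])
    print(edit_cost(img_a, img_b))   # 1 (relabel 3->4) + 1 (drop the (2) patch) = 2
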
    1.98 @@ -118,7 +121,7 @@
    1.99  *** 2006 Satellite Features for the Classification of Visually Similar Classes
   1.100      
   1.101      Finds features that can distinguish subclasses of a class, by
   1.102 -    first finding a rigid set of anghor features that are common to
   1.103 +    first finding a rigid set of anchor features that are common to
   1.104      both subclasses, then finding distinguishing features relative to
   1.105      those subfeatures. They keep things rigid because the satellite
   1.106      features don't have much information in and of themselves, and are
   1.107 @@ -127,19 +130,19 @@
   1.108  *** 2005 Learning a novel class from a single example by cross-generalization.
   1.109  
    1.110     Lets you use a vast visual experience to generate a classifier
   1.111 -    for a novel class by generating synthetic examples by replaceing
   1.112 -    features from the single example with features from similiar
   1.113 +    for a novel class by generating synthetic examples by replacing
   1.114 +    features from the single example with features from similar
   1.115      classes.
   1.116  
   1.117      quote: feature F is likely to be useful for class C if a similar
   1.118      feature F proved effective for a similar class C in the past.
   1.119  
   1.120 -    Allows you to trasfer the "gestalt" of a similiar class to a new
   1.121 +    Allows you to transfer the "gestalt" of a similar class to a new
   1.122      class, by adapting all the features of the learned class that have
   1.123 -    correspondance to the new class.
   1.124 +    correspondence to the new class.
   1.125  
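
(A cartoon of that substitution step; the feature records and the
similarity measure are made up here, where the real system works on
image fragments.)

    # Build synthetic examples for a novel class by swapping each feature
    # of the single example with the most similar feature that already
    # proved useful for a similar, previously learned class.
    def synthesize_examples(example_features, known_classes, similarity):
        """known_classes: {class_name: [proven-useful features]}."""
        synthetic = []
        for cls, cls_features in known_classes.items():
            borrowed = [max(cls_features, key=lambda g: similarity(f, g))
                        for f in example_features]
            synthetic.append((cls, borrowed))
        return synthetic
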
   1.126  *** 2007 Semantic Hierarchies for Recognizing Objects and Parts
   1.127  
   1.128      Better learning of complex objects like faces by learning each
   1.129      piece (like nose, mouth, eye, etc) separately, then making sure
   1.130 -    that the features are in plausable positions.
   1.131 +    that the features are in plausible positions.
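
(The gist of that, as a sketch; the per-part detectors and the
plausibility check are stand-ins here, where the paper learns both.)

    # Detect each learned part separately, then accept the whole object
    # only if the parts sit in plausible relative positions.
    def detect_object(image, part_detectors, plausible):
        """part_detectors: {part_name: detector(image) -> (x, y) or None};
        plausible: predicate over the dict of detected part positions."""
        parts = {}
        for name, detect in part_detectors.items():
            pos = detect(image)
            if pos is None:
                return None                 # a required part is missing
            parts[name] = pos
        return parts if plausible(parts) else None
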