annotate org/notes.org @ 450:432f2c4646cb

sleeping.
author Robert McIntyre <rlm@mit.edu>
date Wed, 26 Mar 2014 03:18:57 -0400
parents 2d8a8422ff59
children
rev   line source
rlm@327 1
rlm@327 2
rlm@327 3 * scribbles
rlm@327 4
rlm@327 5 scratch.mit.edu
rlm@327 6
rlm@327 7 Fredkin
rlm@327 8 ef@cmu.edu
rlm@327 9
rlm@327 10 Gary Drescher -- Logo, Papert lab
rlm@327 11
rlm@327 12 * stuff for thesis from Winston 2/22
rlm@327 13
rlm@327 14 Shimon Ullman tank -- a 3D model vision thing
rlm@327 15
rlm@327 16 Paul Viola MIT PhD thesis (intensity -> surface)
rlm@327 17
rlm@327 18 Grimson -- multiple scales
rlm@327 19
rlm@327 20 Winston -- "most effective way to find papers is to ask someone."
rlm@327 21
rlm@327 22 Talk to Poggio/Ullman/Finlayson to get moar papers
rlm@327 23
rlm@327 24
rlm@327 25 * Notes from Vision meeting Wednesday, 2/20
rlm@327 26
rlm@327 27 David Clemens, PhD Thesis
rlm@327 28 Arizona university Mind's Eye project used model-based vision
rlm@327 29
rlm@327 30 "Model based vision" -> "generative vision"
rlm@327 31
rlm@327 32 Microsoft Research flickr city square
rlm@327 33
rlm@369 34 Ce Liu -- M$ motion guy
rlm@327 35
rlm@327 36 prakesh -- read paper
rlm@327 37
rlm@327 38 from ADK:
rlm@327 39
rlm@327 40 I wasn't able to find much in an hour but I probably just don't know
rlm@327 41 the magic keywords. You should look at David Clemens's thesis as I
rlm@327 42 mentioned. Also I believe it is Paul Cohen from Arizona State
rlm@327 43 University who initially worked on the simulation-driven activity
rlm@327 44 recognition in Mind's Eye, and then afaik changed tack completely.
rlm@327 45
rlm@327 46 http://w3.sista.arizona.edu/~cohen/Publications/
rlm@327 47
rlm@327 48 There is also Michael Burl of JPL whose page seems unavailable right
rlm@327 49 now. The JPL team also used 3D models of pose along with some kind of
rlm@327 50 scripts to recognize events.