* scribbles

scratch.mit.edu

Ed Fredkin -- ef@cmu.edu

Gary Drescher -- Logo, Papert's lab

* stuff for thesis from Winston 2/22

Shimon Ullman's tank example -- 3D model-based vision

Paul Viola's MIT PhD thesis (intensity -> surface)

Grimson -- multiple scales

Winston -- "most effective way to find papers is to ask someone."

Talk to Poggio/Ullman/Finlayson to get more papers

* Notes from Vision meeting Wednesday, 2/20

David Clemens, PhD thesis
University of Arizona Mind's Eye project used model-based vision

"model-based vision" -> "generative vision" (see the sketch at the end of these notes)

Microsoft Research -- Flickr city square

Ce Liu -- Microsoft Research motion researcher

Prakesh -- read paper

from ADK:

I wasn't able to find much in an hour, but I probably just don't know
the magic keywords. You should look at David Clemens's thesis, as I
mentioned. Also, I believe it is Paul Cohen from Arizona State
University who initially worked on the simulation-driven activity
recognition in Mind's Eye and then, as far as I know, changed tack
completely.

http://w3.sista.arizona.edu/~cohen/Publications/

There is also Michael Burl of JPL, whose page seems unavailable right
now. The JPL team also used 3D models of pose along with some kind of
scripts to recognize events.
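
A minimal sketch of the "generative vision" idea from the meeting notes: hypothesize a pose, render the model, and keep the pose whose rendering best matches the observed image. The toy render function and the bar "model" below are hypothetical stand-ins for illustration, not anything from Clemens's thesis, Cohen's work, or the JPL system.

#+begin_src python
import numpy as np

def render(pose, size=32):
    """Toy renderer: draw a bright bar at angle `pose` (radians) into a
    size x size image.  Stands in for projecting a real 3D model."""
    img = np.zeros((size, size))
    c = size // 2
    for r in range(-c, c):
        x = int(c + r * np.cos(pose))
        y = int(c + r * np.sin(pose))
        if 0 <= x < size and 0 <= y < size:
            img[y, x] = 1.0
    return img

def recognize(observed, candidate_poses):
    """Analysis by synthesis: return the candidate pose whose rendering
    has the lowest pixelwise squared error against the observed image."""
    return min(candidate_poses,
               key=lambda p: np.sum((render(p) - observed) ** 2))

# Usage: recover the angle of a bar from its image.
observed = render(0.6)
print(recognize(observed, np.linspace(0, np.pi, 64)))  # ~0.6
#+end_src

The same loop scales to real model-based vision by swapping in a genuine 3D renderer and a richer pose space; the JPL approach described above presumably adds scripted event models on top of per-frame pose recovery.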