comparison org/notes.org @ 353:7239aee7267f

merge.
author Robert McIntyre <rlm@mit.edu>
date Tue, 05 Mar 2013 18:55:21 +0000
parents eb7c94a03188
children 2d8a8422ff59


* scribbles

scratch.mit.edu

Fredkin
ef@cmu.edu

Gary Drescher, Logo/Papert lab
* stuff for thesis from Winston 2/22

Shimon Ullman tank -- a 3D model vision thing

Paul Viola MIT PhD thesis (intensity -> surface)

Grimson -- multiple scales

Winston -- "most effective way to find papers is to ask someone."

Talk to Poggio/Ullman/Finlayson to get moar papers


* Notes from Vision meeting Wednesday, 2/20

David Clemens, PhD Thesis
Arizona university Mind's Eye project used model-based vision

"Model based vision" -> "generative vision"

Microsoft Research flickr city square

Ce Liu -- M$ motion guy

prakesh -- read paper
from ADK:

I wasn't able to find much in an hour but I probably just don't know
the magic keywords. You should look at David Clemens's thesis as I
mentioned. Also I believe it is Paul Cohen from Arizona State
University who initially worked on the simulation-driven activity
recognition in Mind's Eye, and then afaik changed tack completely.

http://w3.sista.arizona.edu/~cohen/Publications/

There is also Michael Burl of JPL whose page seems unavailable right
now. The JPL team also used 3D models of pose along with some kind of
scripts to recognize events.