comparison: thesis/org/roadmap.org @ 401:7ee735a836da
incorporate thesis.

author:   Robert McIntyre <rlm@mit.edu>
date:     Sun, 16 Mar 2014 23:31:16 -0400
children: a533a0038bd7
comparing 400:6ba908c1a0a9 with 401:7ee735a836da

In order for this to be a reasonable thesis that I can be proud of,
what is the /minimum/ number of things I need to get done?


* worm OR hand registration
  - training from a few examples (2 to start out)
  - aligning the body with the scene
  - generating sensory data
  - matching previous labeled examples using dot-products or some
    other basic thing (see the sketch after this list)
  - showing that it works with different views
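
A minimal sketch of the dot-product matching step, assuming the
sensory data for a pose can be flattened into a fixed-length vector;
the names below are illustrative, not CORTEX code:

#+begin_src python
# Match a new sensory reading against labeled examples by normalized
# dot product (cosine similarity) and return the best label.
import numpy as np

def best_label(query, labeled_examples):
    """labeled_examples: list of (label, sensory-vector) pairs."""
    q = np.asarray(query, dtype=float)
    q = q / (np.linalg.norm(q) + 1e-12)
    best, best_score = None, -np.inf
    for label, example in labeled_examples:
        e = np.asarray(example, dtype=float)
        e = e / (np.linalg.norm(e) + 1e-12)
        score = float(np.dot(q, e))
        if score > best_score:
            best, best_score = label, score
    return best, best_score

examples = [("curled", [1, 0, 1, 1]), ("straight", [0, 1, 0, 0])]
print(best_label([1, 0, 1, 0], examples))   # -> ('curled', ...)
#+end_src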

* first draft
  - draft of thesis without bibliography or formatting
  - should have basic experiment and have full description of
    framework with code
  - review with Winston

* final draft
  - implement stretch goals from Winston if possible
  - complete final formatting and submit




* CORTEX
  DEADLINE: <2014-05-09 Fri>
  SHIT THAT'S IN 67 DAYS!!!

** TODO program simple feature matching code for the worm's segments
   DEADLINE: <2014-03-11 Tue>
   Subgoals:
*** DONE Get cortex working again, run tests, no jmonkeyengine updates
    CLOSED: [2014-03-03 Mon 22:07] SCHEDULED: <2014-03-03 Mon>
*** DONE get blender working again
    CLOSED: [2014-03-03 Mon 22:43] SCHEDULED: <2014-03-03 Mon>
*** DONE make sparse touch worm segment in blender
    CLOSED: [2014-03-03 Mon 23:16] SCHEDULED: <2014-03-03 Mon>
    CLOCK: [2014-03-03 Mon 22:44]--[2014-03-03 Mon 23:16] =>  0:32
*** DONE make multi-segment touch worm with touch sensors and display
    CLOSED: [2014-03-03 Mon 23:54] SCHEDULED: <2014-03-03 Mon>
    CLOCK: [2014-03-03 Mon 23:17]--[2014-03-03 Mon 23:54] =>  0:37


*** DONE Make a worm wiggle and curl
    CLOSED: [2014-03-04 Tue 23:03] SCHEDULED: <2014-03-04 Tue>
*** TODO work on alignment for the worm (can "cheat")
    SCHEDULED: <2014-03-05 Wed>

** First draft
   DEADLINE: <2014-03-14 Fri>
   Subgoals:
*** Write up new worm experiments.
*** Triage implementation code and get it into chapter form.




** for today

- guided worm :: control the worm with the keyboard. Useful for
     testing the body-centered recognition scripts, and for
     preparing a cool demo video.

- body-centered recognition :: detect actions using hard-coded
     body-centered scripts.

- cool demo video of the worm being moved and recognizing things ::
     will be a neat part of the thesis.

- thesis export :: refactoring and organization of code so that it
     spits out a thesis in addition to the web page.

- video alignment :: analyze the frames of a video in order to align
     the worm. Requires body-centered recognition. Can "cheat".

- smoother actions :: use debugging controls to directly influence the
     demo actions, and to generate recognition procedures.

- degenerate video demonstration :: show the system recognizing a
     curled worm from dead on. Crowning achievement of thesis.

** Ordered from easiest to hardest

Just report the positions of everything. I don't think that this
necessarily shows anything useful.

Worm-segment vision -- you initialize a view of the worm, but instead
of pixels you use labels via ray tracing. This has the advantage of
still allowing for visual occlusion, while reliably identifying the
objects, even without rainbow coloring. You can encode this as an image.

Same as above, except just with worm/non-worm labels.

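A toy sketch of the label-image idea (it covers the worm/non-worm
variant too, by using ids 0/1), assuming some ray query that returns a
segment id, 0 for background; the function names are made up for
illustration:

#+begin_src python
# Render a "label image": each pixel stores the id of the worm segment
# hit by that camera ray instead of a color.  0 means non-worm.
import numpy as np

def label_image(width, height, segment_id_for_ray):
    """segment_id_for_ray(x, y) -> int id, 0 if the ray hits nothing."""
    img = np.zeros((height, width), dtype=np.uint8)
    for y in range(height):
        for x in range(width):
            img[y, x] = segment_id_for_ray(x, y)
    return img

# Fake ray function: segment 3 occupies a small square of the view.
fake = lambda x, y: 3 if (10 <= x < 20 and 10 <= y < 20) else 0
print(np.unique(label_image(64, 64, fake)))   # -> [0 3]
#+end_src
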
Color-code each worm segment and then recognize them using blob
detectors. Then you solve for the perspective and the action
simultaneously.
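
One way the blob-detection step could look, assuming each segment is
rendered in its own flat color (the color table below is invented):

#+begin_src python
# Recover a 2-D centroid per segment from a color-coded render by
# taking the center of mass of each segment's color mask.
import numpy as np

SEGMENT_COLORS = {1: (255, 0, 0), 2: (0, 255, 0), 3: (0, 0, 255)}

def segment_centroids(rgb):
    """rgb: (H, W, 3) uint8 image -> {segment-id: (row, col) or None}."""
    out = {}
    for seg, color in SEGMENT_COLORS.items():
        mask = np.all(rgb == np.array(color, dtype=np.uint8), axis=-1)
        ys, xs = np.nonzero(mask)
        out[seg] = (ys.mean(), xs.mean()) if len(xs) else None
    return out
#+end_src

With the centroids in hand, the camera perspective and the worm's
action could then be fit jointly, e.g. by searching over (camera pose,
worm pose) pairs for the best agreement with the detected blobs.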

The entire worm can be colored the same high-contrast color against a
nearly black background.

"Rooted" vision. You give the exact coordinates of ONE piece of the
worm, but the algorithm figures out the rest.

More rooted vision -- start off the entire worm in one position.

The right way to do alignment is to use motion over multiple frames to
snap individual pieces of the model into place, sharing and
propagating the individual alignments over the whole model. We also
want to limit the alignment search to just those actions we are
prepared to identify. This might mean that I need some small "micro
actions" such as the individual movements of the worm pieces.
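
A very rough sketch of restricting the search to such micro-actions,
greedily choosing one per frame; =simulate= and =score= stand in for
the physics step and the frame-comparison metric and are purely
hypothetical:

#+begin_src python
# Greedy multi-frame alignment over a fixed repertoire of micro-actions.
def align_by_micro_actions(frames, start_pose, micro_actions,
                           simulate, score):
    """simulate(pose, action) -> new pose; score(pose, frame) -> float,
    higher is better.  Returns the chosen (action, pose) per frame."""
    pose, path = start_pose, []
    for frame in frames:
        candidates = [(simulate(pose, a), a) for a in micro_actions]
        pose, action = max(candidates, key=lambda c: score(c[0], frame))
        path.append((action, pose))
    return path
#+end_src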

Get just the centers of each segment projected onto the imaging
plane. (best so far).
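
For reference, projecting segment centers onto the imaging plane with
a simple pinhole model; the focal length and principal point below are
placeholder values:

#+begin_src python
# Project 3-D segment centers (camera coordinates, z forward) to pixels.
import numpy as np

def project_centers(centers_3d, f=500.0, cx=320.0, cy=240.0):
    """centers_3d: (N, 3) array -> (N, 2) pixel coordinates."""
    c = np.asarray(centers_3d, dtype=float)
    u = f * c[:, 0] / c[:, 2] + cx
    v = f * c[:, 1] / c[:, 2] + cy
    return np.stack([u, v], axis=1)

print(project_centers([[0.0, 0.0, 2.0], [0.1, -0.1, 2.5]]))
#+end_src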


Repertoire of actions + video frames -->
directed multi-frame-search algorithm.




!! Could also have a bounding box around the worm provided by
filtering the worm/non-worm render, and use bbbgs. As a bonus, I get
to include bbbgs in my thesis! Could finally do that recursive thing
where I make bounding boxes be those things that give results that
give good bounding boxes. If I did this I could use a disruptive
pattern on the worm.
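
The bounding-box extraction itself is trivial once the worm/non-worm
render exists; a sketch (the mask layout is assumed, and nothing here
is specific to bbbgs):

#+begin_src python
# Derive a bounding box for the worm from the binary worm/non-worm
# render (True where a pixel belongs to the worm).
import numpy as np

def worm_bbox(mask):
    """mask: (H, W) bool array -> (top, left, bottom, right) or None."""
    ys, xs = np.nonzero(mask)
    if len(xs) == 0:
        return None
    return ys.min(), xs.min(), ys.max(), xs.max()
#+end_src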

Re-imagining using default textures is very simple for this system,
but hard for others.


Want to demonstrate, at minimum, alignment of some model of the worm
to the video, and a lookup of the action by simulated perception.

note: the purple/white point texture is very beautiful, because when
it moves slightly, the white dots look like they're twinkling. It
would look even better if it were a darker purple, and with the dots
more spread out.


embed assumption of one frame of view, search by moving around in the
simulated world.

Allowed to limit search by setting limits to a hemisphere around the
imagined worm! This also limits scale.
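
A sketch of what that hemisphere restriction could look like, sampling
candidate camera positions around the imagined worm; the sampling
scheme and radius are illustrative:

#+begin_src python
# Sample candidate camera positions on a hemisphere of radius r above
# the imagined worm, bounding the view search in direction and scale.
import numpy as np

def hemisphere_viewpoints(center, r, n_az=12, n_el=4):
    """Return an (n_az * n_el, 3) array of camera positions."""
    pts = []
    for el in np.linspace(0.1, np.pi / 2, n_el):          # elevation > 0
        for az in np.linspace(0, 2 * np.pi, n_az, endpoint=False):
            offset = r * np.array([np.cos(el) * np.cos(az),
                                   np.cos(el) * np.sin(az),
                                   np.sin(el)])
            pts.append(np.asarray(center, dtype=float) + offset)
    return np.array(pts)

print(hemisphere_viewpoints([0, 0, 0], r=2.0).shape)      # -> (48, 3)
#+end_src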




!! Limited search with worm/non-worm rendering.
How much inverse kinematics do we have to do?
What about cached (allowed state-space) paths, derived from labeled
training? You have to lead from one to another.
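
A small sketch of the cached state-space idea: record which poses
followed which during training, then only allow transitions that were
actually seen (the pose-id representation is hypothetical):

#+begin_src python
# Build an allowed-transition table from training pose sequences.
from collections import defaultdict

def build_transitions(training_paths):
    """training_paths: list of pose-id sequences -> {pose: next poses}."""
    nxt = defaultdict(set)
    for path in training_paths:
        for a, b in zip(path, path[1:]):
            nxt[a].add(b)
    return nxt

transitions = build_transitions([[0, 1, 2, 3], [0, 1, 4]])
print(sorted(transitions[1]))   # -> [2, 4]: only these may follow pose 1
#+end_src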

What about initial state? Could start the input videos at a specific
state, then just match that explicitly.

!! The training doesn't have to be labeled -- you can just move around
for a while!!

!! Limited search with motion-based alignment.




173 "play arounds" can establish a chain of linked sensoriums. Future | |
174 matches must fall into one of the already experienced things, and once | |
175 they do, it greatly limits the things that are possible in the future. | |
176 | |
177 | |
178 frame differences help to detect muscle exertion. | |
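
A tiny sketch of the frame-difference cue, assuming grayscale frames:

#+begin_src python
# Mean absolute difference between consecutive frames as a crude
# signal for when the worm is exerting its muscles.
import numpy as np

def exertion_signal(frames):
    """frames: sequence of (H, W) grayscale images -> per-step change."""
    f = np.asarray(frames, dtype=float)
    return np.mean(np.abs(f[1:] - f[:-1]), axis=(1, 2))

still  = np.zeros((2, 4, 4))
moving = [np.zeros((4, 4)), np.ones((4, 4))]
print(exertion_signal(still), exertion_signal(moving))   # -> [0.] [1.]
#+end_src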

Can try to match on a few "representative" frames. Can also just have
a few "bodies" in various states which we try to match.



Paths through state-space have the exact same signature as
simulation. BUT, these can be searched in parallel and don't interfere
with each other.
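
Since each candidate path is independent of the others, the scoring
can be farmed out in parallel; a sketch with stand-in score functions:

#+begin_src python
# Score candidate state-space paths against the video frames in
# parallel; unlike one shared simulation, the paths don't interfere.
from concurrent.futures import ThreadPoolExecutor

def score_path(path, frames, score):
    """Sum of per-frame scores for one candidate path of poses."""
    return sum(score(pose, frame) for pose, frame in zip(path, frames))

def best_path(candidate_paths, frames, score):
    with ThreadPoolExecutor() as pool:
        scores = list(pool.map(lambda p: score_path(p, frames, score),
                               candidate_paths))
    return candidate_paths[scores.index(max(scores))]
#+end_src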