thesis/org/roadmap.org @ 430:5205535237fb
fix skew in self-organizing-touch, work on thesis.

| author   | Robert McIntyre <rlm@mit.edu>           |
| date     | Sat, 22 Mar 2014 16:10:34 -0400         |
| parents  | thesis/aux/org/roadmap.org@6b0f77df0e53 |
| children | 8e52a2802821                            |

In order for this to be a reasonable thesis that I can be proud of,
what are the /minimum/ number of things I need to get done?


* worm OR hand registration
  - training from a few examples (2 to start out)
  - aligning the body with the scene
  - generating sensory data
  - matching previously labeled examples using dot-products or some
    other basic thing (see the sketch below)
  - showing that it works with different views
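
A minimal sketch of the dot-product matching idea, assuming the sensory
data for a frame can be flattened into a fixed-length numpy vector and
that the labeled examples are just (vector, label) pairs; the names
=sensory_vector= and =labeled_examples= are illustrative, not part of
the actual CORTEX code.

#+begin_src python
import numpy as np

def best_match(sensory_vector, labeled_examples):
    """Return the label of the stored example whose (normalized)
    dot-product with the current sensory vector is highest.

    labeled_examples: list of (vector, label) pairs -- a hypothetical
    structure, not the real CORTEX representation."""
    query = sensory_vector / (np.linalg.norm(sensory_vector) + 1e-9)
    best_label, best_score = None, -np.inf
    for vec, label in labeled_examples:
        score = np.dot(query, vec / (np.linalg.norm(vec) + 1e-9))
        if score > best_score:
            best_label, best_score = label, score
    return best_label, best_score

# Toy usage: two labeled "poses" and a noisy query close to the first.
examples = [(np.array([1.0, 0.0, 1.0]), "curled"),
            (np.array([0.0, 1.0, 0.0]), "straight")]
print(best_match(np.array([0.9, 0.1, 1.1]), examples))
#+end_src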

* first draft
  - draft of thesis without bibliography or formatting
  - should have basic experiment and have full description of
    framework with code
  - review with Winston

* final draft
  - implement stretch goals from Winston if possible
  - complete final formatting and submit

* CORTEX
  DEADLINE: <2014-05-09 Fri>
  SHIT THAT'S IN 67 DAYS!!!

** program simple feature matching code for the worm's segments

Subgoals:
*** DONE Get cortex working again, run tests, no jmonkeyengine updates
    CLOSED: [2014-03-03 Mon 22:07] SCHEDULED: <2014-03-03 Mon>
*** DONE get blender working again
    CLOSED: [2014-03-03 Mon 22:43] SCHEDULED: <2014-03-03 Mon>
*** DONE make sparse touch worm segment in blender
    CLOSED: [2014-03-03 Mon 23:16] SCHEDULED: <2014-03-03 Mon>
    CLOCK: [2014-03-03 Mon 22:44]--[2014-03-03 Mon 23:16] => 0:32
*** DONE make multi-segment touch worm with touch sensors and display
    CLOSED: [2014-03-03 Mon 23:54] SCHEDULED: <2014-03-03 Mon>

*** DONE Make a worm wiggle and curl
    CLOSED: [2014-03-04 Tue 23:03] SCHEDULED: <2014-03-04 Tue>

** First draft

Subgoals:
*** Write up new worm experiments.
*** Triage implementation code and get it into chapter form.

** for today

- guided worm :: control the worm with the keyboard. Useful for
     testing the body-centered recognition scripts, and for
     preparing a cool demo video.

- body-centered recognition :: detect actions using hard-coded
     body-centered scripts (a rough sketch follows this list).

- cool demo video of the worm being moved and recognizing things ::
     will be a neat part of the thesis.

- thesis export :: refactoring and organization of code so that it
     spits out a thesis in addition to the web page.

- video alignment :: analyze the frames of a video in order to align
     the worm. Requires body-centered recognition. Can "cheat".

- smoother actions :: use debugging controls to directly influence the
     demo actions, and to generate recognition procedures.

- degenerate video demonstration :: show the system recognizing a
     curled worm from dead on. Crowning achievement of thesis.
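
A rough sketch of what one of these hard-coded body-centered scripts
could look like, assuming the body-centered data for a frame is just a
list of joint bend angles in radians; the predicate names and the
thresholds are illustrative, not the actual CORTEX predicates.

#+begin_src python
import math

def curled(joint_angles, threshold=math.pi):
    """Call the worm 'curled' when the total bend across its joints
    exceeds a (hand-tuned, hypothetical) threshold."""
    return sum(abs(a) for a in joint_angles) > threshold

def wiggling(frames, min_sign_changes=4):
    """Call the worm 'wiggling' when the bend of the middle joint
    changes sign several times over a short window of frames."""
    middle = [angles[len(angles) // 2] for angles in frames]
    signs = [a > 0 for a in middle]
    changes = sum(1 for a, b in zip(signs, signs[1:]) if a != b)
    return changes >= min_sign_changes

# Toy usage: a strongly bent pose, and an oscillating middle joint.
print(curled([0.9, 1.2, 1.1]))                                     # True
print(wiggling([[0.0, s * 0.5, 0.0] for s in (1, -1, 1, -1, 1)]))  # True
#+end_src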

** Ordered from easiest to hardest

Just report the positions of everything. I don't think that this
necessarily shows anything useful.

Worm-segment vision -- you initialize a view of the worm, but instead
of pixels you use labels via ray tracing. Has the advantage of still
allowing for visual occlusion, but reliably identifies the objects,
even without rainbow coloring. You can code this as an image.

Same as above, except just with worm/non-worm labels. (A small sketch
of comparing such label images follows.)
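
A minimal sketch of how two such label images could be compared,
assuming each is a 2-D integer array of segment labels (or just 0/1 for
worm/non-worm); the agreement score and the array names are
illustrative assumptions, not part of the actual pipeline.

#+begin_src python
import numpy as np

def label_agreement(observed, rendered):
    """Fraction of pixels whose labels agree between the observed
    label image and a candidate rendering of the imagined worm."""
    assert observed.shape == rendered.shape
    return np.mean(observed == rendered)

# Toy usage: 0 = background, 1..n = worm segments (or 1 = worm).
observed = np.array([[0, 1, 1],
                     [0, 2, 2],
                     [0, 0, 3]])
candidate = np.array([[0, 1, 1],
                      [0, 2, 0],
                      [0, 0, 3]])
print(label_agreement(observed, candidate))  # 8/9 of pixels agree
#+end_src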

Color-code each worm segment and then recognize them using blob
detectors. Then you solve for the perspective and the action
simultaneously. (A sketch of the blob-centroid step is below.)
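
A bare-bones sketch of the blob-detection step, using
connected-component labeling from scipy rather than any particular
blob-detector library; treating each color code as a separate binary
mask is an assumption about how the color coding would work.

#+begin_src python
import numpy as np
from scipy import ndimage

def segment_centroids(color_image, color_codes):
    """For each color code, find the centroid of its largest blob.
    color_image: HxW integer array of color codes (0 = background).
    Returns {code: (row, col)} for codes that actually appear."""
    centroids = {}
    for code in color_codes:
        mask = color_image == code
        labeled, n = ndimage.label(mask)
        if n == 0:
            continue
        # keep the largest connected component for this color
        sizes = ndimage.sum(mask, labeled, range(1, n + 1))
        biggest = int(np.argmax(sizes)) + 1
        centroids[code] = ndimage.center_of_mass(mask, labeled, biggest)
    return centroids

# Toy usage: two color-coded segments in a tiny image.
img = np.array([[1, 1, 0, 2],
                [1, 1, 0, 2],
                [0, 0, 0, 2]])
print(segment_centroids(img, color_codes=[1, 2]))
#+end_src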

The entire worm can be colored the same high-contrast color against a
nearly black background.

"Rooted" vision. You give the exact coordinates of ONE piece of the
worm, but the algorithm figures out the rest.

More rooted vision -- start off the entire worm with one position.

The right way to do alignment is to use motion over multiple frames to
snap individual pieces of the model into place, sharing and
propagating the individual alignments over the whole model. We also
want to limit the alignment search to just those actions we are
prepared to identify. This might mean that I need some small "micro
actions" such as the individual movements of the worm pieces.

Get just the centers of each segment projected onto the imaging
plane. (Best so far; a projection sketch follows.)
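
A minimal sketch of projecting segment centers onto the imaging plane
with an ideal pinhole camera, assuming the centers are already
expressed in camera coordinates (z pointing into the scene); the focal
length and the coordinate convention are illustrative assumptions.

#+begin_src python
import numpy as np

def project_centers(centers_cam, focal_length=1.0):
    """Project 3-D segment centers (camera coordinates, z > 0 in front
    of the camera) onto the image plane of an ideal pinhole camera.
    Returns an (n, 2) array of (x, y) image coordinates."""
    centers_cam = np.asarray(centers_cam, dtype=float)
    z = centers_cam[:, 2]
    return focal_length * centers_cam[:, :2] / z[:, None]

# Toy usage: three segment centers at increasing depth.
centers = [[0.0, 0.1, 1.0],
           [0.1, 0.1, 2.0],
           [0.2, 0.1, 4.0]]
print(project_centers(centers))
#+end_src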

Repertoire of actions + video frames -->
  directed multi-frame-search algorithm


!! Could also have a bounding box around the worm provided by
filtering the worm/non-worm render, and use bbbgs. As a bonus, I get
to include bbbgs in my thesis! Could finally do that recursive thing
where I make bounding boxes be those things that give results that
give good bounding boxes. If I did this I could use a disruptive
pattern on the worm. (A sketch of getting the box from the
worm/non-worm mask is below.)
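
A minimal sketch of extracting that bounding box from a worm/non-worm
mask with plain numpy; it says nothing about bbbgs itself, and the mask
layout (rows, cols) is an assumption.

#+begin_src python
import numpy as np

def mask_bounding_box(mask):
    """Return (row_min, row_max, col_min, col_max) of the True pixels
    in a 2-D worm/non-worm mask, or None if the worm is not visible."""
    rows = np.any(mask, axis=1)
    cols = np.any(mask, axis=0)
    if not rows.any():
        return None
    r0, r1 = np.where(rows)[0][[0, -1]]
    c0, c1 = np.where(cols)[0][[0, -1]]
    return int(r0), int(r1), int(c0), int(c1)

# Toy usage: a small blob of worm pixels.
mask = np.zeros((5, 6), dtype=bool)
mask[1:4, 2:5] = True
print(mask_bounding_box(mask))  # (1, 3, 2, 4)
#+end_src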

Re-imagining using default textures is very simple for this system,
but hard for others.

Want to demonstrate, at minimum, alignment of some model of the worm
to the video, and a lookup of the action by simulated perception.

note: the purple/white point texture is very beautiful, because when
it moves slightly, the white dots look like they're twinkling. It
would look even better with a darker purple, and with the dots more
spread out.

Embed the assumption of one frame of view; search by moving around in
the simulated world.

Allowed to limit the search to a hemisphere around the imagined worm!
This limits scale also. (A pose-sampling sketch follows.)
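
A small sketch of sampling candidate camera positions on a hemisphere
around the imagined worm, assuming the worm sits at the origin and a
single radius captures the scale limit; the radius, sample count, and
function name are illustrative.

#+begin_src python
import math
import random

def sample_hemisphere_poses(radius, n_samples, seed=0):
    """Sample candidate camera positions on the upper hemisphere of
    the given radius, centered on the (assumed) worm at the origin.
    Each position implicitly looks back at the origin."""
    rng = random.Random(seed)
    poses = []
    for _ in range(n_samples):
        azimuth = rng.uniform(0.0, 2.0 * math.pi)
        # uniform in z = cos(polar angle) gives uniform area on the sphere
        z = rng.uniform(0.0, 1.0)
        r_xy = math.sqrt(1.0 - z * z)
        poses.append((radius * r_xy * math.cos(azimuth),
                      radius * r_xy * math.sin(azimuth),
                      radius * z))
    return poses

# Toy usage: 3 candidate viewpoints at a fixed distance of 2 units.
for pose in sample_hemisphere_poses(radius=2.0, n_samples=3):
    print(pose)
#+end_src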


!! Limited search with worm/non-worm rendering.
How much inverse kinematics do we have to do?
What about cached (allowed state-space) paths, derived from labeled
training? You have to lead from one to another.

What about initial state? Could start the input videos at a specific
state, then just match that explicitly.

!! The training doesn't have to be labeled -- you can just move around
for a while!!

!! Limited search with motion-based alignment.


"Play arounds" can establish a chain of linked sensoriums. Future
matches must fall into one of the already-experienced things, and once
they do, it greatly limits the things that are possible in the
future. (A tiny sketch of this constraint is below.)
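
A tiny sketch of that constraint, assuming the "play around" phase is
summarized as a sequence of discrete sensorium IDs; the transition map
and the candidate-filtering function are illustrative, not the actual
CORTEX representation.

#+begin_src python
from collections import defaultdict

def build_transitions(experienced_sequence):
    """From a 'play around' sequence of sensorium IDs, record which
    sensoriums were ever observed to follow which."""
    transitions = defaultdict(set)
    for a, b in zip(experienced_sequence, experienced_sequence[1:]):
        transitions[a].add(b)
    return transitions

def allowed_next(current_match, candidates, transitions):
    """Once a frame has matched sensorium `current_match`, only the
    candidates reachable from it in the experienced chain survive."""
    return [c for c in candidates if c in transitions[current_match]]

# Toy usage: a short experienced chain s0 -> s1 -> s2 -> s1.
transitions = build_transitions(["s0", "s1", "s2", "s1"])
print(allowed_next("s1", ["s0", "s2", "s3"], transitions))  # ['s2']
#+end_src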

Frame differences help to detect muscle exertion.

Can try to match on a few "representative" frames. Can also just have
a few "bodies" in various states which we try to match. (A small
frame-difference sketch follows.)
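
A small sketch of the frame-difference idea, assuming video frames
arrive as grayscale numpy arrays; using the per-pixel mean absolute
difference as an "exertion" score is an illustrative stand-in, not a
claim about how CORTEX measures muscle exertion.

#+begin_src python
import numpy as np

def exertion_scores(frames):
    """Mean absolute per-pixel difference between consecutive grayscale
    frames; large values suggest lots of movement (muscle exertion)."""
    frames = [np.asarray(f, dtype=float) for f in frames]
    return [np.mean(np.abs(b - a)) for a, b in zip(frames, frames[1:])]

# Toy usage: a still frame followed by a shifted copy of itself.
frame0 = np.zeros((4, 4))
frame0[1:3, 1:3] = 1.0
frame1 = np.roll(frame0, 1, axis=1)
print(exertion_scores([frame0, frame0, frame1]))  # [0.0, 0.25]
#+end_src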


Paths through state-space have the exact same signature as
simulation. BUT, these can be searched in parallel and don't interfere
with each other.

** Final stretch up to First Draft

*** DONE complete debug control of worm
    CLOSED: [2014-03-17 Mon 17:29] SCHEDULED: <2014-03-17 Mon>
    CLOCK: [2014-03-17 Mon 14:01]--[2014-03-17 Mon 17:29] => 3:28
*** DONE add phi-space output to debug control
    CLOSED: [2014-03-17 Mon 17:42] SCHEDULED: <2014-03-17 Mon>
    CLOCK: [2014-03-17 Mon 17:31]--[2014-03-17 Mon 17:42] => 0:11

*** DONE complete automatic touch partitioning
    CLOSED: [2014-03-18 Tue 21:43] SCHEDULED: <2014-03-18 Tue>
*** DONE complete cyclic predicate
    CLOSED: [2014-03-19 Wed 16:34] SCHEDULED: <2014-03-18 Tue>
    CLOCK: [2014-03-19 Wed 13:16]--[2014-03-19 Wed 16:34] => 3:18
*** DONE complete three phi-stream action predicates; test them with debug control
    CLOSED: [2014-03-19 Wed 16:35] SCHEDULED: <2014-03-17 Mon>
    CLOCK: [2014-03-18 Tue 18:36]--[2014-03-18 Tue 21:43] => 3:07
    CLOCK: [2014-03-18 Tue 18:34]--[2014-03-18 Tue 18:36] => 0:02
    CLOCK: [2014-03-17 Mon 19:19]--[2014-03-17 Mon 21:19] => 2:00
*** DONE build an automatic "do all the things" sequence.
    CLOSED: [2014-03-19 Wed 16:55] SCHEDULED: <2014-03-19 Wed>
    CLOCK: [2014-03-19 Wed 16:53]--[2014-03-19 Wed 16:55] => 0:02
*** DONE implement proprioception-based movement lookup in phi-space
    CLOSED: [2014-03-19 Wed 22:04] SCHEDULED: <2014-03-19 Wed>
    CLOCK: [2014-03-19 Wed 19:32]--[2014-03-19 Wed 22:04] => 2:32
*** DONE make proprioception reference phi-space indexes
    CLOSED: [2014-03-19 Wed 22:47] SCHEDULED: <2014-03-19 Wed>
    CLOCK: [2014-03-19 Wed 22:07]

*** DONE create test videos, also record positions of worm segments
    CLOSED: [2014-03-20 Thu 22:02] SCHEDULED: <2014-03-19 Wed>

*** TODO Collect intro, worm-learn and cortex creation into draft thesis.