In order for this to be a reasonable thesis that I can be proud of,
what is the /minimum/ number of things I need to get done?

* worm OR hand registration
  - training from a few examples (2 to start out)
  - aligning the body with the scene
  - generating sensory data
  - matching previous labeled examples using dot-products or some
    other basic thing (see the sketch after this list)
  - showing that it works with different views

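Rough sketch of what the dot-product matching could look like
(Python; every name and number here is made up for illustration, this
is not code from the framework):

#+begin_src python
import numpy as np

# Hypothetical: each labeled example is a flat vector of body-centered
# sensory data (touch + proprioception) plus an action label.
def classify(sensed, examples):
    """Return the label of the stored example whose normalized
    dot-product with the sensed vector is largest."""
    best_label, best_score = None, -1.0
    for label, stored in examples:
        score = np.dot(sensed, stored) / (
            np.linalg.norm(sensed) * np.linalg.norm(stored) + 1e-9)
        if score > best_score:
            best_label, best_score = label, score
    return best_label

# e.g. two training examples to start out:
examples = [("curled",   np.array([1.0, 0.9, 0.1, 0.0])),
            ("wiggling", np.array([0.1, 0.2, 1.0, 0.8]))]
print(classify(np.array([0.9, 1.0, 0.2, 0.1]), examples))  # -> "curled"
#+end_src
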
* first draft
  - draft of thesis without bibliography or formatting
  - should include the basic experiment and a full description of
    the framework, with code
  - review with Winston

* final draft
  - implement stretch goals from Winston if possible
  - complete final formatting and submit

* DONE Turn in CORTEX
  CLOSED: [2014-05-22 Thu 12:01] DEADLINE: <2014-05-12 Mon>
  SHIT THAT'S IN 67 DAYS!!!

** program simple feature matching code for the worm's segments

Subgoals:
*** DONE Get cortex working again, run tests, no jmonkeyengine updates
    CLOSED: [2014-03-03 Mon 22:07] SCHEDULED: <2014-03-03 Mon>
*** DONE get blender working again
    CLOSED: [2014-03-03 Mon 22:43] SCHEDULED: <2014-03-03 Mon>
*** DONE make sparse touch worm segment in blender
    CLOSED: [2014-03-03 Mon 23:16] SCHEDULED: <2014-03-03 Mon>
    CLOCK: [2014-03-03 Mon 22:44]--[2014-03-03 Mon 23:16] => 0:32
*** DONE make multi-segment touch worm with touch sensors and display
    CLOSED: [2014-03-03 Mon 23:54] SCHEDULED: <2014-03-03 Mon>

*** DONE Make a worm wiggle and curl
    CLOSED: [2014-03-04 Tue 23:03] SCHEDULED: <2014-03-04 Tue>

** First draft

Subgoals:
*** Write up new worm experiments.
*** Triage implementation code and get it into chapter form.

** for today

- guided worm :: control the worm with the keyboard. Useful for
  testing the body-centered recognition scripts, and for preparing a
  cool demo video.

- body-centered recognition :: detect actions using hard-coded
  body-centered scripts (see the sketch after this list).

- cool demo video of the worm being moved and recognizing things ::
  will be a neat part of the thesis.

- thesis export :: refactoring and organization of code so that it
  spits out a thesis in addition to the web page.

- video alignment :: analyze the frames of a video in order to align
  the worm. Requires body-centered recognition. Can "cheat".

- smoother actions :: use debugging controls to directly influence
  the demo actions, and to generate recognition procedures.

- degenerate video demonstration :: show the system recognizing a
  curled worm from dead on. Crowning achievement of thesis.

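Rough sketch of a hard-coded body-centered action script (Python; the
predicate name, angles, and threshold are all invented for
illustration):

#+begin_src python
import math

# Hypothetical: "experiences" is a list of per-frame proprioception
# data, here reduced to a list of joint bend angles (radians) along
# the worm's spine.
def curled(experiences, threshold=math.pi / 2):
    """True if, in the most recent frame, the worm's joints bend
    consistently in one direction by at least `threshold` in total."""
    if not experiences:
        return False
    angles = experiences[-1]
    same_sign = all(a >= 0 for a in angles) or all(a <= 0 for a in angles)
    return same_sign and abs(sum(angles)) > threshold

# e.g. three joints, each bent ~0.7 rad the same way -> curled
print(curled([[0.7, 0.8, 0.75]]))   # True
print(curled([[0.7, -0.8, 0.1]]))   # False
#+end_src
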
** Ordered from easiest to hardest

Just report the positions of everything. I don't think that this
necessarily shows anything useful.

Worm-segment vision -- you initialize a view of the worm, but instead
of pixels you use labels via ray tracing. Has the advantage of still
allowing for visual occlusion, but reliably identifies the objects,
even without rainbow coloring. You can code this as an image.

Same as above, except just with worm/non-worm labels.

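Quick sketch of how a label image might be coded and used (Python; the
label image here is made up, not output from any real renderer).
Occlusion is handled for free, since hidden segments just contribute
fewer labeled pixels:

#+begin_src python
import numpy as np

# Hypothetical label image: 0 = background, 1..N = worm segment IDs,
# as a ray tracer might emit instead of RGB pixels.
label_image = np.array([[0, 0, 1, 1, 0],
                        [0, 2, 2, 1, 0],
                        [3, 3, 2, 0, 0]])

def segment_masks(labels):
    """Map each segment id to a boolean mask of its visible pixels."""
    return {seg: labels == seg for seg in np.unique(labels) if seg != 0}

for seg, mask in segment_masks(label_image).items():
    print(seg, int(mask.sum()), "visible pixels")
#+end_src
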
Color code each worm segment and then recognize them using blob
detectors. Then you solve for the perspective and the action
simultaneously.

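Sketch of pulling per-segment blobs out of a color-coded render
(Python; scipy's connected-component labeling stands in for whatever
blob detector I'd actually use, and the colors/tolerances are made
up):

#+begin_src python
import numpy as np
from scipy import ndimage

# Hypothetical segment colors (RGB) used to paint the worm in the render.
SEGMENT_COLORS = {1: (255, 0, 0), 2: (0, 255, 0), 3: (0, 0, 255)}

def blob_centers(image, tolerance=30):
    """For each color-coded segment, find the centroid of its largest blob."""
    centers = {}
    for seg, color in SEGMENT_COLORS.items():
        mask = np.all(np.abs(image.astype(int) - color) < tolerance, axis=-1)
        labeled, n = ndimage.label(mask)
        if n == 0:
            continue
        sizes = ndimage.sum(mask, labeled, range(1, n + 1))
        biggest = int(np.argmax(sizes)) + 1
        centers[seg] = ndimage.center_of_mass(mask, labeled, biggest)
    return centers

# e.g. a tiny render where segment 1 occupies the left half:
img = np.zeros((4, 4, 3), dtype=np.uint8)
img[:, :2] = (255, 0, 0)
print(blob_centers(img))   # {1: (1.5, 0.5)}
#+end_src
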
The entire worm can be colored the same high-contrast color against a
nearly black background.

"Rooted" vision. You give the exact coordinates of ONE piece of the
worm, but the algorithm figures out the rest.

More rooted vision -- start off the entire worm with one position.

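Minimal sketch of the rooted idea (Python; segment lengths and joint
angles are invented): given the exact coordinates of ONE segment,
chain the known lengths and bend angles outward to place the rest:

#+begin_src python
import math

def reconstruct_chain(root_xy, root_heading, segment_lengths, joint_angles):
    """Given the 2D position and heading of the rooted segment, walk
    down the chain, turning by each joint angle, and return every
    segment's center position."""
    x, y = root_xy
    heading = root_heading
    centers = [(x, y)]
    for length, bend in zip(segment_lengths, joint_angles):
        heading += bend
        x += length * math.cos(heading)
        y += length * math.sin(heading)
        centers.append((x, y))
    return centers

# e.g. a 4-segment worm rooted at the origin, gently curling:
print(reconstruct_chain((0.0, 0.0), 0.0, [1.0, 1.0, 1.0], [0.3, 0.3, 0.3]))
#+end_src
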
The right way to do alignment is to use motion over multiple frames to
snap individual pieces of the model into place, sharing and
propagating the individual alignments over the whole model. We also
want to limit the alignment search to just those actions we are
prepared to identify. This might mean that I need some small "micro
actions" such as the individual movements of the worm pieces.

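Toy sketch of the share-and-propagate idea (Python; the chain
structure and offsets are invented): a few segments get snapped into
place by motion evidence, and their alignments get pushed to
unaligned neighbors along the chain:

#+begin_src python
# Hypothetical: segments 0..n-1 form a chain; `aligned` maps segment
# index to its estimated position; `offsets[i]` is the expected
# displacement from segment i to segment i+1 in the current pose.
def propagate_alignments(n_segments, aligned, offsets):
    positions = dict(aligned)
    changed = True
    while changed:
        changed = False
        for i in range(n_segments - 1):
            if i in positions and i + 1 not in positions:
                x, y = positions[i]
                dx, dy = offsets[i]
                positions[i + 1] = (x + dx, y + dy)
                changed = True
            elif i + 1 in positions and i not in positions:
                x, y = positions[i + 1]
                dx, dy = offsets[i]
                positions[i] = (x - dx, y - dy)
                changed = True
    return positions

# segments 0 and 3 were snapped by motion; 1 and 2 get filled in.
print(propagate_alignments(4, {0: (0, 0), 3: (3, 0)},
                           [(1, 0), (1, 0), (1, 0)]))
#+end_src
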
Get just the centers of each segment projected onto the imaging
plane. (Best so far.)
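
Sketch of the projection itself (Python; plain pinhole model with a
made-up focal length, not the engine's actual camera):

#+begin_src python
import numpy as np

def project_centers(centers_world, focal_length=1.0):
    """Pinhole-project 3D segment centers (given in camera coordinates,
    z pointing into the scene) onto the image plane."""
    pts = np.asarray(centers_world, dtype=float)
    z = pts[:, 2]
    return np.stack([focal_length * pts[:, 0] / z,
                     focal_length * pts[:, 1] / z], axis=1)

# e.g. three segment centers two to four units in front of the camera:
print(project_centers([[0.0, 0.0, 2.0], [0.5, 0.1, 3.0], [1.0, 0.2, 4.0]]))
#+end_src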

Repertoire of actions + video frames -->
  directed multi-frame-search alg

!! Could also have a bounding box around the worm provided by
filtering the worm/non-worm render, and use bbbgs. As a bonus, I get
to include bbbgs in my thesis! Could finally do that recursive thing
where I make bounding boxes be those things that give results that
give good bounding boxes. If I did this I could use a disruptive
pattern on the worm.

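Sketch of getting the bounding box out of the filtered worm/non-worm
render (Python/numpy; this only shows the input bbbgs would get, not
bbbgs itself):

#+begin_src python
import numpy as np

def worm_bounding_box(worm_mask):
    """Axis-aligned bounding box (row_min, row_max, col_min, col_max)
    of the True pixels in a worm/non-worm mask, or None if the worm
    is not visible."""
    rows, cols = np.nonzero(worm_mask)
    if rows.size == 0:
        return None
    return rows.min(), rows.max(), cols.min(), cols.max()

mask = np.zeros((6, 6), dtype=bool)
mask[2:4, 1:5] = True
print(worm_bounding_box(mask))   # (2, 3, 1, 4)
#+end_src
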
Re-imagining using default textures is very simple for this system,
but hard for others.

Want to demonstrate, at minimum, alignment of some model of the worm
to the video, and a lookup of the action by simulated perception.

note: the purple/white point texture is very beautiful, because when
the worm moves slightly, the white dots look like they're twinkling.
It would look even better in a darker purple, and with the dots more
spread out.

Embed the assumption of one frame of view; search by moving around in
the simulated world.

Allowed to limit the search to a hemisphere around the imagined worm!
This limits scale also.
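
Sketch of the hemisphere limit (Python; radius and sampling density
are arbitrary): candidate camera positions are only ever drawn from a
hemisphere around the imagined worm, which bounds viewpoint and scale
at the same time:

#+begin_src python
import math

def hemisphere_viewpoints(center, radius, n_azimuth=8, n_elevation=3):
    """Candidate camera positions on the upper hemisphere of the given
    radius around the worm's center; the search never leaves this set."""
    cx, cy, cz = center
    views = []
    for i in range(n_azimuth):
        theta = 2 * math.pi * i / n_azimuth
        for j in range(1, n_elevation + 1):
            phi = (math.pi / 2) * j / (n_elevation + 1)  # above the ground plane
            views.append((cx + radius * math.cos(theta) * math.cos(phi),
                          cy + radius * math.sin(theta) * math.cos(phi),
                          cz + radius * math.sin(phi)))
    return views

print(len(hemisphere_viewpoints((0, 0, 0), radius=5.0)))  # 24 candidate views
#+end_src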

!! Limited search with worm/non-worm rendering.
How much inverse kinematics do we have to do?
What about cached (allowed state-space) paths, derived from labeled
training? You have to lead from one to another.

What about the initial state? Could start the input videos at a
specific state, then just match that explicitly.

!! The training doesn't have to be labeled -- you can just move around
for a while!!

!! Limited search with motion-based alignment.

"Play arounds" can establish a chain of linked sensoriums. Future
matches must fall into one of the already experienced things, and once
they do, it greatly limits the things that are possible in the future.
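
Rough sketch of the chain-of-linked-sensoriums idea (Python; the
states and the matching are stand-ins): unlabeled play builds a
transition structure, and once the current frame matches a known
snapshot, only its successors need to be searched next:

#+begin_src python
from collections import defaultdict

def build_transitions(play_sequence):
    """From an unlabeled 'play around' sequence of sensorium snapshots,
    record which snapshot indices can follow which."""
    successors = defaultdict(set)
    for i in range(len(play_sequence) - 1):
        successors[i].add(i + 1)
    return successors

def next_candidates(current_match, successors, all_indices):
    """If the previous frame matched a known snapshot, only its
    successors are possible now; otherwise everything is on the table."""
    if current_match is None:
        return set(all_indices)
    return successors.get(current_match, set())

# e.g. a 5-snapshot play sequence; after matching snapshot 2,
# only snapshot 3 needs to be considered for the next frame.
succ = build_transitions(range(5))
print(next_candidates(2, succ, range(5)))   # {3}
#+end_src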

Frame differences help to detect muscle exertion.
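
Sketch of the frame-difference cue (Python; the frames and threshold
are made up):

#+begin_src python
import numpy as np

def exertion_frames(frames, threshold=5.0):
    """Indices of frames whose mean absolute difference from the
    previous frame exceeds a threshold -- a crude cue that muscles
    are working."""
    diffs = [np.mean(np.abs(frames[i].astype(float) - frames[i - 1]))
             for i in range(1, len(frames))]
    return [i for i, d in enumerate(diffs, start=1) if d > threshold]

# e.g. three tiny grayscale frames; only the second transition is "active".
f0 = np.zeros((4, 4), dtype=np.uint8)
f1 = f0.copy()
f2 = np.full((4, 4), 40, dtype=np.uint8)
print(exertion_frames([f0, f1, f2]))   # [2]
#+end_src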

Can try to match on a few "representative" frames. Can also just have
a few "bodies" in various states which we try to match.

Paths through state-space have the exact same signature as
simulation. BUT, these can be searched in parallel and don't interfere
with each other.
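
Small sketch of why stored state-space paths are nice (Python/numpy;
shapes invented): many candidate paths can be scored against the
observed phi-stream in one shot, with no simulation state to
interfere:

#+begin_src python
import numpy as np

def score_paths(observed, paths):
    """Mean squared error of each stored path against the observed
    proprioception stream.  `observed` is (frames, joints); `paths` is
    (n_paths, frames, joints).  All paths are scored at once."""
    return ((paths - observed[None, :, :]) ** 2).mean(axis=(1, 2))

observed = np.array([[0.0, 0.1], [0.2, 0.3], [0.4, 0.5]])
paths = np.stack([observed,          # the matching path
                  observed + 1.0])   # a clearly wrong path
print(np.argmin(score_paths(observed, paths)))  # 0
#+end_src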

** Final stretch up to First Draft

*** DONE complete debug control of worm
    CLOSED: [2014-03-17 Mon 17:29] SCHEDULED: <2014-03-17 Mon>
    CLOCK: [2014-03-17 Mon 14:01]--[2014-03-17 Mon 17:29] => 3:28
*** DONE add phi-space output to debug control
    CLOSED: [2014-03-17 Mon 17:42] SCHEDULED: <2014-03-17 Mon>
    CLOCK: [2014-03-17 Mon 17:31]--[2014-03-17 Mon 17:42] => 0:11

*** DONE complete automatic touch partitioning
    CLOSED: [2014-03-18 Tue 21:43] SCHEDULED: <2014-03-18 Tue>
*** DONE complete cyclic predicate
    CLOSED: [2014-03-19 Wed 16:34] SCHEDULED: <2014-03-18 Tue>
    CLOCK: [2014-03-19 Wed 13:16]--[2014-03-19 Wed 16:34] => 3:18
*** DONE complete three phi-stream action predicates; test them with debug control
    CLOSED: [2014-03-19 Wed 16:35] SCHEDULED: <2014-03-17 Mon>
    CLOCK: [2014-03-18 Tue 18:36]--[2014-03-18 Tue 21:43] => 3:07
    CLOCK: [2014-03-18 Tue 18:34]--[2014-03-18 Tue 18:36] => 0:02
    CLOCK: [2014-03-17 Mon 19:19]--[2014-03-17 Mon 21:19] => 2:00
*** DONE build an automatic "do all the things" sequence.
    CLOSED: [2014-03-19 Wed 16:55] SCHEDULED: <2014-03-19 Wed>
    CLOCK: [2014-03-19 Wed 16:53]--[2014-03-19 Wed 16:55] => 0:02
*** DONE implement proprioception-based movement lookup in phi-space
    CLOSED: [2014-03-19 Wed 22:04] SCHEDULED: <2014-03-19 Wed>
    CLOCK: [2014-03-19 Wed 19:32]--[2014-03-19 Wed 22:04] => 2:32
*** DONE make proprioception reference phi-space indexes
    CLOSED: [2014-03-19 Wed 22:47] SCHEDULED: <2014-03-19 Wed>

*** DONE create test videos, also record positions of worm segments
    CLOSED: [2014-03-20 Thu 22:02] SCHEDULED: <2014-03-19 Wed>

*** TODO Collect intro, worm-learn and cortex creation into draft thesis.