diff thesis/aux/org/roadmap.org @ 422:6b0f77df0e53

building latex scaffolding for thesis.
author Robert McIntyre <rlm@mit.edu>
date Fri, 21 Mar 2014 01:17:41 -0400
parents thesis/org/roadmap.org@7f3581dc58ff
children
line diff
     1.1 --- /dev/null	Thu Jan 01 00:00:00 1970 +0000
     1.2 +++ b/thesis/aux/org/roadmap.org	Fri Mar 21 01:17:41 2014 -0400
     1.3 @@ -0,0 +1,220 @@
     1.4 +In order for this to be a reasonable thesis that I can be proud of,
     1.5 +what are the /minimum/ number of things I need to get done?
     1.6 +
     1.7 +
     1.8 +* worm OR hand registration
     1.9 +  - training from a few examples (2 to start out)
    1.10 +  - aligning the body with the scene
    1.11 +  - generating sensory data
     1.12 +  - matching previously labeled examples using dot-products or some
     1.13 +    other basic thing (see the sketch after this list)
    1.14 +  - showing that it works with different views
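          +
          +A minimal sketch of the dot-product matching idea, in Python purely
          +for illustration (the vector layout, names, and data here are
          +assumptions, not code from the actual system):
          +
          +#+begin_src python
          +import numpy as np
          +
          +def best_match(query, labeled_examples):
          +    """Return the label of the stored example whose normalized
          +    sensory vector has the largest dot-product with the query."""
          +    q = query / np.linalg.norm(query)
          +    best_label, best_score = None, float("-inf")
          +    for label, example in labeled_examples:
          +        e = example / np.linalg.norm(example)
          +        score = float(np.dot(q, e))
          +        if score > best_score:
          +            best_label, best_score = label, score
          +    return best_label, best_score
          +
          +# hypothetical usage: two training examples, one query
          +examples = [("curled", np.random.rand(64)),
          +            ("wiggling", np.random.rand(64))]
          +print(best_match(np.random.rand(64), examples))
          +#+end_src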
    1.15 +
    1.16 +* first draft
    1.17 +  - draft of thesis without bibliography or formatting
    1.18 +  - should have basic experiment and have full description of
    1.19 +    framework with code
    1.20 +  - review with Winston
    1.21 +  
    1.22 +* final draft
    1.23 +  - implement stretch goals from Winston if possible
    1.24 +  - complete final formatting and submit
    1.25 +
    1.26 +* CORTEX
    1.27 +  DEADLINE: <2014-05-09 Fri>
    1.28 +  SHIT THAT'S IN 67 DAYS!!!
    1.29 +
    1.30 +** program simple feature matching code for the worm's segments
    1.31 +
    1.32 +Subgoals:
    1.33 +*** DONE Get cortex working again, run tests, no jmonkeyengine updates
    1.34 +    CLOSED: [2014-03-03 Mon 22:07] SCHEDULED: <2014-03-03 Mon>
    1.35 +*** DONE get blender working again
    1.36 +    CLOSED: [2014-03-03 Mon 22:43] SCHEDULED: <2014-03-03 Mon>
     1.37 +*** DONE make sparse touch worm segment in blender
    1.38 +    CLOSED: [2014-03-03 Mon 23:16] SCHEDULED: <2014-03-03 Mon>
    1.39 +    CLOCK: [2014-03-03 Mon 22:44]--[2014-03-03 Mon 23:16] =>  0:32
    1.40 +*** DONE make multi-segment touch worm with touch sensors and display
    1.41 +    CLOSED: [2014-03-03 Mon 23:54] SCHEDULED: <2014-03-03 Mon>
    1.42 +
    1.43 +*** DONE Make a worm wiggle and curl
    1.44 +    CLOSED: [2014-03-04 Tue 23:03] SCHEDULED: <2014-03-04 Tue>
    1.45 +
    1.46 +
    1.47 +** First draft
    1.48 +
    1.49 +Subgoals:
    1.50 +*** Writeup new worm experiments.
    1.51 +*** Triage implementation code and get it into chapter form.
    1.52 +
    1.53 +
    1.54 +
    1.55 + 
    1.56 +
    1.57 +** for today
    1.58 +
    1.59 +- guided worm :: control the worm with the keyboard. Useful for
     1.60 +                 testing the body-centered recognition scripts, and for
    1.61 +                 preparing a cool demo video.
    1.62 +
     1.63 +- body-centered recognition :: detect actions using hard-coded
     1.64 +     body-centered scripts (see the sketch after this list).
    1.65 +
    1.66 +- cool demo video of the worm being moved and recognizing things ::
    1.67 +     will be a neat part of the thesis.
    1.68 +
    1.69 +- thesis export :: refactoring and organization of code so that it
    1.70 +                   spits out a thesis in addition to the web page.
    1.71 +
    1.72 +- video alignment :: analyze the frames of a video in order to align
    1.73 +     the worm. Requires body-centered recognition. Can "cheat".
    1.74 +
    1.75 +- smoother actions :: use debugging controls to directly influence the
     1.76 +     demo actions, and to generate recognition procedures.
    1.77 +
    1.78 +- degenerate video demonstration :: show the system recognizing a
    1.79 +     curled worm from dead on. Crowning achievement of thesis.
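          +
          +A minimal sketch of what one hard-coded body-centered script could
          +look like, in Python purely for illustration (the joint-angle
          +representation and threshold are assumptions, not code from the
          +actual system):
          +
          +#+begin_src python
          +import math
          +
          +def is_curled(joint_angles, threshold=2 * math.pi):
          +    """Body-centered predicate: the worm counts as 'curled' when the
          +    total bend accumulated along its joints exceeds roughly one full
          +    turn. joint_angles is an assumed list of per-joint bend angles
          +    in radians."""
          +    return abs(sum(joint_angles)) > threshold
          +
          +# hypothetical proprioceptive sample: 8 joints, each bent ~50 degrees
          +sample = [math.radians(50)] * 8
          +print(is_curled(sample))  # True: ~400 degrees of total bend
          +#+end_src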
    1.80 +
    1.81 +** Ordered from easiest to hardest
    1.82 +
    1.83 +Just report the positions of everything. I don't think that this
     1.84 +necessarily shows anything useful.
    1.85 +
    1.86 +Worm-segment vision -- you initialize a view of the worm, but instead
     1.87 +of pixels you use labels via ray tracing. This has the advantage of
     1.88 +still allowing for visual occlusion, while reliably identifying the
     1.89 +objects, even without rainbow coloring. You can encode this as an image.
    1.90 +
    1.91 +Same as above, except just with worm/non-worm labels.
    1.92 +
    1.93 +Color code each worm segment and then recognize them using blob
    1.94 +detectors. Then you solve for the perspective and the action
    1.95 +simultaneously.
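          +
          +A rough sketch of the blob-detector step, in Python with OpenCV
          +purely for illustration; the segment colors, the tolerance, and
          +keeping the largest connected component per color are all
          +assumptions:
          +
          +#+begin_src python
          +import cv2
          +import numpy as np
          +
          +# hypothetical BGR colors painted onto three worm segments
          +SEGMENT_COLORS = {0: (0, 0, 255), 1: (0, 255, 0), 2: (255, 0, 0)}
          +
          +def segment_centers(frame, tolerance=30):
          +    """Find image-plane centers of color-coded segments by
          +    thresholding near each known color and keeping the largest blob."""
          +    centers = {}
          +    for seg_id, color in SEGMENT_COLORS.items():
          +        lower = np.clip(np.array(color) - tolerance, 0, 255).astype(np.uint8)
          +        upper = np.clip(np.array(color) + tolerance, 0, 255).astype(np.uint8)
          +        mask = cv2.inRange(frame, lower, upper)
          +        n, labels, stats, centroids = cv2.connectedComponentsWithStats(mask)
          +        if n > 1:  # component 0 is the background
          +            largest = 1 + int(np.argmax(stats[1:, cv2.CC_STAT_AREA]))
          +            centers[seg_id] = tuple(centroids[largest])
          +    return centers
          +#+end_src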
    1.96 +
     1.97 +The entire worm can be colored the same high-contrast color against a
    1.98 +nearly black background.
    1.99 +
   1.100 +"Rooted" vision. You give the exact coordinates of ONE piece of the
   1.101 +worm, but the algorithm figures out the rest.
   1.102 +
    1.103 +More rooted vision -- start off the entire worm with one position.
   1.104 +
   1.105 +The right way to do alignment is to use motion over multiple frames to
    1.106 +snap individual pieces of the model into place, sharing and
    1.107 +propagating the individual alignments over the whole model. We also
   1.108 +want to limit the alignment search to just those actions we are
   1.109 +prepared to identify. This might mean that I need some small "micro
   1.110 +actions" such as the individual movements of the worm pieces.
   1.111 +
   1.112 +Get just the centers of each segment projected onto the imaging
   1.113 +plane. (best so far).
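          +
          +Those projected centers could come from a simple pinhole projection;
          +a sketch in Python, with invented intrinsics and pose, purely for
          +illustration:
          +
          +#+begin_src python
          +import numpy as np
          +
          +def project_centers(centers_world, K, R, t):
          +    """Project 3D segment centers onto the imaging plane with a
          +    pinhole model: x_img ~ K (R X_world + t)."""
          +    pts = np.asarray(centers_world, dtype=float)  # (N, 3)
          +    cam = (R @ pts.T).T + t                       # world -> camera frame
          +    proj = (K @ cam.T).T                          # apply intrinsics
          +    return proj[:, :2] / proj[:, 2:3]             # perspective divide
          +
          +# hypothetical intrinsics and pose
          +K = np.array([[500.0, 0.0, 320.0], [0.0, 500.0, 240.0], [0.0, 0.0, 1.0]])
          +R, t = np.eye(3), np.array([0.0, 0.0, 5.0])
          +print(project_centers([[0.0, 0.0, 0.0], [0.1, 0.0, 0.0]], K, R, t))
          +#+end_src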
   1.114 +
   1.115 +
   1.116 +Repertoire of actions  +  video frames -->
   1.117 +   directed multi-frame-search alg
   1.118 +
   1.119 +
   1.120 +
   1.121 +
   1.122 +
   1.123 +
   1.124 +!! Could also have a bounding box around the worm provided by
   1.125 +filtering the worm/non-worm render, and use bbbgs. As a bonus, I get
    1.126 +to include bbbgs in my thesis! Could finally do that recursive thing
   1.127 +where I make bounding boxes be those things that give results that
   1.128 +give good bounding boxes. If I did this I could use a disruptive
   1.129 +pattern on the worm.
   1.130 +
    1.131 +Re-imagining using default textures is very simple for this system,
   1.132 +but hard for others.
   1.133 +
   1.134 +
   1.135 +Want to demonstrate, at minimum, alignment of some model of the worm
   1.136 +to the video, and a lookup of the action by simulated perception.
   1.137 +
    1.138 +note: the purple/white point texture is very beautiful, because when
    1.139 +it moves slightly, the white dots look like they're twinkling. It
    1.140 +would look even better if it were a darker purple, and with the dots
    1.141 +more spread out.
   1.142 +
   1.143 +
    1.144 +Embed the assumption of one frame of view, and search by moving
    1.145 +around in the simulated world.
   1.146 +
    1.147 +We are allowed to limit the search to a hemisphere around the
    1.148 +imagined worm! This limits scale as well.
   1.149 +
   1.150 +
   1.151 +
   1.152 +
   1.153 +
   1.154 +!! Limited search with worm/non-worm rendering. 
   1.155 +How much inverse kinematics do we have to do?
   1.156 +What about cached (allowed state-space) paths, derived from labeled
    1.157 +training? You have to lead from one to another.
   1.158 +
   1.159 +What about initial state? Could start the input videos at a specific
   1.160 +state, then just match that explicitly.
   1.161 +
   1.162 +!! The training doesn't have to be labeled -- you can just move around
   1.163 +for a while!!
   1.164 +
   1.165 +!! Limited search with motion based alignment.
   1.166 +
   1.167 +
   1.168 +
   1.169 +
   1.170 +"play arounds" can establish a chain of linked sensoriums. Future
    1.171 +matches must fall into one of the already-experienced sensoriums, and
    1.172 +once they do, that greatly limits what is possible in the future.
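          +
          +One way to read that: free play builds a transition graph over stored
          +experiences, and a match on one experience restricts what the next
          +frame may match. A sketch in Python, with an assumed
          +dictionary-of-sets representation, purely for illustration:
          +
          +#+begin_src python
          +from collections import defaultdict
          +
          +def build_transitions(experience_ids):
          +    """Record which stored experience followed which during free play."""
          +    transitions = defaultdict(set)
          +    for a, b in zip(experience_ids, experience_ids[1:]):
          +        transitions[a].add(b)
          +    return transitions
          +
          +def allowed_next(transitions, current):
          +    """Once a frame matches experience `current`, only these
          +    experiences need to be considered for the next frame."""
          +    return transitions.get(current, set())
          +
          +# hypothetical play session that visits experiences 0..4 and loops back
          +play = [0, 1, 2, 3, 1, 2, 4]
          +print(allowed_next(build_transitions(play), 1))  # {2}
          +#+end_src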
   1.173 +
   1.174 +
    1.175 +Frame differences help to detect muscle exertion.
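          +
          +A minimal frame-differencing sketch for that, in Python, with the
          +frame format and baseline threshold assumed purely for illustration:
          +
          +#+begin_src python
          +import numpy as np
          +
          +def exertion_score(prev_frame, next_frame):
          +    """Mean absolute per-pixel difference between consecutive frames,
          +    used as a crude proxy for how much the worm moved."""
          +    diff = np.abs(next_frame.astype(np.int32) - prev_frame.astype(np.int32))
          +    return float(diff.mean())
          +
          +def exerting(frames, baseline=5.0):
          +    """Flag the frame transitions whose difference exceeds a baseline."""
          +    return [exertion_score(a, b) > baseline
          +            for a, b in zip(frames, frames[1:])]
          +#+end_src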
   1.176 +
   1.177 +Can try to match on a few "representative" frames. Can also just have
   1.178 +a few "bodies" in various states which we try to match.
   1.179 +
   1.180 +
   1.181 +
   1.182 +Paths through state-space have the exact same signature as
   1.183 +simulation. BUT, these can be searched in parallel and don't interfere
   1.184 +with each other.
   1.185 +
   1.186 +
   1.187 +
   1.188 +
   1.189 +** Final stretch up to First Draft
   1.190 +
   1.191 +*** DONE complete debug control of worm
   1.192 +    CLOSED: [2014-03-17 Mon 17:29] SCHEDULED: <2014-03-17 Mon>
   1.193 +    CLOCK: [2014-03-17 Mon 14:01]--[2014-03-17 Mon 17:29] =>  3:28
   1.194 +*** DONE add phi-space output to debug control
   1.195 +    CLOSED: [2014-03-17 Mon 17:42] SCHEDULED: <2014-03-17 Mon>
   1.196 +    CLOCK: [2014-03-17 Mon 17:31]--[2014-03-17 Mon 17:42] =>  0:11
   1.197 +
   1.198 +*** DONE complete automatic touch partitioning
   1.199 +    CLOSED: [2014-03-18 Tue 21:43] SCHEDULED: <2014-03-18 Tue>
   1.200 +*** DONE complete cyclic predicate
   1.201 +    CLOSED: [2014-03-19 Wed 16:34] SCHEDULED: <2014-03-18 Tue>
   1.202 +    CLOCK: [2014-03-19 Wed 13:16]--[2014-03-19 Wed 16:34] =>  3:18
    1.203 +*** DONE complete three phi-stream action predicates; test them with debug control
   1.204 +    CLOSED: [2014-03-19 Wed 16:35] SCHEDULED: <2014-03-17 Mon>
   1.205 +    CLOCK: [2014-03-18 Tue 18:36]--[2014-03-18 Tue 21:43] =>  3:07
   1.206 +    CLOCK: [2014-03-18 Tue 18:34]--[2014-03-18 Tue 18:36] =>  0:02
   1.207 +    CLOCK: [2014-03-17 Mon 19:19]--[2014-03-17 Mon 21:19] =>  2:00
   1.208 +*** DONE build an automatic "do all the things" sequence.
   1.209 +    CLOSED: [2014-03-19 Wed 16:55] SCHEDULED: <2014-03-19 Wed>
   1.210 +    CLOCK: [2014-03-19 Wed 16:53]--[2014-03-19 Wed 16:55] =>  0:02
   1.211 +*** DONE implement proprioception based movement lookup in phi-space
   1.212 +    CLOSED: [2014-03-19 Wed 22:04] SCHEDULED: <2014-03-19 Wed>
   1.213 +    CLOCK: [2014-03-19 Wed 19:32]--[2014-03-19 Wed 22:04] =>  2:32
   1.214 +*** DONE make proprioception reference phi-space indexes
   1.215 +    CLOSED: [2014-03-19 Wed 22:47] SCHEDULED: <2014-03-19 Wed>
   1.216 +    CLOCK: [2014-03-19 Wed 22:07]
   1.217 +
   1.218 +
   1.219 +*** DONE create test videos, also record positions of worm segments
   1.220 +    CLOSED: [2014-03-20 Thu 22:02] SCHEDULED: <2014-03-19 Wed>
   1.221 +
   1.222 +*** TODO Collect intro, worm-learn and cortex creation into draft thesis. 
   1.223 +