diff thesis/org/roadmap.org @ 401:7ee735a836da

incorporate thesis.
author Robert McIntyre <rlm@mit.edu>
date Sun, 16 Mar 2014 23:31:16 -0400
parents
children a533a0038bd7
     1.1 --- /dev/null	Thu Jan 01 00:00:00 1970 +0000
     1.2 +++ b/thesis/org/roadmap.org	Sun Mar 16 23:31:16 2014 -0400
     1.3 @@ -0,0 +1,189 @@
     1.4 +In order for this to be a reasonable thesis that I can be proud of,
     1.5 +what are the /minimum/ number of things I need to get done?
     1.6 +
     1.7 +
     1.8 +* worm OR hand registration
     1.9 +  - training from a few examples (2 to start out)
    1.10 +  - aligning the body with the scene
    1.11 +  - generating sensory data
     1.12 +  - matching previous labeled examples using dot-products or some
     1.13 +    other basic thing (see the sketch after this list)
    1.14 +  - showing that it works with different views
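         +
         +A minimal sketch of the dot-product matching step, assuming sensory
         +snapshots are flattened into equal-length vectors of numbers (the
         +names here are illustrative, not existing cortex code):
         +
         +#+begin_src clojure
         +  ;; Compare a new sensory snapshot against stored labeled examples
         +  ;; using normalized dot-products (cosine similarity).
         +  (defn dot [a b] (reduce + (map * a b)))
         +
         +  (defn cosine-similarity [a b]
         +    (/ (dot a b) (* (Math/sqrt (dot a a)) (Math/sqrt (dot b b)))))
         +
         +  (defn best-match
         +    "Label of the stored example most similar to `sensed`."
         +    [labeled-examples sensed]
         +    (key (apply max-key #(cosine-similarity (val %) sensed)
         +                labeled-examples)))
         +
         +  ;; Two labeled touch snapshots, one unknown reading:
         +  (best-match {:curled   [0.9 0.8 0.1 0.0]
         +               :wiggling [0.1 0.2 0.9 0.8]}
         +              [0.8 0.9 0.2 0.1])
         +  ;; => :curled
         +#+end_src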
    1.15 +
    1.16 +* first draft
    1.17 +  - draft of thesis without bibliography or formatting
    1.18 +  - should have basic experiment and have full description of
    1.19 +    framework with code
    1.20 +  - review with Winston
    1.21 +  
    1.22 +* final draft
    1.23 +  - implement stretch goals from Winston if possible
    1.24 +  - complete final formatting and submit
    1.25 +
    1.26 +
    1.27 +
    1.28 +
    1.29 +* CORTEX
    1.30 +  DEADLINE: <2014-05-09 Fri>
    1.31 +  SHIT THAT'S IN 67 DAYS!!!
    1.32 +
    1.33 +** TODO program simple feature matching code for the worm's segments
    1.34 +   DEADLINE: <2014-03-11 Tue>
    1.35 +Subgoals:
    1.36 +*** DONE Get cortex working again, run tests, no jmonkeyengine updates
    1.37 +    CLOSED: [2014-03-03 Mon 22:07] SCHEDULED: <2014-03-03 Mon>
    1.38 +*** DONE get blender working again
    1.39 +    CLOSED: [2014-03-03 Mon 22:43] SCHEDULED: <2014-03-03 Mon>
     1.40 +*** DONE make sparse touch worm segment in blender
    1.41 +    CLOSED: [2014-03-03 Mon 23:16] SCHEDULED: <2014-03-03 Mon>
    1.42 +    CLOCK: [2014-03-03 Mon 22:44]--[2014-03-03 Mon 23:16] =>  0:32
    1.43 +*** DONE make multi-segment touch worm with touch sensors and display
    1.44 +    CLOSED: [2014-03-03 Mon 23:54] SCHEDULED: <2014-03-03 Mon>
    1.45 +    CLOCK: [2014-03-03 Mon 23:17]--[2014-03-03 Mon 23:54] =>  0:37
    1.46 +    
    1.47 +
    1.48 +*** DONE Make a worm wiggle and curl
    1.49 +    CLOSED: [2014-03-04 Tue 23:03] SCHEDULED: <2014-03-04 Tue>
    1.50 +*** TODO work on alignment for the worm (can "cheat")
    1.51 +    SCHEDULED: <2014-03-05 Wed>
    1.52 +
    1.53 +** First draft
    1.54 +   DEADLINE: <2014-03-14 Fri>
    1.55 +Subgoals:
    1.56 +*** Writeup new worm experiments.
    1.57 +*** Triage implementation code and get it into chapter form.
    1.58 +
    1.59 +
    1.60 +
    1.61 + 
    1.62 +
    1.63 +** for today
    1.64 +
     1.65 +- guided worm :: control the worm with the keyboard. Useful for
     1.66 +                 testing the body-centered recognition scripts, and
     1.67 +                 for preparing a cool demo video.
    1.68 +
     1.69 +- body-centered recognition :: detect actions using hard-coded
     1.70 +     body-centered scripts.
    1.71 +
    1.72 +- cool demo video of the worm being moved and recognizing things ::
    1.73 +     will be a neat part of the thesis.
    1.74 +
    1.75 +- thesis export :: refactoring and organization of code so that it
    1.76 +                   spits out a thesis in addition to the web page.
    1.77 +
    1.78 +- video alignment :: analyze the frames of a video in order to align
    1.79 +     the worm. Requires body-centered recognition. Can "cheat".
    1.80 +
     1.81 +- smoother actions :: use debugging controls to directly influence
     1.82 +     the demo actions, and to generate recognition procedures.
    1.83 +
    1.84 +- degenerate video demonstration :: show the system recognizing a
    1.85 +     curled worm from dead on. Crowning achievement of thesis.
    1.86 +
    1.87 +** Ordered from easiest to hardest
    1.88 +
     1.89 +Just report the positions of everything. I don't think that this
     1.90 +necessarily shows anything useful.
    1.91 +
     1.92 +Worm-segment vision -- you initialize a view of the worm, but instead
     1.93 +of pixels you use labels via ray tracing. This has the advantage of
     1.94 +still allowing for visual occlusion, while reliably identifying the
     1.95 +objects, even without rainbow coloring. You can code this as an image.
    1.96 +
    1.97 +Same as above, except just with worm/non-worm labels.
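         +
         +A sketch of how two worm/non-worm label images could be compared,
         +assuming each frame is a flat vector of booleans (true = worm pixel);
         +this is illustrative, not existing cortex code:
         +
         +#+begin_src clojure
         +  ;; Score the overlap of two worm/non-worm masks with
         +  ;; intersection-over-union (1.0 = identical silhouettes).
         +  (defn mask-iou [a b]
         +    (let [inter (count (filter true? (map #(and %1 %2) a b)))
         +          union (count (filter true? (map #(or %1 %2) a b)))]
         +      (if (zero? union) 1.0 (/ inter (double union)))))
         +
         +  (mask-iou [true true false false]
         +            [true false true false])
         +  ;; => 1/3 (1 shared worm pixel out of 3 in the union)
         +#+end_src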
    1.98 +
    1.99 +Color code each worm segment and then recognize them using blob
   1.100 +detectors. Then you solve for the perspective and the action
   1.101 +simultaneously.
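         +
         +Because the colors are flat and distinct, the blob detector can be as
         +simple as grouping pixels by color and taking centroids. A sketch,
         +assuming a frame arrives as a 2D vector of color keywords with :bg
         +for background (a hypothetical representation):
         +
         +#+begin_src clojure
         +  ;; Map each non-background color to the centroid of its pixels.
         +  (defn blob-centers [frame]
         +    (let [pixels (for [[y row]   (map-indexed vector frame)
         +                       [x color] (map-indexed vector row)
         +                       :when (not= color :bg)]
         +                   [color x y])]
         +      (into {}
         +            (for [[color pts] (group-by first pixels)
         +                  :let [n (double (count pts))]]
         +              [color [(/ (reduce + (map second pts)) n)
         +                      (/ (reduce + (map #(nth % 2) pts)) n)]]))))
         +
         +  ;; Two colored segments on a dark background:
         +  (blob-centers [[:bg :red :red :bg]
         +                 [:bg :red :red :blue]
         +                 [:bg :bg  :blue :blue]])
         +  ;; => {:red [1.5 0.5], :blue [2.666... 1.666...]}
         +#+end_src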
   1.102 +
   1.103 +The entire worm can be colored the same, high contrast color against a
   1.104 +nearly black background.
   1.105 +
   1.106 +"Rooted" vision. You give the exact coordinates of ONE piece of the
   1.107 +worm, but the algorithm figures out the rest.
   1.108 +
    1.109 +More rooted vision -- start off the entire worm with one position.
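         +
         +A sketch of why one rooted point may be enough, in a 2D
         +simplification: given the root and a hypothesized set of joint
         +angles, the rest of the chain follows by forward kinematics, so
         +alignment becomes a search over the angles (names and the 2D setting
         +are assumptions):
         +
         +#+begin_src clojure
         +  ;; Joint positions of a segment chain, starting from `root`, for
         +  ;; segments of length `len` bent by successive relative `angles`
         +  ;; (radians, accumulated into headings).
         +  (defn chain-positions [root len angles]
         +    (reductions (fn [[x y] heading]
         +                  [(+ x (* len (Math/cos heading)))
         +                   (+ y (* len (Math/sin heading)))])
         +                root
         +                (reductions + angles)))
         +
         +  ;; A straight 3-segment worm rooted at the origin:
         +  (chain-positions [0.0 0.0] 1.0 [0.0 0.0 0.0])
         +  ;; => ([0.0 0.0] [1.0 0.0] [2.0 0.0] [3.0 0.0])
         +#+end_src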
   1.110 +
   1.111 +The right way to do alignment is to use motion over multiple frames to
    1.112 +snap individual pieces of the model into place, sharing and
    1.113 +propagating the individual alignments over the whole model. We also
   1.114 +want to limit the alignment search to just those actions we are
   1.115 +prepared to identify. This might mean that I need some small "micro
   1.116 +actions" such as the individual movements of the worm pieces.
   1.117 +
   1.118 +Get just the centers of each segment projected onto the imaging
   1.119 +plane. (best so far).
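         +
         +A sketch of that projection, assuming an ideal pinhole camera at the
         +origin looking down +z with focal length f (the camera model is an
         +assumption, not cortex's rendering API):
         +
         +#+begin_src clojure
         +  ;; Perspective-project 3D segment centers to image coordinates.
         +  (defn project-point [f [x y z]]
         +    [(* f (/ x z)) (* f (/ y z))])
         +
         +  (defn project-centers [f centers]
         +    (mapv (partial project-point f) centers))
         +
         +  ;; Three segment centers at increasing depth:
         +  (project-centers 1.0 [[0.0 0.5 2.0] [0.5 0.5 2.5] [1.0 0.5 3.0]])
         +  ;; => [[0.0 0.25] [0.2 0.2] [0.333... 0.166...]]
         +#+end_src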
   1.120 +
   1.121 +
    1.122 +Repertoire of actions + video frames -->
    1.123 +   directed multi-frame search algorithm (see the skeleton below).
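         +
         +One way to read that arrow is as a beam search: extend candidate
         +action sequences one frame at a time, keeping only the best few.
         +A skeleton, where actions, advance, and frame-score are placeholders
         +for the repertoire, the simulator step, and the per-frame comparison:
         +
         +#+begin_src clojure
         +  ;; Beam search over action sequences, scored frame by frame.
         +  ;; Beam entries are [score state action-path].
         +  (defn beam-search [init actions advance frame-score frames width]
         +    (reduce
         +     (fn [beam frame]
         +       (->> (for [[score state path] beam
         +                  action actions
         +                  :let [state' (advance state action)]]
         +              [(+ score (frame-score state' frame))
         +               state'
         +               (conj path action)])
         +            (sort-by first >)
         +            (take width)))
         +     [[0.0 init []]]
         +     frames))
         +#+end_src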
   1.124 +
   1.125 +
   1.126 +
   1.127 +
   1.128 +
   1.129 +
   1.130 +!! Could also have a bounding box around the worm provided by
   1.131 +filtering the worm/non-worm render, and use bbbgs. As a bonus, I get
    1.132 +to include bbbgs in my thesis! Could finally do that recursive thing
   1.133 +where I make bounding boxes be those things that give results that
   1.134 +give good bounding boxes. If I did this I could use a disruptive
   1.135 +pattern on the worm.
   1.136 +
    1.137 +Re-imagining using default textures is very simple for this system,
    1.138 +but hard for others.
   1.139 +
   1.140 +
   1.141 +Want to demonstrate, at minimum, alignment of some model of the worm
   1.142 +to the video, and a lookup of the action by simulated perception.
   1.143 +
    1.144 +note: the purple/white point pattern is a very beautiful texture,
    1.145 +because when it moves slightly, the white dots look like they're
    1.146 +twinkling. It would look even better if it were a darker purple, and
    1.147 +better still if it were more spread out.
   1.148 +
   1.149 +
    1.150 +Embed the assumption of a single frame of view; search by moving
    1.151 +around in the simulated world.
   1.152 +
    1.153 +The search can be limited to a hemisphere around the imagined worm!
    1.154 +This also limits scale.
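         +
         +A sketch of generating those candidate viewpoints (the uniform
         +azimuth/elevation grid and all names are assumptions):
         +
         +#+begin_src clojure
         +  ;; Candidate camera positions on a hemisphere of radius r above
         +  ;; `center`, sampled on an azimuth/elevation grid.
         +  (defn hemisphere-viewpoints [[cx cy cz] r n-theta n-phi]
         +    (for [i (range n-theta)
         +          j (range n-phi)
         +          :let [theta (* 2.0 Math/PI (/ i n-theta))     ; azimuth
         +                phi   (* (/ Math/PI 2.0) (/ j n-phi))]] ; elevation
         +      [(+ cx (* r (Math/cos phi) (Math/cos theta)))
         +       (+ cy (* r (Math/cos phi) (Math/sin theta)))
         +       (+ cz (* r (Math/sin phi)))]))
         +
         +  ;; 8 azimuth steps x 4 elevation steps = 32 candidate viewpoints:
         +  (count (hemisphere-viewpoints [0 0 0] 5.0 8 4))
         +  ;; => 32
         +#+end_src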
   1.155 +
   1.156 +
   1.157 +
   1.158 +
   1.159 +
   1.160 +!! Limited search with worm/non-worm rendering. 
   1.161 +How much inverse kinematics do we have to do?
    1.162 +What about cached (allowed state-space) paths, derived from labeled
    1.163 +training? Each path has to lead from one state to another.
   1.164 +
   1.165 +What about initial state? Could start the input videos at a specific
   1.166 +state, then just match that explicitly.
   1.167 +
   1.168 +!! The training doesn't have to be labeled -- you can just move around
   1.169 +for a while!!
   1.170 +
   1.171 +!! Limited search with motion based alignment.
   1.172 +
   1.173 +
   1.174 +
   1.175 +
   1.176 +"play arounds" can establish a chain of linked sensoriums. Future
   1.177 +matches must fall into one of the already experienced things, and once
   1.178 +they do, it greatly limits the things that are possible in the future.
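         +
         +A sketch of that constraint, with the experience chain stored as a
         +transition map from each sensorium to its recorded successors (the
         +state ids and the map itself are illustrative):
         +
         +#+begin_src clojure
         +  ;; Once the current sensorium is identified with an experienced
         +  ;; state, only that state's recorded successors need searching.
         +  (def transitions
         +    {:resting  #{:curling :wiggling}
         +     :curling  #{:curled}
         +     :curled   #{:resting}
         +     :wiggling #{:resting :wiggling}})
         +
         +  (defn candidate-states [last-match]
         +    (if last-match
         +      (transitions last-match)
         +      (set (keys transitions))))
         +
         +  (candidate-states nil)       ;; => all four states
         +  (candidate-states :curling)  ;; => #{:curled}
         +#+end_src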
   1.179 +
   1.180 +
    1.181 +Frame differences help to detect muscle exertion.
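         +
         +A sketch, assuming frames are flat vectors of pixel intensities (an
         +illustrative representation):
         +
         +#+begin_src clojure
         +  ;; Sum of absolute per-pixel differences between two frames.
         +  (defn frame-difference [a b]
         +    (reduce + (map #(Math/abs (double (- %1 %2))) a b)))
         +
         +  ;; Difference of each frame from its predecessor: spikes in this
         +  ;; signal suggest active muscle movement.
         +  (defn exertion-signal [frames]
         +    (map frame-difference frames (rest frames)))
         +
         +  (exertion-signal [[0 0 0] [0 0 0] [9 9 9]])
         +  ;; => (0.0 27.0)
         +#+end_src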
   1.182 +
   1.183 +Can try to match on a few "representative" frames. Can also just have
   1.184 +a few "bodies" in various states which we try to match.
   1.185 +
   1.186 +
   1.187 +
   1.188 +Paths through state-space have the exact same signature as
   1.189 +simulation. BUT, these can be searched in parallel and don't interfere
   1.190 +with each other.
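         +
         +A sketch of the parallel part, using pmap to score cached paths
         +independently (the distance-based scoring is a stand-in for whatever
         +comparison the real system uses):
         +
         +#+begin_src clojure
         +  ;; Negative total distance between a candidate path and the
         +  ;; observed frames; both are sequences of state vectors.
         +  (defn path-score [observed path]
         +    (- (reduce + (map (fn [o p]
         +                        (reduce + (map #(Math/abs (double (- %1 %2)))
         +                                       o p)))
         +                      observed path))))
         +
         +  ;; Each candidate path is scored on its own thread; the searches
         +  ;; share nothing and cannot interfere.
         +  (defn best-path [observed paths]
         +    (->> paths
         +         (pmap (juxt (partial path-score observed) identity))
         +         (apply max-key first)
         +         second))
         +#+end_src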
   1.191 +
   1.192 +