changeset 448:af13fc73e851
completing second part of first chapter.
author   | Robert McIntyre <rlm@mit.edu>
date     | Tue, 25 Mar 2014 22:54:41 -0400
parents  | 284316604be0
children | 09b7c8dd4365
files    | assets/Models/worm/worm.blend thesis/cortex.org thesis/images/blender-worm.png thesis/images/full-hand.png thesis/images/worm-with-muscle.png
diffstat | 5 files changed, 107 insertions(+), 40 deletions(-)
line diff
1.1 Binary file assets/Models/worm/worm.blend has changed
2.1 --- a/thesis/cortex.org Tue Mar 25 11:30:15 2014 -0400
2.2 +++ b/thesis/cortex.org Tue Mar 25 22:54:41 2014 -0400
2.3 @@ -41,16 +41,10 @@
2.4 what is happening here?
2.5
2.6 Or suppose that you are building a program that recognizes chairs.
2.7 - How could you ``see'' the chair in figure \ref{invisible-chair} and
2.8 - figure \ref{hidden-chair}?
2.9 -
2.10 - #+caption: When you look at this, do you think ``chair''? I certainly do.
2.11 - #+name: invisible-chair
2.12 - #+ATTR_LaTeX: :width 10cm
2.13 - [[./images/invisible-chair.png]]
2.14 + How could you ``see'' the chair in figure \ref{hidden-chair}?
2.15
2.16 #+caption: The chair in this image is quite obvious to humans, but I
2.17 - #+caption: doubt that any computer program can find it.
2.18 + #+caption: doubt that any modern computer vision program can find it.
2.19 #+name: hidden-chair
2.20 #+ATTR_LaTeX: :width 10cm
2.21 [[./images/fat-person-sitting-at-desk.jpg]]
2.22 @@ -62,7 +56,7 @@
2.23 #+caption: to discern the difference in how the girl's arm muscles
2.24 #+caption: are activated between the two images.
2.25 #+name: girl
2.26 - #+ATTR_LaTeX: :width 10cm
2.27 + #+ATTR_LaTeX: :width 7cm
2.28 [[./images/wall-push.png]]
2.29
2.30 Each of these examples tells us something about what might be going
2.31 @@ -85,31 +79,31 @@
2.32 problems above in a form amenable to computation. It is split into
2.33 four parts:
2.34
2.35 - - Free/Guided Play (Training) :: The creature moves around and
2.36 - experiences the world through its unique perspective. Many
2.37 - otherwise complicated actions are easily described in the
2.38 - language of a full suite of body-centered, rich senses. For
2.39 - example, drinking is the feeling of water sliding down your
2.40 - throat, and cooling your insides. It's often accompanied by
2.41 - bringing your hand close to your face, or bringing your face
2.42 - close to water. Sitting down is the feeling of bending your
2.43 - knees, activating your quadriceps, then feeling a surface with
2.44 - your bottom and relaxing your legs. These body-centered action
2.45 - descriptions can be either learned or hard coded.
2.46 - - Alignment (Posture imitation) :: When trying to interpret a video
2.47 - or image, the creature takes a model of itself and aligns it
2.48 - with whatever it sees. This alignment can even cross species,
2.49 - as when humans try to align themselves with things like
2.50 - ponies, dogs, or other humans with a different body type.
2.51 - - Empathy (Sensory extrapolation) :: The alignment triggers
2.52 - associations with sensory data from prior experiences. For
2.53 - example, the alignment itself easily maps to proprioceptive
2.54 - data. Any sounds or obvious skin contact in the video can to a
2.55 - lesser extent trigger previous experience. Segments of
2.56 - previous experiences are stitched together to form a coherent
2.57 - and complete sensory portrait of the scene.
2.58 - - Recognition (Classification) :: With the scene described in terms
2.59 - of first person sensory events, the creature can now run its
2.60 + - Free/Guided Play :: The creature moves around and experiences the
2.61 + world through its unique perspective. Many otherwise
2.62 + complicated actions are easily described in the language of a
2.63 + full suite of body-centered, rich senses. For example,
2.64 + drinking is the feeling of water sliding down your throat, and
2.65 + cooling your insides. It's often accompanied by bringing your
2.66 + hand close to your face, or bringing your face close to water.
2.67 + Sitting down is the feeling of bending your knees, activating
2.68 + your quadriceps, then feeling a surface with your bottom and
2.69 + relaxing your legs. These body-centered action descriptions
2.70 + can be either learned or hard coded.
2.71 + - Posture Imitation :: When trying to interpret a video or image,
2.72 + the creature takes a model of itself and aligns it with
2.73 + whatever it sees. This alignment can even cross species, as
2.74 + when humans try to align themselves with things like ponies,
2.75 + dogs, or other humans with a different body type.
2.76 + - Empathy :: The alignment triggers associations with
2.77 + sensory data from prior experiences. For example, the
2.78 + alignment itself easily maps to proprioceptive data. Any
2.79 + sounds or obvious skin contact in the video can to a lesser
2.80 + extent trigger previous experience. Segments of previous
2.81 + experiences are stitched together to form a coherent and
2.82 + complete sensory portrait of the scene.
2.83 + - Recognition :: With the scene described in terms of first
2.84 + person sensory events, the creature can now run its
2.85 action-identification programs on this synthesized sensory
2.86 data, just as it would if it were actually experiencing the
2.87 scene first-hand. If previous experience has been accurately
2.88 @@ -193,16 +187,16 @@
2.89 model of your body, and aligns the model with the video. Then, you
2.90 need a /recognizer/, which uses the aligned model to interpret the
2.91 action. The power in this method lies in the fact that you describe
2.92 - all actions form a body-centered, viewpoint You are less tied to
2.93 + all actions from a body-centered viewpoint. You are less tied to
2.94 the particulars of any visual representation of the actions. If you
2.95 teach the system what ``running'' is, and you have a good enough
2.96 aligner, the system will from then on be able to recognize running
2.97 from any point of view, even strange points of view like above or
2.98 underneath the runner. This is in contrast to action recognition
2.99 - schemes that try to identify actions using a non-embodied approach
2.100 - such as TODO:REFERENCE. If these systems learn about running as
2.101 - viewed from the side, they will not automatically be able to
2.102 - recognize running from any other viewpoint.
2.103 + schemes that try to identify actions using a non-embodied approach.
2.104 + If these systems learn about running as viewed from the side, they
2.105 + will not automatically be able to recognize running from any other
2.106 + viewpoint.
2.107
2.108 Another powerful advantage is that using the language of multiple
2.109 body-centered rich senses to describe body-centered actions offers a
2.110 @@ -234,8 +228,81 @@
2.111
2.112 ** =CORTEX= is a toolkit for building sensate creatures
2.113
2.114 - Hand integration demo
2.115 + I built =CORTEX= to be a general AI research platform for doing
2.116 + experiments involving multiple rich senses and a wide variety and
2.117 + number of creatures. I intend it to be useful as a library for many
2.118 + more projects than just this one. =CORTEX= was necessary to meet a
2.119 + need among AI researchers at CSAIL and beyond, which is that people
2.120 + often will invent neat ideas that are best expressed in the
2.121 + language of creatures and senses, but in order to explore those
2.122 + ideas they must first build a platform in which they can create
2.123 + simulated creatures with rich senses! There are many ideas that
2.124 + would be simple to execute (such as =EMPATH=), but attached to them
2.125 + is the multi-month effort to make a good creature simulator. Often,
2.126 + that initial investment of time proves to be too much, and the
2.127 + project must make do with a lesser environment.
2.128
2.129 + =CORTEX= is well suited as an environment for embodied AI research
2.130 + for three reasons:
2.131 +
2.132 + - You can create new creatures using Blender, a popular 3D modeling
2.133 + program. Each sense can be specified using special Blender nodes
2.134 + with biologically inspired parameters. You need not write any
2.135 + code to create a creature, and can use a wide library of
2.136 + pre-existing Blender models as a base for your own creatures.
2.137 +
2.138 + - =CORTEX= implements a wide variety of senses, including touch,
2.139 + proprioception, vision, hearing, and muscle tension. Complicated
2.140 + senses like touch and vision involve multiple sensory elements
2.141 + embedded in a 2D surface. You have complete control over the
2.142 + distribution of these sensor elements through the use of simple
2.143 + PNG image files. In particular, =CORTEX= implements more
2.144 + comprehensive hearing than any other creature simulation system
2.145 + available.
2.146 +
2.147 + - =CORTEX= supports any number of creatures and any number of
2.148 + senses. Time in =CORTEX= dilates so that the simulated creatures
2.149 + always perceive a perfectly smooth flow of time, regardless of
2.150 + the actual computational load.
2.151 +
2.152 + =CORTEX= is built on top of =jMonkeyEngine3=, which is a video game
2.153 + engine designed to create cross-platform 3D desktop games. =CORTEX=
2.154 + is mainly written in Clojure, a dialect of =LISP= that runs on the
2.155 + Java virtual machine (JVM). The API for creating and simulating
2.156 + creatures is entirely expressed in Clojure. Hearing is implemented
2.157 + as a layer of Clojure code on top of a layer of Java code on top of
2.158 + a layer of =C++= code which implements a modified version of
2.159 + =OpenAL= to support multiple listeners. =CORTEX= is the only
2.160 + simulation environment that I know of that can support multiple
2.161 + entities that can each hear the world from their own perspective.
2.162 + Other senses also require a small layer of Java code. =CORTEX= also
2.163 + uses =bullet=, a physics simulator written in =C++=.
2.164 +
2.165 + #+caption: Here is the worm from above modeled in Blender, a free
2.166 + #+caption: 3D-modeling program. Senses and joints are described
2.167 + #+caption: using special nodes in Blender.
2.168 + #+name: worm-recognition-intro
2.169 + #+ATTR_LaTeX: :width 12cm
2.170 + [[./images/blender-worm.png]]
2.171 +
2.172 + During one test with =CORTEX=, I created 3,000 entities each with
2.173 + their own independent senses and ran them all at only 1/80 real
2.174 + time. In another test, I created a detailed model of my own hand,
2.175 + equipped with a realistic distribution of touch (more sensitive at
2.176 + the fingertips), as well as eyes and ears, and it ran at around 1/4
2.177 + real time.
2.178 +
2.179 + #+caption: A model of my own hand, created in Blender and equipped
2.180 + #+caption: with eyes, ears, and a realistic distribution of touch
2.181 + #+caption: sensors (more sensitive at the fingertips).
2.182 + #+name: full-hand
2.183 + #+ATTR_LaTeX: :width 15cm
2.184 + [[./images/full-hand.png]]
2.185 +
2.186 +
2.187 +
2.188 +
2.189 +
2.190 ** Contributions
2.191
2.192 * Building =CORTEX=
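The four-part description of =EMPATH= added in the diff above (free/guided play, posture imitation, empathy, recognition) amounts to a small pipeline. The Clojure sketch below only illustrates that flow; every function it defines or calls (=record-experience!=, =align-model=, =extrapolate-senses=, =classify-action=) is a hypothetical placeholder, not part of the actual =EMPATH= or =CORTEX= code.

#+begin_src clojure
(ns empath.sketch)

;; A minimal sketch of the EMPATH pipeline described above.  The four
;; helpers are stubs, defined only so the sketch is self-contained; in
;; the real system each would be a substantial component.
(defn record-experience! [creature action] {:action action :senses []})
(defn align-model        [creature video]  {:posture :unknown})
(defn extrapolate-senses [experiences posture] {:proprioception posture})
(defn classify-action    [experiences senses]  :unknown-action)

(defn free-play
  "Training: the creature acts in the world and records what each
  action feels like from its own body-centered perspective."
  [creature actions]
  (into {} (map (fn [action] [action (record-experience! creature action)])
                actions)))

(defn empathize
  "Interpretation: align a self-model with the video (posture
  imitation), fill in the remaining senses from prior experience
  (empathy), then classify the synthesized sensory scene (recognition)."
  [creature experiences video]
  (->> video
       (align-model creature)
       (extrapolate-senses experiences)
       (classify-action experiences)))
#+end_src

Under this sketch, a library of experiences gathered by =free-play= is what lets =empathize= label new video purely in terms of what the creature itself would feel.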
3.1 Binary file thesis/images/blender-worm.png has changed
4.1 Binary file thesis/images/full-hand.png has changed
5.1 Binary file thesis/images/worm-with-muscle.png has changed
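The changeset also claims that time in =CORTEX= dilates so that simulated creatures always perceive a perfectly smooth flow of time, regardless of computational load. One way such a guarantee can be arranged is to advance the world by a fixed simulated timestep per frame, decoupled from wall-clock time; the sketch below illustrates only that idea, and =step-physics= is a hypothetical placeholder rather than the actual =CORTEX= function.

#+begin_src clojure
(ns cortex.time-sketch)

;; Illustrative only: simulated time advances by a fixed step per frame,
;; so a creature's senses are sampled at evenly spaced simulated times
;; even when a frame takes far longer than the step to compute.

(def timestep 1/60)  ; simulated seconds per frame, independent of load

(defn step-physics
  "Advance the world by exactly `dt` simulated seconds (placeholder)."
  [world dt]
  (update world :sim-time (fnil + 0) dt))

(defn simulate
  "Run `frames` frames of fixed simulated time."
  [world frames]
  (reduce (fn [w _] (step-physics w timestep))
          world
          (range frames)))

;; 80 frames always advance simulated time by 80/60 = 4/3 simulated
;; seconds, however long they take in wall-clock time:
;; (simulate {:sim-time 0} 80)  ;=> {:sim-time 4/3}
#+end_src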