# HG changeset patch
# User Robert McIntyre
# Date 1396191005 14400
# Node ID 447c3c8405a25c26a1e0c2c520ab0a08dd93bb47
# Parent  4c4d45f6f30b605314fb51ec02e593dbc7f071dd
accept/reject changes

diff -r 4c4d45f6f30b -r 447c3c8405a2 thesis/cortex.org
--- a/thesis/cortex.org	Sun Mar 30 10:41:18 2014 -0400
+++ b/thesis/cortex.org	Sun Mar 30 10:50:05 2014 -0400
@@ -498,9 +498,9 @@
    is a vast and rich place, and for now simulations are a very poor
    reflection of its complexity. It may be that there is a significant
    qualitative difference between dealing with senses in the real
-   world and dealing with pale facsimiles of them in a simulation.
-   What are the advantages and disadvantages of a simulation vs.
-   reality?
+   world and dealing with pale facsimiles of them in a simulation
+   \cite{brooks-representation}. What are the advantages and
+   disadvantages of a simulation vs. reality?
 
 *** Simulation
 
@@ -578,7 +578,7 @@
    40x on my machine!
 
 ** All sense organs are two-dimensional surfaces
-# What is a sense?
+
    If =CORTEX= is to support a wide variety of senses, it would help
    to have a better understanding of what a ``sense'' actually is!
    While vision, touch, and hearing all seem like they are quite
@@ -1376,7 +1376,7 @@
    capturing in-game video to a file.
 
 ** ...but hearing must be built from scratch
-# is hard; =CORTEX= does it right
+
    At the end of this section I will have simulated ears that work the
    same way as the simulated eyes in the last section. I will be able to
    place any number of ear-nodes in a blender file, and they will bind to
@@ -3163,18 +3163,18 @@
    to interpretation, and disagreement between empathy and experience
    is more excusable.
-** Digression: Learn touch sensor layout through haptic experimentation, instead
-# Bootstrapping touch using free exploration
-In the previous section I showed how to compute actions in terms of
+** Digression: Learn touch sensor layout through free play
+
+  In the previous section I showed how to compute actions in terms of
   body-centered predicates which relied on average touch activation of
-  pre-defined regions of the worm's skin. What if, instead of receiving
-  touch pre-grouped into the six faces of each worm segment, the true
-  topology of the worm's skin was unknown? This is more similar to how
-  a nerve fiber bundle might be arranged. While two fibers that are
-  close in a nerve bundle /might/ correspond to two touch sensors that
-  are close together on the skin, the process of taking a complicated
-  surface and forcing it into essentially a circle requires some cuts
-  and rearrangements.
+  pre-defined regions of the worm's skin. What if, instead of
+  receiving touch pre-grouped into the six faces of each worm
+  segment, the true topology of the worm's skin was unknown? This is
+  more similar to how a nerve fiber bundle might be arranged. While
+  two fibers that are close in a nerve bundle /might/ correspond to
+  two touch sensors that are close together on the skin, the process
+  of taking a complicated surface and forcing it into essentially a
+  circle requires some cuts and rearrangements.
 
   In this section I show how to automatically learn the skin-topology
   of a worm segment by free exploration.
  As the worm rolls around on the
diff -r 4c4d45f6f30b -r 447c3c8405a2 thesis/dylan-accept.diff
--- a/thesis/dylan-accept.diff	Sun Mar 30 10:41:18 2014 -0400
+++ b/thesis/dylan-accept.diff	Sun Mar 30 10:50:05 2014 -0400
@@ -20,3 +20,90 @@
  You have also seen how I used =CORTEX= as a platform to attack
  the /action recognition/ problem, which is the problem of recognizing
+
+@@ -2888,7 +3007,7 @@
+ #+end_src
+ #+end_listing
+ 
+-** Efficient action recognition with =EMPATH=
++** =EMPATH= recognizes actions efficiently
+ 
+ To use =EMPATH= with the worm, I first need to gather a set of
+ experiences from the worm that includes the actions I want to
+
+@@ -3044,9 +3163,9 @@
+ to interpretation, and disagreement between empathy and experience
+ is more excusable.
+ 
+-** Digression: bootstrapping touch using free exploration
+-
+- In the previous section I showed how to compute actions in terms of
++** Digression: Learn touch sensor layout through haptic experimentation, instead
++# Bootstrapping touch using free exploration
++In the previous section I showed how to compute actions in terms of
+ body-centered predicates which relied on average touch activation of
+ pre-defined regions of the worm's skin. What if, instead of receiving
+ touch pre-grouped into the six faces of each worm segment, the true
+
+@@ -2193,7 +2311,7 @@
+ #+ATTR_LaTeX: :width 11cm
+ [[./images/proprio.png]]
+ 
+-** Muscles are both effectors and sensors
++** Muscles contain both sensors and effectors
+ 
+ Surprisingly enough, terrestrial creatures only move by using
+ torque applied about their joints. There's not a single straight
+
+@@ -2059,7 +2177,7 @@
+ #+ATTR_LaTeX: :width 15cm
+ [[./images/touch-cube.png]]
+ 
+-** Proprioception is the sense that makes everything ``real''
++** Proprioception provides knowledge of your own body's position
+ 
+ Close your eyes, and touch your nose with your right index finger.
+ How did you do it?
+ You could not see your hand, and neither your
+
+@@ -1257,8 +1375,8 @@
+ community and is now (in modified form) part of a system for
+ capturing in-game video to a file.
+ 
+-** Hearing is hard; =CORTEX= does it right
+-
++** ...but hearing must be built from scratch
++# is hard; =CORTEX= does it right
+ At the end of this section I will have simulated ears that work the
+ same way as the simulated eyes in the last section. I will be able to
+ place any number of ear-nodes in a blender file, and they will bind to
+
+@@ -1565,7 +1683,7 @@
+ jMonkeyEngine3 community and is used to record audio for demo
+ videos.
+ 
+-** Touch uses hundreds of hair-like elements
++** Hundreds of hair-like elements provide a sense of touch
+ 
+ Touch is critical to navigation and spatial reasoning and as such I
+ need a simulated version of it to give to my AI creatures.
+
+@@ -956,7 +1074,7 @@
+ #+ATTR_LaTeX: :width 15cm
+ [[./images/physical-hand.png]]
+ 
+-** Eyes reuse standard video game components
++** Sight reuses standard video game components...
+ 
+ Vision is one of the most important senses for humans, so I need to
+ build a simulated sense of vision for my AI. I will do this with
+@@ -459,8 +577,8 @@
+ simulations of very simple creatures in =CORTEX= generally run at
+ 40x on my machine!
+ 
+-** What is a sense?
+-
++** All sense organs are two-dimensional surfaces
++# What is a sense?
+ If =CORTEX= is to support a wide variety of senses, it would help
+ to have a better understanding of what a ``sense'' actually is!
+ While vision, touch, and hearing all seem like they are quite
diff -r 4c4d45f6f30b -r 447c3c8405a2 thesis/dylan-cortex-diff.diff
--- a/thesis/dylan-cortex-diff.diff	Sun Mar 30 10:41:18 2014 -0400
+++ b/thesis/dylan-cortex-diff.diff	Sun Mar 30 10:50:05 2014 -0400
@@ -294,64 +294,6 @@
  I envision =CORTEX= being used to support rapid prototyping and
  iteration of ideas.
  Even if I could put together a well-constructed
-@@ -459,8 +577,8 @@
- simulations of very simple creatures in =CORTEX= generally run at
- 40x on my machine!
- 
--** What is a sense?
--
-+** All sense organs are two-dimensional surfaces
-+# What is a sense?
- If =CORTEX= is to support a wide variety of senses, it would help
- to have a better understanding of what a ``sense'' actually is!
- While vision, touch, and hearing all seem like they are quite
-@@ -956,7 +1074,7 @@
- #+ATTR_LaTeX: :width 15cm
- [[./images/physical-hand.png]]
- 
--** Eyes reuse standard video game components
-+** Sight reuses standard video game components...
- 
- Vision is one of the most important senses for humans, so I need to
- build a simulated sense of vision for my AI. I will do this with
-@@ -1257,8 +1375,8 @@
- community and is now (in modified form) part of a system for
- capturing in-game video to a file.
- 
--** Hearing is hard; =CORTEX= does it right
--
-+** ...but hearing must be built from scratch
-+# is hard; =CORTEX= does it right
- At the end of this section I will have simulated ears that work the
- same way as the simulated eyes in the last section. I will be able to
- place any number of ear-nodes in a blender file, and they will bind to
-@@ -1565,7 +1683,7 @@
- jMonkeyEngine3 community and is used to record audio for demo
- videos.
- 
--** Touch uses hundreds of hair-like elements
-+** Hundreds of hair-like elements provide a sense of touch
- 
- Touch is critical to navigation and spatial reasoning and as such I
- need a simulated version of it to give to my AI creatures.
-@@ -2059,7 +2177,7 @@
- #+ATTR_LaTeX: :width 15cm
- [[./images/touch-cube.png]]
- 
--** Proprioception is the sense that makes everything ``real''
-+** Proprioception provides knowledge of your own body's position
- 
- Close your eyes, and touch your nose with your right index finger.
- How did you do it?
- You could not see your hand, and neither your
-@@ -2193,7 +2311,7 @@
- #+ATTR_LaTeX: :width 11cm
- [[./images/proprio.png]]
- 
--** Muscles are both effectors and sensors
-+** Muscles contain both sensors and effectors
- 
- Surprisingly enough, terrestrial creatures only move by using
- torque applied about their joints. There's not a single straight
@@ -2440,7 +2558,8 @@
    hard control problems without worrying about physics or senses.
@@ -371,25 +313,3 @@
    Here is the core of a basic empathy algorithm, starting with an
    experience vector:
-@@ -2888,7 +3007,7 @@
- #+end_src
- #+end_listing
- 
--** Efficient action recognition with =EMPATH=
-+** =EMPATH= recognizes actions efficiently
- 
- To use =EMPATH= with the worm, I first need to gather a set of
- experiences from the worm that includes the actions I want to
-@@ -3044,9 +3163,9 @@
- to interpretation, and disagreement between empathy and experience
- is more excusable.
- 
--** Digression: bootstrapping touch using free exploration
--
-- In the previous section I showed how to compute actions in terms of
-+** Digression: Learn touch sensor layout through haptic experimentation, instead
-+# Bootstrapping touch using free exploration
-+In the previous section I showed how to compute actions in terms of
- body-centered predicates which relied on average touch activation of
- pre-defined regions of the worm's skin. What if, instead of receiving
- touch pre-grouped into the six faces of each worm segment, the true
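
The "Digression" section edited above describes learning a worm segment's skin topology by free exploration: touch sensors arrive as an unordered nerve-fiber bundle, and adjacency must be inferred from experience. The patch contains no code for this, but the core idea (sensors that fire together during free play are probably neighbors on the skin) can be sketched as follows. This is an illustrative sketch only, not the thesis's Clojure implementation; the names =infer_neighbors=, =traces=, and the ring-of-sensors toy data are assumptions of mine.

```python
# Sketch: guess which touch sensors are adjacent on the skin by
# correlating their activation histories during free play.
import math
import random

def correlation(xs, ys):
    """Pearson correlation of two equal-length activation traces."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    vy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return 0.0 if vx == 0 or vy == 0 else cov / (vx * vy)

def infer_neighbors(traces, k=2):
    """For each sensor, return the k sensors whose activation history
    correlates best with it -- a crude guess at skin adjacency."""
    n = len(traces)
    return {
        i: sorted((j for j in range(n) if j != i),
                  key=lambda j: correlation(traces[i], traces[j]),
                  reverse=True)[:k]
        for i in range(n)
    }

# Toy free-play data: 5 sensors laid out in a ring; a contact patch
# sweeps around the ring, so physically adjacent sensors co-activate.
random.seed(0)
N = 5
traces = [[0.0] * 100 for _ in range(N)]
for t in range(100):
    center = (t // 4) % N
    for s in (center, (center + 1) % N):
        traces[s][t] = 1.0 + random.uniform(-0.1, 0.1)

print("sensor 0 neighbor guess:", infer_neighbors(traces)[0])
```

With enough varied contact, the recovered neighbor sets trace out the true ring, which is the same information the six pre-defined faces provided, but learned rather than given.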