# HG changeset patch
# User Robert McIntyre
# Date 1395761415 14400
# Node ID 284316604be02030d2fe4f325cef0b245949569e
# Parent 3e91585b2a1c4342c57ad2fccee003da8c19be77
minor changes from Dylan.

diff -r 3e91585b2a1c -r 284316604be0 thesis/cortex.org
--- a/thesis/cortex.org	Tue Mar 25 03:24:28 2014 -0400
+++ b/thesis/cortex.org	Tue Mar 25 11:30:15 2014 -0400
@@ -10,9 +10,9 @@
    By the end of this thesis, you will have seen a novel approach to
    interpreting video using embodiment and empathy. You will have also
    seen one way to efficiently implement empathy for embodied
-   creatures. Finally, you will become familiar with =CORTEX=, a
-   system for designing and simulating creatures with rich senses,
-   which you may choose to use in your own research.
+   creatures. Finally, you will become familiar with =CORTEX=, a system
+   for designing and simulating creatures with rich senses, which you
+   may choose to use in your own research.

    This is the core vision of my thesis: That one of the important ways
    in which we understand others is by imagining ourselves in their
@@ -26,8 +26,8 @@

 ** Recognizing actions in video is extremely difficult

-   Consider for example the problem of determining what is happening in
-   a video of which this is one frame:
+   Consider, for example, the problem of determining what is happening
+   in a video of which this is one frame:

    #+caption: A cat drinking some water. Identifying this action is
    #+caption: beyond the state of the art for computers.
@@ -35,14 +35,14 @@
    [[./images/cat-drinking.jpg]]

    It is currently impossible for any computer program to reliably
-   label such a video as "drinking". And rightly so -- it is a very
+   label such a video as ``drinking''. And rightly so -- it is a very
    hard problem! What features can you describe in terms of low level
    functions of pixels that can even begin to describe at a high level
    what is happening here?

-   Or suppose that you are building a program that recognizes
-   chairs. How could you ``see'' the chair in figure
-   \ref{invisible-chair} and figure \ref{hidden-chair}?
+   Or suppose that you are building a program that recognizes chairs.
+   How could you ``see'' the chair in figure \ref{invisible-chair} and
+   figure \ref{hidden-chair}?

    #+caption: When you look at this, do you think ``chair''? I certainly do.
    #+name: invisible-chair
@@ -69,9 +69,9 @@
    on in our minds as we easily solve these recognition problems.

    The hidden chairs show us that we are strongly triggered by cues
-   relating to the position of human bodies, and that we can
-   determine the overall physical configuration of a human body even
-   if much of that body is occluded.
+   relating to the position of human bodies, and that we can determine
+   the overall physical configuration of a human body even if much of
+   that body is occluded.

    The picture of the girl pushing against the wall tells us that we
    have common sense knowledge about the kinetics of our own bodies.
@@ -85,58 +85,54 @@
    problems above in a form amenable to computation. It is split into
    four parts:

-   - Free/Guided Play :: The creature moves around and experiences the
-        world through its unique perspective. Many otherwise
-        complicated actions are easily described in the language of a
-        full suite of body-centered, rich senses. For example,
-        drinking is the feeling of water sliding down your throat, and
-        cooling your insides. It's often accompanied by bringing your
-        hand close to your face, or bringing your face close to
-        water. Sitting down is the feeling of bending your knees,
-        activating your quadriceps, then feeling a surface with your
-        bottom and relaxing your legs. These body-centered action
+   - Free/Guided Play (Training) :: The creature moves around and
+        experiences the world through its unique perspective. Many
+        otherwise complicated actions are easily described in the
+        language of a full suite of body-centered, rich senses. For
+        example, drinking is the feeling of water sliding down your
+        throat, and cooling your insides. It's often accompanied by
+        bringing your hand close to your face, or bringing your face
+        close to water. Sitting down is the feeling of bending your
+        knees, activating your quadriceps, then feeling a surface with
+        your bottom and relaxing your legs. These body-centered action
         descriptions can be either learned or hard coded.
-   - Alignment :: When trying to interpret a video or image, the
-                  creature takes a model of itself and aligns it with
-                  whatever it sees. This can be a rather loose
-                  alignment that can cross species, as when humans try
-                  to align themselves with things like ponies, dogs,
-                  or other humans with a different body type.
-   - Empathy :: The alignment triggers the memories of previous
-                experience. For example, the alignment itself easily
-                maps to proprioceptive data. Any sounds or obvious
-                skin contact in the video can to a lesser extent
-                trigger previous experience. The creatures previous
-                experience is chained together in short bursts to
-                coherently describe the new scene.
-   - Recognition :: With the scene now described in terms of past
-                    experience, the creature can now run its
-                    action-identification programs on this synthesized
-                    sensory data, just as it would if it were actually
-                    experiencing the scene first-hand. If previous
-                    experience has been accurately retrieved, and if
-                    it is analogous enough to the scene, then the
-                    creature will correctly identify the action in the
-                    scene.
-
-
+   - Alignment (Posture imitation) :: When trying to interpret a video
+        or image, the creature takes a model of itself and aligns it
+        with whatever it sees. This alignment can even cross species,
+        as when humans try to align themselves with things like
+        ponies, dogs, or other humans with a different body type.
+   - Empathy (Sensory extrapolation) :: The alignment triggers
+        associations with sensory data from prior experiences. For
+        example, the alignment itself easily maps to proprioceptive
+        data. Any sounds or obvious skin contact in the video can, to
+        a lesser extent, trigger previous experience. Segments of
+        previous experiences are stitched together to form a coherent
+        and complete sensory portrait of the scene.
+   - Recognition (Classification) :: With the scene described in terms
+        of first-person sensory events, the creature can now run its
+        action-identification programs on this synthesized sensory
+        data, just as it would if it were actually experiencing the
+        scene first-hand. If previous experience has been accurately
+        retrieved, and if it is analogous enough to the scene, then
+        the creature will correctly identify the action in the scene.
+
    For example, I think humans are able to label the cat video as
-   "drinking" because they imagine /themselves/ as the cat, and
+   ``drinking'' because they imagine /themselves/ as the cat, and
    imagine putting their face up against a stream of water and
    sticking out their tongue. In that imagined world, they can feel
    the cool water hitting their tongue, and feel the water entering
-   their body, and are able to recognize that /feeling/ as
-   drinking. So, the label of the action is not really in the pixels
-   of the image, but is found clearly in a simulation inspired by
-   those pixels. An imaginative system, having been trained on
-   drinking and non-drinking examples and learning that the most
-   important component of drinking is the feeling of water sliding
-   down one's throat, would analyze a video of a cat drinking in the
-   following manner:
+   their body, and are able to recognize that /feeling/ as drinking.
+   So, the label of the action is not really in the pixels of the
+   image, but is found clearly in a simulation inspired by those
+   pixels. An imaginative system, having been trained on drinking and
+   non-drinking examples and learning that the most important
+   component of drinking is the feeling of water sliding down one's
+   throat, would analyze a video of a cat drinking in the following
+   manner:

-   1. Create a physical model of the video by putting a "fuzzy" model
-      of its own body in place of the cat. Possibly also create a
-      simulation of the stream of water.
+   1. Create a physical model of the video by putting a ``fuzzy''
+      model of its own body in place of the cat. Possibly also create
+      a simulation of the stream of water.

    2. Play out this simulated scene and generate imagined sensory
       experience. This will include relevant muscle contractions, a
@@ -184,13 +180,12 @@
    #+ATTR_LaTeX: :width 15cm
    [[./images/worm-intro-white.png]]

-   #+caption: The actions of a worm in a video can be recognized by
-   #+caption: proprioceptive data and sentory predicates by filling
-   #+caption: in the missing sensory detail with previous experience.
+   #+caption: =EMPATH= recognized and classified each of these poses by
+   #+caption: inferring the complete sensory experience from
+   #+caption: proprioceptive data.
    #+name: worm-recognition-intro
    #+ATTR_LaTeX: :width 15cm
    [[./images/worm-poses.png]]
-

    One powerful advantage of empathic problem solving is that it
    factors the action recognition problem into two easier problems. To
@@ -198,22 +193,23 @@
    model of your body, and aligns the model with the video. Then, you
    need a /recognizer/, which uses the aligned model to interpret the
    action. The power in this method lies in the fact that you describe
-   all actions form a body-centered, rich viewpoint. This way, if you
+   all actions from a body-centered viewpoint. You are less tied to
+   the particulars of any visual representation of the actions. If you
    teach the system what ``running'' is, and you have a good enough
    aligner, the system will from then on be able to recognize running
    from any point of view, even strange points of view like above or
    underneath the runner. This is in contrast to action recognition
    schemes that try to identify actions using a non-embodied approach
-   such as TODO:REFERENCE. If these systems learn about running as viewed
-   from the side, they will not automatically be able to recognize
-   running from any other viewpoint.
+   such as TODO:REFERENCE. If these systems learn about running as
+   viewed from the side, they will not automatically be able to
+   recognize running from any other viewpoint.

    Another powerful advantage is that using the language of multiple
    body-centered rich senses to describe body-centered actions offers a
    massive boost in descriptive capability. Consider how difficult it
    would be to compose a set of HOG filters to describe the action of
-   a simple worm-creature "curling" so that its head touches its tail,
-   and then behold the simplicity of describing thus action in a
+   a simple worm-creature ``curling'' so that its head touches its
+   tail, and then behold the simplicity of describing this action in a
    language designed for the task (listing \ref{grand-circle-intro}):

    #+caption: Body-centered actions are best expressed in a body-centered
@@ -293,8 +289,8 @@
 that creature is feeling. My empathy algorithm involves multiple
 phases. First is free-play, where the creature moves around and gains
 sensory experience. From this experience I construct a representation
-of the creature's sensory state space, which I call \phi-space. Using
-\phi-space, I construct an efficient function for enriching the
+of the creature's sensory state space, which I call \Phi-space. Using
+\Phi-space, I construct an efficient function for enriching the
 limited data that comes from observing another creature with a full
 complement of imagined sensory data based on previous experience. I
 can then use the imagined sensory data to recognize what the observed
@@ -313,4 +309,13 @@

 * COMMENT names for cortex

- - bioland
\ No newline at end of file
+ - bioland
+
+
+
+
+# An anatomical joke:
+# - Training
+# - Skeletal imitation
+# - Sensory fleshing-out
+# - Classification
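
The body-centered action language that listing \ref{grand-circle-intro} refers to can be made concrete with a small sketch. The following Clojure is illustrative only, not the actual =EMPATH= code: it assumes each moment of experience is a map from sense name to data, and that =:touch= maps a body-segment keyword to a contact strength in [0, 1].

#+begin_src clojure
;; A hypothetical body-centered predicate for the worm's ``curling''
;; action: head and tail both report strong touch at the most recent
;; moment of experience. (Sketch; names and representation assumed.)
(defn head-touching-tail?
  "True when, at the latest moment of experience, the head and tail
   segments are both in firm contact -- i.e. the worm has curled so
   that its head touches its tail."
  [experiences]
  (let [touch (:touch (peek experiences))]
    (and (> (touch :head) 0.5)
         (> (touch :tail) 0.5))))

;; Three moments of free play; only the last is curled.
(head-touching-tail?
 [{:touch {:head 0.0 :tail 0.0}}
  {:touch {:head 0.1 :tail 0.0}}
  {:touch {:head 0.9 :tail 0.8}}])
;; => true
#+end_src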
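
The empathy step itself, which enriches observed proprioception with remembered senses from \Phi-space, can be read as a nearest-neighbor lookup. Again, this is a sketch under assumed representations (joint-angle vectors, experience maps), not the thesis's actual \Phi-space implementation:

#+begin_src clojure
;; \Phi-space modeled as a vector of complete experience maps
;; gathered during free play. (Illustrative sketch only.)
(defn proprioceptive-distance
  "Euclidean distance between two equal-length joint-angle vectors."
  [a b]
  (Math/sqrt (reduce + (map #(let [d (- %1 %2)] (* d d)) a b))))

(defn empathize
  "Return the remembered experience whose proprioception best matches
   the observed joint angles; its other senses then serve as the
   imagined sensory data for the scene."
  [phi-space observed-angles]
  (apply min-key
         #(proprioceptive-distance (:proprioception %) observed-angles)
         phi-space))

(def phi-space
  [{:proprioception [0.0 0.0] :touch {:head 0.0 :tail 0.0}}
   {:proprioception [1.2 1.1] :touch {:head 0.9 :tail 0.8}}])

;; The borrowed touch data can now be fed to action predicates such
;; as head-touching-tail? above.
(:touch (empathize phi-space [1.0 1.0]))
;; => {:head 0.9, :tail 0.8}
#+end_src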