diff thesis/cortex.org @ 533:122524d39652
adding details to empathy algorithm.
author    Robert McIntyre <rlm@mit.edu>
date      Sun, 27 Apr 2014 09:40:19 -0400
parents   686f77b88292
children  39ee58fef9d8
--- a/thesis/cortex.org	Sat Apr 26 21:42:15 2014 -0400
+++ b/thesis/cortex.org	Sun Apr 27 09:40:19 2014 -0400
@@ -2788,47 +2788,68 @@
  operation. I call this subspace \Phi-space, short for
  physical-space. The experience vector defines a path through
  \Phi-space. This path has interesting properties that all derive
- from physical embodiment. The proprioceptive components are
- completely smooth, because in order for the worm to move from one
+ from physical embodiment. The proprioceptive components of the path
+ vary smoothly, because in order for the worm to move from one
  position to another, it must pass through the intermediate
- positions. The path invariably forms loops as actions are repeated.
- Finally and most importantly, proprioception actually gives very
- strong inference about the other senses. For example, when the worm
- is flat, you can infer that it is touching the ground and that its
+ positions. The path invariably forms loops as common actions are
+ repeated. Finally and most importantly, proprioception alone
+ actually gives very strong inference about the other senses. For
+ example, when the worm is proprioceptively flat over several
+ frames, you can infer that it is touching the ground and that its
  muscles are not active, because if the muscles were active, the
- worm would be moving and would not be perfectly flat. In order to
- stay flat, the worm has to be touching the ground, or it would
+ worm would be moving and would not remain perfectly flat. In order
+ to stay flat, the worm has to be touching the ground, or it would
  again be moving out of the flat position due to gravity. If the
  worm is positioned in such a way that it interacts with itself,
  then it is very likely to be feeling the same tactile feelings as
  the last time it was in that position, because it has the same body
- as then. If you observe multiple frames of proprioceptive data,
- then you can become increasingly confident about the exact
- activations of the worm's muscles, because it generally takes a
- unique combination of muscle contractions to transform the worm's
- body along a specific path through \Phi-space.
-
- There is a simple way of taking \Phi-space and the total ordering
- provided by an experience vector and reliably inferring the rest of
- the senses.
-
+ as then. As you observe multiple frames of proprioceptive data, you
+ can become increasingly confident about the exact activations of
+ the worm's muscles, because it generally takes a unique combination
+ of muscle contractions to transform the worm's body along a
+ specific path through \Phi-space.
+
+ The worm's total life experience is a long looping path through
+ \Phi-space. I will now introduce a simple way of taking that
+ experience path and building a function that can infer complete
+ sensory experience given only a stream of proprioceptive data. This
+ /empathy/ function will provide a bridge to use the body-centered
+ action predicates on video-like streams of information.
+
 ** Empathy is the process of tracing though \Phi-space
 
  Here is the core of a basic empathy algorithm, starting with an
  experience vector:
 
- First, group the experiences into tiered proprioceptive bins. I use
- powers of 10 and 3 bins, and the smallest bin has an approximate
- size of 0.001 radians in all proprioceptive dimensions.
+ An /experience-index/ is an index into the grand experience vector
+ that defines the worm's life. It is a timestamp for each set of
+ sensations the worm has experienced.
+
+ First, group the experience-indices into bins according to the
+ similarity of their proprioceptive data. I organize my bins into a
+ 3-level hierarchy. The smallest bins have an approximate size of
+ 0.001 radians in all proprioceptive dimensions. Each higher level
+ is 10x bigger than the level below it.
+
+ The bins serve as a hashing function for proprioceptive data. Given
+ a single piece of proprioceptive experience, the bins allow us to
+ rapidly find all experience-indices of past experience that had a
+ very similar proprioceptive configuration. When looking up a
+ proprioceptive experience, if the smallest bin does not match any
+ previous experience, then successively larger bins are used until a
+ match is found or we reach the largest bin.
 
- Then, given a sequence of proprioceptive input, generate a set of
- matching experience records for each input, using the tiered
- proprioceptive bins.
-
- Finally, to infer sensory data, select the longest consecutive chain
- of experiences. Consecutive experience means that the experiences
+ Given a sequence of proprioceptive input, I use the tiered
+ proprioceptive bins to generate a set of similar experiences for
+ each input.
+
+ Finally, to infer sensory data, I select the longest consecutive
+ chain of experiences that threads through the sets of similar
+ experiences. Consecutive experience means that the experiences
  appear next to each other in the experience vector.
 
+
+
  This algorithm has three advantages:
 
  1. It's simple
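
The tiered bins added in the diff above amount to a multi-resolution hash of
proprioceptive data. The following is a minimal sketch of that idea, not the
thesis's actual code: the names =bin= and =gen-bin-lookup=, and the
representation of proprioception as a flat sequence of joint angles in
radians, are assumptions made for illustration.

#+begin_src clojure
  ;; Sketch of the tiered proprioceptive bins (assumed representation:
  ;; `experiences` is a vector of maps, each with a :proprioception
  ;; entry holding a flat sequence of joint angles in radians).

  (defn bin
    "Round each proprioceptive angle to a multiple of `resolution`,
     producing a hashable key for that resolution level."
    [resolution proprio]
    (mapv #(Math/round (/ % resolution)) proprio))

  (defn gen-bin-lookup
    "Return a function from a proprioceptive sample to the
     experience-indices whose proprioception falls in the same bin,
     trying the finest bin first and falling back to coarser ones."
    [experiences]
    (let [resolutions [0.001 0.01 0.1]
          tables
          (vec (for [r resolutions]
                 [r (group-by #(bin r (:proprioception (experiences %)))
                              (range (count experiences)))]))]
      (fn [proprio]
        (or (some (fn [[r table]]
                    (not-empty (table (bin r proprio))))
                  tables)
            []))))
#+end_src

Falling back to successively coarser bins trades precision for recall: a
query that matches nothing at 0.001 radians may still retrieve loosely
similar past configurations at 0.01 or 0.1 radians.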
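Selecting the longest consecutive chain can be sketched as a single pass
that, for each input frame, records how long a run of consecutive
experience-indices ends at each candidate index. Again, this is a
hypothetical sketch under the same assumptions, not the thesis's
implementation.

#+begin_src clojure
  (defn longest-consecutive-chain
    "Given one collection of candidate experience-indices per input
     frame (as produced by the lookup above), return
     [end-frame end-index length] for the longest chain of consecutive
     experience-indices threading through the candidates."
    [candidate-sets]
    (:best
     (reduce
      (fn [{:keys [prev best]} [frame candidates]]
        (let [;; chain length ending at candidate index i on this frame
              lengths (into {} (for [i candidates]
                                 [i (inc (get prev (dec i) 0))]))
              best    (reduce (fn [[_ _ best-len :as b] [i len]]
                                (if (> len best-len) [frame i len] b))
                              best lengths)]
          {:prev lengths :best best}))
      {:prev {} :best [nil nil 0]}
      (map-indexed vector candidate-sets))))
#+end_src

With the bin lookup and the chain selection in hand, empathy for a stream of
proprioceptive frames proceeds as the diff describes: look up candidate
experience-indices for each frame, pick the longest consecutive chain, and
read the full sensory data for each frame out of the corresponding entries
of the experience vector.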