diff thesis/cortex.org @ 521:2529c34caa1a
changes from mom.
| author   | rlm                             |
|----------|---------------------------------|
| date     | Mon, 31 Mar 2014 08:29:50 -0400 |
| parents  | 1803144ec9ae                    |
| children | 1e51263afdc0                    |
--- a/thesis/cortex.org	Mon Mar 31 08:21:39 2014 -0400
+++ b/thesis/cortex.org	Mon Mar 31 08:29:50 2014 -0400
@@ -186,7 +186,7 @@
    model of your body, and aligns the model with the video. Then, you
    need a /recognizer/, which uses the aligned model to interpret the
    action. The power in this method lies in the fact that you describe
-   all actions form a body-centered viewpoint. You are less tied to
+   all actions from a body-centered viewpoint. You are less tied to
    the particulars of any visual representation of the actions. If you
    teach the system what ``running'' is, and you have a good enough
    aligner, the system will from then on be able to recognize running
@@ -296,10 +296,10 @@

 *** Main Results

-   - After one-shot supervised training, =EMPATH= was able recognize a
-     wide variety of static poses and dynamic actions---ranging from
-     curling in a circle to wiggling with a particular frequency ---
-     with 95\% accuracy.
+   - After one-shot supervised training, =EMPATH= was able to
+     recognize a wide variety of static poses and dynamic
+     actions---ranging from curling in a circle to wiggling with a
+     particular frequency --- with 95\% accuracy.

   - These results were completely independent of viewing angle
     because the underlying body-centered language fundamentally is
@@ -381,7 +381,7 @@
   #+ATTR_LaTeX: :width 12cm
   [[./images/blender-worm.png]]

-  Here are some thing I anticipate that =CORTEX= might be used for:
+  Here are some things I anticipate that =CORTEX= might be used for:

   - exploring new ideas about sensory integration
   - distributed communication among swarm creatures
@@ -542,7 +542,7 @@
   Most human senses consist of many discrete sensors of various
   properties distributed along a surface at various densities. For
   skin, it is Pacinian corpuscles, Meissner's corpuscles, Merkel's
-  disks, and Ruffini's endings (\cite{9.01-textbook), which detect
+  disks, and Ruffini's endings \cite{textbook901}, which detect
  pressure and vibration of various intensities. For ears, it is the
  stereocilia distributed along the basilar membrane inside the
  cochlea; each one is sensitive to a slightly different frequency of
@@ -560,8 +560,8 @@
  each sense.

  Fortunately this idea is already a well known computer graphics
-  technique called called /UV-mapping/. The three-dimensional surface
-  of a model is cut and smooshed until it fits on a two-dimensional
+  technique called /UV-mapping/. The three-dimensional surface of a
+  model is cut and smooshed until it fits on a two-dimensional
  image. You paint whatever you want on that image, and when the
  three-dimensional shape is rendered in a game the smooshing and
  cutting is reversed and the image appears on the three-dimensional
@@ -651,10 +651,9 @@
     pipeline. The engine was not built to serve any particular
     game but is instead meant to be used for any 3D game.

-     I chose jMonkeyEngine3 because it because it had the most features
-     out of all the free projects I looked at, and because I could then
-     write my code in clojure, an implementation of =LISP= that runs on
-     the JVM.
+     I chose jMonkeyEngine3 because it had the most features out of all
+     the free projects I looked at, and because I could then write my
+     code in clojure, an implementation of =LISP= that runs on the JVM.

 ** =CORTEX= uses Blender to create creature models
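The passage touched by the second-to-last hunk describes laying out simulated sensors with UV-mapping: the model's surface is unfolded onto a flat image, and marks painted on that image become sensor locations when the surface is folded back up. As a rough illustration of that idea in Clojure (this is a hypothetical sketch, not the thesis's actual CORTEX code; the namespace, function name, image file, and brightness threshold are all invented for the example), one could read a UV-map image and treat every bright pixel as the normalized (u, v) position of a sensor:

```clojure
(ns uv-sensors.sketch
  (:import [javax.imageio ImageIO]
           [java.io File]))

(defn sensor-uv-coords
  "Return a seq of [u v] pairs (each normalized to 0..1) for every pixel
  in the image at `image-path` whose brightness exceeds `threshold`
  (0-255). Each pair marks where one simulated sensor would sit on the
  UV-mapped surface of the model."
  [image-path threshold]
  (let [img (ImageIO/read (File. image-path))
        w   (.getWidth img)
        h   (.getHeight img)]
    (for [y (range h)
          x (range w)
          :let [rgb (.getRGB img x y)
                r   (bit-and (bit-shift-right rgb 16) 0xff)
                g   (bit-and (bit-shift-right rgb 8) 0xff)
                b   (bit-and rgb 0xff)
                lum (quot (+ r g b) 3)]   ; average brightness of the pixel
          :when (> lum threshold)]
      ;; normalize pixel coordinates to UV space
      [(/ x (double w)) (/ y (double h))])))

;; Example call (the file name is hypothetical):
;; (sensor-uv-coords "worm-touch-map.png" 200)
```

Denser paint in the UV image yields denser sensors on the corresponding patch of the model, which is the property the diffed text relies on when it compares skin, ears, and eyes as differently distributed sensor arrays.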