# HG changeset patch
# User rlm
# Date 1396268990 14400
# Node ID 2529c34caa1a25ba2b6fc4a702e9332c8cafb9bf
# Parent  1803144ec9aec71ef55b5bef3479a193dd43d171
changes from mom.

diff -r 1803144ec9ae -r 2529c34caa1a thesis/cortex.bib
--- a/thesis/cortex.bib	Mon Mar 31 08:21:39 2014 -0400
+++ b/thesis/cortex.bib	Mon Mar 31 08:29:50 2014 -0400
@@ -114,7 +114,7 @@
   embodiment is critical to intelligence.}}
 }
 
-@book{9.01-textbook,
+@book{textbook901,
  author="Bear and Mark F. and Barry W. Connors and Michael A.",
  title="Neuroscience: Exploring the Brain.",
  publisher="Lippincott Williams \& Wilkins",
diff -r 1803144ec9ae -r 2529c34caa1a thesis/cortex.org
--- a/thesis/cortex.org	Mon Mar 31 08:21:39 2014 -0400
+++ b/thesis/cortex.org	Mon Mar 31 08:29:50 2014 -0400
@@ -186,7 +186,7 @@
   model of your body, and aligns the model with the video. Then, you
   need a /recognizer/, which uses the aligned model to interpret the
   action. The power in this method lies in the fact that you describe
-  all actions form a body-centered viewpoint. You are less tied to
+  all actions from a body-centered viewpoint. You are less tied to
   the particulars of any visual representation of the actions. If you
   teach the system what ``running'' is, and you have a good enough
   aligner, the system will from then on be able to recognize running
@@ -296,10 +296,10 @@
 
 *** Main Results
 
-  - After one-shot supervised training, =EMPATH= was able recognize a
-    wide variety of static poses and dynamic actions---ranging from
-    curling in a circle to wiggling with a particular frequency ---
-    with 95\% accuracy.
+  - After one-shot supervised training, =EMPATH= was able to
+    recognize a wide variety of static poses and dynamic
+    actions---ranging from curling in a circle to wiggling with a
+    particular frequency --- with 95\% accuracy.
 
   - These results were completely independent of viewing angle
     because the underlying body-centered language fundamentally is
@@ -381,7 +381,7 @@
 #+ATTR_LaTeX: :width 12cm
 [[./images/blender-worm.png]]
 
-  Here are some thing I anticipate that =CORTEX= might be used for:
+  Here are some things I anticipate that =CORTEX= might be used for:
 
   - exploring new ideas about sensory integration
   - distributed communication among swarm creatures
@@ -542,7 +542,7 @@
   Most human senses consist of many discrete sensors of various
   properties distributed along a surface at various densities. For
   skin, it is Pacinian corpuscles, Meissner's corpuscles, Merkel's
-  disks, and Ruffini's endings (\cite{9.01-textbook), which detect
+  disks, and Ruffini's endings \cite{textbook901}, which detect
   pressure and vibration of various intensities. For ears, it is the
   stereocilia distributed along the basilar membrane inside the
   cochlea; each one is sensitive to a slightly different frequency of
@@ -560,8 +560,8 @@
   each sense.
 
   Fortunately this idea is already a well known computer graphics
-  technique called called /UV-mapping/. The three-dimensional surface
-  of a model is cut and smooshed until it fits on a two-dimensional
+  technique called /UV-mapping/. The three-dimensional surface of a
+  model is cut and smooshed until it fits on a two-dimensional
   image. You paint whatever you want on that image, and when the
   three-dimensional shape is rendered in a game the smooshing and
   cutting is reversed and the image appears on the three-dimensional
@@ -651,10 +651,9 @@
   pipeline. The engine was not built to serve any particular game
   but is instead meant to be used for any 3D game.
 
-  I chose jMonkeyEngine3 because it because it had the most features
-  out of all the free projects I looked at, and because I could then
-  write my code in clojure, an implementation of =LISP= that runs on
-  the JVM.
+  I chose jMonkeyEngine3 because it had the most features out of all
+  the free projects I looked at, and because I could then write my
+  code in clojure, an implementation of =LISP= that runs on the JVM.
 
 ** =CORTEX= uses Blender to create creature models
 