\section{Artificial Imagination}
\label{sec-1}

Imagine watching a video of someone skateboarding. When you watch
the video, you can imagine yourself skateboarding, and your
knowledge of the human body and its dynamics guides your
interpretation of the scene. For example, even if the skateboarder
is partially occluded, you can infer the positions of his arms and
body from your own knowledge of how your body would be positioned if
you were skateboarding. If the skateboarder suffers an accident, you
wince in sympathy, imagining the pain your own body would experience
if it were in the same situation. This empathy with other people
guides our understanding of whatever they are doing because it is a
powerful constraint on what is probable and possible. In order to
make use of this powerful empathy constraint, I need a system that
can generate and make sense of sensory data from the many different
senses that humans possess. The two key properties of such a system
are \emph{embodiment} and \emph{imagination}.

\subsection{What is imagination?}
\label{sec-1-1}

One kind of imagination is \emph{sympathetic} imagination: you imagine
yourself in the position of something or someone you are
observing. This type of imagination comes into play when you follow
along visually when watching someone perform actions, or when you
sympathetically grimace when someone hurts themselves. This type of
imagination uses the constraints you have learned about your own
body to tightly constrain the possibilities in whatever you are
seeing. It uses all of your senses, including your senses of touch,
proprioception, etc.
Humans are flexible when it comes to ``putting
themselves in another's shoes,'' and can sympathetically understand
not only other humans, but entities ranging from animals to cartoon
characters to \href{http://www.youtube.com/watch?v=0jz4HcwTQmU}{single dots} on a screen!

\begin{figure}[htb]
\centering
\includegraphics[width=5cm]{./images/cat-drinking.jpg}
\caption{A cat drinking some water. Identifying this action is beyond the state of the art for computers.}
\end{figure}

This is a basic test for the vision system. It tests only the
vision pipeline and does not deal with loading eyes from a Blender
file. The code creates two views of the same rotating cube from
different angles, and can optionally record each view to disk.

\begin{clojurecode}
(in-ns 'cortex.test.vision)

(defn test-pipeline
  "Testing vision:
   Tests the vision system by creating two views of the same rotating
   object from different angles and displaying both of those views in
   JFrames.

   You should see a rotating cube, and two windows,
   each displaying a different view of the cube."
  ([] (test-pipeline false))
  ([record?]
     (let [candy
           (box 1 1 1 :physical? false :color ColorRGBA/Blue)]
       (world
        (doto (Node.)
          (.attachChild candy))
        {}
        (fn [world]
          (let [cam (.clone (.getCamera world))
                width (.getWidth cam)
                height (.getHeight cam)]
            ;; First view: the unmodified default camera.
            (add-camera! world cam
                         (comp
                          (view-image
                           (if record?
                             (File. "/home/r/proj/cortex/render/vision/1")))
                          BufferedImage!))
            ;; Second view: a clone of the camera, moved to a new
            ;; position and aimed back at the origin.
            (add-camera! world
                         (doto (.clone cam)
                           (.setLocation (Vector3f. -10 0 0))
                           (.lookAt Vector3f/ZERO Vector3f/UNIT_Y))
                         (comp
                          (view-image
                           (if record?
                             (File. "/home/r/proj/cortex/render/vision/2")))
                          BufferedImage!))
            (let [timer (IsoTimer. 60)]
              (.setTimer world timer)
              (display-dilated-time world timer))
            ;; This is here to restore the main view
            ;; after the other views have completed processing.
            (add-camera! world (.getCamera world) no-op)))
        (fn [world tpf]
          (.rotate candy (* tpf 0.2) 0 0))))))
\end{clojurecode}

\begin{itemize}
\item This is test1 \cite{Tappert77}.
\end{itemize}
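As a usage sketch, assuming the \texttt{cortex.test.vision} namespace
is loaded in a REPL with its jMonkeyEngine dependencies on the
classpath, the test above can be invoked with or without recording:

\begin{clojurecode}
;; Display the two live views without recording:
(test-pipeline)

;; Additionally save each view's frames to the File
;; directories given in the listing above:
(test-pipeline true)
\end{clojurecode}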