#+title: Notes for "Special Topics in Computer Vision"
#+author: Robert McIntyre
#+email: rlm@mit.edu
#+description:
#+keywords:
#+SETUPFILE: ../../aurellem/org/setup.org
#+INCLUDE: ../../aurellem/org/level-0.org
#+babel: :mkdirp yes :noweb yes :exports both

* Fri Sep 27 2013

Lambertian surfaces are a special type of matte surface. They reflect
light in all directions equally. They have only one parameter: the
amount of energy that is absorbed/re-emitted.

#+caption: Lol checkerboard illusion.
[[../images/adelson-checkerboard.jpg]]

Look into Helmholtz's work; it might be interesting. It was the
foundation of both vision and audition research. He seems to have
taken a sort of Bayesian approach to inferring how vision and
audition work.

- Homomorphic filtering :: Oppenheim, Schafer, and Stockham, 1968.
  Also look at Stockham, 1972.

Edwin Land was Adelson's hero back in the day. He needed to produce
color photos for the Polaroid camera. In order to process the film
for automatic development, he had to find a good approximation to the
illumination/reflectance decomposition that humans perform, which he
called Retinex.

The Cornsweet square-wave grating is cool.

- Retinex :: uses derivatives to find the illumination. It sort of
  implicitly deals with edges, etc. It can't deal with non-Lambertian
  objects.

Adelson introduces the problem as an "inverse" problem, where you try
to "undo" the 3-d projection of the world onto your retina.

On the functional view of vision: "what it takes" is to build a model
of the world in your head. The bare minimum for success in life is to
have a model of the world. Even at the level of a single cell, I
think you still benefit from models.

Spatial propagation is ABSOLUTELY required to separate embossed stuff
from "painted" stuff. Edges, likewise, MUST have spatial context to
disambiguate. The filters we use to deal with edges must have a
larger spatial context to work, and the spatial extent of this
context must be the ENTIRE visual field in some cases!

------------------------------------------------------------

** Illumination, shape, and reflectance all at once

What if we tried to infer everything together? Some images are so
ambiguous that resolving them requires propagating information across
all three qualities.

The brain has a competing painter, sculptor, and gaffer, each of
which tries to "build" the things in the world. There is a cost to
everything (paints, lights, material), and you try to optimize some
cost function over these primitives.

Horn, technical report, 1970.
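
To make the Lambertian note above concrete: the single free parameter
is the albedo, and the reflected intensity depends only on the angle
between the surface normal and the light direction, never on the
viewing direction. A minimal sketch in Python; the cosine shading
rule and the function name are my own illustration, not anything
stated in the lecture.

#+begin_src python
import numpy as np

def lambertian_intensity(albedo, normal, light_dir):
    """Reflected intensity of a Lambertian patch.

    albedo is the single parameter (fraction of incoming energy that
    is re-emitted); the result is the same from every viewing
    direction and falls off as the cosine of the angle to the light.
    """
    n = normal / np.linalg.norm(normal)
    l = light_dir / np.linalg.norm(light_dir)
    return albedo * max(0.0, float(np.dot(n, l)))

# a mid-gray patch (albedo 0.5) lit from 45 degrees off the normal
print(lambertian_intensity(0.5, np.array([0.0, 0.0, 1.0]),
                           np.array([0.0, 1.0, 1.0])))
#+end_src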
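
The homomorphic filtering and Retinex entries above both come down to
splitting an image, in the log domain, into slowly varying
illumination plus sharp-edged reflectance. Here is a minimal sketch
of that idea, assuming grayscale NumPy arrays; the heavy Gaussian
blur is my stand-in for a proper low-pass or thresholded-derivative
step, not the actual method of those papers.

#+begin_src python
import numpy as np
from scipy.ndimage import gaussian_filter

def separate_illumination(image, sigma=15.0, eps=1e-6):
    """Split a grayscale image into (illumination, reflectance).

    In the log domain, log(image) = log(illumination) + log(reflectance).
    Illumination is assumed to vary slowly, so a heavy blur of the log
    image approximates it; the residual is taken to be reflectance.
    """
    log_im = np.log(image + eps)
    log_illum = gaussian_filter(log_im, sigma)   # slowly varying part
    log_refl = log_im - log_illum                # sharp edges remain here
    return np.exp(log_illum), np.exp(log_refl)

# toy example: a left-to-right lighting gradient over a two-tone checkerboard
ramp = np.tile(np.linspace(0.2, 1.0, 128), (128, 1))
checks = np.where((np.indices((128, 128)).sum(0) // 16) % 2 == 0, 0.3, 0.9)
est_illum, est_refl = separate_illumination(ramp * checks)
#+end_src

Per the Retinex note, the derivative-based version would instead
threshold small gradients (attributed to illumination) and
reintegrate the rest; and, as noted, it still breaks down on
non-Lambertian objects.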
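
For the painter/sculptor/gaffer paragraph above, here is a toy
version of such a cost function, with shape left out to keep it tiny.
The particular penalties and weights are my assumptions, not the
lecture's: reflectance edges cost "paint" (an L1 charge on
gradients), wiggly illumination costs "lights" (an L2 charge on
curvature), and a fit term forces the two layers to reproduce the
image.

#+begin_src python
import numpy as np

def decomposition_cost(log_refl, log_illum, log_image,
                       w_fit=1.0, w_paint=0.1, w_light=1.0):
    """Toy cost for explaining log_image as log_refl + log_illum."""
    fit = np.sum((log_refl + log_illum - log_image) ** 2)
    paint = np.sum(np.abs(np.diff(log_refl)))        # price of reflectance edges
    light = np.sum(np.diff(log_illum, n=2) ** 2)     # price of bending the lighting
    return w_fit * fit + w_paint * paint + w_light * light

# two explanations of the same step edge: "it's paint" vs. "it's lighting"
edge = np.concatenate([np.zeros(50), np.ones(50)])
as_paint = decomposition_cost(edge, np.zeros(100), edge)
as_light = decomposition_cost(np.zeros(100), edge, edge)
print(as_paint, as_light)   # the sharp step is cheaper to explain as paint
#+end_src

Minimizing a cost like this over both layers at once is the
"everything together" inference the section describes: each edge gets
assigned to whichever explanation is cheapest.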