#+title: Notes for "Special Topics in Computer Vision"
#+author: Robert McIntyre
#+email: rlm@mit.edu
#+description:
#+keywords:
#+SETUPFILE: ../../aurellem/org/setup.org
#+INCLUDE: ../../aurellem/org/level-0.org
#+babel: :mkdirp yes :noweb yes :exports both

* Fri Sep 27 2013

Lambertian surfaces are a special type of matte surface. They reflect
light equally in all directions, and they have only one parameter: the
fraction of incident energy that is absorbed versus re-emitted (the
albedo; see the sketch at the end of these notes).

#+caption: The checkerboard illusion.
[[../images/adelson-checkerboard.jpg]]

Look into Helmholtz's stuff; it might be interesting. It was the
foundation of both vision and audition research. He seems to have
taken a sort of Bayesian approach to inferring how vision and audition
work.

- Homomorphic filtering :: Oppenheim, Schafer, and Stockham, 1968;
     also look at Stockham, 1972 (sketch at the end of these notes).

Edwin Land was Adelson's hero back in the day. He needed to create a
color photo process for the Polaroid camera. To develop film
automatically, he had to get a good approximation of the
illumination/reflectance decomposition that humans do, which he called
Retinex.

The Cornsweet square-wave grating is cool.

- Retinex :: uses derivatives to find illumination. It implicitly
     deals with edges and the like, but it can't handle non-Lambertian
     objects (sketch at the end of these notes).

Adelson introduces the problem as an "inverse" problem, where you try
to "undo" the 3-D projection of the world onto your retina.

On the functional view of vision: "what it takes" is to build a model
of the world in your head. The bare minimum for success in life is to
have a model of the world. Even at the level of a single cell, I think
you still benefit from models.

Spatial propagation is ABSOLUTELY required to separate embossed stuff
from "painted" stuff. Edges, likewise, MUST have spatial context to be
disambiguated. The filters we use to deal with edges must have a large
spatial context to work, and the spatial extent of this context must
be the ENTIRE visual field in some cases!

------------------------------------------------------------

** Illumination, shape, reflectance all at once

What if we tried to infer everything together? Some images are so
ambiguous that it takes propagation from all three qualities to
resolve the ambiguity.

The brain has a competing painter, sculptor, and gaffer which each try
to "build" the things in the world. There is a cost to everything,
such as paints, lights, and material, and then you try to optimize
some cost function over these primitives (sketch at the end of these
notes).

Horn, technical report, 1970

* Fri Oct 4 2013

Student report: talked about how you capture the appearance of a
grape. It's actually quite complicated, involving gloss, spatial
context, etc.

TurboSquid seems interesting. They sell 3D models of stuff.

BRDF -- the bi-directional reflectance distribution function. It
describes how a surface reflects light given the lighting and viewing
directions. The Lambertian model is a simple parameterized
instantiation of this (sketch at the end of these notes).

BSSRDF -- (SS = subsurface) the 3D analogue of the BRDF.

What would the 3D analogue of texture be?

(a : b : c) as (a + b + c : b + c : c) <-- this is just the golden
ratio again!

CUReT BTF database -- lol, what's this?

This student gathered 1,000 images of large objects made of different
materials; the images were collected from Flickr.
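
* Code sketches

Minimal sketches of a few of the ideas above; illustrations under
stated assumptions, not the algorithms from the lectures or papers.

A sketch of the Lambertian model from the Sep 27 notes, assuming a
single distant light source. The only surface parameter is the albedo,
and the reflected radiance depends only on the angle between the
surface normal and the light direction, never on the viewing
direction. The vectors and albedo in the example are made up.

#+begin_src python
import numpy as np

def lambertian_radiance(albedo, normal, light_dir, light_intensity=1.0):
    """Radiance leaving a Lambertian surface: (albedo / pi) * E * max(0, n . l).

    The single parameter is the albedo (fraction of incident energy
    re-emitted); the result is independent of viewing direction.
    """
    n = normal / np.linalg.norm(normal)
    l = light_dir / np.linalg.norm(light_dir)
    cos_theta = max(0.0, float(np.dot(n, l)))
    return (albedo / np.pi) * light_intensity * cos_theta

# Example: a gray patch (albedo 0.5) lit 45 degrees off its normal.
print(lambertian_radiance(0.5, np.array([0.0, 0.0, 1.0]),
                          np.array([0.0, 1.0, 1.0])))
#+end_src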
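
A sketch of the homomorphic-filtering idea (the Oppenheim, Schafer,
and Stockham reference above), assuming the usual model image =
illumination x reflectance: take the log so the product becomes a sum,
attenuate low frequencies (slow illumination) and boost high
frequencies (reflectance detail) in the Fourier domain, then
exponentiate back. The cutoff and gains are made-up illustrative
values.

#+begin_src python
import numpy as np

def homomorphic_filter(image, cutoff=0.05, gain_low=0.5, gain_high=1.5):
    """Homomorphic filtering: log -> frequency-domain high-emphasis -> exp."""
    log_img = np.log1p(image.astype(np.float64))

    # Smooth high-emphasis filter over normalized spatial frequencies.
    rows, cols = image.shape
    fy = np.fft.fftfreq(rows)[:, None]
    fx = np.fft.fftfreq(cols)[None, :]
    radius = np.sqrt(fx ** 2 + fy ** 2)
    H = gain_low + (gain_high - gain_low) * (1.0 - np.exp(-(radius / cutoff) ** 2))

    filtered = np.real(np.fft.ifft2(np.fft.fft2(log_img) * H))
    return np.expm1(filtered)

# Toy input: a checkerboard "reflectance" under a left-to-right
# "illumination" gradient; the filter compresses the gradient.
yy, xx = np.mgrid[0:64, 0:64]
reflectance = 0.25 + 0.5 * (((xx // 8) + (yy // 8)) % 2)
illumination = np.linspace(1.0, 0.2, 64)[None, :]
result = homomorphic_filter(reflectance * illumination)
#+end_src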
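
A sketch of the thresholded-derivative idea behind Retinex described
above: in the log image, small gradients are attributed to smoothly
varying illumination and discarded, large gradients are kept as
reflectance edges, and a log-reflectance estimate is reintegrated from
the kept gradients. This is a 2-D least-squares simplification (an
FFT-based Poisson solve with periodic boundaries), not Land's original
path-based algorithm, and the threshold is a made-up value.

#+begin_src python
import numpy as np

def retinex_reflectance(image, threshold=0.1):
    """Estimate log-reflectance by keeping only large log-image gradients."""
    log_img = np.log(image + 1e-6)

    # Forward differences (periodic, so the FFT solver below is exact).
    gx = np.roll(log_img, -1, axis=1) - log_img
    gy = np.roll(log_img, -1, axis=0) - log_img
    gx[np.abs(gx) < threshold] = 0.0   # small gradients -> illumination, drop
    gy[np.abs(gy) < threshold] = 0.0

    # Divergence of the thresholded gradient field.
    div = (gx - np.roll(gx, 1, axis=1)) + (gy - np.roll(gy, 1, axis=0))

    # Solve laplacian(r) = div in the Fourier domain.
    rows, cols = image.shape
    wy = 2.0 * np.pi * np.fft.fftfreq(rows)[:, None]
    wx = 2.0 * np.pi * np.fft.fftfreq(cols)[None, :]
    denom = (2.0 * np.cos(wx) - 2.0) + (2.0 * np.cos(wy) - 2.0)
    denom[0, 0] = 1.0                  # avoid dividing by zero at DC
    r_hat = np.fft.fft2(div) / denom
    r_hat[0, 0] = 0.0                  # the mean log-reflectance is arbitrary
    return np.real(np.fft.ifft2(r_hat))
#+end_src

The periodic boundary is purely for convenience; a real implementation
would handle boundaries and choose the threshold more carefully.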
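
A toy rendering of the painter / sculptor / gaffer idea, with entirely
made-up costs: each specialist proposes a way of producing an observed
image feature, each proposal has a price, and the chosen explanation
is the one with the lowest cost. This only illustrates the
cost-function framing, not Adelson's actual model.

#+begin_src python
# Made-up costs for explaining one image feature (a dark band on a surface).
proposals = {
    "painter":  {"explanation": "paint a dark stripe on the surface", "cost": 4.0},
    "sculptor": {"explanation": "carve a groove that shades itself",  "cost": 6.0},
    "gaffer":   {"explanation": "cast a shadow with an extra light",  "cost": 3.0},
}

def cheapest_explanation(proposals):
    """Pick the specialist whose account of the image costs the least."""
    return min(proposals.items(), key=lambda kv: kv[1]["cost"])

who, proposal = cheapest_explanation(proposals)
print(who, "->", proposal["explanation"], "(cost:", proposal["cost"], ")")
#+end_src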
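
A sketch to make the BRDF note from Oct 4 concrete: a BRDF maps an
incoming and an outgoing direction to a reflectance value, and the
Lambertian model is the special case where that value is the constant
albedo / pi. The glossy lobe is included only as a contrasting,
view-dependent toy example; its exponent and the test vectors are made
up.

#+begin_src python
import numpy as np

def lambertian_brdf(w_in, w_out, albedo=0.5):
    """Lambertian BRDF: a constant, independent of both directions."""
    return albedo / np.pi

def glossy_toy_brdf(w_in, w_out, normal, albedo=0.5, shininess=20.0):
    """A toy glossy BRDF for contrast: it depends on the outgoing
    direction, which is exactly what makes a material non-Lambertian."""
    n = normal / np.linalg.norm(normal)
    wi = w_in / np.linalg.norm(w_in)
    wo = w_out / np.linalg.norm(w_out)
    mirror = 2.0 * np.dot(n, wi) * n - wi        # mirror reflection of w_in
    lobe = max(0.0, float(np.dot(mirror, wo))) ** shininess
    return albedo / np.pi + lobe

# The BRDF signature: (incoming direction, outgoing direction) -> reflectance.
wi = np.array([0.0, 1.0, 1.0])
wo = np.array([0.0, -1.0, 1.0])
n = np.array([0.0, 0.0, 1.0])
print(lambertian_brdf(wi, wo), glossy_toy_brdf(wi, wo, n))
#+end_src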