#+title: Notes for "Special Topics in Computer Vision"
#+author: Robert McIntyre
#+email: rlm@mit.edu
#+description:
#+keywords:
#+SETUPFILE: ../../aurellem/org/setup.org
#+INCLUDE: ../../aurellem/org/level-0.org
#+babel: :mkdirp yes :noweb yes :exports both

* Fri Sep 27 2013

Lambertian surfaces are a special type of matte surface. They reflect
light equally in all directions, so they have only one parameter: the
fraction of incident energy that is absorbed versus re-emitted (the
albedo).

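A minimal sketch of what that means in practice (my own code, plain
NumPy, names are mine): the reflected intensity depends only on the
albedo and the angle between the surface normal and the light, never
on the viewing direction.

#+begin_src python
  import numpy as np

  def lambertian_shade(albedo, normal, light_dir):
      """Reflected intensity of a Lambertian surface: albedo * max(0, n.l).
      View-independent -- the single free parameter is the albedo."""
      n = normal / np.linalg.norm(normal)
      l = light_dir / np.linalg.norm(light_dir)
      return albedo * max(0.0, float(np.dot(n, l)))

  # e.g. a 50%-reflective surface lit 45 degrees off the normal:
  print(lambertian_shade(0.5, np.array([0.0, 0.0, 1.0]),
                         np.array([0.0, 1.0, 1.0])))  # ~0.354
#+end_src
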
#+caption: Adelson's checker-shadow illusion.
[[../images/adelson-checkerboard.jpg]]

Look into Helmholtz's stuff, it might be interesting. It was the
foundation of both vision and audition research. He seems to have
taken a sort of Bayesian approach to inferring how vision/audition
work.

- Homomorphic filtering :: Oppenheim, Schafer, Stockham, 1968. Also
  look at Stockham, 1972. (Rough sketch of the idea below.)

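As I understand it (my own sketch, not code from the papers): take the
log of the image so that illumination and reflectance become additive,
attenuate the low frequencies where the slowly varying illumination
lives, then exponentiate back. The cutoff and gains below are made-up
knobs.

#+begin_src python
  import numpy as np

  def homomorphic_filter(image, cutoff=0.05, low_gain=0.3, high_gain=1.0):
      """Homomorphic filtering sketch: log -> high-emphasis filter -> exp.
      In log space image = log(illumination) + log(reflectance), and
      illumination is assumed to be slowly varying, so shrinking the low
      frequencies mostly removes it."""
      log_img = np.log1p(image.astype(float))
      spectrum = np.fft.fft2(log_img)
      fy = np.fft.fftfreq(image.shape[0])[:, None]
      fx = np.fft.fftfreq(image.shape[1])[None, :]
      radius = np.sqrt(fx ** 2 + fy ** 2)
      # Gaussian high-emphasis filter: low_gain at DC, high_gain far away.
      gain = low_gain + (high_gain - low_gain) * (1 - np.exp(-(radius / cutoff) ** 2))
      return np.expm1(np.real(np.fft.ifft2(spectrum * gain)))
#+end_src
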
Edwin Land was Adelson's hero back in the day. He needed to produce a
color photo for the Polaroid camera. In order to process film for
automatic development, he had to find a good approximation to the
illumination/reflectance decomposition that humans do, which he
called Retinex.

The Cornsweet square-wave grating is cool.

- Retinex :: uses derivatives to find illumination: small, slowly
  varying gradients get attributed to illumination, while sharp
  changes get attributed to reflectance. Sort of implicitly deals
  with edges, etc. Can't deal with non-Lambertian objects. (Rough
  1-D sketch below.)


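A 1-D toy version of that gradient-thresholding idea (my own sketch;
the real algorithm works on paths and 2-D images and reintegrates
more carefully, and the threshold here is an arbitrary knob):

#+begin_src python
  import numpy as np

  def retinex_1d(signal, threshold=0.1):
      """1-D Retinex-style decomposition sketch: in log space, small
      derivatives get blamed on illumination and large ones on
      reflectance; re-integrating the large ones recovers log
      reflectance up to a constant."""
      log_s = np.log(np.asarray(signal, dtype=float))
      d = np.diff(log_s)
      reflectance_d = np.where(np.abs(d) > threshold, d, 0.0)
      log_reflectance = np.concatenate([[0.0], np.cumsum(reflectance_d)])
      log_illumination = log_s - log_reflectance
      return np.exp(log_reflectance), np.exp(log_illumination)
#+end_src
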
Adelson introduces the problem as an "inverse" problem, where you try
to "undo" the projection of the 3-D world onto your retina.

On the functional view of vision: "What it takes" is to build a model
of the world in your head. The bare minimum to get success in life is
to have a model of the world. Even at the level of a single cell, I
think you still benefit from models.

Spatial propagation is ABSOLUTELY required to separate embossed stuff
from "painted" stuff. Edges, likewise, MUST have spatial context to
disambiguate. The filters we use to deal with edges must have larger
spatial context to work, and the spatial extent of this context must
be the ENTIRE visual field in some cases!

------------------------------------------------------------

** Illumination, shape, reflectance all at once

What if we tried to infer everything together? Some images are so
ambiguous that resolving them requires propagating information among
all three qualities.

The brain has a competing painter, sculptor, and gaffer, each of
which tries to "build" the things in the world. There is a cost to
everything, such as paints, lights, and material, and then you try to
optimize some cost function using these primitives. (One way to write
this down is sketched below.)


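One way to write the workshop metaphor down (my own notation, not
Adelson's; the individual cost terms are hypothetical): find the
reflectance, illumination, and shape that explain the image at the
least total cost.

\begin{equation*}
\min_{R,\;L,\;S}\; C_{\text{paint}}(R) + C_{\text{light}}(L) + C_{\text{material}}(S)
\quad \text{subject to} \quad I = \operatorname{render}(R, L, S)
\end{equation*}
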
Horn, technical report, 1970

* Fri Oct 4 2013

Student report. Talked about how you capture the appearance of a
grape. It's actually quite complicated, involving gloss, spatial
context, etc.

TurboSquid seems interesting. They sell 3D models of stuff.

BRDF -- bidirectional reflectance distribution function. This shows
how a surface will behave given lighting conditions: how much light
comes out in each direction for light coming in from each direction.
Lambertian is a simple parameterized instantiation of this. (Sketch
below.)

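A sketch of what that abstraction looks like as code (my own names,
nothing from the lecture): a BRDF maps an (incoming, outgoing)
direction pair to a reflectance value, and the Lambertian case is
just a constant, albedo / pi.

#+begin_src python
  import numpy as np

  def lambertian_brdf(albedo):
      """The simplest BRDF: a constant, independent of the incoming and
      outgoing directions -- that is what 'Lambertian' means."""
      def brdf(w_in, w_out, normal):
          return albedo / np.pi
      return brdf

  def reflected_radiance(brdf, w_in, w_out, normal, incident_radiance):
      """One term of the reflection equation for a single light direction:
      L_out = f(w_in, w_out) * L_in * cos(theta_in)."""
      n = normal / np.linalg.norm(normal)
      wi = w_in / np.linalg.norm(w_in)
      cos_theta = max(0.0, float(np.dot(n, wi)))
      return brdf(wi, w_out, n) * incident_radiance * cos_theta
#+end_src
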
BSSRDF -- bidirectional subsurface scattering reflectance
distribution function (SS = subsurface); the 3D analogue of the BRDF.

What would the 3D analogue of texture be?

(a : b : c) as (a + b + c : b + c : c) <-- this is just the golden
ratio again!

CUReT BTF database (BTF = bidirectional texture function). lol,
what's this?

This student went and gathered 1000 images of different large objects
made of different materials. The images were gathered from Flickr.

Then she gave another talk, presenting someone else's work. It's
about assigning materials to objects and then rendering them. The
choice of materials is determined by some sort of expert system?

They have made a neat-looking interface for human entry of texture
labels for objects in scenes. The important elements were manual
labels, dynamic display of the current selection, and undo.

There are papers about Mechanical Turk engineering.

lol http://opensurfaces.cs.cornell.edu/

CUBAM is interesting: "The Multidimensional Wisdom of Crowds," Neural
Information Processing Systems, 2010. Some sort of voting scheme.

The point of this is apparently to do some kitchen makeover thing.
You would take a picture of your kitchen, or you would look for a
kitchen that looks like yours, and then you would be able to
investigate different textures for your own kitchen.

Apparently "LabelMe" has never been appropriately crowdsourced. Turns
out that you get better work if you don't use idiots to do the work,
lol.