annotate MIT-media-projects.org @ 348:5405f369f4a0

added summary of each sense for the joint.
author Robert McIntyre <rlm@mit.edu>
date Mon, 23 Jul 2012 03:51:13 -0500
parents c264ebf683b4
children
* Machine Learning and Pattern Recognition with Multiple Modalities
Hyungil Ahn and Rosalind W. Picard

This project develops new theory and algorithms to enable
computers to make rapid and accurate inferences from multiple
modes of data, such as determining a person's affective state
from multiple sensors--video, mouse behavior, chair pressure
patterns, typed selections, or physiology. Recent efforts
focus on understanding the level of a person's attention,
which is useful, for example, for deciding when to interrupt.
Our approach is Bayesian: we formulate probabilistic models on
the basis of domain knowledge and training data, and then
perform inference according to the rules of probability
theory. This kind of sensor fusion is especially challenging
because of sensor channel drop-out, different kinds of noise
in different channels, dependence between channels, scarce and
sometimes inaccurate labels, and patterns to detect that are
inherently time-varying. We have constructed a variety of new
algorithms for solving these problems and demonstrated their
performance gains over other state-of-the-art methods.
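The Bayesian fusion idea above can be sketched in miniature. The sketch below assumes a naive-Bayes model (channels conditionally independent given the state) with made-up Gaussian channel likelihoods and invented channel names; the project's actual models are richer. It shows the one property the text emphasizes: a dropped-out sensor channel is simply omitted from the likelihood product, so inference degrades gracefully rather than failing.

```python
import math

# Hypothetical Gaussian likelihood parameters (mean, std) per channel
# and per latent state. All numbers are illustrative, not from the project.
CHANNEL_MODELS = {
    "chair_pressure": {"attentive": (0.2, 0.1), "distracted": (0.6, 0.2)},
    "mouse_activity": {"attentive": (0.7, 0.2), "distracted": (0.3, 0.2)},
}
PRIOR = {"attentive": 0.5, "distracted": 0.5}

def gaussian_pdf(x, mean, std):
    """Density of a normal distribution at x."""
    return math.exp(-((x - mean) ** 2) / (2 * std ** 2)) / (std * math.sqrt(2 * math.pi))

def posterior(observations):
    """Naive-Bayes fusion: multiply the prior by each observed channel's
    likelihood; a channel that dropped out (value None) is skipped."""
    scores = {}
    for state, prior in PRIOR.items():
        p = prior
        for channel, value in observations.items():
            if value is None:
                continue  # sensor drop-out: omit this channel
            mean, std = CHANNEL_MODELS[channel][state]
            p *= gaussian_pdf(value, mean, std)
        scores[state] = p
    total = sum(scores.values())
    return {state: p / total for state, p in scores.items()}

# The mouse channel has dropped out; inference uses chair pressure alone.
print(posterior({"chair_pressure": 0.55, "mouse_activity": None}))
```

Modeling dependence between channels, per-channel noise models, and time-varying patterns (the harder problems the text names) would replace the independence assumption with a structured graphical model, but the skip-missing-channels pattern carries over.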
http://affect.media.mit.edu/projectpages/multimodal/