changeset 334:c264ebf683b4

cleanup.
author Robert McIntyre <rlm@mit.edu>
date Fri, 20 Jul 2012 11:22:21 -0500
parents f4ef73370da1
children 5dcd44576cbc
files MIT-media-projects.org
diffstat 1 files changed, 22 insertions(+), 19 deletions(-)
     1.1 --- a/MIT-media-projects.org	Fri Jul 20 11:21:04 2012 -0500
     1.2 +++ b/MIT-media-projects.org	Fri Jul 20 11:22:21 2012 -0500
     1.3 @@ -1,21 +1,24 @@
     1.4 -*Machine Learning and Pattern Recognition with Multiple Modalities
     1.5 -Hyungil Ahn and Rosalind W. Picard
     1.6 +*Machine Learning and Pattern Recognition with Multiple
     1.7 +Modalities Hyungil Ahn and Rosalind W. Picard
     1.8  
     1.9 -This project develops new theory and algorithms to enable computers to
    1.10 -make rapid and accurate inferences from multiple modes of data, such
    1.11 -as determining a person's affective state from multiple sensors—video,
    1.12 -mouse behavior, chair pressure patterns, typed selections, or
    1.13 -physiology. Recent efforts focus on understanding the level of a
    1.14 -person's attention, useful for things such as determining when to
    1.15 -interrupt. Our approach is Bayesian: formulating probabilistic models
    1.16 -on the basis of domain knowledge and training data, and then
    1.17 -performing inference according to the rules of probability
    1.18 -theory. This type of sensor fusion work is especially challenging due
    1.19 -to problems of sensor channel drop-out, different kinds of noise in
    1.20 -different channels, dependence between channels, scarce and sometimes
    1.21 -inaccurate labels, and patterns to detect that are inherently
    1.22 -time-varying. We have constructed a variety of new algorithms for
    1.23 -solving these problems and demonstrated their performance gains over
    1.24 -other state-of-the-art methods.
    1.25 +This project develops new theory and algorithms to enable
    1.26 +computers to make rapid and accurate inferences from
    1.27 +multiple modes of data, such as determining a person's
    1.28 +affective state from multiple sensors--video, mouse behavior,
    1.29 +chair pressure patterns, typed selections, or
    1.30 +physiology. Recent efforts focus on understanding the level
    1.31 +of a person's attention, useful for things such as
    1.32 +determining when to interrupt. Our approach is Bayesian:
    1.33 +formulating probabilistic models on the basis of domain
    1.34 +knowledge and training data, and then performing inference
    1.35 +according to the rules of probability theory. This type of
    1.36 +sensor fusion work is especially challenging due to problems
    1.37 +of sensor channel drop-out, different kinds of noise in
    1.38 +different channels, dependence between channels, scarce and
    1.39 +sometimes inaccurate labels, and patterns to detect that are
    1.40 +inherently time-varying. We have constructed a variety of
    1.41 +new algorithms for solving these problems and demonstrated
    1.42 +their performance gains over other state-of-the-art methods.
    1.43  
    1.44 -http://affect.media.mit.edu/projectpages/multimodal/
    1.45 \ No newline at end of file
    1.46 +http://affect.media.mit.edu/projectpages/multimodal/
    1.47 +
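
The Bayesian sensor-fusion approach summarized in the project text above can be illustrated with a minimal sketch. The code below is not taken from the project and does not reproduce its actual algorithms; it shows a naive-Bayes fusion of independent binary sensor channels, skipping channels that have dropped out. The channel names and likelihood values are invented for illustration.

    # Illustrative sketch only: naive Bayesian fusion of independent sensor
    # channels for a binary state, skipping channels that have dropped out.
    def fuse(prior, likelihoods, observations):
        """Return P(state=1 | observations).

        prior:        P(state = 1) before seeing any sensor data
        likelihoods:  {channel: (P(cue | state=1), P(cue | state=0))}
        observations: {channel: True / False / None}; None means drop-out
        """
        p1, p0 = prior, 1.0 - prior
        for channel, obs in observations.items():
            if obs is None:            # sensor drop-out: contributes no evidence
                continue
            l1, l0 = likelihoods[channel]
            if not obs:                # use likelihood of a negative reading
                l1, l0 = 1.0 - l1, 1.0 - l0
            p1, p0 = p1 * l1, p0 * l0
        return p1 / (p1 + p0)

    # Hypothetical example: infer attentiveness from two of three channels,
    # with the chair-pressure channel dropped out.
    likelihoods = {
        "video":    (0.8, 0.3),
        "mouse":    (0.7, 0.4),
        "pressure": (0.6, 0.5),
    }
    obs = {"video": True, "mouse": True, "pressure": None}
    print(fuse(0.5, likelihoods, obs))  # posterior attentiveness, about 0.82 here

The naive independence assumption is what the project text identifies as inadequate (dependence between channels, channel-specific noise, time-varying patterns), which is why the actual work develops richer probabilistic models; this sketch only shows the basic inference-by-probability-rules step.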