Machine Learning and Pattern Recognition with Multiple Modalities
Hyungil Ahn and Rosalind W. Picard

This project develops new theory and algorithms to enable computers to make rapid and accurate inferences from multiple modes of data, such as determining a person's affective state from multiple sensors: video, mouse behavior, chair pressure patterns, typed selections, or physiology. Recent efforts focus on understanding the level of a person's attention, useful for tasks such as determining when to interrupt. Our approach is Bayesian: formulating probabilistic models on the basis of domain knowledge and training data, and then performing inference according to the rules of probability theory. This type of sensor fusion work is especially challenging due to problems of sensor channel drop-out, different kinds of noise in different channels, dependence between channels, scarce and sometimes inaccurate labels, and patterns to detect that are inherently time-varying. We have constructed a variety of new algorithms for solving these problems and demonstrated their performance gains over other state-of-the-art methods.

http://affect.media.mit.edu/projectpages/multimodal/
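To make the Bayesian fusion idea concrete, the following is a minimal illustrative sketch, not the project's actual algorithm: a naive-Bayes-style combination of per-channel likelihoods over discrete attention states, where a dropped-out channel is simply omitted from the product. The state names and likelihood values are hypothetical, and the conditional-independence assumption made here is exactly the simplification that the channel-dependence problem mentioned above complicates.

```python
import numpy as np

# Hypothetical two-state attention model (values invented for illustration).
STATES = ["low_attention", "high_attention"]

def fuse(prior, channel_likelihoods):
    """Naive-Bayes fusion: multiply the prior by each available channel's
    likelihood vector P(observation | state), skipping channels that
    dropped out (None), then normalize to get the posterior."""
    posterior = np.array(prior, dtype=float)
    for lik in channel_likelihoods:
        if lik is None:  # sensor channel drop-out: omit this channel
            continue
        posterior *= np.array(lik, dtype=float)
    return posterior / posterior.sum()

# Example: video strongly suggests high attention, chair pressure is
# ambiguous, and the physiology channel has dropped out.
prior = [0.5, 0.5]
video = [0.2, 0.8]
chair = [0.5, 0.5]
physio = None

posterior = fuse(prior, [video, chair, physio])
print(posterior)  # posterior over [low_attention, high_attention]
```

Handling drop-out by omission is equivalent to marginalizing out the missing observation under the independence assumption; modeling dependence between channels, noisy labels, and time-varying patterns requires the richer probabilistic models this project develops.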