Machine Learning and Pattern Recognition with Multiple Modalities
Hyungil Ahn and Rosalind W. Picard

This project develops new theory and algorithms to enable computers to
make rapid and accurate inferences from multiple modes of data, such
as determining a person's affective state from multiple sensors—video,
mouse behavior, chair pressure patterns, typed selections, or
physiology. Recent efforts focus on understanding the level of a
person's attention, useful for determining, for example, when to
interrupt. Our approach is Bayesian: formulating probabilistic models
on the basis of domain knowledge and training data, and then
performing inference according to the rules of probability theory.
This type of sensor-fusion work is especially challenging due to
problems of sensor-channel drop-out, different kinds of noise in
different channels, dependence between channels, scarce and sometimes
inaccurate labels, and patterns to detect that are inherently
time-varying. We have constructed a variety of new algorithms for
solving these problems and demonstrated their performance gains over
other state-of-the-art methods.

http://affect.media.mit.edu/projectpages/multimodal/