The first authoritative resource on the dominant paradigm for new computer
interfaces: user input involving new media (speech, multi-touch, hand and body
gestures, facial expressions, writing) embedded in multimodal-multisensor interfaces
that often include biosignals. This second volume begins with multimodal signal
processing, architectures, and machine learning. It covers deep learning approaches
for processing multisensorial and multimodal user data and interaction, as well as
for achieving context-sensitivity. It focuses on how these interfaces are most likely
to advance human performance during the next decade.