During the last decade, cell phones with multimodal interfaces based on combined new media have become the dominant computer interface worldwide. Multimodal interfaces support mobility and expand the expressive power of human input to computers. They have shifted the fulcrum of human-computer interaction much closer to the human. This book explains the foundations of human-centered multimodal interaction and interface design, based on the cognitive and neurosciences, as well as the major benefits of multimodal interfaces for human cognition and performance. It describes the data-intensive methodologies used to envision, prototype, and evaluate new multimodal interfaces. From a system development viewpoint, this book outlines major approaches to multimodal signal processing, fusion, architectures, and techniques for robustly interpreting users' meaning. Multimodal interfaces have been commercialized extensively for field and mobile applications during the last decade. Research is also growing rapidly in areas like multimodal data analytics, affect recognition, accessible interfaces, embedded and robotic interfaces, machine learning and new hybrid processing approaches, and similar topics. The expansion of multimodal interfaces is part of the long-term evolution of more expressively powerful input to computers, a trend that will substantially improve support for human cognition and performance.
Title: The Paradigm Shift to Multimodality in Contemporary Computer Interfaces
Author: Sharon Oviatt, Philip R. Cohen
Publisher: Morgan & Claypool Publishers, 2015-04-01