Integrated multimodal human-computer interface and augmented reality for interactive display applications
28 August 2000
Marius S. Vassiliou, Venkataraman Sundareswaran, S. Chen, Reinhold Behringer, Clement K. Tam, M. Chan, Phil T. Bangayan, Joshua H. McGee
Abstract
We describe new systems for improved integrated multimodal human-computer interaction and augmented reality for a diverse array of applications, including future advanced cockpits, tactical operations centers, and others. We have developed an integrated display system featuring: speech recognition of multiple concurrent users equipped with both standard air-coupled microphones and novel throat-coupled sensors (developed at Army Research Labs for increased noise immunity); lip reading for improving speech recognition accuracy in noisy environments; three-dimensional spatialized audio for improved display of warnings, alerts, and other information; wireless, coordinated handheld-PC control of a large display; real-time display of data and inferences from wireless integrated networked sensors with on-board signal processing and discrimination; gesture control with disambiguated point-and-speak capability; head- and eye-tracking coupled with speech recognition for 'look-and-speak' interaction; and integrated tetherless augmented reality on a wearable computer. The various interaction modalities (speech recognition, 3D audio, eye tracking, etc.) are implemented as 'modality servers' in an Internet-based client-server architecture. Each modality server encapsulates and exposes commercial and research software packages, presenting a socket network interface that is abstracted to a high-level interface, minimizing both vendor dependencies and required changes on the client side as the server's technology improves.
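The modality-server pattern described above can be sketched as follows. This is an illustrative sketch only, not the paper's actual implementation: the command name (`RECOGNIZE`), the line protocol, the port handling, and the stub recognizer are all assumptions; a real modality server would wrap a commercial or research recognition package behind the same kind of socket interface.

```python
import socket
import threading

class SpeechModalityServer:
    """Encapsulates a recognition engine behind a socket interface, so
    clients depend on the protocol rather than on any vendor package.
    The _recognize stub stands in for the underlying recognizer."""

    def __init__(self, host="127.0.0.1", port=0):
        # port=0 lets the OS pick a free port; clients read self.port.
        self._sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        self._sock.bind((host, port))
        self._sock.listen(5)
        self.port = self._sock.getsockname()[1]

    def _recognize(self, audio_ref: str) -> str:
        # Stand-in for the wrapped recognition package (assumption).
        return f"TRANSCRIPT:{audio_ref}"

    def serve_forever(self):
        # One request/reply per connection, in a simple text protocol.
        while True:
            conn, _ = self._sock.accept()
            with conn:
                request = conn.recv(1024).decode().strip()
                if request.startswith("RECOGNIZE "):
                    reply = self._recognize(request.split(" ", 1)[1])
                else:
                    reply = "ERROR:unknown-command"
                conn.sendall(reply.encode())

class ModalityClient:
    """High-level client-side abstraction over the socket protocol:
    callers invoke recognize() and never see the wire format."""

    def __init__(self, host, port):
        self.host, self.port = host, port

    def recognize(self, audio_ref: str) -> str:
        with socket.create_connection((self.host, self.port)) as s:
            s.sendall(f"RECOGNIZE {audio_ref}".encode())
            return s.recv(1024).decode()

# Run the server in a background thread and query it.
server = SpeechModalityServer()
threading.Thread(target=server.serve_forever, daemon=True).start()
client = ModalityClient("127.0.0.1", server.port)
print(client.recognize("utterance-01"))  # -> TRANSCRIPT:utterance-01
```

The point of the indirection is the one the abstract makes: if the wrapped recognizer is swapped for a better one, only `_recognize` changes, while every client keeps calling the same high-level interface.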
© (2000) COPYRIGHT Society of Photo-Optical Instrumentation Engineers (SPIE). Downloading of the abstract is permitted for personal use only.
Marius S. Vassiliou, Venkataraman Sundareswaran, S. Chen, Reinhold Behringer, Clement K. Tam, M. Chan, Phil T. Bangayan, and Joshua H. McGee "Integrated multimodal human-computer interface and augmented reality for interactive display applications", Proc. SPIE 4022, Cockpit Displays VII: Displays for Defense Applications, (28 August 2000); https://doi.org/10.1117/12.397779
CITATIONS
Cited by 4 scholarly publications.
KEYWORDS
Speech recognition
Sensors
Augmented reality
Multimedia
Human-machine interfaces
Laser induced plasma spectroscopy
Sensor networks