KEYWORDS: Acoustics, 3D volumetric displays, Visualization, Particles, Particle systems, Holography, Plasmonics, 3D visualizations
Current display approaches, such as VR, allow us to get a glimpse of multimodal 3D experiences, but users must wear headsets and other devices to trick our brains into believing that the content we see, hear or feel is real. Light-field, holographic or volumetric displays avoid headsets, but they constrain the user's ability to interact with them (e.g. content is out of reach of the user's hands, users are confined to specific locations) and, most importantly, still cannot simultaneously deliver sound and touch. In this talk, we will present the Multimodal Acoustic Trapping Display (MATD): a mid-air volumetric display that can simultaneously deliver visual, tactile and audio content using phased arrays of ultrasound transducers. The MATD uses ultrasound to trap, quickly move and colour a small particle in mid-air, creating coloured volumetric shapes visible to the naked eye. Using the pressure delivered by the ultrasound waves, the MATD can also create points of high pressure that our bare hands can feel, and induce air vibrations that create audible sound. The system demonstrates particle speeds of up to 8.75 m/s and 3.75 m/s in the vertical and horizontal directions, respectively. In addition, our technique offers opportunities for non-contact, high-speed manipulation of matter, with applications in computational fabrication and biomedicine.
A multitouch screen is an obvious choice for a holographic optical tweezers interface, allowing multiple optical traps to be
controlled in real time. In this paper we describe the user interface used for our original multitouch system and demonstrate
that, for the user tasks performed, the multitouch interface outperforms a simple point-and-click interface.
Block Truncation Coding is one of the oldest image compression algorithms, its main attractions being its simple underlying concepts and ease of implementation. In this paper we present a new Predictive Absolute Moment Block Truncation Coding scheme that improves on the performance of existing Block Truncation Coding schemes. The proposed scheme selectively predicts the reconstruction values of a block from the corresponding values in neighbouring blocks, and predicts the bitplane from the bitplanes of the corresponding blocks in other colour components.
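To make the baseline concrete, here is a minimal sketch of classic Absolute Moment BTC (AMBTC) for a single greyscale block. The function names are illustrative, and the predictive steps described in the abstract (predicting reconstruction values and bitplanes from neighbouring blocks and colour components) are not shown; only the standard AMBTC encode/decode cycle is.

```python
def ambtc_encode(block):
    """Encode a flat list of pixel values into (low, high, bitplane).

    Pixels at or above the block mean are marked 1 in the bitplane and
    reconstructed with the mean of the high group; the rest are marked 0
    and reconstructed with the mean of the low group.
    """
    mean = sum(block) / len(block)
    bitplane = [1 if p >= mean else 0 for p in block]
    highs = [p for p, b in zip(block, bitplane) if b]
    lows = [p for p, b in zip(block, bitplane) if not b]
    high = round(sum(highs) / len(highs)) if highs else 0
    low = round(sum(lows) / len(lows)) if lows else 0
    return low, high, bitplane


def ambtc_decode(low, high, bitplane):
    """Reconstruct the block from the two values and the bitplane."""
    return [high if b else low for b in bitplane]
```

A 4x4 block thus compresses to two 8-bit values plus 16 bits of bitplane (2 bits per pixel); the predictive scheme in the paper reduces this further by reusing information from already-coded neighbouring blocks.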