OBJECTIVE: Analyzing trends in temporal data becomes a critical task as the amount of data grows. Motion techniques (animation) for scatterplots make it possible to represent large amounts of data in a single view and to identify trends and highlight changes. These techniques have recently become popular and, to an extent, successful for presenting data. However, compared with static visualization methods, scatterplot animations may be hard to perceive when the motions are complex. METHODS: This paper studies the effectiveness of interactive scatterplot animation as a visualization technique for the analysis of large datasets. We compared interactive animations with non-interactive (passive) animations in which participants had no control over playback. Both conditions were evaluated for specific as well as general comprehension of the data. RESULTS: While interactive animation was more effective for retrieving specific information, it led to misunderstandings in overall comprehension because it fragments the animation. In general, participants felt that interactivity gave them more confidence and found it more enjoyable and exciting for data exploration. CONCLUSION: Interactive animation of trend visualizations proved to be an effective technique for exploratory data analysis and significantly more accurate than animation alone. With these findings we aim to support the use of interactivity to enhance data exploration in animated visualizations.
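The abstract evaluates interactive scatterplot animation rather than describing an implementation, but a minimal sketch of the general idea in Python with matplotlib might look like the following. The synthetic random-walk data and the slider-based scrubbing control are illustrative assumptions, not the authors' experimental setup:

```python
import numpy as np
import matplotlib.pyplot as plt
from matplotlib.widgets import Slider

# Synthetic temporal data: 50 points taking a random walk over 100 time steps.
rng = np.random.default_rng(0)
n_points, n_frames = 50, 100
xy = np.cumsum(rng.normal(0, 0.05, (n_frames, n_points, 2)), axis=0)

fig, ax = plt.subplots()
fig.subplots_adjust(bottom=0.2)  # leave room for the slider
scat = ax.scatter(xy[0, :, 0], xy[0, :, 1])
ax.set_xlim(xy[..., 0].min(), xy[..., 0].max())
ax.set_ylim(xy[..., 1].min(), xy[..., 1].max())

# A time slider gives the viewer control over playback position -- the
# kind of control the "interactive" condition provides, in contrast to
# a passive animation that plays through on its own.
ax_t = fig.add_axes([0.2, 0.05, 0.6, 0.03])
t_slider = Slider(ax_t, "time", 0, n_frames - 1, valinit=0, valstep=1)

def on_change(val):
    scat.set_offsets(xy[int(val)])  # move every point to its position at step t
    fig.canvas.draw_idle()

t_slider.on_changed(on_change)
plt.show()
```

Scrubbing back and forth like this supports the specific lookups the study found interactivity helps with, while also illustrating how fragmented viewing can break up the overall motion trend.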
In this keynote I will present some of the work from our virtual reality laboratory at the Max Planck Institute for Biological Cybernetics in Tübingen. Our research philosophy for understanding the brain is to study human information processing in experimental settings as close as possible to our natural environment. Using computer graphics and virtual reality technology, we can now study perception not only in a well-controlled natural setting but also in a closed perception-action loop, in which the observer's actions change the input to the senses. In psychophysical studies we have shown that humans integrate multimodal sensory information in a statistically optimal way, weighting each cue according to its reliability. A better understanding of multimodal sensor fusion will allow us to build new virtual reality platforms in which the design effort for visual, auditory, haptic, vestibular and proprioceptive simulation is guided by the weight of each cue in multimodal sensor fusion.
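The "statistically optimal" integration described here is standardly modeled as maximum-likelihood cue combination, in which each cue is weighted by its inverse variance. A minimal sketch of that model (the function name and example values are illustrative assumptions):

```python
import numpy as np

def fuse_cues(estimates, variances):
    """Maximum-likelihood fusion of independent Gaussian cues.

    Each cue i gets weight w_i = (1/var_i) / sum_j (1/var_j), so more
    reliable (lower-variance) cues dominate the combined estimate.
    """
    estimates = np.asarray(estimates, dtype=float)
    reliabilities = 1.0 / np.asarray(variances, dtype=float)
    weights = reliabilities / reliabilities.sum()
    fused = np.dot(weights, estimates)
    # The fused variance is never larger than any single cue's variance.
    fused_var = 1.0 / reliabilities.sum()
    return fused, fused_var

# Hypothetical example: a reliable visual cue and a noisier haptic cue
# to the same object property.
est, var = fuse_cues([10.0, 12.0], [1.0, 4.0])
print(est, var)  # 10.4 0.8 -- the estimate leans toward the visual cue
```

In a VR platform, such weights suggest where simulation fidelity pays off: channels that carry little weight in the fused percept can be rendered more coarsely.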