Body image/body schema (BIBS) is within the larger realm of embodied cognition. Its interdisciplinary literature can
inspire Virtual Reality (VR) researchers and designers to develop novel ideas and provide them with approaches to
human perception and experience. In this paper, we introduce six fundamental ideas for designing interactions in VR, derived from BIBS literature that demonstrates how the mind is embodied. We discuss our own research, ranging from two mature works to a prototype, to support explorations of VR interaction design from a BIBS approach. Based on our
experiences, we argue that incorporating ideas of embodiment into design practices requires a shift in the perspective or
understanding of the human body, perception and experiences, all of which affect interaction design in unique ways. The
dynamic, interactive and distributed understanding of cognition guides our approach to interaction design, where the
interrelatedness and plasticity of BIBS play a crucial role.
To enhance the analysis of the synthetic health data from the IEEE VAST Challenge 2010, we introduce an interactive visual analytics tool called FilooT, designed as a part of the Interactive Multi-genomic Analysis System (IMAS) project. In this paper, we describe the different interactive views of FilooT: the Tabular View for exploring and comparing genetic sequences, the Matrix View for sorting sequences according to the values of different characteristics, the P-value View for finding the most important mutations across a family of sequences, the Graph View for finding related sequences, and the Group View for grouping them for further investigation. We followed the Nested Process Model framework throughout the design process and the evaluation. To understand the tool's design capabilities for target domain analysts, we conducted a scenario-based user experience study followed by an informal interview. The findings indicated how analysts employ each of the visualization and interaction designs in their bioinformatics task-analysis process. A critical analysis of the results inspired design-informing suggestions.
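The abstract above does not say how the P-value View computes significance. As a purely illustrative sketch (Fisher's exact test and the column-wise setup are assumptions, not details from the FilooT paper), a per-position mutation p-value across two groups of aligned sequences could be computed like this:

```python
# Hypothetical sketch: score one alignment column with Fisher's exact test.
# Assumes two groups of aligned sequences (e.g., a sequence family vs. a
# reference set); none of these names come from the FilooT paper itself.
from scipy.stats import fisher_exact

def column_pvalue(group_a, group_b, position, reference_residue):
    """P-value that `position` is mutated away from `reference_residue`
    more often in group_a than in group_b."""
    def counts(group):
        residues = [seq[position] for seq in group]
        mutated = sum(1 for r in residues if r != reference_residue)
        return mutated, len(residues) - mutated

    a_mut, a_ref = counts(group_a)
    b_mut, b_ref = counts(group_b)
    _, p = fisher_exact([[a_mut, a_ref], [b_mut, b_ref]])
    return p

# Toy example
family = ["ACGT", "ACTT", "ACTT"]
background = ["ACGT", "ACGT", "ACGT"]
print(column_pvalue(family, background, position=2, reference_residue="G"))
```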
KEYWORDS: Analytical research, Visualization, Visual analytics, Zoom lenses, Data modeling, Data visualization, Data analysis, Data mining, Data integration, Computing systems
Analysts need to keep track of their analytic findings, observations, ideas, and hypotheses throughout the analysis process. While some visual analytics tools support such note-taking needs, these notes are often represented as objects separate from the data and in a workspace separate from the data visualizations. Representing notes the same way as the data and integrating them with data visualizations can enable analysts to build a more cohesive picture of the analytical process. We created a note-taking functionality called CZNotes within the visual analytics tool CZSaw for analyzing unstructured text documents. CZNotes are designed to use the same model as the data and can thus be visualized in CZSaw's existing data views. We conducted a preliminary case study to observe the use of CZNotes and found that CZNotes have the potential to support progressive analysis, to act as a shortcut to the data, and to support the creation of new data relationships.
OBJECTIVE: Effectively analyzing trends in temporal data becomes a critical task when the amount of data is large. Motion techniques (animation) for scatterplots make it possible to represent large amounts of data in a single view and make it easy to identify trends and highlight changes. These techniques have recently become very popular and, to an extent, successful in describing data in presentations. However, compared to static methods of visualization, scatterplot animations may be hard to perceive when the motions are complex. METHODS: This paper studies the effectiveness of interactive scatterplot animation as a visualization technique for the analysis of large data. We compared interactive animations with non-interactive (passive) animations in which participants had no control over the animation. Both conditions were evaluated for specific as well as general comprehension of the data. RESULTS: While interactive animation was more effective for specific information analysis, it led to many misunderstandings in overall comprehension due to the fragmentation of the animation. In general, participants felt that interactivity gave them more confidence and found it more enjoyable and exciting for data exploration. CONCLUSION: Interactive animation of trend visualizations proved to be an effective technique for exploratory data analysis and significantly more accurate than animation alone. With these findings we aim to support the use of interactivity to effectively enhance data exploration in animated visualizations.
KEYWORDS: Visualization, Visual analytics, Receivers, Social networks, Information visualization, Telecommunications, Visibility, Statistical analysis, Data analysis, Data mining
Although the discovery and analysis of communication patterns in large and complex email datasets are difficult tasks,
they can be a valuable source of information. We present EmailTime, a visual analysis tool for email correspondence patterns over time that interactively portrays personal and interpersonal networks using the correspondence in an email dataset. Our approach treats time as the primary variable of interest and plots emails along a timeline. EmailTime helps email dataset explorers interpret archived messages by providing zooming, panning, filtering, and highlighting. To support analysis, it also measures and visualizes histograms, graph centrality, and frequency on the
communication graph that can be induced from the email collection. This paper describes EmailTime's capabilities,
along with a large case study of the Enron email dataset to explore the behaviors of email users in different organizational positions from January 2000 to December 2001. We defined email behavior as the email activity level of people with respect to a series of measured metrics, e.g., numbers of sent and received emails, numbers of email addresses, etc. These metrics were calculated through EmailTime. Results showed specific patterns in the use of email within different organizational positions. We suggest that integrating statistics and visualizations to display information about an email dataset may simplify its evaluation.
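As an illustration of the kind of communication graph and metrics mentioned above, the following sketch induces a weighted directed graph from (sender, recipients) pairs and computes sent/received counts and degree centrality; the message format and the use of NetworkX are assumptions for the example, not implementation details of EmailTime:

```python
# Illustrative only: induce a communication graph from (sender, recipients)
# pairs and compute per-person activity and centrality metrics.
import networkx as nx

messages = [
    ("alice@enron.com", ["bob@enron.com", "carol@enron.com"]),
    ("bob@enron.com",   ["alice@enron.com"]),
    ("carol@enron.com", ["bob@enron.com"]),
]

graph = nx.DiGraph()
sent, received = {}, {}
for sender, recipients in messages:
    sent[sender] = sent.get(sender, 0) + 1
    for recipient in recipients:
        received[recipient] = received.get(recipient, 0) + 1
        # Edge weight counts how many messages flowed sender -> recipient.
        if graph.has_edge(sender, recipient):
            graph[sender][recipient]["weight"] += 1
        else:
            graph.add_edge(sender, recipient, weight=1)

centrality = nx.degree_centrality(graph)
for person in graph.nodes:
    print(person, "sent:", sent.get(person, 0),
          "received:", received.get(person, 0),
          "centrality:", round(centrality[person], 2))
```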
The research program aims to explore and examine the fine balance necessary for maintaining the interplay between
technology and the immersant, including identifying qualities that contribute to creating and maintaining a sense of
"presence" and "immersion" in an immersive virtual reality (IVR) experience. Building upon and extending previous
work, we compare sitting meditation with walking meditation in a virtual environment (VE). The Virtual Meditative
Walk, a new work-in-progress, integrates VR and biofeedback technologies with a self-directed, uni-directional
treadmill. As immersants learn how to meditate while walking, robust, real-time biofeedback technology continuously
measures breathing, skin conductance, and heart rate. The physiological states of the immersant in turn affect the audio and the stereoscopic visual media delivered through shutter glasses. We plan to compare the potential benefits and limitations of this physically active form of meditation against data from a sitting form of meditation. A mixed-methods approach to testing
user outcomes parallels the knowledge bases of the collaborative team: a physician, computer scientists and artists.
KEYWORDS: Visualization, Interfaces, Human-machine interfaces, Data analysis, Digital cameras, Databases, Visual analytics, Electronic imaging, Current controlled current source, Multimedia
Commercial websites offer many items to potential site users. However, most current websites display search results in text lists, or as lists sorted on one or two criteria. Finding the best item in a text list based on multi-priority criteria is an exhausting task, especially for long lists. Visualizing search results and enabling users to perceive the tradeoffs among the results based on multiple priorities may ease this process. To investigate this, two different techniques for displaying and sorting search results are studied in this paper: Text and XY Iconic Visualization. The
goal is to determine which technique for representing search results would be the most efficient one for a website user.
We conducted a user study to compare the usability of the two techniques. The collected data are in the form of participants' task responses, a satisfaction questionnaire, qualitative observations, and participants' comments. According to the results, iconic visualization is better for overview (it gives a good overview in a short amount of time) and for search with more than two criteria, while the text-based technique performs better for displaying details.
This paper introduces an analysis-based zoomable visualization technique for displaying the location of genes across many related species of microbes. The purpose of this visualization is to enable a biologist to examine the layout of genes in the organism of interest with respect to the gene organization of related organisms. During the genomic annotation process, the ability to observe gene organization in common with previously annotated genomes can help a
biologist better confirm the structure and function of newly analyzed microbe DNA sequences. We have developed a visualization and analysis tool that enables the biologist to observe and examine gene organization among genomes, in the context of the primary sequence of interest. This paper describes the visualization and analysis steps, and presents a case study using a number of Rickettsia genomes.
Neuroscience has benefited from an explosion of new experimental techniques; many have only become feasible in the wake of improvements in computing speed and data storage. At the same time, these new computation-intensive techniques have led to a growing gulf between the data and the knowledge extracted from those data. That is, in the neurosciences there is a paucity of effective knowledge management techniques and an accelerating accumulation of
experimental data. The purpose of the project described in the present paper is to create a visualization of the knowledge
base of the neurosciences. At run-time, this 'BrainFrame' project accesses several web-based ontologies and generates a
semantically zoomable representation of any one of many levels of the human nervous system.
In this paper we introduce Musician Map, a web-based interactive tool for visualizing relationships among popular musicians who have released recordings since 1950. Musician Map accepts search terms from the user, and in turn uses these terms to retrieve data from MusicBrainz.org and AudioScrobbler.net, and visualizes the results. Musician Map visualizes relationships of various kinds between music groups and individual musicians, such as band membership, musical collaborations, and linkage to other artists that are generally regarded as being similar in musical style. These
relationships are plotted between artists using a new timeline-based visualization in which a node in a traditional node-link diagram is transformed into a Timeline-Node, allowing the visualization of an evolving entity over time, such as the membership of a band. This allows the user to pursue social trend queries such as "Do Hip-Hop artists collaborate differently than Rock artists?"
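A hypothetical sketch of the Timeline-Node idea described above, in which a node carries per-entity time intervals (e.g., years of band membership); the class and field names are illustrative and not taken from Musician Map:

```python
# Hypothetical sketch of a Timeline-Node: a node-link node expanded into
# a per-entity timeline of intervals (e.g., years a musician was in a band).
from dataclasses import dataclass, field

@dataclass
class Interval:
    start_year: int
    end_year: int
    label: str            # e.g., the musician's name

@dataclass
class TimelineNode:
    entity: str           # e.g., the band's name
    intervals: list[Interval] = field(default_factory=list)

    def members_in(self, year: int) -> list[str]:
        """Who belonged to this entity in a given year."""
        return [i.label for i in self.intervals
                if i.start_year <= year <= i.end_year]

band = TimelineNode("Example Band", [
    Interval(1968, 1975, "Guitarist A"),
    Interval(1970, 1980, "Drummer B"),
])
print(band.members_in(1972))   # -> ['Guitarist A', 'Drummer B']
```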
KEYWORDS: Visualization, Data modeling, Radar, Data acquisition, Databases, 3D modeling, Sensors, Doppler effect, Visual process modeling, Human-machine interfaces
Over the past several years there has been a broad effort towards realizing the Digital Earth, which involves the digitization of all earth-related data and the organization of these data into common repositories for wide access. Recently, the idea has been proposed to go beyond these first steps and produce a Visual Earth, where a main goal is a comprehensive visual query and data exploration system. Such a system could significantly widen access to Digital Earth data and improve its use. It could provide a common framework and a common picture for the disparate types of data available now and contemplated in the future. In particular, much future data will stream in continuously from a variety of ubiquitous, online sensors, such as weather sensors, traffic sensors, pollution gauges, and many others. The Visual Earth will be especially suited to the organization and display of these dynamic data. This paper lays the foundation and discusses first efforts towards building the Visual Earth. It shows that the goal of interactive visualization requires consideration of the whole process including data organization, query, preparation for rendering, and display. Indeed, visual query offers a set of guiding principles for the integrated organization, retrieval, and presentation of all types of geospatial data. These include terrain elevation and imagery data, buildings and urban models, maps and geographic information, geologic features, land cover and vegetation, dynamic atmospheric phenomena, and other types of data.
KEYWORDS: Buildings, Visualization, Optical spheres, Solid modeling, Data modeling, 3D modeling, Visual process modeling, 3D image processing, Computer aided design, Data storage
This paper describes an approach for the organization and simplification of high-resolution geometry and imagery data of 3D buildings for interactive city navigation. At the highest level of organization, building data are inserted into a global hierarchy that supports the large-scale storage of cities around the world. This structure also provides fast access to the data suitable for interactive visualization. At this level, the structure and simplification algorithms deal with city blocks. An associated latitude and longitude coordinate for each block is used to place it in the hierarchy. Each block is decomposed into building facades. A facade is a texture-mapped polygonal mesh representing one side of a city block. Therefore, a block typically contains four facades, but it may contain more. The facades are partitioned into relatively flat surfaces called faces. By simplifying the faces first instead of the facades, the dominant characteristics of the building geometry are maintained. At the lowest level of detail, each face is simplified into a single texture-mapped polygon. An algorithm is presented for the simplification transition between the high- and low-detail representations of the faces. Other techniques for the simplification of entire blocks and even cities are discussed.
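To make the block/facade/face hierarchy concrete, the following is a minimal illustrative data layout; the class names and the placeholder lowest-detail collapse are assumptions for the example, not the paper's algorithm:

```python
# Illustrative data layout for the block/facade/face hierarchy.
from dataclasses import dataclass, field

@dataclass
class Face:                 # a relatively flat, texture-mapped surface
    vertices: list[tuple[float, float, float]]
    texture_id: int

    def lowest_detail(self) -> "Face":
        """Collapse to a single texture-mapped quad (4 corner vertices).
        A real simplifier would fit the quad to the face; here we just
        keep the first four vertices as a placeholder."""
        return Face(self.vertices[:4], self.texture_id)

@dataclass
class Facade:               # one side of a city block
    faces: list[Face] = field(default_factory=list)

@dataclass
class Block:                # placed in the global hierarchy by lat/lon
    latitude: float
    longitude: float
    facades: list[Facade] = field(default_factory=list)
```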
KEYWORDS: Data acquisition, Visualization, Radar, Doppler effect, Volume rendering, Data storage, 3D optical data storage, 3D displays, Data archive systems, Data centers
In this paper, 'real-time 3D data' refers to volumetric data that are acquired and used as they are produced. Large scale, real-time data are difficult to store and analyze, either visually or by some other means, within the time frames required. Yet this is often quite important to do when decision-makers must receive and quickly act on new information. An example is weather forecasting, where forecasters must act on information received on severe storm development and movement. To meet the real-time requirements, crude heuristics are often used to gather information from the original data. This is in spite of the fact that better and better real-time data are becoming available, the full use of which could significantly improve decisions. The work reported here addresses these issues by providing comprehensive data acquisition, analysis, and storage components with time budgets for the data management of each component. These components are put into a global geospatial hierarchical structure. The volumetric data are placed into this global structure, and it is shown how levels of detail can be derived and used within this structure. A volumetric visualization procedure is developed that conforms to the hierarchical structure and uses the levels of detail. These general methods are focused on the specific case of the VGIS global hierarchical structure and rendering system. The real-time data considered are from collections of time-dependent 3D Doppler radars, although the methods described here apply more generally to time-dependent volumetric data. This paper reports on the design and construction of the above hierarchical structures and volumetric visualizations. It also reports results for the specific application of 3D Doppler radar displayed over photo-textured terrain height fields, and presents results for the display of time-dependent fields as the user visually navigates and explores the geospatial database.
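As a toy illustration of selecting a level of detail under a per-frame time budget (the cost model and budget values are invented for the example; the paper's actual data-management budgets are not given in this abstract):

```python
# Illustrative sketch: pick the finest volume level of detail whose
# estimated rendering cost fits within the remaining frame-time budget.
def choose_lod(levels, budget_ms, cost_per_voxel_ms=1e-6):
    """`levels` is ordered fine -> coarse; each entry is a voxel count."""
    for lod, voxel_count in enumerate(levels):
        if voxel_count * cost_per_voxel_ms <= budget_ms:
            return lod
    return len(levels) - 1   # fall back to the coarsest level

# e.g., a radar volume with three resolutions inside a 16 ms frame budget
print(choose_lod([512**3, 256**3, 128**3], budget_ms=16.0))  # -> 2
```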
This paper describes the visualization of 3D Doppler radar together with global, high-resolution terrain. This is the first time such data have been displayed together in a real-time environment. Associated data such as buildings and maps are displayed along with the weather data and the terrain. Requirements for effective 3D visualization for weather forecasting are identified. The application presented in this paper meets most of these requirements. In particular, the application provides end-to-end real-time capability, integrated browsing and analysis, and integration of relevant data in a combined visualization. The last capability will grow in importance as researchers develop sophisticated models of storm development that yield rules for how storms behave in the presence of hills or mountains and other features.
This paper describes a new technique for the multi-dimensional visualization of data through automatic procedural generation of glyph shapes based on mathematical functions. Our glyph-based Stereoscopic Field Analyzer (SFA) system allows the visualization of both regular and irregular grids of volumetric data. SFA uses a glyph's location, 3D size, color, and opacity to encode up to 8 attributes of scalar data per glyph. We have extended SFA's capabilities to explore shape variation as a visualization attribute. We opted for a procedural approach, which allows flexibility, data abstraction, and freedom from specification of detailed shapes. Superquadrics are a natural choice to satisfy our goal of automatic and comprehensible mapping of data to shape. For our initial implementation we have chosen superellipses. We parameterize superquadrics to allow continuous control over the 'roundness' or 'pointiness' of the shape in the two major planes which intersect to form the shape, allowing a very simple, intuitive, abstract schema of shape specification.
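For reference, the standard superquadric surface parameterization (in Barr's formulation, which the abstract does not reproduce), in which the exponents control roundness and pointiness in the two major intersecting planes:

```latex
% Superquadric surface in Barr's parameterization; \epsilon_1 and \epsilon_2
% control roundness/pointiness in the two major planes, a_1, a_2, a_3 scale
% the glyph along its axes.
\begin{aligned}
x(\eta,\omega) &= a_1 \cos^{\epsilon_1}\!\eta \, \cos^{\epsilon_2}\!\omega,\\
y(\eta,\omega) &= a_2 \cos^{\epsilon_1}\!\eta \, \sin^{\epsilon_2}\!\omega,\\
z(\eta,\omega) &= a_3 \sin^{\epsilon_1}\!\eta,
\end{aligned}
\qquad -\tfrac{\pi}{2} \le \eta \le \tfrac{\pi}{2}, \; -\pi \le \omega < \pi .
```

Exponents of 1 give an ellipsoid, values approaching 0 give box-like shapes, and values above 1 give increasingly pinched, pointy shapes, which is the continuous roundness/pointiness control the abstract refers to.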
There are many ways to produce the sense of 'presence' or telepresence in the user of virtual reality. For example, attempting to increase the realism of the visual environment is a commonly accepted strategy. In contrast, this paper explores a way for the user to feel present in an unrealistic virtual body. It investigates an unusual approach: proprioceptive illusions. Proprioceptive or body illusions are used to generate and explore the experience of virtuality and presence outside of the normal body limits. These projects are realized in art installations.