KEYWORDS: Magnetic resonance imaging, Spine, Image processing, Pathology, Image segmentation, Image quality, Image processing algorithms and systems, Medical imaging, Computer aided diagnosis and therapy
The identification of key landmark points in an MR spine image is an important step for tasks such as
vertebra counting. In this paper, we propose a template-matching-based approach for the automatic detection of two key
landmark points, namely the second cervical vertebra (C2) and the sacrum, from sagittal MR images. The approach
comprises an approximate localization of the vertebral column followed by matching with appropriate templates
to detect and localize the landmarks. A straightforward extension of the work described here is automated
classification of spine section(s). It also serves as a useful building block for further automatic processing, such as
extraction of regions of interest for subsequent image processing, and aids in the counting of vertebrae.
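The abstract does not give the paper's actual templates or matching criterion; the following minimal sketch illustrates the general idea of template matching via normalized cross-correlation on a synthetic 2D array (all names and data here are hypothetical, not from the paper):

```python
import numpy as np

def match_template(image, template):
    """Slide `template` over `image` and return the normalized
    cross-correlation score at every valid offset."""
    th, tw = template.shape
    ih, iw = image.shape
    t = template - template.mean()
    t_norm = np.sqrt((t ** 2).sum())
    scores = np.zeros((ih - th + 1, iw - tw + 1))
    for y in range(scores.shape[0]):
        for x in range(scores.shape[1]):
            patch = image[y:y + th, x:x + tw]
            p = patch - patch.mean()
            denom = np.sqrt((p ** 2).sum()) * t_norm
            scores[y, x] = (p * t).sum() / denom if denom > 0 else 0.0
    return scores

# Synthetic check: cut a patch out of a random image, then recover
# its location as the peak of the correlation map.
rng = np.random.default_rng(0)
image = rng.random((40, 40))
template = image[10:18, 22:30].copy()
scores = match_template(image, template)
y, x = np.unravel_index(scores.argmax(), scores.shape)
# (y, x) == (10, 22): the best match is where the patch was taken from
```

In a real landmark detector the image would be the (approximately localized) vertebral column region and the templates would be learned or hand-crafted appearance models of C2 and the sacrum; libraries such as OpenCV provide optimized equivalents of this loop.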
Pneumoconiosis, a lung disease caused by the inhalation of dust, is mainly diagnosed using chest radiographs. The
effects of using contralateral symmetric (CS) information present in chest radiographs in the diagnosis of
pneumoconiosis are studied using an eye-tracking experiment. The role of expertise and the influence of CS
information on the performance of readers with different expertise levels are also of interest. Subjects
ranging from novices and medical students to staff radiologists were presented with 17 double-lung and 16 single-lung images
and were asked to give profusion ratings for each lung zone. Eye movements and diagnosis times were also
recorded. A Kruskal-Wallis test (χ2(6) = 13.38, p = .038) showed that the observer error (average sum of absolute
differences) in double-lung images differed significantly across the expertise categories when considering all
participants. A Wilcoxon signed-rank test indicated that the observer error was significantly higher for single-lung
images (Z = 3.13, p < .001) than for double-lung images for all participants. A Mann-Whitney test (U = 28, p =
.038) showed that the differential error between single- and double-lung images is significantly higher in doctors [staff and
residents] than in non-doctors [others]. Thus, expertise and CS information play a significant role in the diagnosis of
pneumoconiosis. CS information helps in diagnosing pneumoconiosis by reducing the general tendency to give lower
profusion ratings. Training and experience appear to play important roles in learning to use the CS information present in
chest radiographs.
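The study's raw ratings are not reproduced in the abstract; the sketch below simply shows how the three reported tests are computed with SciPy on hypothetical observer-error data (all numbers and group labels are illustrative assumptions, not the study's data):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

# Hypothetical observer-error scores (sum of absolute profusion-rating
# differences) for three expertise groups on double-lung images.
novices = rng.normal(6.0, 1.0, 12)
residents = rng.normal(4.5, 1.0, 12)
staff = rng.normal(3.5, 1.0, 12)

# Kruskal-Wallis: do error distributions differ across expertise groups?
h, p_kw = stats.kruskal(novices, residents, staff)

# Wilcoxon signed-rank: paired single- vs double-lung error per reader.
single_err = rng.normal(5.5, 1.0, 20)
double_err = single_err - rng.normal(1.0, 0.5, 20)
w, p_wx = stats.wilcoxon(single_err, double_err)

# Mann-Whitney U: differential error (single minus double) in doctors
# vs non-doctors, one-sided (doctors expected higher).
doctors = rng.normal(1.5, 0.5, 10)
non_doctors = rng.normal(0.8, 0.5, 10)
u, p_mw = stats.mannwhitneyu(doctors, non_doctors, alternative="greater")
```

These nonparametric tests are appropriate here because profusion-rating errors are ordinal-ish and small-sample, so no normality assumption is made.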
KEYWORDS: Human-machine interfaces, Virtual reality, Visualization, 3D acquisition, Optical tracking, Mirrors, Error analysis, Medical imaging, 3D modeling, Data modeling
This work explores techniques for developing intuitive and efficient user interfaces for virtual reality systems.
It seeks to understand which paradigms from the better-understood world of 2D user interfaces remain
viable within 3D environments. To establish this, a new user interface was created that applied
well-understood principles of interface design. A user study then compared it
with an earlier interface on a series of medical visualization tasks.
As the imaging modalities used in medicine transition to increasingly three-dimensional data, the question of
how best to interact with and analyze these data becomes ever more pressing. Immersive virtual reality
systems hold promise for tackling this, but how individuals learn and interact in these environments is
not fully understood. Here we show methods by which user interaction in a virtual reality
environment can be visualized, and how these can yield greater insight into the process of
interaction and learning in such systems. Also explored is the possibility of using this method to improve
understanding and management of ergonomic issues within an interface.
Volume rendering has high utility in the visualization of segmented datasets. However, volume rendering of segmentation labels together with the original data causes undesirable intermixing/bleeding artifacts arising from interpolation at sharp boundaries. This issue is further amplified in 3D texture-based volume rendering because the interpolation stage is inaccessible. We present an approach that minimizes intermixing artifacts while maintaining the high performance of 3D texture-based volume rendering; both are critical for intra-operative visualization. Our approach uses a 2D transfer-function-based classification scheme in which label distinction is achieved through an encoding that generates unique gradient values for labels. This ensures that labeled voxels always map to distinct regions of the 2D transfer function, irrespective of interpolation. In contrast to previously reported algorithms, ours does not require multiple rendering passes and supports more than four masks. It also allows real-time modification of the colors/opacities of the segmented structures along with the original data. Additionally, these capabilities come with minimal texture memory requirements among comparable algorithms. Results are presented on clinical and phantom data.
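The paper's exact encoding is not spelled out in the abstract; the sketch below illustrates one way the stated idea could work: assign each label a gradient-axis code with unused gaps between codes, so values produced by hardware interpolation across a label boundary fall into transparent regions of the 2D transfer function instead of blending two labels' colors (the spacing scheme and table layout here are assumptions for illustration):

```python
import numpy as np

N_LABELS = 6
SPACING = 4  # unused codes between labels absorb interpolated values

def label_code(label):
    """Map a label index to its unique 'gradient' code."""
    return label * SPACING

def build_tf2d(n_intensity=256):
    """2D transfer function table: rows = gradient code, cols = intensity,
    channels = RGBA. Only exact label codes get opaque colors; the codes
    in between stay fully transparent."""
    height = (N_LABELS - 1) * SPACING + 1
    tf = np.zeros((height, n_intensity, 4))
    reds = np.linspace(0.2, 1.0, N_LABELS)
    for lbl in range(N_LABELS):
        tf[label_code(lbl), :, 0] = reds[lbl]  # distinct color per label
        tf[label_code(lbl), :, 3] = 1.0        # opaque
    return tf

tf = build_tf2d()
# A sample interpolated halfway between label 1 (code 4) and label 2
# (code 8) lands at code 6, which is transparent: no color bleeding.
mid_code = (label_code(1) + label_code(2)) // 2
```

In a GPU implementation the table would live in a 2D texture sampled by the fragment shader with the reconstructed intensity and label-code pair; the gap width trades texture height against robustness to interpolation.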