Tracking technology is often necessary for image-guided surgical interventions. Optical tracking is one of the options, but it suffers from line-of-sight and workspace limitations. Optical tracking is accomplished by attaching a rigid-body marker, bearing a pattern for pose detection, to a tool or device. A larger rigid body yields more accurate tracking, but its size limits use in a crowded surgical workspace. This work presents a prototype of a novel optical tracking method using a virtual rigid body (VRB). We define the VRB as a 3D rigid-body marker in the form of a pattern projected onto a surface from a light source. Its pose can be recovered by observing the projected pattern with a stereo-camera system. The rigid body's size is no longer physically limited, since small light sources can be manufactured. Conventional optical tracking also requires line of sight to the rigid body; the VRB overcomes this limitation by detecting a pattern projected onto the surface. We can project the pattern onto a region of interest, keeping it in view of the optical tracker and reducing the occurrence of occlusions. This manuscript describes the method and compares it with conventional optical tracking in experiments using known motions. An experiment using an optical tracker and a linear stage yielded targeting errors of 0.38 ± 0.28 mm with our method, compared to 0.23 ± 0.22 mm with conventional optical markers. Another experiment, replacing the linear stage with a robot arm, yielded rotational errors of 0.50° ± 0.31° and 2.68° ± 2.20°, and translational errors of 0.18 ± 0.10 mm and 0.03 ± 0.02 mm, respectively.
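The abstract does not spell out the pose-recovery step. A standard approach, sketched below in Python, is to triangulate the projected pattern's features with the calibrated stereo pair and then register the resulting 3D points to the known pattern model with a rigid transform (Arun-style SVD registration). The function name recover_pose and the numpy-based formulation are illustrative assumptions, not the authors' implementation.

import numpy as np

def recover_pose(model_pts, observed_pts):
    """Rigid transform (R, t) mapping model points onto observed points.

    model_pts, observed_pts: (N, 3) numpy arrays of matched 3D points,
    e.g. pattern features triangulated by the calibrated stereo cameras.
    """
    mu_m = model_pts.mean(axis=0)
    mu_o = observed_pts.mean(axis=0)
    H = (model_pts - mu_m).T @ (observed_pts - mu_o)  # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    # Guard against a reflection solution (determinant -1), which can
    # appear when the point sets are noisy or near-planar.
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ D @ U.T
    t = mu_o - R @ mu_m
    return R, t

The determinant correction matters here because a projected pattern lies on a surface and can be nearly planar, a configuration where unconstrained SVD registration is prone to returning a reflection.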
Interest in interstitial ablative therapy for the treatment of hepatic tumors has been growing. In spite of advances in these therapies, several technical challenges remain due to tissue deformation and target motion: localizing the tumor, and monitoring the ablator's tip and the thermal dose in heated tissue. In previous work, a steerable acoustic ablator called ACUSITT was developed to place the ablator tip accurately within the tumor. However, real-time monitoring techniques that provide image feedback on ablator tip positioning and the thermal dose deposited in the tissue by heating are still needed. In this paper, a new software framework for real-time monitoring of ablative therapy, both pre- and intra-operatively, is presented. The framework provides ultrasound brightness-mode (B-mode) imaging and elastography simultaneously and in real time. The position of the ablator's tip and the region of heated tissue are monitored on the B-mode image, since that image represents tissue morphology. Ultrasound elasticity images are used to find the tumor's boundary and extent before ablation, and to monitor the thermal dose in the tissue during ablation. By providing B-mode imaging and elastography at the same time, the framework offers reliable information for monitoring thermal therapy.
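The framework's internals are not described in the abstract. The sketch below shows one generic way to deliver B-mode and strain images simultaneously and in real time: a single RF acquisition loop feeding both pipelines. All four callables (grab_rf_frame, bmode_from_rf, strain_from_rf, show) are placeholders for components the paper does not name, not the actual framework's API.

import queue, threading

def run_monitoring(grab_rf_frame, bmode_from_rf, strain_from_rf, show):
    rf_frames = queue.Queue(maxsize=4)  # raw RF frames from the scanner

    def acquire():
        while True:
            rf_frames.put(grab_rf_frame())  # blocks if display falls behind

    threading.Thread(target=acquire, daemon=True).start()
    prev = rf_frames.get()
    while True:
        curr = rf_frames.get()
        show(bmode_from_rf(curr),           # tissue morphology
             strain_from_rf(prev, curr))    # heat-induced stiffness change
        prev = curr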
Natural orifice transluminal endoscopic surgery (N.O.T.E.S) is a minimally invasive surgical technique that could benefit
greatly from additional methods for intraoperative detection of tissue malignancies (using elastography) along with more
precise control of surgical tools. Ultrasound elastography has proven itself as an invaluable imaging modality. However,
elasticity images typically suffer from low contrast when imaging organs from the surface of the body. In addition, the
palpation motions needed to generate elastography images useful for identifying clinically significant changes in tissue
properties are difficult to produce because they require precise axial displacements along the imaging plane.
Improvements in elasticity imaging necessitate an approach that simultaneously removes the need for imaging from the
body surface while providing more precise palpation motions. As a first step toward performing N.O.T.E.S in vivo, we integrated a phased ultrasonic micro-array with a flexible snake-like robot. The integrated system is used to create elastography images of a spherical isoechoic lesion (approximately 5 mm in cross-section) in a tissue-mimicking phantom. Images are obtained by performing robotic palpation of the phantom at the location of the lesion.
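The palpation-based strain imaging described above can be made concrete with a textbook estimator: window the pre- and post-compression RF A-lines, find each window's axial shift by normalized cross-correlation, and differentiate the shifts to obtain local strain. The Python sketch below is a generic illustration under those assumptions, not the authors' pipeline.

import numpy as np

def axial_shifts(pre, post, win=64, search=16, step=32):
    """Per-window axial shift (in samples) between two 1D RF A-lines."""
    centers, shifts = [], []
    for start in range(search, len(pre) - win - search, step):
        ref = pre[start:start + win]
        best_score, best_lag = -np.inf, 0
        for lag in range(-search, search + 1):
            seg = post[start + lag:start + lag + win]
            score = ref @ seg / (np.linalg.norm(ref) * np.linalg.norm(seg) + 1e-12)
            if score > best_score:
                best_score, best_lag = score, lag
        centers.append(start + win // 2)
        shifts.append(best_lag)
    return np.array(centers), np.array(shifts, dtype=float)

def strain_profile(pre, post):
    centers, shifts = axial_shifts(pre, post)
    return np.gradient(shifts, centers)  # local axial strain per depth

For matched A-lines, strain_profile(pre_line, post_line) returns a per-depth strain estimate; softer tissue compresses more under the same palpation and therefore shows higher strain, which is what makes an isoechoic but mechanically distinct lesion visible.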
Surgical robots provide many advantages for surgery, including minimal invasiveness, precise motion, high dexterity,
and crisp stereovision. One limitation of current robotic procedures, compared to open surgery, is the loss of haptic
information for such purposes as palpation, which can be very important in minimally invasive tumor resection.
Numerous studies have reported the use of real-time ultrasound elastography, in conjunction with conventional B-mode
ultrasound, to differentiate malignant from benign lesions. Several groups (including our own) have reported integration
of ultrasound with the da Vinci robot, and ultrasound elastography is a very promising image guidance method for robot-assisted procedures that will further enable the role of robots in interventions where precise knowledge of sub-surface
anatomical features is crucial. We present a novel robot-assisted real-time ultrasound elastography system for minimally
invasive robot-assisted interventions. Our system combines a da Vinci surgical robot with a non-clinical experimental
software interface, a robotically articulated laparoscopic ultrasound probe, and our GPU-based elastography system.
Elasticity and B-mode ultrasound images are displayed as picture-in-picture overlays in the da Vinci console. Our system
minimizes dependence on human performance factors by incorporating computer-assisted motion control that
automatically generates the tissue palpation required for elastography imaging, while leaving high-level control in the
hands of the user. In addition to ensuring consistent strain imaging, the elastography assistance mode avoids the
cognitive burden of tedious manual palpation. Preliminary tests of the system with an elasticity phantom demonstrate the
ability to differentiate simulated lesions of varied stiffness and to clearly delineate lesion boundaries.
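The abstract does not give the palpation trajectory. A minimal sketch of what "automatically generates the tissue palpation" could look like is a small sinusoidal axial displacement commanded at a fixed servo rate, as below; the move_probe_axially callback and the amplitude and frequency defaults are assumptions for illustration, not the system's actual control interface.

import math, time

def palpate(move_probe_axially, amplitude_mm=1.0, freq_hz=1.0,
            duration_s=5.0, rate_hz=50.0):
    """Command a small sinusoidal axial probe displacement."""
    t0 = time.monotonic()
    while (t := time.monotonic() - t0) < duration_s:
        # Offset relative to the neutral (pre-palpation) probe position.
        move_probe_axially(amplitude_mm * math.sin(2 * math.pi * freq_hz * t))
        time.sleep(1.0 / rate_hz)

A periodic motion like this is one plausible way to deliver the consistent strain imaging the abstract claims, since every compression cycle has the same amplitude, unlike manual palpation.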
KEYWORDS: Ultrasonography, 3D image processing, Tissues, Thermography, Acoustics, Data acquisition, 3D acquisition, Stereoscopy, Visualization, Liver cancer
Three dimensional heat-induced echo-strain imaging is a potentially useful tool for monitoring the formation of thermal
lesions during ablative therapy. Heat-induced echo-strain, known as thermal strain, is due to the changes in the speed of
propagating ultrasound signals and to tissue expansion during heat deposition. This paper presents a complete system for
targeting and intraoperative monitoring of thermal ablation by high intensity focused acoustic applicators. A special
software interface has been developed to enable motor motion control of 3D mechanical probes and rapid acquisition of
3D-RF data (ultrasound raw data after the beam-forming unit). Ex-vivo phantom and tissue studies were performed in a
controlled laboratory environment. While B-mode ultrasound does not clearly identify the development of either necrotic
lesions or the deposited thermal dose, the proposed 3D echo-strain imaging can visualize these changes, demonstrating
agreement with temperature sensor readings and gross pathology. Current results also demonstrate the feasibility of real-time computation through a parallelized implementation of the algorithm used. Typically, 125 frames per volume can be processed in less than a second. We also demonstrate motion compensation that can account for shifts within frames due to either tissue movement or positional error of the 3D ultrasound imaging probe.
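The abstract reports that roughly 125 frames per volume are processed in under a second via parallelization, without detailing the scheme. One natural reading, sketched below, is frame-level parallelism: each pre/post frame pair can be processed independently, so the strain computation maps directly over a worker pool. echo_strain_frame is a placeholder for any picklable top-level per-frame estimator (such as the cross-correlation approach sketched earlier).

from concurrent.futures import ProcessPoolExecutor

def echo_strain_volume(pre_volume, post_volume, echo_strain_frame, workers=8):
    """Map a per-frame strain estimator over matched frames of two volumes.

    pre_volume, post_volume: sequences of RF frames (~125 per volume).
    """
    with ProcessPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(echo_strain_frame, pre_volume, post_volume))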
Modern speech understanding systems merge interdisciplinary technologies from signal processing, pattern recognition, natural language, and linguistics into a unified statistical framework. These systems, which have applications in a wide range of signal processing problems, represent a revolution in Digital Signal Processing (DSP). Once a field dominated by vector-oriented processors and linear-algebra-based mathematics, the current generation of DSP-based systems relies on sophisticated statistical models implemented using a complex software paradigm. Such systems are now capable of understanding continuous speech input for vocabularies of several thousand words in operational environments. The current generation of deployed systems, based on small vocabularies of isolated words, will soon be replaced by a new technology offering natural language access to vast information resources such as the Internet and providing completely automated voice interfaces for mundane tasks such as travel planning and directory assistance.