Significance: ALA-PpIX and second-window indocyanine green (ICG) have been studied widely for guiding the resection of high-grade gliomas. These agents have different mechanisms of action and uptake characteristics, which can affect their performance as surgical guidance agents. Elucidating these differences in animal models that approach the size and anatomy of the human brain would help guide the use of these agents. Herein, we report on the use of a new pig glioma model and fluorescence cryotomography to evaluate the 3D distributions of both agents throughout the whole brain.

Aim: We aim to assess and compare the 3D spatial distributions of ALA-PpIX and second-window ICG in a glioma-bearing pig brain using fluorescence cryotomography.

Approach: A glioma was induced in the brain of a transgenic Oncopig via adeno-associated virus delivery of Cre-recombinase plasmids. After tumor induction, the pro-drug 5-ALA and ICG were administered to the animal 3 and 24 h prior to brain harvest, respectively. The harvested brain was imaged using fluorescence cryotomography. The fluorescence distributions of both agents were evaluated in 3D in the whole brain using various spatial distribution and contrast performance metrics.

Results: Significant differences in the spatial distributions of both agents were observed. Indocyanine green accumulated within the tumor core, whereas ALA-PpIX appeared more toward the tumor periphery. Both ALA-PpIX and second-window ICG provided elevated tumor-to-background contrast (13 and 23, respectively).

Conclusions: This study is the first to demonstrate the use of a new glioma model and large-specimen fluorescence cryotomography to evaluate and compare imaging agent distribution at high resolution in 3D.
Since the early 1980s, surgical robotics and robot-assisted surgeries have gained a growing foothold in modern treatment, and the added guidance and precision that surgical robots provide has been indispensable. The next step for surgical robotics is MRI compatibility, providing near-real-time intraoperative imaging for space-constrained operations. Artifacts generated by robotic components over the region of surgical interest (ROSI) must be mitigated to avoid complications and to provide accurate guidance to the surgeon. This study defines a large MRI phantom design for specimen submersion, both to verify and quantify artifact generation from robotic components and to provide a better visualization platform for preliminary testing and evaluation of robotic performance. The main design considerations are fluid selection, phantom shape, phantom containment material, and 3D-printed artifact-measurement evaluation grids. Scans were conducted on a 3T Magnetom Prisma MRI using three scan types: T1-weighted, T2-weighted, and spoiled gradient echo (SGE). The investigated phantom fluids were a nickel chloride and sodium chloride solution (the ACR phantom fluid, 10 mM NiCl2, 75 mM NaCl), two salt-doped distilled waters (13 g, 26 g), and food-grade mineral oil. The oil and ACR phantom fluid outperformed the doped water with similar SNR/CNR returns (SGE: SNR/CNR 250/240/57, PIU 83/60/85). The containment-material comparison was inconclusive due to motion artifact production and will be rerun with an alternative fluid. The 3D-printed artifact-measurement grid was printed in PETG as a cost-effective substitute, as PLA warped the grid after extended water exposure (0.2 mm). After N4 implementation, image uniformity was determined using the ACR method, while SNR/CNR values were calculated in Fiji. The results indicate the preferred configuration for each design consideration: food-grade mineral oil, a cylindrical shape, containment material pending rerun due to motion artifact interference, and a PETG 3D-printed grid.
Image-guided spinal procedure accuracy is dynamic during surgery: intervertebral motion during surgery drastically affects the temporal accuracy of a procedure. A hand-held stereovision (HHS) system has been employed in previous studies for intraoperative data collection, and these data can be used to deform a CT scan to reflect the current spinal posture. These methods are criticized for the large exposure required for data collection: currently, to collect HHS data, the spine is exposed out to the lateral boundary of the posterior surface of the transverse process. In modern pedicle screw placements and laminectomies, the exposure is smaller. For this method to remain contemporary, a more robust data collection scheme using a smaller exposure should be employed. In this study, simulated narrow exposures were created by manually segmenting HHS data from a cadaver pig. These 3 datasets were created to drive an existing level-wise registration model to generate 3 updated CTs (uCT). The 3 HHS datasets were manually segmented in the following ways: out to the transverse process, out to the facet joints, and out to the lamina. A fiducial registration error was computed from manually identified mini-screw fiducials in each uCT. The mean L2 norms for the transverse process, facet, and lamina segmentation data were 2.04 ± 1.10 mm, 3.18 ± 2.18 mm, and 4.59 ± 2.28 mm, respectively; median values were 1.82 mm, 2.25 mm, and 4.35 mm, respectively. These data show the need for a more robust deformation model and HHS system if we wish to achieve sub-2 mm registration accuracy with narrow exposures.
Tracked intraoperative ultrasound (iUS) is growing in use, and accurate spatial calibration is essential to enable iUS navigation. Utilizing sterilizable probes introduces new challenges that can be solved by time-of-surgery calibration that is robust, efficient, and user-independent, and that can be performed within the sterile field. This study demonstrates a smart line-detection scheme to perform calibration based on video acquisition data and investigates the effect of pose variation on the accuracy of a plane-based calibration. A user-independent US video of a calibration phantom is collected, and a smart line-detection and tracking filter is applied to the video-tracking data pairs to remove poor calibration candidates. A localized point-target phantom is imaged to provide a TRE assessment of the calibration. The tracking data are decoupled into 6 degrees of freedom, and these ranges are iteratively reduced to study the effect on spatial calibration accuracy, indicating the amount of pose variation required during video acquisition to maintain high TRE accuracy. This work facilitates a larger development toward user-independent, video-based iUS calibration at the time of surgery.
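The decoupling of tracking data into 6 degrees of freedom can be sketched as follows; this is a minimal NumPy illustration assuming 4×4 homogeneous tracking matrices and a ZYX Euler convention (both our assumptions, not details stated in the abstract):

```python
import numpy as np

def pose_to_6dof(T):
    """Decompose a 4x4 homogeneous pose into (tx, ty, tz, rx, ry, rz).

    Rotations are ZYX (yaw-pitch-roll) Euler angles in degrees; this
    convention is an assumption for illustration only.
    """
    R, t = T[:3, :3], T[:3, 3]
    ry = np.degrees(np.arcsin(-R[2, 0]))
    rx = np.degrees(np.arctan2(R[2, 1], R[2, 2]))
    rz = np.degrees(np.arctan2(R[1, 0], R[0, 0]))
    return np.array([t[0], t[1], t[2], rx, ry, rz])

def dof_ranges(poses):
    """Range (max - min) of each degree of freedom across tracked poses."""
    dofs = np.array([pose_to_6dof(T) for T in poses])
    return dofs.max(axis=0) - dofs.min(axis=0)
```

Computing these per-frame values across a video acquisition yields the range of each degree of freedom, which can then be iteratively reduced to study its effect on calibration accuracy.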
Introduction: In image-guided open cranial surgeries, brain deformation may compromise the accuracy of image guidance immediately following the opening of the dura. A biomechanical model has been developed to update pre-operative MR images to match intraoperative stereovision (iSV) and maintain the accuracy of image guidance. Current methods necessitate manual segmentation of the cortical surface from iSV, a process that demands expertise and prolongs computational time.

Methods: In this study, we adopted the Fast Segment Anything Model (FastSAM), a newly developed deep learning model that can automatically segment the cortical surface from iSV after dural opening without customized training. We evaluated its performance against manual segmentation as well as a U-Net model. In one patient case, FastSAM was applied to segment the cortical surface with an automatic box prompt, and the segmentation was used for image updating. We compared the three cortical surface segmentation methods in terms of segmentation accuracy (Dice similarity coefficient; DSC) and image updating accuracy (target registration error; TRE).

Results: All three segmentation methods demonstrated high DSC (>0.95). FastSAM and manual segmentation produced similar performance in terms of image updating efficiency and TRE (~2.2 mm).

Conclusion: The performance of FastSAM was consistent with manual segmentation in terms of segmentation accuracy and image updating accuracy. The results suggest FastSAM can replace manual segmentation in the image updating process to improve efficiency and reduce user dependency.
Pre-operative MRI with gadolinium-based contrast agents (Gd-MRI) is a central feature in surgical planning and intra-surgical navigation of glioma, yet brain movement during the surgical procedure can degrade the accuracy of these pre-operative images. Fluorescence-guided neurosurgery is a technique that can complement MRI guidance by providing direct visualization of the tumor during surgery, and several agents, either used routinely or under clinical development, have shown effective tumor discrimination and impact on surgical outcomes. We have built a multi-spectral kinetic imaging system to capture the behavior of fluorophores over time in animal models. Here, we present this fluorescence kinetic imaging system and report its performance with tissue-simulating phantoms containing multiple fluorophores. Also reported is our first experience with multiple fluorescent contrast agents in a novel oncopig model.
Fluorescence cryo-imaging is a high-resolution optical imaging technique that produces 3-D whole-body biodistributions of fluorescent molecules within an animal specimen. To accomplish this, animal specimens are administered a fluorescent molecule or reporter and are frozen to be autonomously sectioned and imaged at a temperature of -20°C or below. To apply this technique effectively, administered fluorescent molecules should be relatively invariant to the low-temperature conditions of cryo-imaging; ideally, the fluorescence intensity should be stable and consistent in both physiological and cryo-imaging conditions. Herein, we assess the mean fluorescence intensity of 11 fluorescent contrast agents as they are frozen in a tissue-simulating phantom experiment and show an example of a tested fluorescent contrast agent in a cryo-imaged whole pig brain. Most fluorescent contrast agents were stable within ~25%, except for FITC and PEGylated FITC derivatives, which showed a dramatic decrease in fluorescence intensity when frozen.
In open cranial procedures, the accuracy of image guidance using preoperative MR (pMR) images can be degraded by intraoperative brain deformation. Intraoperative stereovision (iSV) has been used to acquire 3D surface profiles of the exposed cortex at different surgical stages, and surface displacements can be extracted as sparse data to drive a biomechanical model that provides updated MR (uMR) images matching the surgical scene. In previous studies, we employed an Optical Flow (OF) based registration technique to register iSV surfaces acquired at different surgical stages and estimate cortical surface shift throughout surgery. The technique was efficient and accurate but required manually selected Regions of Interest (ROI) in each image after resection began. In this study, we present a registration technique based on the Scale Invariant Feature Transform (SIFT) algorithm and illustrate the methods using an example patient case. Stereovision images of the cortical surface were acquired and reconstructed at different time points during surgery. Both SIFT- and OF-based registration techniques were used to estimate cortical shift, and extracted displacements were compared against ground truth data. Results show that the overall errors of the SIFT- and OF-based techniques were 0.65±0.53 mm and 2.18±1.35 mm in magnitude, respectively, on the intact cortical surface. The OF-based technique generated inaccurate sparse data near the resection cavity, whereas the SIFT-based technique generated only accurate sparse data. The computational cost was ≲0.5 s and ≳20 s for the SIFT- and OF-based techniques, respectively. Thus, the SIFT-based registration technique shows promise for OR applications.
Registration of preoperative or intraoperative imaging is necessary to facilitate surgical navigation in spine surgery. After image acquisition, intervertebral motion and spine pose changes can occur during surgery from instrumentation, decompression, physician manipulation, or correction. This causes deviations from the reference imaging, reducing navigation accuracy. This study evaluates, through simulation, the ability of registration between stereovision surfaces to account for this intraoperative spine motion. Co-registered CT and stereovision surface data were obtained of a swine cadaver's exposed lumbar spine in the prone position. Data were segmented and labeled by vertebral level. A simulation of biomechanically bounded motion was applied to each vertebral level to move the prone spine to a new position. A reduced surface data set was then registered level-wise back to the prone spine's original position. The average surface-to-surface distance between simulated and prone positions was recorded, and localized targets on these surfaces were used to calculate target registration error. Target registration error increases with distance between surfaces: movement exceeding 2.43 cm between stereovision acquisitions exceeds a registration accuracy of 2 mm. Lateral bending of the spine contributes most to this effect, compared to axial rotation and flexion-extension. In conclusion, this simulation demonstrates the viability of using stereovision-to-stereovision registration to account for intraoperative motion of the spine, provided that spine movement between corresponding points does not surpass 2.43 cm between stereovision acquisitions.
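The two accuracy metrics above can be sketched with NumPy; the function names and the brute-force nearest-neighbor search are ours, for illustration only:

```python
import numpy as np

def mean_surface_distance(moving, fixed):
    """Mean nearest-neighbor distance from each moving surface point to
    the fixed surface (brute force; fine for small point clouds)."""
    d = np.linalg.norm(moving[:, None, :] - fixed[None, :, :], axis=2)
    return d.min(axis=1).mean()

def target_registration_error(targets_a, targets_b):
    """Root-mean-square distance between corresponding target points,
    e.g. localized targets before and after registration."""
    return np.sqrt(np.mean(np.sum((targets_a - targets_b) ** 2, axis=1)))
```

The surface distance characterizes how far apart the simulated and prone surfaces are; TRE measures registration accuracy at the localized targets themselves.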
Miniature screws, often used as fiducials, are currently localized on DICOM images manually. This time-consuming process can add tens of minutes to the computational pipeline for registration or error analysis. Through a series of morphological operations, this localization task can be completed in much less than a second on a standard laptop. Two image sets were analyzed. The first dataset consisted of six intraoperative CT (iCT) scans of the lumbar spine of both cadaver and live porcine samples; it includes not only implanted mini-screws but other metal instrumentation. The second dataset consists of six semi-rigidly deformed CT (uCT) scans of the lumbar spine of the same animals; it was intensity-downsampled from 16 bits to 8 bits as a pre-processing step, and other artifacts are apparent due to additional deformation steps. Both datasets show at least 18 mini-screws rigidly implanted in the lumbar vertebrae, with at least three mini-screws per vertebra. The images were processed as follows: a projection image was formed via maximum row values, thresholded, and opened; non-circular regions were removed; and circular regions were eroded, leaving the voxel location of the center of each mini-screw. These steps complete with a mean computational time of 0.0365 seconds, a time unobtainable by even the most skilled manual localization. The true positive rates of the iCT and uCT datasets were 96.
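The morphological pipeline above can be sketched with SciPy on a synthetic volume; the threshold value, structuring element, and the omission of the non-circular-region rejection step are simplifying assumptions of ours:

```python
import numpy as np
from scipy import ndimage

def localize_screws(volume, threshold=0.5):
    """Locate bright screw-like blobs in a CT-like volume.

    Steps mirror the abstract: form a projection image via maximum
    values, threshold, and open to suppress small bright noise, then
    take connected-component centroids as screw locations. The
    circularity filtering steps are omitted for brevity.
    """
    proj = volume.max(axis=0)                       # max-intensity projection
    mask = proj > threshold                         # isolate bright metal
    mask = ndimage.binary_opening(mask, structure=np.ones((3, 3)))
    labels, n = ndimage.label(mask)                 # connected components
    return ndimage.center_of_mass(mask, labels, np.arange(1, n + 1))
```

On real scans the threshold and structuring element would need tuning, and the circularity test is what separates screws from other metal instrumentation.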
Background: Successful navigation in spine surgeries relies on accurate representation of the spine's intraoperative pose. However, its position can move between preoperative imaging and instrumentation. A measure of this motion is the preoperative-to-intraoperative change in lordosis.

Objective: To investigate the effect this change has on navigation accuracy and the degree to which an intraoperative stereovision (iSV) system for intraoperative patient registration can account for this motion.

Methods: For six live pig specimens, a preoperative CT (pCT) of the lumbar spine was obtained in the supine position and an intraoperative CT in the prone position. Five to six iSV images of the exposed levels were acquired intraoperatively. A fiducial-based registration was performed on a navigation system with the pCT. Separately, the pCT was deformed to match iSV surface data to generate an updated CT (uCT). Navigational accuracy of both the commercial navigation and iSV systems was determined using tracked fiducials and landmarks. The change in lordotic Cobb angle between supine and prone positions was calculated to represent the preoperative-to-intraoperative change in spine pose.

Results: The preoperative-to-intraoperative change ranged from 12° to 41°. Registration accuracy varied by 4.8 mm and 1.5 mm for the commercial system (6.2±1.9 mm) and iSV (3.0±0.6 mm), respectively. Rank correlation shows a strong association between increased registration error and positional change for the commercial system (correlation of 0.94, P=0.02) but minimal association for iSV (0.09, P=0.92).

Conclusion: Change in spinal pose affects the navigational accuracy of commercial systems. iSV shows promise in accounting for these changes, given that its accuracy is uncorrelated with pose change.
KEYWORDS: Video, Data acquisition, Calibration, Cameras, Camera shutters, Image-guided intervention, Image acquisition, Data processing, Video processing, Image processing
Hand-held stereovision (HHS) has been implemented as an accurate, affordable, and non-ionizing method for image updating and marker-less registration. Previous studies using HHS have collected images via individual snapshots. This study compares the accuracy of data acquisition from snapshots with that of video-stream acquisition. An HHS system was calibrated via a tracked checkerboard to an accuracy of 0.88±0.24 mm. A cadaver pig was used to measure the accuracy of fiducial locations, with fiducials placed on each spinous and transverse process of L2 through L6. The system was then used to collect images of the surgical field five times: one snapshot acquisition and four video streams. Afterward, the fiducial locations were localized via a tracked stylus probe, and the error was computed between the stylus fiducial locations and the fiducial locations found on the stereovision depth map. The corresponding mean errors for the snapshot and four video acquisitions were 1.03 ± 0.24 mm, 1.13 ± 0.34 mm, 1.18 ± 0.59 mm, 1.19 ± 0.47 mm, and 3.23 ± 2.27 mm, respectively. Each video stream was collected at what was perceived to be a constant speed, with each sequential stream collected at a higher speed; these speeds were calculated to be 7.22±1.95 mm/s, 12.73±7.75 mm/s, 19.19±12.74 mm/s, and 30.14±13.33 mm/s, respectively. These data show that image acquisition via video streams at relatively low speeds has accuracy comparable to that of snapshot image acquisition.
As a rapidly accelerating technology, fluorescence guided surgery (FGS) has the potential to place molecular information directly into the surgeon's field of view by imaging administered fluorescent contrast agents in real time, circumventing the pre-operative MR registration challenges posed by brain deformation. The most successful implementation of FGS is 5-ALA-PpIX guided glioma resection, which has been linked to improved patient outcomes. While FGS may offer direct in-field guidance, fluorescent contrast agent distributions are not as familiar to the surgical community as Gd-MRI uptake, and may provide information discordant with previous Gd-MRI guidance. Thus, a method to assess and validate consistency between fluorescence-labeled tumor regions and Gd-enhanced tumor regions could aid in understanding the correlation between optical agent fluorescence and Gd-enhancement. Herein, we present an approach for comparing whole-brain fluorescence biodistributions with Gd-enhancement patterns on a voxel-by-voxel basis using co-registered fluorescence cryo-volumes and Gd-MRI volumes. In this initial study, a porcine-human glioma xenograft model was administered 5-ALA, imaged with MRI, and euthanized 22 hours following 5-ALA administration. Following euthanasia, the extracted brain was imaged with the cryo-macrotome system. After image processing and non-rigid, point-based registration, the fluorescence cryo-volume and Gd-MRI volume were compared using similarity metrics including image similarity, tumor shape similarity, and classification similarity. This study serves as a proof-of-principle validating our screening approach for quantitatively comparing 3D biodistributions between optical agents and Gd-based agents.
Preoperative CT (pCT) images acquired in a supine position can be inaccurate for intraoperative image guidance in open spine surgery due to alignment change between supine pCT and intraoperative prone positioning. We have developed a level-wise registration framework to compensate for the alignment change using intraoperative stereovision (iSV) data of the exposed spine. A hand-held stereovision system was developed, but the field of view was limited to 1-2 segments per image. Although multiple iSV surfaces can be combined to capture the full field based on tracking information, acquisition is limited to one snapshot at a time with minimized hand motion due to asynchronization between image and tracking data. In this study, we developed methods to concatenate iSV surfaces without relying on tracking information, and illustrate the methods using data acquired from a pig spine. To register two iSV surfaces, the 2D texture maps were registered using an optical flow algorithm, and the 3D point cloud of the second iSV surface was registered with the first using 3D spatial information of each pixel. The two registered iSV surfaces were then merged to form one composite iSV surface. Multiple iSV surfaces were stitched sequentially. Results from 4 image pairs show that 2D and 3D registration accuracy was 2.7±0.6 pixels and 1.0±0.1 mm, respectively, across 8 landmarks. The overall accuracy of the final composite surface was 0.9±0.4 mm. These preliminary data show that iSV can potentially be acquired at video rate to improve efficiency to recover the full surgical field.
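The 2D texture-map registration in the pipeline above uses dense optical flow; as a simplified, translation-only stand-in, phase correlation illustrates how a shift between two texture maps can be estimated (this substitution and all names here are ours, not the study's actual implementation):

```python
import numpy as np

def phase_correlation_shift(img_a, img_b):
    """Estimate the integer (row, col) shift s such that rolling img_b
    by s reproduces img_a, via FFT phase correlation. Real texture maps
    deform non-rigidly, so this is illustrative only."""
    F = np.fft.fft2(img_a) * np.conj(np.fft.fft2(img_b))
    corr = np.fft.ifft2(F / (np.abs(F) + 1e-12)).real
    peak = np.array(np.unravel_index(np.argmax(corr), corr.shape))
    # wrap peaks in the upper half of each axis into negative shifts
    dims = np.array(corr.shape)
    wrap = peak > dims // 2
    peak[wrap] -= dims[wrap]
    return peak
```

Dense optical flow generalizes this by estimating a per-pixel displacement field, which is what allows the 3D points of the second surface to be mapped into the first.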
Intraoperative stereovision (iSV) systems are used for image data acquisition of the surgical field to facilitate image updating in image-guided surgery. We have developed an optically tracked hand-held stereovision (HHS) system, but the calibration process was lengthy and its accuracy could be greatly affected by localization errors. This study shows the efficacy of using a rigid calibration checkerboard contained within a custom-built tracking frame for semi-automated iSV calibration. First, stereo camera parameters were calculated with a mean reprojection error of 0.28 pixels using 46 image pairs. Then, the checkerboard was calibrated to optically track all corner points. Specifically, locations of the 4 outermost corner points on the checkerboard pattern were collected 93 times using a tracked stylus, and internal corner points were interpolated. Computed spacing between corner points was 15.01±0.04 mm and 15.05±0.08 mm in the x and y directions, respectively, compared to the advertised spacing of 15 mm in both directions. Next, we captured 169 image pairs of the tracked calibration checkerboard to reconstruct the 3D coordinates of corner points in the camera space. We then calculated a rigid transformation between the reconstructed points and their tracked locations with a mean error of 0.31±0.02 mm. Finally, another 50 image pairs of the tracked checkerboard were used to verify the calibration accuracy, showing a mean error of 0.88±0.24 mm. These results show promise for automating the calibration process for future units and settings.
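The rigid transformation between reconstructed corner points and their tracked locations is a classic point-set alignment problem; a minimal Kabsch/Procrustes sketch in NumPy (our illustration, not the system's actual implementation):

```python
import numpy as np

def rigid_fit(src, dst):
    """Least-squares rigid transform (R, t) with dst ≈ src @ R.T + t,
    computed via the Kabsch/Procrustes SVD solution."""
    src_c, dst_c = src.mean(axis=0), dst.mean(axis=0)
    H = (src - src_c).T @ (dst - dst_c)             # cross-covariance
    U, _, Vt = np.linalg.svd(H)
    # reflection guard: force det(R) = +1
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ D @ U.T
    return R, dst_c - R @ src_c

def mean_alignment_error(src, dst):
    """Mean point-wise residual after the best rigid alignment."""
    R, t = rigid_fit(src, dst)
    return np.linalg.norm(src @ R.T + t - dst, axis=1).mean()
```

The mean residual after alignment is the same kind of metric as the 0.31±0.02 mm camera-to-tracker registration error reported above.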