KEYWORDS: Deep learning, Tumors, Surgery, Neural networks, Hyperspectral imaging, RGB color model, Tissues, Cameras, Brain, Real time optical diagnostics
Surgery for gliomas (intrinsic brain tumors), especially low-grade gliomas, is challenging due to the infiltrative nature of the lesion. Currently, no real-time, intra-operative, label-free, wide-field tool is available to assist and guide the surgeon in finding the relevant demarcations for these tumors. While marker-based methods exist for high-grade gliomas, no convenient solution is available for the low-grade case; marker-free optical techniques therefore represent an attractive option. Although RGB imaging is a standard tool in surgical microscopes, it does not contain sufficient information for tissue differentiation. We leverage the richer information of hyperspectral imaging (HSI), acquired with a snapscan camera in the 468–787 nm range coupled to a surgical microscope, to build a deep-learning-based diagnostic tool for cancer resection with potential for intra-operative guidance. The main limitation of the HSI snapscan camera, however, is its image acquisition time, which limits widespread deployment in the operating theater. Here, we investigate the effect of HSI channel reduction and pre-selection to scope the design space for the development of cheaper and faster sensors. Neural networks are used to identify the spectral channels most important for tumor tissue differentiation, optimizing the trade-off between the number of channels and precision to enable real-time intra-surgical application. We evaluate the performance of our method on a clinical dataset acquired during surgery on five patients. By demonstrating that low-grade glioma can be detected efficiently, these results can lead to better cancer resection demarcations, potentially improving treatment effectiveness and patient outcome.
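The abstract describes ranking spectral channels by their importance for tumor tissue differentiation using neural networks. As a much simpler illustration of the underlying idea, the sketch below ranks channels of synthetic two-class spectra with a per-channel Fisher score (between-class over within-class variance). The channel count (104), the informative channel indices, and all data are hypothetical stand-ins, not the paper's method or data.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for annotated HSI pixels: 104 channels, where only
# a few hypothetical channels carry class information.
n_channels, n_per_class = 104, 500
informative = [20, 55, 80]          # hypothetical discriminative channels
healthy = rng.normal(0.5, 0.1, (n_per_class, n_channels))
tumor = rng.normal(0.5, 0.1, (n_per_class, n_channels))
tumor[:, informative] += 0.3        # class-dependent reflectance shift

def fisher_scores(a, b):
    """Per-channel Fisher score: between-class over within-class variance."""
    return (a.mean(0) - b.mean(0)) ** 2 / (a.var(0) + b.var(0))

scores = fisher_scores(healthy, tumor)
top_k = np.argsort(scores)[::-1][:3]   # most discriminative channels
print(sorted(top_k.tolist()))          # recovers the informative channels
```

A filter score like this is a rough proxy; a learned (neural-network) ranking, as used in the paper, can additionally capture interactions between channels.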
Label-free tissue identification is the new frontier of image-guided surgery. One of the most promising modalities is hyperspectral imaging (HSI). Until now, however, the use of HSI has been limited by the challenges of integration into the existing clinical workflow. Research is ongoing to reduce the implementation effort and simplify the clinical approval procedure, especially for acquiring feasibility datasets to evaluate HSI methods for specific clinical applications. Here, we successfully demonstrate how an HSI system can interface with a clinically approved surgical microscope, making use of the microscope's existing optics. We outline the HSI system adaptations and the data pre-processing methods, and describe the spectral and functional system-level validation and the integration into the clinical workflow. Data were acquired using an imec snapscan VNIR 150 camera, enabling hyperspectral measurement in 150 channels in the 470–900 nm range, assembled on a ZEISS OPMI Pentero 900 surgical microscope. The spectral range of the camera was adapted to match the intrinsic illumination of the microscope, resulting in 104 channels in the 470–787 nm range. The system's spectral performance was validated using reflectance wavelength calibration standards. We integrated the HSI system into the clinical workflow of brain surgery, specifically for resections of low-grade gliomas (LGG). During the study, but out of scope of this paper, the acquired dataset was used to train an AI algorithm to successfully detect LGG in unseen data. Furthermore, dominant spectral channels were identified, enabling the future development of a real-time surgical guidance system.
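The abstract mentions data pre-processing and validation against reflectance calibration standards. A standard pre-processing step in such HSI pipelines is flat-field correction, converting raw intensities to reflectance using a white reference and a dark frame. The sketch below shows this on noiseless dummy arrays; the frame shape, signal levels, and 42% scene reflectance are illustrative assumptions, not measured values from the paper.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical raw frames (H x W x channels): scene, white reference
# tile, and dark frame (shutter closed). Noiseless for clarity.
shape = (4, 4, 104)
dark = rng.normal(100.0, 1.0, shape)   # sensor offset / dark current
white = dark + 3000.0                  # ~100% reflectance target
raw = dark + 0.42 * 3000.0             # scene at ~42% reflectance

# Flat-field correction: normalize out illumination and sensor response.
reflectance = (raw - dark) / (white - dark)
print(round(float(reflectance.mean()), 2))  # -> 0.42
```

In practice, the dark and white frames are acquired separately (with their own noise), and the resulting reflectance cube is what spectral validation against calibration standards would be performed on.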
This paper presents the latest advances on imec snapshot multispectral imagers based on 3×3, 4×4, and 5×5 mosaic filter patterning on industry-ready VIS/NIR and SWIR detectors. The mosaic patterns are implemented by means of high-transmission Fabry-Pérot interferometers processed using thin-film technology. Our snapshot multispectral imagers offer a spatial resolution of 640×480 pixels (SWIR) and 2048×1088 pixels (VIS/NIR), downsampled according to the mosaic pattern to acquire data in nine (3×3), 16 (4×4), or 25 (5×5) spectral bands, respectively. To achieve imaging at the native spatial resolution of the sensor, super-resolution methods are available post-acquisition. Moreover, our compact USB-3 cameras of 260 g (SWIR) and 27 g (VNIR), without lens, reach an acquisition speed of up to 120 multispectral cubes per second and are therefore suitable for dynamic applications such as high-speed conveyor-belt or UAV inspection. The potential of snapshot cameras in a wide range of applications is showcased in this paper. We first show how applications in industrial quality inspection (chocolate gloss estimation) and precision agriculture (plant disease detection) achieve good discrimination potential in the VNIR range. UAV inspection in particular benefits from our compact camera size, low weight, and video capabilities. We then demonstrate the potential for plastic and textile recycling in the SWIR range, and the benefit brought by combining the VNIR and SWIR ranges for people tracking under low-visibility conditions. Finally, an application involving the joint use of a microscope and a multispectral camera system is presented for particle contamination exposure assessment. The suitable spectral range is in this case application dependent.
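A snapshot mosaic sensor tiles its spectral filters across the pixel array, so each mosaic super-pixel samples every band once; reconstructing the multispectral cube is then a rearrangement of the raw frame. The sketch below shows this for a 4×4 mosaic on the 2048×1088 VNIR sensor size mentioned above; the band ordering (row-major within the super-pixel) is an assumption, as real devices define their own filter layout.

```python
import numpy as np

# A 4x4 snapshot mosaic tiles 16 spectral filters across the sensor;
# each 4x4 super-pixel holds one sample of every band.
pattern = 4
h, w = 1088, 2048                 # VNIR sensor resolution from the text
raw = np.arange(h * w, dtype=np.float32).reshape(h, w)

# Rearrange into an (h/4, w/4, 16) multispectral cube: the band index
# is the filter's (row, col) position inside each super-pixel.
cube = (raw.reshape(h // pattern, pattern, w // pattern, pattern)
           .transpose(0, 2, 1, 3)
           .reshape(h // pattern, w // pattern, pattern * pattern))
print(cube.shape)                 # (272, 512, 16)
```

This rearrangement yields the mosaic-downsampled resolution; recovering the sensor's native spatial resolution is what the post-acquisition super-resolution methods mentioned above address.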
KEYWORDS: Cameras, Imaging systems, Sensors, Unmanned aerial vehicles, Control systems, System integration, Data processing, Visualization, Data acquisition, UAV imaging systems
Multispectral imaging technology analyzes, for each pixel, a wide spectrum of light and provides more spectral information than traditional RGB images. Most current Unmanned Aerial Vehicle (UAV) camera systems are limited in their number of spectral bands (≤10) and are usually not fully integrated with the ground controller to provide a live view of the spectral data.
We have developed a compact multispectral camera system containing two CMV2K 4×4 snapshot mosaic sensors, providing 31 bands in total and covering the visible and near-infrared spectral range (460–860 nm). It is compatible with (but not limited to) the DJI M600 and can be easily mounted on the drone. Our system is fully integrated with the drone, providing stable and consistent communication between the flight controller, the drone/UAV, and our camera payload. With our camera control application on an Android tablet connected to the flight controller, users can easily control the camera system with a live view of the data and useful information such as the histogram and sensor temperature. The system acquires images at a maximum frame rate of 2×20 fps and saves them to an internal storage of 1 TB. The GPS data from the drone are logged by our system automatically. After the flight, the data can be easily transferred to an external hard disk and then visualized and processed by our software into single multispectral cubes and one stitched multispectral cube, with a data quality report and a stitching report.
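Since the system pairs two 16-band mosaic sensors to deliver 31 bands in total, the per-sensor cubes must be merged into one spectrally ordered cube, with one band shared between the two sensors. The sketch below illustrates such a merge; the evenly spaced center wavelengths and the single 620 nm overlap are purely illustrative assumptions, not the product's actual band positions or processing pipeline.

```python
import numpy as np

# Hypothetical center wavelengths for two 16-band sensors covering
# 460-860 nm with one shared band (16 + 16 - 1 = 31 unique bands).
wl_a = np.linspace(460, 620, 16)
wl_b = np.linspace(620, 860, 16)      # first band overlaps wl_a's last

h, w = 8, 8
cube_a = np.zeros((h, w, 16)) + wl_a  # dummy per-band data
cube_b = np.zeros((h, w, 16)) + wl_b

# Concatenate spectrally, then drop the duplicate wavelength and
# order the bands by increasing wavelength.
wl = np.concatenate([wl_a, wl_b])
cube = np.concatenate([cube_a, cube_b], axis=2)
keep = np.unique(wl, return_index=True)[1]  # unique wl, sorted order
cube31 = cube[:, :, keep]
print(cube31.shape)                         # (8, 8, 31)
```

In a real pipeline the two sensor views would also need spatial co-registration before merging; that step is omitted here.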