Laser powder bed fusion (LPBF) is an emerging technology that is steadily gaining market share in the overall additive manufacturing market. One crucial requirement of the manufacturing process is a powder bed that is as flat and homogeneous as possible. If powder recoating runs into problems, e.g., because of recoater defects or missing powder, or if the previously printed layers introduce powder bed defects, this requirement cannot be fulfilled. Defects in the powder bed can lead to defects in the final part. Hence, the powder bed of each new layer should be inspected for anomalies; in this way, the formation of defects such as delamination can be detected at an early stage. We present a specialized machine vision system capable of detecting and partly classifying all powder bed anomalies. Machine vision in the build chamber of LPBF machines is challenging: observation must be oblique because the central view is blocked by the laser(s); many pixels and optimized imaging are needed to catch all defects; the field of view is large; and, in order not to increase the production time per layer, imaging and powder bed assessment must be fast while still capturing all relevant information about the defects. The main advantage of our approach is that we acquire not just one image but four, each illuminated by an individual light source at the ceiling of the build chamber. We show how these four images can be used to find, distinguish, and classify defects, using classical image processing as well as machine learning techniques.
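The abstract does not disclose the actual fusion pipeline, so the following is only a minimal sketch of one plausible classical step: fusing four directionally illuminated powder bed images into a defect-candidate map. File names, kernel size, and threshold are hypothetical placeholders.

```python
# Illustrative sketch only, not the authors' published pipeline.
# Idea: surface defects (grooves, bumps, recoater streaks) cast
# direction-dependent shadows, so the per-pixel intensity range across
# four differently lit exposures is large at defects and small on a
# flat, homogeneous powder bed.
import numpy as np
import cv2

def defect_candidates(paths, blur_ksize=5, thresh=30):
    """Fuse four images of the same powder bed, each lit from a different direction."""
    stack = np.stack(
        [cv2.GaussianBlur(cv2.imread(p, cv2.IMREAD_GRAYSCALE),
                          (blur_ksize, blur_ksize), 0)
         for p in paths]
    ).astype(np.int16)
    contrast = stack.max(axis=0) - stack.min(axis=0)   # direction-dependent shading
    mask = (contrast > thresh).astype(np.uint8) * 255  # candidate defect pixels
    return contrast, mask

# Hypothetical file names for the four illumination directions:
contrast, mask = defect_candidates(
    ["lit_north.png", "lit_east.png", "lit_south.png", "lit_west.png"]
)
```

The resulting mask would feed subsequent classification stages; the paper additionally applies machine learning, which this sketch omits.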
As in other imaging modalities, noise degrades image quality in optical coherence tomography (OCT); this is especially problematic in real-time intra-surgical applications, where multi-frame averaging is not available. In this work, we present an adapted self-supervised training approach for a blind-spot denoising network on OCT data. The proposed adaptation improves the stability of the method and avoids artifacts by increasing the realism of the training data. We show that this approach improves the quality of two-dimensional B-scans both qualitatively and quantitatively, even without paired training data. The improvement also carries over to live volumetric renderings composed of denoised two-dimensional scans, even when only very small network complexities can be used due to harsh time constraints.
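For readers unfamiliar with blind-spot training: the network never sees the true value at a masked pixel, so it must predict it from context, which removes pixel-wise independent noise. Below is a generic, hedged PyTorch sketch of one Noise2Void-style training step on noisy B-scans; it illustrates the masking principle, not the authors' specific adaptation for realism.

```python
# Minimal blind-spot (Noise2Void-style) training step; `model` is any
# small 2-D denoising CNN. Generic illustration under stated assumptions.
import torch

def blind_spot_step(model, batch, optimizer, n_mask=64):
    """batch: (B, 1, H, W) noisy B-scans; loss is computed only at masked pixels."""
    b, _, h, w = batch.shape
    target = batch.clone()
    inp = batch.clone()
    mask = torch.zeros_like(batch, dtype=torch.bool)
    for i in range(b):
        ys = torch.randint(1, h - 1, (n_mask,))
        xs = torch.randint(1, w - 1, (n_mask,))
        # Replace each blind-spot pixel with a diagonal neighbour so the
        # network cannot learn the identity mapping at that position.
        dy = torch.randint(0, 2, (n_mask,)) * 2 - 1   # in {-1, +1}
        dx = torch.randint(0, 2, (n_mask,)) * 2 - 1
        inp[i, 0, ys, xs] = batch[i, 0, ys + dy, xs + dx]
        mask[i, 0, ys, xs] = True
    pred = model(inp)
    loss = ((pred - target)[mask] ** 2).mean()        # MSE at masked pixels only
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()

# Tiny stand-in model and random data, just to show the call pattern:
model = torch.nn.Sequential(
    torch.nn.Conv2d(1, 16, 3, padding=1), torch.nn.ReLU(),
    torch.nn.Conv2d(16, 1, 3, padding=1),
)
opt = torch.optim.Adam(model.parameters(), lr=1e-4)
noisy = torch.rand(4, 1, 128, 128)
print(blind_spot_step(model, noisy, opt))
```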
Noise decreases image quality in optical coherence tomography (OCT) and can obscure important features in real-time visualizations. In this work, we show that a neural network can denoise volumetric OCT data for intra-surgical visualization in real time. We adapt a self-supervised training approach that does not require any paired training data. Several optimizations and trade-offs are required for deployment, with which we achieve processing times of only a few milliseconds. While still constrained by the real-time requirements, denoising in this scenario can enhance surface visibility and therefore enable guidance for more precise intra-surgical maneuvers.
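To make the "few milliseconds" budget concrete, here is a rough deployment sketch: a deliberately tiny CNN run per B-scan in half precision, assuming PyTorch with a CUDA GPU. Layer sizes and input resolution are hypothetical stand-ins for the small network complexities the abstract mentions.

```python
# Hedged latency sketch, not the authors' deployed network.
import time
import torch
import torch.nn as nn

tiny = nn.Sequential(                       # three conv layers keep latency low
    nn.Conv2d(1, 8, 3, padding=1), nn.ReLU(),
    nn.Conv2d(8, 8, 3, padding=1), nn.ReLU(),
    nn.Conv2d(8, 1, 3, padding=1),
).cuda().half().eval()

bscan = torch.rand(1, 1, 512, 512, device="cuda", dtype=torch.float16)
with torch.no_grad():
    for _ in range(10):                     # warm-up so the timing is stable
        tiny(bscan)
    torch.cuda.synchronize()
    t0 = time.perf_counter()
    denoised = tiny(bscan)
    torch.cuda.synchronize()
print(f"per-B-scan latency: {(time.perf_counter() - t0) * 1e3:.2f} ms")
```

A volume is then composed of such denoised B-scans; the per-scan budget is what forces the trade-off between network capacity and denoising quality described above.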
KEYWORDS: Deep learning, Tumors, Surgery, Neural networks, Hyperspectral imaging, RGB color model, Tissues, Cameras, Brain, Real-time optical diagnostics
Surgery for gliomas (intrinsic brain tumors), especially low-grade ones, is challenging due to the infiltrative nature of the lesion. Currently, no real-time, intra-operative, label-free, wide-field tool is available to assist and guide the surgeon in finding the relevant demarcations for these tumors. While marker-based methods exist for high-grade gliomas, there is no convenient solution for the low-grade case; thus, marker-free optical techniques represent an attractive option. Although RGB imaging is a standard tool in surgical microscopes, it does not contain sufficient information for tissue differentiation. We leverage the richer information of hyperspectral imaging (HSI), acquired with a snapscan camera in the 468–787 nm range coupled to a surgical microscope, to build a deep-learning-based diagnostic tool for cancer resection with potential for intra-operative guidance. However, the main limitation of the HSI snapscan camera is its image acquisition time, which limits widespread deployment in the operating theater. Here, we investigate the effect of HSI channel reduction and pre-selection to scope the design space for the development of cheaper and faster sensors. Neural networks are used to identify the spectral channels most important for tumor tissue differentiation, optimizing the trade-off between the number of channels and precision to enable real-time intra-surgical application. We evaluate the performance of our method on a clinical dataset acquired during surgery on five patients. By demonstrating that low-grade glioma can be detected efficiently, these results can lead to better cancer resection demarcations, potentially improving treatment effectiveness and patient outcome.
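The abstract does not specify how channel importance is derived, so the sketch below shows one common way a neural network can rank spectral bands: a learnable per-channel gate with an L1 sparsity penalty in front of a small classifier. The channel count, layer sizes, and data here are hypothetical placeholders.

```python
# Hedged illustration of neural-network-based spectral channel ranking;
# the paper's actual selection method may differ.
import torch
import torch.nn as nn

class GatedClassifier(nn.Module):
    def __init__(self, n_channels=104, n_classes=2):   # 104 bands: placeholder
        super().__init__()
        self.gate = nn.Parameter(torch.ones(n_channels))  # one weight per band
        self.head = nn.Sequential(
            nn.Linear(n_channels, 64), nn.ReLU(), nn.Linear(64, n_classes)
        )

    def forward(self, spectra):              # spectra: (B, n_channels) per-pixel
        return self.head(spectra * self.gate)

model = GatedClassifier()
criterion = nn.CrossEntropyLoss()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)

spectra = torch.rand(32, 104)                # dummy pixel spectra
labels = torch.randint(0, 2, (32,))          # dummy tumor / non-tumor labels
opt.zero_grad()
loss = criterion(model(spectra), labels) + 1e-3 * model.gate.abs().sum()
loss.backward()
opt.step()

# After training, the largest |gate| entries indicate the most informative
# bands; retraining on only those bands probes the channel-count/precision
# trade-off discussed above.
top_bands = torch.topk(model.gate.abs(), k=10).indices
```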