The persistent shortage of qualified personnel in operating theatres increases the workload of the remaining staff, which can lead to serious complications during surgical procedures. To address this issue, this research project develops a comprehensive operating theatre system that offers real-time monitoring of all surgical instruments in the operating theatre. The foundation of this work is a neural network trained to classify and identify eight instruments from four surgical instrument groups. A novel aspect of this study lies in how the training and validation data sets were selected and generated: they consist of synthetically generated image data rather than real image data. Three virtual scenes were designed to serve as backgrounds for a generation algorithm that randomly positions the instruments within these scenes and produces annotated rendered RGB images. To assess the efficacy of this approach, a separate real data set was created for testing the neural network. It was found that neural networks trained solely on synthetic data performed well when applied to real data. This paper shows that it is possible to train neural networks with purely synthetically generated data and use them to recognize surgical instruments in real images.
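The generation step described above — randomly positioning instruments in a virtual scene and emitting annotations alongside each rendered image — can be sketched as follows. This is a minimal 2D stand-in, not the authors' pipeline: the instrument names, scene resolution, and bounding-box annotation format are all assumptions for illustration; the actual work renders 3D scenes.

```python
import random

# Hypothetical set of eight instruments from four surgical groups
# (assumed names; the paper does not list the specific instruments).
INSTRUMENTS = [
    ("scalpel", "cutting"), ("scissors", "cutting"),
    ("forceps", "grasping"), ("clamp", "grasping"),
    ("retractor", "retracting"), ("hook", "retracting"),
    ("needle_holder", "suturing"), ("suture_scissors", "suturing"),
]

SCENE_W, SCENE_H = 640, 480  # assumed render resolution

def generate_scene(num_instruments, rng=random):
    """Randomly place instruments in a virtual scene and return the
    annotations a renderer would emit alongside the RGB image."""
    annotations = []
    for _ in range(num_instruments):
        name, group = rng.choice(INSTRUMENTS)
        w, h = rng.randint(40, 160), rng.randint(20, 80)  # instrument footprint
        x = rng.randint(0, SCENE_W - w)                   # keep box inside scene
        y = rng.randint(0, SCENE_H - h)
        annotations.append({"class": name, "group": group,
                            "bbox": (x, y, w, h)})
    return annotations

scene = generate_scene(5)
print(len(scene))  # → 5 annotated instruments for one rendered image
```

Because every image is generated, the class label and bounding box of each instrument are known exactly, so no manual annotation of training data is needed.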
External Fabry-Perot resonators are widely used in optics and are well established in areas such as frequency selection and spectroscopy. However, fine-tuning these resonators for the most efficient coupling into the optical path is a time-consuming task that is usually performed manually by trained personnel. The state of the art includes many approaches for automatic alignment, but these are designed for specific optical configurations and cannot be generalized. Moreover, none of them addresses the identification of the spatial degrees of freedom of the resonator. Knowledge of this exact pose information can be integrated into the alignment process and has great potential for automation. In this work, convolutional neural networks (CNNs) are applied to identify the sensitive spatial degrees of freedom of a Fabry-Perot resonator in a simulation environment. For this purpose, well-established CNN architectures, typically used for feature extraction, are adapted to this regression problem. The CNN input was chosen to be the intensity profiles of the transversal modes, which can be obtained from the transmitted power behind the resonator. These modes are known to be highly correlated with the coupling quality and thus with the spatial location of the resonator. To achieve an exact pose estimation, the CNN input consists of several images of mode profiles, which are propagated through an encoder structure followed by fully-connected layers that provide the four spatial parameters as the network output. For training and evaluation, intensity images as well as resonator poses are obtained from a simulation of one free spectral range of a resonator. Finally, different encoder structures, including a memory-efficient, small self-developed network architecture, are evaluated.
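The data flow of the pose-estimation network — a stack of mode-profile images propagated through an encoder and a fully-connected head that regresses four spatial parameters — can be sketched with plain NumPy. The image count, resolution, hidden width, and random weights below are placeholder assumptions for illustration; the paper's actual encoders are convolutional architectures trained on simulated data, not this toy flattening encoder.

```python
import numpy as np

rng = np.random.default_rng(0)

K, H, W = 4, 32, 32  # assumed: 4 mode-profile images of 32x32 pixels each

def pose_head(images, w1, w2):
    """Toy encoder + fully-connected head: flattens the stacked mode
    profiles, applies one hidden layer with ReLU, and regresses the
    four spatial degrees of freedom of the resonator."""
    x = images.reshape(-1)        # flatten the K x H x W image stack
    h = np.maximum(w1 @ x, 0.0)   # hidden representation (ReLU)
    return w2 @ h                 # 4 pose parameters

# Untrained placeholder weights; in practice these are learned from
# (intensity image, resonator pose) pairs produced by the simulation.
w1 = rng.normal(scale=0.01, size=(64, K * H * W))
w2 = rng.normal(scale=0.01, size=(4, 64))

modes = rng.random((K, H, W))     # stand-in for simulated mode intensity profiles
pose = pose_head(modes, w1, w2)
print(pose.shape)  # → (4,)
```

Using several mode images per sample, rather than a single frame, gives the network multiple views of the coupling state across the scanned spectral range, which is what makes the four-parameter regression well posed.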