This PDF file contains the front matter associated with SPIE Proceedings Volume 12101, including the Title Page, Copyright information, Table of Contents, and Conference Committee listings.
With fast-growing computing power and the availability of large amounts of data, deep learning (DL) algorithms are achieving unprecedented success across different fields. One of their great achievements in health care is medical imaging. Medical image segmentation, such as lung cancer segmentation, is an important tool that supports clinical decision systems. U-Net, an innovative DL architecture, has shown great promise in segmenting regions of interest in medical images. One of the key advantages of U-Net is that it constructs contracting and expanding paths with symmetric skip connections, which allows it to capture context information and enable precise localization in a single network. Although U-Net and its variants have been widely adopted for medical image segmentation, some limitations remain to be addressed to meet specific requirements, including hardware memory consumption and segmentation accuracy. In this work, we propose a new U-Net-based DL architecture, U-PEN (U-Net with Progressively Expanded Neuron), that requires less hardware memory while achieving highly accurate segmentation. The underlying hypothesis is that the model can capture image context more efficiently than existing models by incorporating nonlinear functions into the expansion of hidden neurons in the encoder path. The proposed network eliminates additional convolutional layers, thus producing fewer trainable parameters than U-Net. The model was tested on two benchmark datasets, DRIVE and CHASE_DB1, for different medical image segmentation problems. The experimental results show that the suggested architecture is effective, yielding better performance than U-Net and Residual U-Net in most of the experiments.
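The abstract does not spell out the expansion mechanism; as a purely illustrative sketch, assuming the progressively expanded neuron concatenates Maclaurin-series terms of each feature map so that a single following convolution can mix nonlinear channel transforms without extra convolutional layers:

```python
import math
import numpy as np

def progressive_expansion(x, degree=3):
    # Hypothetical progressively-expanded-neuron (PEN) block: concatenate
    # Maclaurin-style terms x**k / k! of the input feature map along the
    # channel axis; the first term (k=1) is the original feature map.
    terms = [x ** k / math.factorial(k) for k in range(1, degree + 1)]
    return np.concatenate(terms, axis=-1)

x = np.random.rand(8, 8, 4)              # a 4-channel feature map
y = progressive_expansion(x, degree=3)
print(y.shape)                           # (8, 8, 12)
```

A 1x1 convolution applied to the expanded map can then learn a weighted mix of nonlinear transforms per channel, which is where the parameter savings over stacked convolutions would come from.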
Substantial research has explored methods to optimize convolutional neural networks (CNNs) for tasks such as image classification and object detection, but research into the image quality drivers of computer vision performance has been limited. Additionally, there are indications that image degradations such as blur and noise affect human visual interpretation and machine interpretation differently. The General Image Quality Equation (GIQE) predicts overhead image quality for human analysis using the National Image Interpretability Rating Scale (NIIRS), but no such model exists to predict image quality for interpretation by CNNs. Here, we assess the relationship between image quality variables and CNN performance using an information-theoretic framework. Specifically, we examine the impacts of resolution, blur, and noise on CNN performance for models trained with in-distribution and out-of-distribution distortions. We explore the relationships between CNN performance and image information content as measured by standard Shannon entropy and a similar metric based on image gradients. Using two datasets, we observe that while generalization remains a significant challenge for CNNs faced with out-of-distribution image distortions, CNN performance on low visual quality images remains strong with appropriate training, indicating the potential to expand the design trade space for sensors providing data to computer vision systems.
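The two information measures can be sketched as follows; the gradient-based variant here is an assumption modeled on standard practice, not necessarily the paper's exact metric:

```python
import numpy as np

def shannon_entropy(img, bins=256):
    # Shannon entropy (bits/pixel) of a grayscale intensity histogram.
    hist, _ = np.histogram(img, bins=bins, range=(0, 256))
    p = hist / hist.sum()
    p = p[p > 0]                       # drop empty bins (0*log 0 := 0)
    return np.sum(-p * np.log2(p))

def gradient_entropy(img, bins=256):
    # Entropy of gradient magnitudes: a rough proxy for edge/texture
    # information content.
    gy, gx = np.gradient(img.astype(float))
    mag = np.hypot(gx, gy)
    hist, _ = np.histogram(mag, bins=bins)
    p = hist / hist.sum()
    p = p[p > 0]
    return np.sum(-p * np.log2(p))

rng = np.random.default_rng(0)
noisy = rng.integers(0, 256, size=(64, 64))
print(shannon_entropy(np.zeros((64, 64))) == 0)  # True: a flat image carries no information
print(shannon_entropy(noisy) > 7)                # True: noise is near-maximal entropy
```

Note that, as the abstract's results suggest, high entropy does not mean high task-relevant information: pure noise maximizes Shannon entropy while being useless to a CNN.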
Automated monitoring of low resolution, deep-space objects in wide field of view (WFOV) imaging systems can benefit from the improved performance of deep learning object detectors. The PANDORA sensor array, located in Maui at the Air Force Maui Optical and Supercomputing Site, is an exemplar of a scalable imaging architecture that can detect dim deep-space objects while maintaining a WFOV. The PANDORA system captures 20°×120° images of the night sky oriented along the Geosynchronous Earth Orbit (GEO) belt at a rate of two frames per minute. Prior work has established a baseline performance for the detection of GEO satellite objects using classical, feature-based detectors. This work extends GEO object detection and tracking methodologies by implementing a spatio-temporal deep learning architecture (GEO-SPANN), further improving the state of the art in GEO satellite object detection and tracking. GEO-SPANN consists of a learned spatial detector coupled with a tracking algorithm to detect and re-identify space objects in temporal sequences. We present the detection and tracking results of GEO-SPANN on an annotated PANDORA dataset, reporting an overall maximum F1 point of 0.814, corresponding to 0.766 precision and 0.868 recall. GEO-SPANN advances strategies for autonomous detection and tracking of GEO satellites, enabling the PANDORA sensor system to be leveraged for satellite orbit catalog maintenance and anomaly detection.
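The reported operating point is internally consistent: the maximum F1 is the harmonic mean of the stated precision and recall:

```python
def f1(precision, recall):
    # Harmonic mean of precision and recall.
    return 2 * precision * recall / (precision + recall)

print(round(f1(0.766, 0.868), 3))  # 0.814, matching the reported maximum F1 point
```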
For many smart road applications, object detection and recognition are among the most important components. Indeed, precise detection of road objects is a critical task for autonomous urban driving and robotics technologies. In this paper, we describe our real-time smart system that detects and blurs undesirable road objects to anonymize and secure road users. Our proposed method is divided into three steps. The first step is the acquisition of images using the VIAPIX® system [1] developed by the ACTRIS company [2]. The second step is based on a neural approach for object detection, namely vehicles, persons, road signs, etc. The third step blurs, among the various detected objects, only those that are undesirable on the road, i.e., people's faces and license plates. The obtained results demonstrate the efficiency of our robust approach in terms of detection accuracy.
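The third (anonymization) step can be sketched as follows, assuming the detector supplies axis-aligned boxes; the repeated mean blur is an illustrative choice, not necessarily the system's filter:

```python
import numpy as np

def blur_regions(img, boxes, k=7):
    # Anonymization step: blur only inside detected boxes (x1, y1, x2, y2),
    # leaving the rest of the road scene intact. k passes of a 3x3
    # neighbourhood average approximate a Gaussian blur.
    out = img.astype(float).copy()
    for x1, y1, x2, y2 in boxes:
        patch = out[y1:y2, x1:x2]
        for _ in range(k):
            p = np.pad(patch, 1, mode='edge')
            patch = (p[:-2, :-2] + p[:-2, 1:-1] + p[:-2, 2:] +
                     p[1:-1, :-2] + p[1:-1, 1:-1] + p[1:-1, 2:] +
                     p[2:, :-2] + p[2:, 1:-1] + p[2:, 2:]) / 9.0
        out[y1:y2, x1:x2] = patch
    return out

rng = np.random.default_rng(1)
img = rng.random((20, 20))                      # stand-in grayscale frame
out = blur_regions(img, [(5, 5, 15, 15)])       # e.g. a detected face box
print(out[5:15, 5:15].std() < img[5:15, 5:15].std())  # True: region smoothed
```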
This paper presents a deep learning approach to image classification in satellite imagery based on late fusion in conjunction with pre-trained networks. The pre-trained models are especially useful for image classification and can be used as the backbone for transfer learning. The intuition behind transfer learning is that these pre-trained models effectively serve as a generic model of the visual world. This paper addresses the problem of object classification in environments with limited representative data and exploits pre-trained networks in conjunction with late fusion to perform classification on satellite images. Interestingly, the pre-trained networks, namely ResNet50 and VGG16, were trained on ImageNet (a large collection of photographs), yet yield highly accurate results on satellite images. The experimental results show that the late fusion method outperforms the other competing approaches by a considerable margin of over 10 percentage points.
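A minimal sketch of late fusion, assuming the scheme averages (optionally weighted) per-model class probabilities; the paper's exact fusion rule may differ:

```python
import numpy as np

def late_fusion(prob_list, weights=None):
    # Late fusion: combine each model's softmax output into a single
    # class-probability vector by (weighted) averaging.
    probs = np.stack(prob_list)                       # (models, classes)
    w = np.ones(len(prob_list)) if weights is None else np.asarray(weights, float)
    w = w / w.sum()
    return (w[:, None] * probs).sum(axis=0)

p_resnet = np.array([0.7, 0.2, 0.1])   # hypothetical ResNet50 softmax output
p_vgg    = np.array([0.4, 0.5, 0.1])   # hypothetical VGG16 softmax output
fused = late_fusion([p_resnet, p_vgg])
print(fused.argmax())                  # class 0 wins: 0.55 vs 0.35 vs 0.10
```

Fusing at the probability level (rather than concatenating features) lets each backbone be fine-tuned independently and keeps the combination step training-free.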
Cleaning data is one of the most important tasks in data science and machine learning. It addresses many problems in datasets, such as time complexity, added noise, and so on. In huge datasets, outliers are extreme values that deviate from the overall pattern of a sample. Usually, they indicate variability in measurements or experimental errors. Depending on whether the entity is numeric or categorical, we can use different techniques, such as histograms, box plots, and z-scores, to study its distribution and detect outliers. This work aims to develop a model-based method to detect undesirable points in a 3D point cloud representing a building. Our proposed method relies on the Z-score concept for filtering outliers, well known in statistics as the standard score. The idea behind using this concept is that it helps determine whether a data value is above or below average and at what distance. More specifically, the Z-score indicates how many standard deviations a data point is from the mean.
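A minimal sketch of the Z-score criterion applied per axis to a synthetic point cloud; the exact rule used in the paper (per-axis vs. radial distance, threshold value) is an assumption here:

```python
import numpy as np

def zscore_filter(points, threshold=3.0):
    # Keep points whose z-score |x - mean| / std is below the threshold
    # on every axis; the rest are flagged as outliers.
    z = np.abs((points - points.mean(axis=0)) / points.std(axis=0))
    mask = (z < threshold).all(axis=1)
    return points[mask], points[~mask]

rng = np.random.default_rng(0)
cloud = rng.normal(size=(1000, 3))                 # synthetic building scan
cloud = np.vstack([cloud, [[50.0, 50.0, 50.0]]])   # one far-away false point
inliers, outliers = zscore_filter(cloud)
print(len(outliers))   # the injected point is the only one rejected
```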
Cleaning a building point cloud is a challenging issue that is crucial for a better representation of the scan-to-BIM 3D model. During the scan, the point cloud is generally influenced by several factors. The scanner can produce false data due to reflections on reflective surfaces such as mirrors, windows, etc. The false points can form a whole mass of disturbing data that is not easy to detect. In this work, we use a statistical method called the box plot to clean the data of false points. This method is an extension of histogram analysis. We test the proposed method on a private database containing four building point clouds specifically designed for building information modeling (BIM) applications. The experimental results are satisfying, and our method detects most of the false points in the database.
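The box-plot (Tukey fence) rule can be sketched as follows, applied here to one coordinate of a toy point cloud; the multiplier k = 1.5 is the conventional choice, not necessarily the paper's:

```python
import numpy as np

def boxplot_filter(values, k=1.5):
    # Tukey box-plot rule: keep values inside [Q1 - k*IQR, Q3 + k*IQR].
    q1, q3 = np.percentile(values, [25, 75])
    iqr = q3 - q1
    return (values >= q1 - k * iqr) & (values <= q3 + k * iqr)

# e.g. applied to the z-coordinates of a small patch of the cloud
z = np.array([1.0, 1.1, 0.9, 1.2, 1.05, 9.0])   # 9.0: a reflection artifact
mask = boxplot_filter(z)
print(z[~mask])   # [9.] -- the false point is flagged
```

Because the fences are built from quartiles rather than the mean and standard deviation, the rule is robust: the artifact itself barely shifts the thresholds that reject it.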
Handwritten word segmentation is one of the important components of offline handwritten word recognition systems and has attracted many researchers. In this paper, we propose a Voronoi diagram approach to segment a handwritten word into stroke elements for handwritten word recognition. Unlike conventional thinning techniques, the Voronoi diagram approach is used to create the segmentation path, and the Chebyshev distance transform is suggested for implementing the Voronoi diagram. Test results and theoretical analysis show that the proposed method is capable of creating handwritten word segmentations of similar quality to the conventional thinning approach at much lower computational cost.
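A brute-force illustration of the idea, assuming the Voronoi regions are grown from stroke seed points under the Chebyshev (chessboard) distance; real implementations use a linear-time distance transform instead:

```python
import numpy as np

def chebyshev_voronoi(shape, seeds):
    # Label each pixel with its nearest seed under the Chebyshev
    # (chessboard) distance max(|dy|, |dx|); the boundary between
    # labels is the Voronoi segmentation path between strokes.
    ys, xs = np.indices(shape)
    d = np.stack([np.maximum(np.abs(ys - sy), np.abs(xs - sx))
                  for sy, sx in seeds])
    return d.argmin(axis=0), d.min(axis=0)

labels, dist = chebyshev_voronoi((7, 7), [(1, 1), (5, 5)])
print(labels[0, 0], labels[6, 6], dist[3, 3])   # 0 1 2
```

The Chebyshev metric grows square wavefronts, which is what makes the distance transform cheap to compute on a pixel grid compared with iterative thinning.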
Project management plays a fundamental role in national development and economic improvement, and schedule management is one of its knowledge areas. This paper deals with the Resource-Constrained Project Scheduling Problem (RCPSP), which is part of schedule management. The objective is to minimize the project duration while constraining the amount of resources used during project scheduling. In this problem, resource constraints and precedence relationships among activities are the important constraints on project scheduling. Many exact, heuristic, and meta-heuristic methods have been proposed to solve the problem, but there is a lack of investigation using newer methods such as neural networks and machine learning. In this context, we investigate the performance of a feed-forward neural network on the standard single-mode RCPSP. The artificial neural network learns from the scheduling stage, characterized by parameters such as network complexity, resource factor, and resource strength, calculated at each stage of project scheduling, together with identified priority rules. After the learning process, the developed network can automatically select an appropriate priority rule to pick an unscheduled activity from the list of eligible activities and schedule all activities of the project in accordance with the specified project constraints.
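The rule-selection step can be sketched as follows; the network shape, the feature vector, and the two priority rules shown are illustrative assumptions:

```python
import numpy as np

# Two classic RCPSP priority rules (lower score = scheduled first).
RULES = {
    "SPT": lambda a: a["duration"],        # shortest processing time
    "MIS": lambda a: -a["n_successors"],   # most immediate successors
}

def pick_activity(eligible, features, W, b):
    # Hypothetical rule-selection step: a one-layer network scores the
    # priority rules from scheduling-stage features (network complexity,
    # resource factor, resource strength, ...); the winning rule then
    # ranks the eligible activities and the top one is scheduled next.
    scores = W @ features + b              # one score per rule
    rule = list(RULES)[int(np.argmax(scores))]
    return min(eligible, key=RULES[rule])

eligible = [{"id": 1, "duration": 4, "n_successors": 2},
            {"id": 2, "duration": 2, "n_successors": 1}]
feats = np.array([0.5, 0.3, 0.8])              # toy stage features
W, b = np.zeros((2, 3)), np.array([1.0, 0.0])  # toy weights favouring SPT
print(pick_activity(eligible, feats, W, b)["id"])  # 2 (shortest duration)
```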
Motion capture systems are widely used to measure athletic performance and as a diagnostic tool in sports medicine. Standard motion capture systems record body movement using (1) a set of cameras to localize body segments, or (2) specialized suits in which inertial measurement units are directly attached to body segments. The major drawbacks of these systems are limited portability, affordability, and accessibility. This contribution presents a markerless motion capture system using a commercially available sports camera and the OpenPose human pose estimation algorithm. We have validated the proposed markerless system by analyzing human biometrics during running and jumping movements.
Commercial deep learning capabilities are available for many applications such as computer vision processing and intelligent chatbots. The Google Cloud Platform product Google Dialogflow provides lifelike conversational artificial intelligence (AI) using machine learning (ML) to generate natural conversations between computers and humans. This ML uses natural language understanding (NLU) to recognize a user's intent and extracts key information in the form of entities. We have developed a user-friendly application by studying the hazardous material database and first aid safety guidelines and by observing how first responders access this information in the field. We created the Trusted and Explainable Artificial Intelligence for Saving Lives (TruePAL) virtual assistant using Dialogflow and TensorFlow paired with EasyOCR. The chatbot supports first responders by providing voice interaction, which eliminates additional steps such as browsing through multiple categories when searching for information. Using feedback from our field interviews, the voice interface has been developed to let the first responder focus on the immediate emergency. With fewer distractions, the first responder is able to engage the incident more effectively. The partially hands-free TruePAL chatbot assistant reaches the correct guidance an average of 1.9 seconds faster than the widely used application NIH WISER, which requires full attention to operate. We combined this intelligent chatbot with a separate visual processing capability to analyze hazardous signage and generate the proper guidance for first responders. With the evolving functionality of AI tools, the use of virtual assistants in first responder technology will be an advancement benefiting the safety of both first responders and civilians.
This paper presents the development of an AI assistant, Trusted and Explainable Artificial Intelligence for Saving Lives (TruePAL), to provide first responders with real-time warnings of potential crashes. The TruePAL system employs AI and deep learning technology to save the lives of first responders and roadside crews in and around active traffic. A deep neural network (DNN) and a Non-Axiomatic Reasoning System (NARS) are implemented as the AI system. A mobile app with an AI interface is developed for verbal communication with the first responders. The TruePAL team has developed an explainable AI approach by opening up the DNN black box to extract the activation filters of various features and parts of the targeted objects. The combination of the DNN and NARS makes the TruePAL system explainable to its users. TruePAL ingests signals from on-board cameras, radar, and other sensors, and analyzes the environment and traffic patterns to generate timely warnings so that drivers and roadside crews can avoid crashes. The TruePAL team, in collaboration with the Miami-Dade Police Department, has designed five use cases and multiple sub-scenarios in the CARLA driving simulator to test TruePAL's capability to warn first responder drivers in time in potential crash scenarios. We have successfully demonstrated this capability in over a dozen scenarios based on the use cases. The preliminary simulation results show that TruePAL can provide drivers and crew members with advance warning before a crash occurs.
There is an urgent need for streamlining radiology Quality Assurance (QA) programs to make them better and faster. Here, we present a novel approach, Artificial Intelligence (AI)-Based Quality Assurance by Restricted Investigation of Unequal Scores (AQUARIUS), for re-defining radiology QA, which reduces human effort by up to several orders of magnitude over existing approaches. AQUARIUS typically includes automatic comparison of AI-based image analysis with natural language processing (NLP) on radiology reports. Only the usually small subset of cases with discordant reads is subsequently reviewed by human experts. To demonstrate the clinical applicability of AQUARIUS, we performed a clinical QA study on Intracranial Hemorrhage (ICH) detection in 1936 head CT scans from a large academic hospital. Immediately following image acquisition, scans were automatically analyzed for ICH using a commercially available software (Aidoc, Tel Aviv, Israel). Cases rated positive for ICH by AI (ICH-AI+) were automatically flagged in radiologists’ reading worklists, where flagging was randomly switched off with probability 50%. Using AQUARIUS with NLP on final radiology reports and targeted expert neuroradiology review of only 29 discordantly classified cases reduced the human QA effort by 98.5%, where we found a total of six non-reported true ICH+ cases, with radiologists’ missed ICH detection rates of 0.52% and 2.5% for flagged and non-flagged cases, respectively. We conclude that AQUARIUS, by combining AI-based image analysis with NLP-based pre-selection of cases for targeted human expert review, can efficiently identify missed findings in radiology studies and significantly expedite radiology QA programs in a hybrid human-machine interoperability approach.
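The triage logic at the heart of AQUARIUS reduces to selecting the discordant reads for expert review; the field names here are illustrative:

```python
def select_for_review(cases):
    # AQUARIUS-style triage: only cases where the AI image read and the
    # NLP-extracted report read disagree are routed to human experts;
    # concordant cases skip manual QA entirely.
    return [c for c in cases if c["ai_ich"] != c["report_ich"]]

cases = [
    {"id": 1, "ai_ich": True,  "report_ich": True},   # concordant
    {"id": 2, "ai_ich": True,  "report_ich": False},  # discordant -> review
    {"id": 3, "ai_ich": False, "report_ich": False},  # concordant
]
print([c["id"] for c in select_for_review(cases)])    # [2]
```

In the study above this logic reduced the review load from 1936 scans to 29 discordant cases, which is where the reported 98.5% effort reduction comes from.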
As generative adversarial networks (GANs) continue to show progress in generating realistic imagery, there is a need to develop methods for distinguishing fake images from real ones. This paper reviews state-of-the-art methods for detecting real vs. GAN-generated images of faces. The methods examined are NoiseScope, Resynthesis, Attribution Network, CNNDetector, and DFT-based detection. Most are based on deep-learning architectures, except for the one using the Discrete Fourier Transform (DFT) with a simple classifier based on azimuthal averaging of the image spectrum. While one might expect the deep-learning-based methods to perform better, our initial experiments show that the DFT-based classifier performed best and was the fastest and simplest to implement. These results illustrate that simpler methods can sometimes achieve better results when comparing computation speed and performance, and they point to the usefulness of considering a variety of approaches for the detection of fake imagery. The robustness of the methods was also assessed by adding different types of noise to the GAN-generated images.
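The DFT feature can be sketched as follows: the 2-D spectrum magnitude is averaged over annuli of equal radius to give a 1-D radial profile, which a simple classifier then separates (GAN upsampling tends to leave characteristic high-frequency artifacts in this profile):

```python
import numpy as np

def azimuthal_average(img):
    # 1-D radial profile of the 2-D spectrum: FFT magnitude averaged
    # over annuli of (integer) radius around the spectrum centre.
    mag = np.abs(np.fft.fftshift(np.fft.fft2(img)))
    h, w = img.shape
    y, x = np.indices((h, w))
    r = np.hypot(y - h // 2, x - w // 2).astype(int)
    sums = np.bincount(r.ravel(), weights=mag.ravel())
    counts = np.maximum(np.bincount(r.ravel()), 1)  # guard empty annuli
    return sums / counts

profile = azimuthal_average(np.random.rand(64, 64))
print(profile.shape)   # one mean magnitude per integer radius
```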
Deep learning alone has achieved state-of-the-art results in many areas, from complex gameplay to predicting protein structures. In particular, in image classification and recognition, deep learning models have achieved much higher accuracy than humans. But it can be very difficult to debug a deep learning model when it doesn't work: such models can be vulnerable and are very sensitive to changes in the data distribution. Here, we combine deep learning-based object recognition and tracking with an adaptive neurosymbolic network agent, called the Non-Axiomatic Reasoning System (NARS), that can adapt to its environment by building concepts based on perceptual sequences. We achieved an improved intersection-over-union (IoU) object recognition performance of 0.65 with the adaptively retrained model, compared to an IoU of 0.31 with the COCO-pretrained model. We also improved the object detection limits using radar sensors in a simulated environment.
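The reported metric is the standard intersection-over-union; for axis-aligned boxes:

```python
def iou(box_a, box_b):
    # Intersection-over-union of two boxes given as (x1, y1, x2, y2).
    ax1, ay1, ax2, ay2 = box_a
    bx1, by1, bx2, by2 = box_b
    iw = max(0.0, min(ax2, bx2) - max(ax1, bx1))   # overlap width
    ih = max(0.0, min(ay2, by2) - max(ay1, by1))   # overlap height
    inter = iw * ih
    union = (ax2 - ax1) * (ay2 - ay1) + (bx2 - bx1) * (by2 - by1) - inter
    return inter / union if union else 0.0

print(iou((0, 0, 2, 2), (1, 1, 3, 3)))   # 1/7 ≈ 0.143
```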
Earthquakes, and their cascading threats to economic and social sustainability, are a problem common to China and Chile. In such emergencies, automatic image recognition systems have become critical tools for preventing and reducing civilian casualties. Human crowd detection and estimation are fundamental to automatic recognition under life-threatening natural disasters. However, detecting and estimating crowds in scenes is nontrivial due to occlusion, complex behaviors, posture changes, and camera angles, among other issues. This paper presents the first steps in developing an intelligent Earthquake Early Warning System (EEWS) between China and Chile. The EEWS exploits the ability of deep learning architectures to properly model the different spatial scales of people and varying degrees of crowd density. We propose an autoencoder architecture for crowd detection and estimation because it creates compressed representations of the original crowd input images in its latent space. The proposed architecture comprises two cascaded autoencoders. The first performs reconstructive masking of the input images, while the second generates Focal Inverse Distance Transform (FIDT) maps. Thus, the cascaded autoencoders improve the ability of the network to locate people and crowds, generating high-quality crowd maps and more reliable count estimates.
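An FIDT map can be sketched as follows, using a formulation common in the FIDT literature; the constants shown are illustrative:

```python
import numpy as np

def fidt_map(shape, heads, alpha=0.02, beta=0.75, c=1.0):
    # Focal Inverse Distance Transform: for each pixel, the distance d
    # to the nearest annotated head is mapped to 1 / (d**(alpha*d + beta) + c),
    # so the map peaks sharply at head locations and decays with distance,
    # keeping nearby heads separable even in dense crowds.
    ys, xs = np.indices(shape)
    d = np.min(np.stack([np.hypot(ys - hy, xs - hx) for hy, hx in heads]),
               axis=0)
    return 1.0 / (d ** (alpha * d + beta) + c)

m = fidt_map((32, 32), [(8, 8), (20, 25)])
print(m.max())   # 1.0, attained exactly at the annotated head pixels
```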
Face recognition has advanced rapidly in the past several years due to advances in deep learning, and more and more applications have emerged as the technology matures. However, face recognition under uncontrolled conditions is still quite challenging. For example, real-world applications usually encounter non-frontal poses, which cause face recognition systems to degrade or even fail. This work therefore studies the issue of non-ideal facial pose in face recognition and proposes to address it via pose-aware quality assessment and judgement. We first implement a standard face recognition system consisting of an MTCNN face detection stage and a FaceNet face recognition stage. Then, we introduce a Quality Assessment and Judgement (QAJ) stage between the face detection stage and the face recognition stage. The QAJ stage conducts facial pose estimation, realized through a DNN. Given a facial input, the QAJ stage assesses the facial pose and judges whether the input is of satisfactory quality. Inputs of poor quality are screened out and dropped, while inputs of high quality are passed to the subsequent face recognition stage to output a final recognized identity. In the experiments, we compare the face recognition rates with and without the QAJ stage. Using a pose threshold of 15°, we find that the recognition rate is improved by 2.83%, a significant improvement in recognition performance that justifies our proposed QAJ technique.
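The judgement step can be sketched as a simple threshold on the estimated head-pose angles; whether the system thresholds one angle or all three is an assumption here:

```python
def qaj_pass(yaw, pitch, roll, threshold=15.0):
    # Quality judgement between detection and recognition: accept a face
    # only if every estimated head-pose angle (degrees) is within the
    # threshold; otherwise the crop is dropped before recognition.
    return all(abs(a) <= threshold for a in (yaw, pitch, roll))

print(qaj_pass(10.0, -5.0, 2.0))   # True  -> forwarded to FaceNet
print(qaj_pass(40.0, 0.0, 0.0))   # False -> screened out
```

Dropping unreliable inputs trades coverage for accuracy: the recognizer never sees crops on which its embedding is known to be unstable.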
With the development of artificial intelligence, sentiment analysis can be applied in various areas such as human-computer interaction, personal status monitoring, criminal investigation, and entertainment. Within sentiment analysis, various modalities such as facial expression, voice, EEG signals, and text are being studied. Among these, facial expression recognition is one of the most actively studied approaches because learning data are relatively easy to collect and the method is easy to apply to real life compared with other modalities. Recently, research on facial expression recognition using deep learning has been actively conducted and shows relatively high performance. Deep learning has the advantage of being easy to apply to a variety of data, but accuracy varies widely depending on the effects of occlusion, pose, and illumination on feature extraction. In addition, in expression recognition, similar objects such as faces are always present in the data, and only specific regions such as the eyes, nose, and mouth carry the information needed for learning, while the remaining regions such as background and hair are insignificant parts of the data. Learning all the features in the data therefore not only takes a long time but also uses computing resources inefficiently. We thus propose a convolutional neural network algorithm combined with a spatial transformer network, which aids facial expression recognition by focusing on specific parts of the face.
Crowd counting has been a popular research topic in the field of computer vision due to the variation of human head scales and the interference of background noise. Some existing methods use multi-level feature fusion to address scale variation, but background noise interference may become more serious because shallow features are involved in the fusion process. In this paper, we propose a Multi-level Information Sharing Network based on Residual Attention (RA-MISNet) to solve this problem. The RA-MISNet consists of a feature extraction component, an information sharing module, and a residual attention density map estimator. On top of addressing the multi-scale problem, the residual attention mechanism refines the crowd distribution information in the shared features at all levels, which reduces the interference of complex texture backgrounds on density map regression. Furthermore, owing to the severe label noise in high-density crowd areas, we design a Regional Multi-level Segmentation Loss (RMS Loss) that divides a single crowd image into multi-level density regions with different label noise rates and applies supervision constraints of corresponding granularity to each density-level region. Extensive experiments on three crowd counting datasets (ShanghaiTech, UCF_CC_50, UCF-QNRF) demonstrate the effectiveness and superiority of the proposed methods.
This article discusses the problem of constructing an algorithm for automated detection of operator fatigue and monitoring of the operator's state by analyzing data obtained by machine vision systems in the visible range. As information parameters, a combined model is used that includes analysis of the speed of pupil movement and the degree of scatter of eye movements. A multi-criteria smoothing method is used to identify the trend curve. The deviation in the scatter of gaze-focus displacements relative to the center of the object also indicates the degree of operator involvement in the technological or controlled process. The speed of pupil movement and the spread of gaze-focus displacements relative to the center of the main large object were recorded. The work contains tables and graphs recording the detected deviations relative to the values obtained in the first minutes of the operator's work, over the duration of the test video.
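The two indicators of the combined model can be sketched as follows; the sampling rate and the exact scatter statistic are assumptions:

```python
import numpy as np

def gaze_metrics(points, dt=1 / 30):
    # points: gaze/pupil coordinates per frame (pixels); dt: frame period (s).
    # Returns mean pupil speed and the scatter (RMS distance) of the gaze
    # around its centre -- simplified fatigue indicators: speed typically
    # drops and scatter drifts as operator involvement declines.
    pts = np.asarray(points, float)
    steps = np.diff(pts, axis=0)                       # frame-to-frame moves
    speed = np.mean(np.hypot(steps[:, 0], steps[:, 1]) / dt)
    centre = pts.mean(axis=0)
    scatter = np.sqrt(np.mean(np.sum((pts - centre) ** 2, axis=1)))
    return speed, scatter

speed, scatter = gaze_metrics([(0, 0), (3, 4), (6, 8)], dt=1.0)
print(speed)   # 5.0 pixels/s along this straight toy gaze path
```

Comparing these values against a smoothed baseline from the operator's first minutes of work gives the deviation signal the article tabulates.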