Identifying defective builds early in Additive Manufacturing (AM) processes is a cost-effective way to reduce scrap and ensure that machine time is utilized efficiently. In this paper, we present an automated method to classify 3D-printed polymer parts as either good or defective based on images captured during Fused Filament Fabrication (FFF), using independent machine learning and deep learning approaches. Either approach could be useful to manufacturers and hobbyists alike. Machine learning is implemented via Principal Component Analysis (PCA) and a Support Vector Machine (SVM), whereas deep learning is implemented using a Convolutional Neural Network (CNN). We capture videos of the FFF process on a small selection of polymer parts and label each frame as good or defective (2674 good frames and 620 defective frames). We divide this dataset for holdout validation, using 70% of the images in each class for training and reserving the rest for blind testing. We obtain an overall classification accuracy of 98.2% with machine learning and 99.5% with deep learning.
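The PCA-plus-SVM pipeline and the per-class 70/30 holdout split described above can be sketched as follows. This is a minimal, hypothetical illustration using scikit-learn on synthetic stand-in features; the class sizes mirror the paper's 2674 good and 620 defective frames, but the feature extraction, component count, and SVM hyperparameters are assumptions, not the authors' actual settings.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.svm import SVC
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

# Synthetic stand-in features; the real inputs are image frames from FFF videos.
rng = np.random.default_rng(0)
n_good, n_defective, n_features = 2674, 620, 100
X = np.vstack([rng.normal(0.0, 1.0, (n_good, n_features)),
               rng.normal(1.5, 1.0, (n_defective, n_features))])
y = np.array([0] * n_good + [1] * n_defective)  # 0 = good, 1 = defective

# Holdout validation: 70% of each class for training (stratified split),
# the remaining 30% reserved for blind testing.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, train_size=0.7, stratify=y, random_state=0)

# Reduce dimensionality with PCA, then classify with an SVM.
clf = make_pipeline(PCA(n_components=20), SVC(kernel="rbf"))
clf.fit(X_train, y_train)
acc = accuracy_score(y_test, clf.predict(X_test))
print(f"holdout accuracy: {acc:.3f}")
```

Stratifying the split keeps the good/defective ratio identical in the training and test sets, which matters here because the classes are imbalanced by roughly 4:1.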
The Open Standard for Unattended Sensors (OSUS) was developed by DIA and ARL to provide a plug-and-play platform for sensor interoperability. Our objective is to use the standardized data produced by OSUS to perform data analytics on information obtained from various sensors. Data analytics can be integrated in one of three ways: within an asset itself; as an independent plug-in designed for one type of asset (e.g., a camera or seismic sensor); or as an independent plug-in that incorporates data from multiple assets. As a proof of concept, we develop a model of the second type: an independent component for camera images. The dataset was collected as part of a demonstration and test of OSUS capabilities and includes images of empty outdoor scenes as well as scenes with human or vehicle activity. We design, train, and test a convolutional neural network (CNN) to analyze these images and assess the presence of activity. The resulting classifier labels input images as empty or activity with 86.93% accuracy, demonstrating promising opportunities for deep learning, machine learning, and predictive analytics as an extension of OSUS's already robust suite of capabilities.
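The convolution-then-classify idea behind such a CNN component can be illustrated with a minimal NumPy sketch: one 3x3 convolution, a ReLU, global average pooling, and a sigmoid read-out that scores a scene as empty or activity. The paper's actual network architecture, weights, and training procedure are not reproduced here; the kernel, weights, and toy images below are purely illustrative assumptions.

```python
import numpy as np

def conv2d(image, kernel):
    """Valid 2-D cross-correlation of a single-channel image with a kernel."""
    kh, kw = kernel.shape
    h, w = image.shape
    out = np.empty((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

def classify(image, kernel, weight, bias):
    """Score a scene; values above 0.5 read as 'activity', below as 'empty'."""
    feat = np.maximum(conv2d(image, kernel), 0.0)            # ReLU
    pooled = feat.mean()                                     # global average pool
    return 1.0 / (1.0 + np.exp(-(weight * pooled + bias)))   # sigmoid read-out

# Toy inputs: a flat "empty" scene vs. the same scene with a bright blob.
rng = np.random.default_rng(1)
empty = rng.normal(0.0, 0.05, (16, 16))
activity = empty.copy()
activity[4:9, 4:9] += 2.0

edge_kernel = np.array([[1.0, 0.0, -1.0]] * 3)  # vertical-edge detector
p_empty = classify(empty, edge_kernel, weight=8.0, bias=-1.5)
p_activity = classify(activity, edge_kernel, weight=8.0, bias=-1.5)
print(f"empty: {p_empty:.2f}  activity: {p_activity:.2f}")
```

The edge kernel responds strongly at the blob's boundary and only weakly to background noise, so pooling its rectified response separates the two scenes; a trained CNN learns many such filters (and their read-out weights) from labeled data rather than having them hand-set.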