Presentation + Paper
Reducing the cost of visual DL datasets
10 May 2019
Philip R. Osteen, Jason L. Owens, Brian Kaukeinen
Abstract
Intelligent military systems require perception capabilities that are flexible, dynamic, and robust to unstructured environments and new situations. However, current state-of-the-art algorithms are based on deep learning and require large amounts of data, along with a proportionally large human effort for collection and annotation. To help improve this situation, we define a method of comparing 3D environment reconstructions without ground truth, based on the exploitation of available reflexive information, and use this method to evaluate existing RGBD mapping algorithms in an effort to generate a large, fully annotated dataset for visual learning tasks. In addition, we describe algorithms and software that support rapid manual annotation of these reconstructed 3D environments for a variety of vision tasks. Our results show that we can use existing datasets as well as synthetic data to bootstrap tools that allow us to quickly and efficiently label larger datasets without ground truth, making the most of human effort without requiring crowd-sourcing techniques.
Conference Presentation
© (2019) COPYRIGHT Society of Photo-Optical Instrumentation Engineers (SPIE). Downloading of the abstract is permitted for personal use only.
Philip R. Osteen, Jason L. Owens, and Brian Kaukeinen "Reducing the cost of visual DL datasets", Proc. SPIE 11006, Artificial Intelligence and Machine Learning for Multi-Domain Operations Applications, 110060F (10 May 2019); https://doi.org/10.1117/12.2519114
KEYWORDS
Reconstruction algorithms
Image segmentation
Sensors
Visualization
Clouds
3D modeling
Detection and tracking algorithms