Commonly, data exploitation for single sensors utilizes two-dimensional (2D) imagery. To best combine information from multiple sensing modalities, each with its own fundamental differences, we utilize sensor fusion to capture and leverage the inherent strengths and weaknesses of the different sensing modalities. Fusing multiple sensor modalities quickly becomes intractable, however, because each sensor has a unique projection plane and resolution. In this work, we present and analyze a data-driven approach for fusing multiple modalities by extracting a data representation for each sensor into three-dimensional (3D) space, supporting sensor fusion natively in a common frame of reference. Photogrammetry and computer vision methods for recovering point clouds from 2D electro-optical imagery, such as structure from motion and multi-view stereo, have shown promising results. 3D data representations can also be derived from interferometric synthetic aperture radar (IFSAR) and lidar sensors. We use point cloud representations for all three modalities, which allows us to leverage each sensing modality's individual strengths and weaknesses. Given our data-driven focus, we emphasize fusing the point cloud data in controlled scenarios with known parameters. We also conduct an error analysis for each sensor modality based upon sensor position, resolution, and noise.
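To make the common-frame idea concrete, the following is a minimal, illustrative sketch (not the authors' implementation) of transforming per-sensor point clouds into one world frame, concatenating them, and computing a simple per-modality error metric. The function names (to_world, fuse, rmse), the identity rigid transforms, and the per-modality noise levels are assumptions introduced here for illustration only; the paper's actual error analysis over sensor position, resolution, and noise is not reproduced.

```python
# Illustrative sketch: fuse EO/photogrammetry, IFSAR, and lidar point clouds
# in a common world frame. Transforms and noise levels are assumed values.
import numpy as np

def to_world(points, rotation, translation):
    """Apply a rigid transform (3x3 rotation, 3-vector translation) to an Nx3 cloud."""
    return points @ rotation.T + translation

def fuse(clouds):
    """Concatenate per-sensor clouds (already in the world frame), tagging each
    point with an integer modality label for downstream per-sensor analysis."""
    labels = np.concatenate([np.full(len(c), i) for i, c in enumerate(clouds)])
    return np.vstack(clouds), labels

def rmse(cloud, reference):
    """Point-wise RMSE against a same-sized reference cloud; a stand-in for a
    fuller error analysis over sensor position, resolution, and noise."""
    return float(np.sqrt(np.mean(np.sum((cloud - reference) ** 2, axis=1))))

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    truth = rng.uniform(-10.0, 10.0, size=(500, 3))     # synthetic ground-truth scene
    eo    = truth + rng.normal(0.0, 0.05, truth.shape)  # photogrammetric cloud (assumed noise)
    ifsar = truth + rng.normal(0.0, 0.30, truth.shape)  # IFSAR cloud (assumed noise)
    lidar = truth + rng.normal(0.0, 0.02, truth.shape)  # lidar cloud (assumed noise)
    # Assume each sensor-to-world rigid transform is known; identities used here.
    fused, labels = fuse([to_world(c, np.eye(3), np.zeros(3)) for c in (eo, ifsar, lidar)])
    for name, c in zip(("EO", "IFSAR", "lidar"), (eo, ifsar, lidar)):
        print(f"{name} RMSE vs. truth: {rmse(c, truth):.3f}")
    print("fused cloud size:", fused.shape[0])
```

Because every modality is reduced to the same representation and frame, the fusion step itself is a simple concatenation; the modality labels preserve provenance so each sensor's error contribution can still be examined separately.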