With the development of airborne synthetic aperture radar (SAR) measurement techniques, 3D SAR image formation has become prevalent in the SAR community. Conventional backprojection algorithms have difficulty mapping scatterers to a voxel grid in 3D space due to the myriad of differential ranges in the height dimension; this inaccurate mapping of range profiles commonly manifests as layover defects. To address this issue while using limited sensor aspects, this study employs tomographic techniques. Interferometric SAR is leveraged to yield height estimate surfaces, which are applied to the 3D backprojection images as a spatial filter. Fusion across a swath of azimuth aspects at a fixed elevation bin is used to resolve shadowed and unresolved features in the surface reconstructions. Height estimates are applied to the 3D image grid at the corresponding range and cross-range voxels. Multiple height estimation algorithms are studied and, for X-band synthetically generated data, yield feature-level target results accurate to within inches.
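The spatial-filtering step described above can be sketched numerically: an IFSAR-derived height surface gates which voxels of a 3D backprojection grid are retained. This is a minimal illustrative sketch, not the paper's implementation; the grid sizes, the synthetic height surface, and the tolerance `tol` are all assumptions.

```python
import numpy as np

# Hypothetical 3D backprojection voxel grid (range x cross-range x height).
nx, ny, nz = 32, 32, 16
heights = np.linspace(0.0, 3.0, nz)          # voxel center heights (m)
rng = np.random.default_rng(1)
image3d = rng.random((nx, ny, nz))           # stand-in backprojection magnitudes

# Stand-in IFSAR height estimate surface, one height per (range, cross-range) cell.
xs = np.linspace(0.0, 1.0, nx)[:, None]
ys = np.linspace(0.0, 1.0, ny)[None, :]
height_surface = 1.5 + 0.5 * np.sin(np.pi * xs) * np.cos(np.pi * ys)

# Spatial filter: keep only voxels near the estimated surface, suppressing
# layover energy smeared along the height dimension.
tol = 0.3  # assumed height tolerance (m)
mask = np.abs(heights[None, None, :] - height_surface[:, :, None]) <= tol
filtered = np.where(mask, image3d, 0.0)
```

In this toy form the filter is a hard gate; a soft weighting around the height surface would be a natural variant.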
Reconstructing 3D data of objects from limited SAR imagery is of interest due to SAR's ability to actively sense targets from a far stand-off range. SAR imagery is non-literal and may not capture the same features as a passive EO camera. However, EO imagery has been shown to be a promising candidate for low-view 3D reconstruction. Thus, a common technique for SAR 3D reconstruction is to first translate a SAR image to an EO image. The structural similarity (SSIM) metric has been shown to be an effective loss function in the techniques used to translate SAR to EO. However, SSIM has several components that can be tuned to achieve optimal performance. This work addresses (i) the parameterization of SSIM for the SAR-to-EO translation problem and (ii) the ability to reconstruct 3D objects from SAR images after said translations. A parametric sweep is conducted to find an optimal parameterization on several matched SAR and EO datasets.
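The kind of parametric sweep mentioned above can be sketched with a simplified (single-window) SSIM and a small grid over its stability constants K1 and K2. The images, the grid values, and the global (non-windowed) form of SSIM are all assumptions for illustration; the paper's actual sweep, datasets, and SSIM variant are not specified here.

```python
import numpy as np

def ssim_global(x, y, k1=0.01, k2=0.03, data_range=1.0):
    # Global (single-window) SSIM: means/variances over the whole image.
    c1, c2 = (k1 * data_range) ** 2, (k2 * data_range) ** 2
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()
    return ((2 * mx * my + c1) * (2 * cov + c2)) / \
           ((mx**2 + my**2 + c1) * (vx + vy + c2))

# Stand-in "matched" pair: an EO image and a noisy proxy for its translation.
rng = np.random.default_rng(0)
eo = rng.random((64, 64))
sar_to_eo = np.clip(eo + 0.1 * rng.standard_normal(eo.shape), 0.0, 1.0)

# Illustrative sweep over the SSIM stability constants.
results = {(k1, k2): ssim_global(sar_to_eo, eo, k1, k2)
           for k1 in (0.01, 0.02, 0.05)
           for k2 in (0.03, 0.06, 0.10)}
best = max(results, key=results.get)
```

A full sweep would also vary window size and the relative weighting of SSIM's luminance, contrast, and structure terms, e.g. via `skimage.metrics.structural_similarity`.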
KEYWORDS: Synthetic aperture radar, Education and training, Scattering, Light sources and illumination, Data modeling, Sensors, Performance modeling, 3D modeling, Point clouds, Electroluminescence
Image-to-image translation methods aim to convert an image from its native source domain to a target domain. This is a common technique when the target domain's phenomenology is more amenable to a certain task than the source domain's. An example of this practice is Synthetic Aperture Radar (SAR) to electro-optical (EO) translation for 3D reconstruction. Techniques in 3D reconstruction have been shown to be effective on EO imagery, so a common practice is to translate SAR imagery to the EO domain in order to form 3D reconstructions from SAR imagery. The translation algorithms ultimately map specular SAR responses to diffuse EO responses. While previous work supports the effectiveness of deep neural networks for such a translation, the black-box nature of the trained models does not offer explainability for the effectiveness of the SAR-to-EO translations. This work aims to offer explainability for SAR-to-EO translations via direct comparison of facet responses found in ray-tracing-based simulations given equivalent target and sensor geometry. Further analysis of these target responses is conducted in order to understand scenarios where SAR-to-EO translation is expected to be effective and ineffective.
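The specular-versus-diffuse distinction at the heart of the translation problem can be illustrated with a toy per-facet reflectance model: a Lambertian (cosine) term standing in for the EO response and a narrow mirror-direction lobe standing in for the monostatic SAR response. This is an illustrative stand-in for the paper's ray-tracing comparison; the lobe exponent and geometry are assumptions.

```python
import numpy as np

def facet_responses(normal, incident, view):
    # Toy diffuse (EO-like) vs specular (SAR-like) response for one facet.
    # `incident` points toward the facet; `view` points toward the sensor.
    n = normal / np.linalg.norm(normal)
    diffuse = max(0.0, float(np.dot(n, -incident)))           # Lambertian cosine
    reflect = incident - 2.0 * np.dot(incident, n) * n        # mirror direction
    specular = max(0.0, float(np.dot(reflect, view))) ** 20   # sharp lobe
    return diffuse, specular

# Monostatic geometry, flat plate facing the sensor: both responses peak.
down = np.array([0.0, 0.0, -1.0])
up = np.array([0.0, 0.0, 1.0])
d_flat, s_flat = facet_responses(up, down, up)

# Tilt the facet 20 degrees: the diffuse response barely changes, but the
# specular lobe collapses -- a regime where SAR-to-EO translation is hard.
t = np.deg2rad(20.0)
tilted = np.array([np.sin(t), 0.0, np.cos(t)])
d_tilt, s_tilt = facet_responses(tilted, down, up)
```

The tilted case captures why translation can fail: a facet that remains bright in EO imagery may return almost no energy to a monostatic SAR.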
Commonly, data exploitation for single sensors utilizes two-dimensional (2D) imagery. To best combine information from multiple sensing modalities, each with its own fundamental differences, we utilize sensor fusion to capture and compensate for the inherent weaknesses of the different sensing modalities. When fusing multiple sensor modalities together, this approach quickly becomes intractable because each sensor has its own projection plane and resolution. In this work, we present and analyze a data-driven approach for fusing multiple modalities by extracting data representations for each sensor into three-dimensional (3D) space, supporting sensor fusion natively in a common frame of reference. Photogrammetry and computer vision methods for recovering point clouds from 2D electro-optical imagery, such as structure from motion and multi-view stereo, have shown promising results. Additionally, 3D data representations can be derived from interferometric synthetic aperture radar (IFSAR) and lidar sensors. We use point cloud representations for all three modalities, which allows us to leverage each sensing modality's individual strengths and weaknesses. Given our data-driven focus, we emphasize fusing the point cloud data in controlled scenarios with known parameters. We also conduct an error analysis for each sensor modality based upon sensor position, resolution, and noise.
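The common-frame fusion described above reduces, in its simplest form, to rigidly transforming each sensor's point cloud into a shared world frame and concatenating the results. This is a minimal sketch under assumed sensor poses; the cloud contents, pose values, and sensor names are illustrative, and a real pipeline would add registration refinement and per-sensor error weighting.

```python
import numpy as np

def to_world(points, R, t):
    # Rigid transform of an (N, 3) point cloud from a sensor frame
    # into the shared world frame: p_world = R @ p_sensor + t.
    return points @ R.T + t

# Stand-in point clouds for the three modalities.
rng = np.random.default_rng(2)
clouds = {name: rng.random((100, 3)) for name in ("eo", "ifsar", "lidar")}

# Hypothetical known sensor poses (rotation, translation) per modality.
theta = np.deg2rad(30.0)
Rz = np.array([[np.cos(theta), -np.sin(theta), 0.0],
               [np.sin(theta),  np.cos(theta), 0.0],
               [0.0,            0.0,           1.0]])
poses = {"eo":    (np.eye(3), np.zeros(3)),
         "ifsar": (Rz,        np.array([5.0, 0.0, 0.0])),
         "lidar": (np.eye(3), np.array([0.0, 0.0, 10.0]))}

# Fuse: everything lands in one frame of reference.
fused = np.vstack([to_world(clouds[name], *poses[name]) for name in clouds])
```

Keeping the poses explicit like this is what makes the controlled error analysis possible: perturbing `poses` directly models sensor position error.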