Deep Learning based architectures such as Convolutional Neural Networks (CNNs) have become quite efficient in recent years at detecting camouflaged objects that would be easily overlooked by a human observer. Consequently, countermeasures have been developed in the form of adversarial attack patterns which can confuse CNNs by causing false classifications while maintaining the original camouflage properties in the visible spectrum. In this paper, we describe the various steps in generating suitable adversarial camouflage patterns based on the Dual Attribute Adversarial Camouflage (DAAC) technique proposed in [Wang et al. 2021] for evading detection by artificial intelligence as well as by human observers. The aim here is to develop an efficient camouflage with the added ability to confuse more than a single network without compromising camouflage against human observers. In order to achieve this, two different approaches are suggested and the results of first tests are presented.
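To illustrate the core mechanism behind such patterns, the following is a minimal sketch of an untargeted adversarial perturbation computed by projected gradient descent (PGD) in PyTorch. It is not the DAAC method itself, which additionally constrains the pattern to remain camouflaged for human observers; the model, epsilon and step size are illustrative assumptions.

```python
# Minimal PGD sketch: optimise a perturbation that suppresses the correct
# classification while staying within an L-infinity ball of radius eps.
# NOTE: this illustrates the underlying attack principle only, not the
# DAAC pipeline described in the paper.
import torch
import torch.nn.functional as F

def pgd_pattern(model, image, label, eps=8/255, alpha=2/255, steps=40):
    adv = image.clone().detach()
    for _ in range(steps):
        adv.requires_grad_(True)
        loss = F.cross_entropy(model(adv), label)         # push away from true class
        grad, = torch.autograd.grad(loss, adv)
        with torch.no_grad():
            adv = adv + alpha * grad.sign()               # gradient ascent step
            adv = image + (adv - image).clamp(-eps, eps)  # project into eps-ball
            adv = adv.clamp(0.0, 1.0)                     # keep valid pixel range
    return adv.detach()
```

To confuse more than a single network, as aimed for above, one straightforward route is to sum the loss over an ensemble inside the loop, e.g. `loss = sum(F.cross_entropy(m(adv), label) for m in models)`, where `models` is a hypothetical list of classifiers.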
Due to the enormous development in the field of artificial intelligence, especially in the areas of reconnaissance, detection and recognition, it has become absolutely necessary to think about methods of concealing one's own military units from this new threat. This publication aims to provide an overview of counter-AI approaches against enemy reconnaissance and of the possibilities for assessing the effectiveness of these methods. It will focus on explainable AI and the camouflaging of key features as well as the possibility of dual attribute adversarial attack camouflage. These are mathematically optimised patterns that drive an AI-based classifier to an incorrect classification or simply suppress the correct classification. We also discuss the robustness of these patterns.
In order to mitigate atmospheric turbulence effects such as increased blur, reduced contrast, and image motion as well as geometric deformations, a wide variety of reconstruction techniques has been developed. Such techniques have proven reasonably successful in mitigating one or several turbulence effects, but frequently at the cost of introducing unwanted artefacts such as ringing or noise amplification, depending on the algorithm's properties. The application of image quality metrics (IQMs) as a means of comparing the results of various reconstruction algorithms as objectively as possible is a widely used practice. However, added noise and artefacts disproportionately affect IQMs that rely on information such as high-frequency components, since noisy results are invariably interpreted as "higher quality". The underlying goal of this article is to define a methodology for comparing the performance of structurally differing algorithms by a combination of select quality metrics. As different metrics will likely yield different ratings for the same algorithm's performance, a combination of suitable metrics is proposed. The main focus here is therefore on a survey of current methods for assessing image quality in general and on appraising their suitability for evaluating the quality of images processed by turbulence mitigation algorithms in particular.
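As a hedged illustration of such a combination, the sketch below fuses two standard full-reference metrics into one score so that no single metric, for instance one that rewards high-frequency content and hence noise, dominates the ranking. The metric choice, the PSNR normalisation range and the weights are assumptions, not the article's final methodology.

```python
# Sketch: weighted combination of PSNR and SSIM (images assumed float in [0, 1]).
import numpy as np
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

def combined_score(reference, restored, weights=(0.5, 0.5)):
    psnr = peak_signal_noise_ratio(reference, restored, data_range=1.0)
    ssim = structural_similarity(reference, restored, data_range=1.0)
    # Map PSNR (roughly 10..50 dB) onto [0, 1] so both metrics are commensurate.
    psnr_norm = np.clip((psnr - 10.0) / 40.0, 0.0, 1.0)
    return weights[0] * psnr_norm + weights[1] * ssim
```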
Atmospheric turbulence causes scintillation, spatial and temporal blurring as well as global and local image motion, and is responsible for geometric distortions in image data. The approaches that can be found in the literature for correcting turbulence-related impairments of image quality are just as varied as the prevailing environmental and turbulence conditions can be. There is a variety of conceivable applications for such correction methods. Consequently, it is difficult to determine a suitable taxonomy that can cover all possible application cases. Therefore, in this paper a tabular approach is proposed on the basis of which similar assumptions can be summarized in order to make algorithms for typical scenarios comparable with each other. A profiling scheme is introduced for this purpose in which points are assigned to a selected (and variable) number of criteria according to their priority in a given context. This point system has a dual function, enabling a given application to be systematically described in terms of its requirements as well as a given algorithm to be characterized in terms of its performance parameters. Thus, corresponding (point) profiles are obtained for applications as well as for correction methods, which can be used for meaningful comparison and methodological evaluation of the respective correction results.
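A minimal sketch of the point-profile idea might look as follows: an application is described by per-criterion priority points, an algorithm by per-criterion performance points, and a match score is the priority-weighted overlap. The criteria names and the 0..5 point scale are hypothetical.

```python
# Hypothetical point profiles: criterion -> points (0..5).
def profile_match(application, algorithm, scale=5):
    shared = set(application) & set(algorithm)
    total = sum(application[c] for c in shared)
    score = sum(application[c] * algorithm[c] for c in shared)
    return score / (scale * total) if total else 0.0   # normalised to [0, 1]

app = {"real_time": 5, "moving_objects": 3, "strong_turbulence": 1}
alg = {"real_time": 2, "moving_objects": 5, "strong_turbulence": 4}
print(profile_match(app, alg))   # ~0.64: moderately suitable for this use case
```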
Autonomous Underwater Vehicles (AUVs) equipped with high-resolution side-scan sonars (SSS) are used to carry out preliminary surveys of potentially hazardous areas in order to counter the threat posed by naval mines and reduce the risk to personnel. The detection and classification of mine-like objects is conducted offline, after a scan has been completed, while the actual identification and neutralization of potential targets is executed in a separate minehunting operation. In this paper the various influences on the imaging sonar system and, moreover, on the resulting sonar imagery are assessed with regard to how they affect the Probability of Detection and Classification (PDC). Image quality, sharpness and Signal-to-Noise Ratio (SNR) are among the more obvious and straightforwardly quantifiable factors. The complexity of a sonar image, however, can have a significant impact as well. Image lacunarity is used to characterize the seafloor in order to assess the corresponding minehunting difficulty. Additional factors under consideration are the heading angle of the AUV at any given measurement position as well as horizontal spreading and potential overlapping of successive sonic pulses.
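For the lacunarity measure, a gliding-box sketch along the lines of the classic definition Lambda(r) = E[M^2]/E[M]^2 (for box masses M at box size r) could look like this; the box size is an assumption and real SSS imagery would typically be preprocessed first.

```python
# Gliding-box lacunarity of a 2-D grey-level (or binary) image.
import numpy as np
from numpy.lib.stride_tricks import sliding_window_view

def lacunarity(image, box_size=8):
    windows = sliding_window_view(image, (box_size, box_size))
    masses = windows.sum(axis=(2, 3)).ravel().astype(float)  # box "masses" M
    return masses.var() / masses.mean() ** 2 + 1.0           # = E[M^2]/E[M]^2
```

Higher lacunarity indicates a more heterogeneous, gappy texture, which is the seafloor property used above as a proxy for minehunting difficulty.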
As a consequence of fluctuations in the index of refraction of the air, atmospheric turbulence causes scintillation, spatial and temporal blurring as well as global and local image motion creating geometric distortions. To mitigate these effects many different methods have been proposed. Global as well as local motion compensation in some form or other constitutes an integral part of many software-based approaches. For the estimation of motion vectors between consecutive frames, simple methods like block matching are preferable to more complex algorithms like optical flow, at least when challenged with near real-time requirements. However, the processing power of commercially available computers continues to increase rapidly, and the more powerful optical flow methods have the potential to outperform standard block matching methods. Therefore, in this paper three standard optical flow algorithms, namely Horn-Schunck (HS), Lucas-Kanade (LK) and Farnebäck (FB), are tested for their suitability for local motion compensation as part of a turbulence mitigation system. Their qualitative performance is evaluated and compared with that of three standard block matching methods, namely Exhaustive Search (ES), Adaptive Rood Pattern Search (ARPS) and Correlation based Search (CS).
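For reference, a sketch of the two families of methods being compared: dense Farnebäck optical flow via OpenCV against a naive exhaustive-search block matcher. Parameter values are assumptions, not the settings used in the evaluation.

```python
import cv2
import numpy as np

def farneback_flow(prev, curr):
    # Dense optical flow: one (dx, dy) vector per pixel.
    return cv2.calcOpticalFlowFarneback(prev, curr, None,
                                        pyr_scale=0.5, levels=3, winsize=15,
                                        iterations=3, poly_n=5, poly_sigma=1.2,
                                        flags=0)

def block_match(prev, curr, block=16, search=7):
    # Exhaustive search: one (dy, dx) vector per block, minimising the SAD.
    h, w = prev.shape
    vectors = np.zeros((h // block, w // block, 2), dtype=int)
    for by in range(0, h - block + 1, block):
        for bx in range(0, w - block + 1, block):
            ref = prev[by:by + block, bx:bx + block].astype(float)
            best, best_sad = (0, 0), np.inf
            for dy in range(-search, search + 1):
                for dx in range(-search, search + 1):
                    y, x = by + dy, bx + dx
                    if 0 <= y <= h - block and 0 <= x <= w - block:
                        sad = np.abs(curr[y:y + block, x:x + block] - ref).sum()
                        if sad < best_sad:
                            best_sad, best = sad, (dy, dx)
            vectors[by // block, bx // block] = best
    return vectors
```

The trade-off discussed above is visible even in this sketch: the flow call is a single optimised library routine, whereas exhaustive block matching scales with the block count times the squared search radius.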
In mid- to long-range horizontal imaging applications it is quite often atmospheric turbulence that limits the performance of an electro-optical system rather than the design and quality of the system itself. Even weak or moderate turbulence conditions can suffice to cause significant image degradation, the predominant effects being image dancing and blurring. To mitigate these effects many different methods have been proposed, most of which use either a hardware approach, such as adaptive optics, or a software approach. A great number of these methods are highly specialized with regard to input data, e.g. aiming exclusively at very short exposure images or at infrared data. So far, only a very limited number of these methods are concerned specifically with the restoration of RGB colour video. Besides motion compensation and deblurring, contrast enhancement plays a vital part in many turbulence mitigation schemes. While most contrast enhancement techniques, such as Contrast Limited Adaptive Histogram Equalization (CLAHE), work quite well on monochrome data or single colour frames, they tend to amplify noise in a colour video stream disproportionately, especially in scenes with low contrast. Therefore, in this paper the impact of different colour spaces (RGB, LAB, HSV) on the application of such typical image enhancement techniques is discussed and evaluated with regard to suppressing temporal noise as well as to their suitability for use in software-based turbulence mitigation algorithms.
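The colour-space comparison can be sketched as follows: CLAHE applied to every RGB channel versus only the luminance (LAB) or value (HSV) channel, the latter two being the variants expected to amplify colour noise less. Clip limit and tile size are assumptions.

```python
import cv2

clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))

def clahe_rgb(bgr):                      # equalise each colour channel
    b, g, r = cv2.split(bgr)
    return cv2.merge([clahe.apply(c) for c in (b, g, r)])

def clahe_lab(bgr):                      # equalise the luminance channel only
    l, a, b = cv2.split(cv2.cvtColor(bgr, cv2.COLOR_BGR2LAB))
    return cv2.cvtColor(cv2.merge((clahe.apply(l), a, b)), cv2.COLOR_LAB2BGR)

def clahe_hsv(bgr):                      # equalise the value channel only
    h, s, v = cv2.split(cv2.cvtColor(bgr, cv2.COLOR_BGR2HSV))
    return cv2.cvtColor(cv2.merge((h, s, clahe.apply(v))), cv2.COLOR_HSV2BGR)
```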
Atmospheric turbulence is a well-known phenomenon that diminishes the recognition range in visual and infrared image sequences. There exist many different methods to compensate for the effects of turbulence. This paper focuses on the performance of two software-based methods to mitigate the effects of low- and medium-turbulence conditions. Both methods are capable of processing static and dynamic scenes. The first method consists of local registration, frame selection, blur estimation and deconvolution. The second method consists of local motion compensation, fore-/background segmentation and weighted iterative blind deconvolution. A comparative evaluation using quantitative measures is done on some representative sequences captured during a NATO SET 165 trial in Dayton. The amounts of blurring and tilt in the imagery seem to be relevant measures for such an evaluation. It is shown that both methods improve the imagery by reducing the blurring and tilt and therefore enlarge the recognition range. Furthermore, results of a recognition experiment using simulated data are presented which show that turbulence mitigation using the first method improves the recognition range by up to 25% for an operational optical system.
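As an example of the kind of per-frame measures involved, the sketch below ranks frames by a simple blur score (variance of the Laplacian) as might be used in the frame-selection step; the measures actually used in the paper may differ.

```python
import cv2

def sharpness(gray):
    # Variance of the Laplacian: higher = more high-frequency detail = sharper.
    return cv2.Laplacian(gray, cv2.CV_64F).var()

def select_frames(frames, keep=0.5):
    # Keep the sharpest fraction of the sequence for further processing.
    ranked = sorted(frames, key=sharpness, reverse=True)
    return ranked[:max(1, int(len(ranked) * keep))]
```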
Remote sensing applications are generally concerned with observing objects over long distances. When imaging over long horizontal paths, image resolution is limited by the atmosphere rather than by the design and quality of the optical system being used. Atmospheric turbulence can cause quite severe image degradation, the foremost effects being blurring and image motion. Recently, interest in image processing solutions has been rising, not least of all because of the comparatively low cost of computational power, and also due to an increasing number of imaging applications that require the correction of extended objects rather than point-like sources only. At present, the majority of these image processing methods aim exclusively at the restoration of static scenes. But there is a growing interest in enhancing turbulence mitigation methods to include moving objects as well. However, an unbiased qualitative evaluation of the respective restoration results proves difficult if little or no additional information on the "true image" is available. Therefore, in this paper synthetic ground truth data containing moving vehicles were generated and a first-order atmospheric propagation simulation was implemented in order to test such algorithms. The simulation employs only one phase screen and assumes isoplanatic conditions (only global image motion) while scintillation effects are ignored.
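A first-order degradation of the kind described, global tilt plus blur and no scintillation, can be emulated in a few lines; the shift and blur magnitudes below are illustrative assumptions, and the actual simulation uses a phase screen rather than a fixed Gaussian kernel.

```python
# Degrade ground-truth frames with global image motion and blur (isoplanatic).
import numpy as np
from scipy.ndimage import gaussian_filter, shift

def degrade_sequence(truth_frames, max_shift=2.0, blur_sigma=1.5, seed=0):
    rng = np.random.default_rng(seed)
    degraded = []
    for frame in truth_frames:
        dy, dx = rng.uniform(-max_shift, max_shift, size=2)  # global tilt
        moved = shift(frame, (dy, dx), mode="nearest")       # image dancing
        degraded.append(gaussian_filter(moved, blur_sigma))  # turbulence blur
    return degraded
```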
Many remote sensing applications are concerned with observing objects over long horizontal paths, and often the atmosphere between observer and object is quite turbulent, especially in arid or semi-arid regions. Depending on its degree, atmospheric turbulence can cause quite severe image degradation, the foremost effects being temporal and spatial blurring. And since the observed objects are not necessarily stationary, motion blurring can also factor in the degradation process. At present, the majority of image processing methods for turbulence mitigation aim exclusively at the restoration of static scenes, but there is a growing interest in enhancing these methods to include moving objects as well. Therefore, the approach in this paper is to employ block matching as a motion detection algorithm to detect and estimate object motion in order to separate directed movement from turbulence-induced undirected motion. This enables a segmentation of static scene elements and moving objects, provided that the object movement exceeds the turbulence motion. Local image stacking is carried out for the moving elements, thus effectively reducing the motion blur created by averaging and improving the overall final image restoration by means of blind deconvolution.
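The separation of directed from undirected motion exploits the fact that turbulence-induced jitter is roughly zero-mean over time, while a moving object accumulates a consistent displacement. A sketch of this test, reusing the block_match helper from the optical-flow comparison above, with an assumed threshold:

```python
import numpy as np

def moving_object_mask(frames, block=16, thresh=1.0):
    # Per-block motion vectors between consecutive frames.
    flows = [block_match(frames[i], frames[i + 1], block)
             for i in range(len(frames) - 1)]
    mean_flow = np.mean(np.stack(flows, axis=0), axis=0)  # temporal mean
    magnitude = np.linalg.norm(mean_flow, axis=-1)
    return magnitude > thresh    # True = directed (object) motion
```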
The degree of image degradation due to atmospheric turbulence is particularly severe when imaging over long horizontal paths, since the turbulence is strongest close to the ground. The most pronounced effects include image blurring and image dancing and, in the case of strong turbulence, image distortion as well. To mitigate these effects a number of methods from the field of image processing have been proposed, most of which aim exclusively at the restoration of static scenes. But there is also an increasing interest in advancing turbulence mitigation to encompass moving objects as well. Therefore, in this paper a procedure is described that employs block matching for the segmentation of static scene elements and moving objects such that image restoration can be carried out for both separately. This way motion blurring is taken into account in addition to atmospheric blurring, effectively reducing motion artefacts and improving the overall restoration result. Motion-compensated averaging with subsequent blind deconvolution is used for the actual image restoration.
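Motion-compensated averaging itself can be sketched as follows: every frame is warped block-wise onto a reference before temporal averaging, so the average stays sharp up to the unknown turbulence PSF that the subsequent blind deconvolution removes. This reuses the block_match helper sketched earlier; the block size and the choice of the first frame as reference are assumptions.

```python
import numpy as np

def mca(frames, block=16):
    frames = [f.astype(float) for f in frames]
    ref, (h, w) = frames[0], frames[0].shape
    acc = np.zeros_like(ref)
    for frame in frames:
        vecs = block_match(ref, frame, block)   # displacement of each block
        warped = frame.copy()
        for by in range(0, h - block + 1, block):
            for bx in range(0, w - block + 1, block):
                dy, dx = vecs[by // block, bx // block]
                warped[by:by + block, bx:bx + block] = \
                    frame[by + dy:by + dy + block, bx + dx:bx + dx + block]
        acc += warped                           # accumulate aligned frames
    return acc / len(frames)
```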
Motion-Compensated Averaging (MCA) with blind deconvolution has proven successful in mitigating turbulence effects like image dancing and blurring. In this paper an image quality control according to the "Lucky Imaging" principle is combined with the MCA procedure, weighting good frames more heavily than bad ones and skipping a given percentage of extremely degraded frames entirely. To account for local isoplanatism, when image dancing will effect local displacements between consecutive frames rather than global shifts only, a locally operating MCA variant with block matching, proposed in earlier work, is employed. In order to reduce the loss of detail due to normal averaging, various combinations of temporal mode, median and mean are tested as reference image. The respective restoration results obtained by means of a weighted blind deconvolution algorithm are presented and evaluated.
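The quality-weighted averaging can be sketched as below: frames are scored, the worst fraction is skipped entirely, and the remainder contribute in proportion to their score. It reuses the sharpness() score from the frame-selection sketch above; the skip fraction is an assumption.

```python
import numpy as np

def weighted_average(frames, skip_fraction=0.2):
    scores = np.array([sharpness(f) for f in frames], dtype=float)
    cutoff = np.quantile(scores, skip_fraction)        # drop the worst frames
    keep = scores >= cutoff
    weights = scores[keep] / scores[keep].sum()        # quality weighting
    stack = np.stack([f.astype(float)
                      for f, k in zip(frames, keep) if k])
    return np.tensordot(weights, stack, axes=1)        # weighted temporal mean
```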
In imaging applications the prevalent effects of atmospheric turbulence comprise image dancing and image blurring. Suggestions from the field of image processing to compensate for these turbulence effects and restore degraded imagery include Motion-Compensated Averaging (MCA) for image sequences. In isoplanatic conditions, such an averaged image can be considered as a non-distorted image that has been blurred by an unknown Point Spread Function (PSF) of the same size as the pixel motions due to the turbulence, and a blind deconvolution algorithm can be employed for the final image restoration. However, when imaging over a long horizontal path close to the ground, conditions are likely to be anisoplanatic and image dancing will effect local image displacements between consecutive frames rather than global shifts only. Therefore, in this paper a locally operating variant of the MCA procedure is proposed, utilizing Block Matching (BM) in order to identify and re-arrange uniformly displaced image parts. For the final restoration a multistage blind deconvolution algorithm is used, and the corresponding deconvolution results are presented and evaluated.
In the field of blind image deconvolution a promising new algorithm, based on Principal Component Analysis (PCA), has recently been proposed in the literature. The main advantages of the algorithm are the following: its computational complexity is generally lower than that of other deconvolution techniques (e.g., the widely used Iterative Blind Deconvolution, IBD, method); it is robust to white noise; and only the support of the blurring point spread function is required to perform the single-observation deconvolution (i.e., when a single degraded observation of a scene is available), while the multiple-observation variant (i.e., when multiple degraded observations of a scene are available) is completely unsupervised. The effectiveness of the PCA-based restoration algorithm has so far been confirmed only by visual inspection and, to the best of our knowledge, no objective image quality assessment has been performed. In this paper a generalization of the original algorithm is proposed; then this previously unexplored issue is considered and the achieved results are compared with those of the IBD method, which is used as benchmark.
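For orientation, a very reduced sketch of the iterative blind deconvolution idea used as benchmark: multiplicative Richardson-Lucy style updates alternating between the image and the PSF, with only the PSF support size assumed known. The iteration count is an assumption, no regularisation is applied, and psf_size is assumed odd.

```python
import numpy as np
from scipy.signal import fftconvolve

def blind_deconv(observed, psf_size=15, iters=30):
    obs = observed.astype(float)
    img = obs.copy()
    psf = np.full((psf_size, psf_size), 1.0 / psf_size**2)   # flat initial PSF
    half = psf_size // 2
    for _ in range(iters):
        # Richardson-Lucy update of the image for the current PSF estimate.
        ratio = obs / np.maximum(fftconvolve(img, psf, mode="same"), 1e-12)
        img *= fftconvolve(ratio, psf[::-1, ::-1], mode="same")
        # Richardson-Lucy update of the PSF for the current image estimate,
        # restricted to the assumed support window around zero lag.
        ratio = obs / np.maximum(fftconvolve(img, psf, mode="same"), 1e-12)
        corr = fftconvolve(ratio, img[::-1, ::-1], mode="same")
        cy, cx = corr.shape[0] // 2, corr.shape[1] // 2
        psf *= corr[cy - half:cy + half + 1, cx - half:cx + half + 1]
        psf = np.clip(psf, 0.0, None)
        psf /= psf.sum() + 1e-12                             # keep PSF normalised
    return img, psf
```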
Suggestions from the field of image processing to compensate for turbulence effects and restore degraded images include motion-compensated image integration, after which the image can be considered as a non-distorted image that has been blurred with a point spread function (PSF) the same size as the pixel motions due to the turbulence. Since this PSF is unknown, a blind deconvolution is still necessary to restore the image. By utilising different blind deconvolution algorithms along with the motion-compensated image integration, several variants of this turbulence compensation method are created. In this paper we discuss the differences between the various blind deconvolution algorithms employed and give a qualitative analysis of the turbulence compensation variants by comparing their respective restoration results. This is done by visual inspection as well as by means of different image quality metrics that analyse the high-frequency components.
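One simple high-frequency metric of the kind referred to here is the share of spectral power above a cut-off frequency; the cut-off value is an assumption. As noted in the IQM survey abstract above, amplified noise raises such a score just as real detail does, which is exactly why visual inspection is kept alongside it.

```python
import numpy as np

def high_freq_ratio(gray, cutoff=0.25):
    f = np.fft.fftshift(np.fft.fft2(gray.astype(float)))
    power = np.abs(f) ** 2
    fy = np.fft.fftshift(np.fft.fftfreq(gray.shape[0]))
    fx = np.fft.fftshift(np.fft.fftfreq(gray.shape[1]))
    radius = np.hypot(*np.meshgrid(fy, fx, indexing="ij")) / 0.5  # 0..~1
    return power[radius > cutoff].sum() / power.sum()
```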