Camouflage is an art of deception that is widely used in the animal world and on the battlefield to hide military assets. Camouflaged objects hide within their environments by taking on colors and textures similar to their surroundings. In this work, we explore the classification and localization of camouflaged enemy assets, including soldiers, and address two major challenges: a) how to overcome the paucity of domain-specific labeled data and b) how to perform camouflaged object detection on edge devices. To address the first challenge, we develop a deep neural style transfer model that blends content images of objects such as soldiers, tanks, and mines/improvised explosive devices with style images depicting deserts, jungles, and snow-covered regions. To address the second challenge, we develop depth-guided deep neural network models that fuse image features with depth features. Previous research suggests that depth features not only contain local information about object geometry but also provide cues about position and shape that aid camouflaged object identification and localization. In this work, we precompute the depth maps with a monocular depth estimation method. The novel fusion-based architecture provides an efficient representation learning space for object detection. In addition, we perform ablation studies to measure the contribution of depth features versus RGB features in detecting camouflaged objects. Finally, we demonstrate how such a model can be deployed on edge devices for real-time object identification and localization.
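The abstract does not specify the style-transfer formulation used to synthesize training data. As a minimal sketch, a Gatys-style optimization loop in PyTorch could blend a content photo of an asset with a terrain style image; the VGG-19 layer choices, image size, step count, loss weights, and file paths below are illustrative assumptions, not the paper's settings.

```python
import torch
import torch.nn.functional as F
from torchvision import models, transforms
from PIL import Image

device = "cuda" if torch.cuda.is_available() else "cpu"

# VGG-19 feature layers used for content and style (Gatys et al. convention).
CONTENT_LAYERS = {"21"}                      # conv4_2
STYLE_LAYERS = {"0", "5", "10", "19", "28"}  # conv1_1 .. conv5_1

vgg = models.vgg19(weights=models.VGG19_Weights.DEFAULT).features.to(device).eval()
for p in vgg.parameters():
    p.requires_grad_(False)

preprocess = transforms.Compose([transforms.Resize((256, 256)), transforms.ToTensor()])

def load_image(path):
    return preprocess(Image.open(path).convert("RGB")).unsqueeze(0).to(device)

def features(x):
    """Collect intermediate activations for content and style losses."""
    content, style = {}, {}
    for name, layer in vgg._modules.items():
        x = layer(x)
        if name in CONTENT_LAYERS:
            content[name] = x
        if name in STYLE_LAYERS:
            style[name] = x
    return content, style

def gram(f):
    # Gram matrix of a single-image feature map (batch size 1 assumed).
    b, c, h, w = f.shape
    f = f.view(c, h * w)
    return f @ f.t() / (c * h * w)

def stylize(content_path, style_path, steps=300, style_weight=1e6):
    content_img = load_image(content_path)  # e.g. a soldier or tank photo
    style_img = load_image(style_path)      # e.g. a desert or jungle texture
    target_content, _ = features(content_img)
    _, target_style = features(style_img)
    target_grams = {k: gram(v) for k, v in target_style.items()}

    out = content_img.clone().requires_grad_(True)
    opt = torch.optim.Adam([out], lr=0.02)
    for _ in range(steps):
        opt.zero_grad()
        c_feats, s_feats = features(out)
        c_loss = sum(F.mse_loss(c_feats[k], target_content[k]) for k in target_content)
        s_loss = sum(F.mse_loss(gram(s_feats[k]), target_grams[k]) for k in target_grams)
        (c_loss + style_weight * s_loss).backward()
        opt.step()
    return out.detach().clamp(0, 1)
```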
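Similarly, the monocular method used to precompute the depth maps is not named. One common way to generate such maps offline, assuming the MiDaS small model published on torch.hub, might look like the following; the model choice, output normalization, and storage format are assumptions.

```python
import cv2
import torch

device = "cuda" if torch.cuda.is_available() else "cpu"

# MiDaS small model from torch.hub (one widely used monocular depth
# estimator; the paper does not name its exact method, so this is assumed).
midas = torch.hub.load("intel-isl/MiDaS", "MiDaS_small").to(device).eval()
transform = torch.hub.load("intel-isl/MiDaS", "transforms").small_transform

def precompute_depth(image_path, out_path):
    img = cv2.cvtColor(cv2.imread(image_path), cv2.COLOR_BGR2RGB)
    with torch.no_grad():
        pred = midas(transform(img).to(device))
        # Resize the prediction back to the input resolution.
        depth = torch.nn.functional.interpolate(
            pred.unsqueeze(1), size=img.shape[:2],
            mode="bicubic", align_corners=False,
        ).squeeze().cpu().numpy()
    # Normalize to [0, 255] and store as a single-channel image.
    depth = (255 * (depth - depth.min()) / (depth.max() - depth.min() + 1e-8))
    cv2.imwrite(out_path, depth.astype("uint8"))
```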
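The fusion architecture itself is only summarized above. A minimal sketch of the general idea, two parallel encoders whose RGB and depth feature maps are concatenated before a shared detection head, is shown below; the branch widths, class count, and single-box head are hypothetical placeholders, not the paper's design.

```python
import torch
import torch.nn as nn

class ConvBlock(nn.Module):
    """3x3 conv -> BatchNorm -> ReLU, halving spatial resolution."""
    def __init__(self, c_in, c_out):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(c_in, c_out, 3, stride=2, padding=1),
            nn.BatchNorm2d(c_out),
            nn.ReLU(inplace=True),
        )
    def forward(self, x):
        return self.net(x)

class RGBDFusionDetector(nn.Module):
    """Two-stream encoder: an RGB branch and a depth branch whose feature
    maps are concatenated and passed to a shared detection head."""
    def __init__(self, num_classes=4):  # class count is an assumption
        super().__init__()
        self.rgb_branch = nn.Sequential(
            ConvBlock(3, 32), ConvBlock(32, 64), ConvBlock(64, 128))
        self.depth_branch = nn.Sequential(
            ConvBlock(1, 32), ConvBlock(32, 64), ConvBlock(64, 128))
        self.head = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Flatten(),
            nn.Linear(256, num_classes + 4),  # class logits + box (cx, cy, w, h)
        )
    def forward(self, rgb, depth):
        fused = torch.cat([self.rgb_branch(rgb), self.depth_branch(depth)], dim=1)
        out = self.head(fused)
        return out[:, :-4], out[:, -4:]  # logits, box

# Depth maps are assumed precomputed offline, as in the previous sketch.
rgb = torch.randn(2, 3, 256, 256)
depth = torch.randn(2, 1, 256, 256)
logits, boxes = RGBDFusionDetector()(rgb, depth)
```

The small branch widths and single-box head keep the parameter count low, in line with the stated goal of real-time inference on edge devices.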