10 April 2018
An unsupervised video foreground co-localization and segmentation process by incorporating motion cues and frame features
Chao Zhang, Qian Zhang, Chi Zheng, Guoping Qiu
Proceedings Volume 10615, Ninth International Conference on Graphic and Image Processing (ICGIP 2017); 1061529 (2018) https://doi.org/10.1117/12.2303460
Event: Ninth International Conference on Graphic and Image Processing, 2017, Qingdao, China
Abstract
Video foreground segmentation is one of the key problems in video processing. In this paper, we propose a novel, fully unsupervised approach for foreground object co-localization and segmentation in unconstrained videos. We first compute both the actual edges and the motion boundaries of the video frames, and then align them using their HOG feature maps. Next, by filling the occluded regions delineated by the aligned edges, we obtain more precise masks of the foreground object. These motion-based masks serve as a motion-based likelihood, which is combined with a color-based likelihood in the segmentation process. Experimental results show that our approach outperforms most state-of-the-art algorithms.
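The final segmentation step fuses the motion-based likelihood with the color-based likelihood. The sketch below illustrates one plausible per-pixel fusion; the geometric weighting, the weight `w`, the threshold, and the function name are illustrative assumptions, not the authors' exact formulation.

```python
import numpy as np

def fuse_likelihoods(motion_lh, color_lh, w=0.5, thresh=0.5):
    """Fuse motion-based and color-based foreground likelihood maps
    into a binary foreground mask.

    NOTE: the geometric weighting used here is an assumption for
    illustration; the paper combines the two cues but does not
    prescribe this exact formula in the abstract.
    """
    # Geometric mean weighted by w: pixels must score well under
    # both cues to survive the threshold.
    fused = (motion_lh ** w) * (color_lh ** (1.0 - w))
    return fused >= thresh

# Toy 2x2 likelihood maps: only the top-left pixel is strongly
# supported by both the motion cue and the color cue.
motion = np.array([[0.9, 0.2],
                   [0.1, 0.4]])
color  = np.array([[0.8, 0.7],
                   [0.2, 0.3]])
mask = fuse_likelihoods(motion, color)
```

Requiring agreement between the two cues suppresses false positives from either source alone, e.g. background regions with foreground-like color but no coherent motion.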
© (2018) COPYRIGHT Society of Photo-Optical Instrumentation Engineers (SPIE). Downloading of the abstract is permitted for personal use only.
Chao Zhang, Qian Zhang, Chi Zheng, and Guoping Qiu "An unsupervised video foreground co-localization and segmentation process by incorporating motion cues and frame features", Proc. SPIE 10615, Ninth International Conference on Graphic and Image Processing (ICGIP 2017), 1061529 (10 April 2018); https://doi.org/10.1117/12.2303460
KEYWORDS: Video, Image segmentation, Optical flow, Video processing, Image processing, Motion models, Motion estimation