Paper
Depth-from-trajectories for uncalibrated multiview video
19 January 2009
Paul A. Ardis, Amit Singhal, Christopher M. Brown
Proceedings Volume 7252, Intelligent Robots and Computer Vision XXVI: Algorithms and Techniques; 725209 (2009) https://doi.org/10.1117/12.806816
Event: IS&T/SPIE Electronic Imaging, 2009, San Jose, California, United States
Abstract
We propose a method for efficiently determining qualitative depth maps for multiple monoscopic videos of the same scene without explicitly solving for stereo or calibrating any of the cameras involved. By tracking a small number of feature points and determining trajectory correspondence, it is possible to determine correct temporal alignment as well as establish a similarity metric for fundamental matrices relating each trajectory. Modeling of matrix relations with a weighted digraph and performing Markov clustering results in a determination of emergent depth layers for feature points. Finally, pixels are segmented into depth layers based upon motion similarity to feature point trajectories. Initial experimental results are demonstrated on stereo benchmark and consumer data.
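The abstract outlines the pipeline only at a high level. As a purely illustrative sketch (not the authors' implementation), the Markov clustering step over a weighted trajectory-similarity graph might look like the following; the mcl helper, its parameters, and the toy weight matrix W are assumptions introduced here for illustration, whereas in the paper the edge weights are derived from fundamental-matrix relations between feature-point trajectories.

# Hypothetical sketch: Markov clustering (MCL) on a weighted trajectory graph,
# one way to let depth layers emerge as clusters of similar trajectories.
# All weights below are toy values, not the paper's similarity metric.
import numpy as np

def mcl(adjacency, expansion=2, inflation=2.0, iters=100, tol=1e-6):
    """Basic Markov clustering on a symmetric weighted adjacency matrix."""
    A = adjacency + np.eye(len(adjacency))        # add self-loops
    M = A / A.sum(axis=0, keepdims=True)          # column-normalize to a stochastic matrix
    for _ in range(iters):
        M_prev = M
        M = np.linalg.matrix_power(M, expansion)  # expansion: simulate random-walk flow
        M = M ** inflation                        # inflation: strengthen intra-cluster flow
        M = M / M.sum(axis=0, keepdims=True)      # re-normalize columns
        if np.abs(M - M_prev).max() < tol:
            break
    # Each column (trajectory) is assigned to its strongest attractor row.
    return M.argmax(axis=0)

# Toy similarity graph over five trajectories (symmetric, illustrative values).
W = np.array([
    [0.0, 0.9, 0.8, 0.1, 0.0],
    [0.9, 0.0, 0.7, 0.0, 0.1],
    [0.8, 0.7, 0.0, 0.1, 0.0],
    [0.1, 0.0, 0.1, 0.0, 0.9],
    [0.0, 0.1, 0.0, 0.9, 0.0],
])
print(mcl(W))  # e.g. trajectories {0, 1, 2} and {3, 4} fall into separate layers

In the paper's setting, each resulting cluster of trajectories corresponds to a qualitative depth layer; the remaining pixels are then assigned to layers by motion similarity to the clustered feature-point trajectories, as described in the abstract.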
© (2009) COPYRIGHT Society of Photo-Optical Instrumentation Engineers (SPIE). Downloading of the abstract is permitted for personal use only.
Paul A. Ardis, Amit Singhal, and Christopher M. Brown "Depth-from-trajectories for uncalibrated multiview video", Proc. SPIE 7252, Intelligent Robots and Computer Vision XXVI: Algorithms and Techniques, 725209 (19 January 2009); https://doi.org/10.1117/12.806816
CITATIONS
Cited by 1 scholarly publication.
KEYWORDS
Video, Cameras, Motion models, Matrices, Calibration, Optimization (mathematics), Computer vision technology