In this paper, we propose a method to extract depth from motion, texture, and intensity. We first analyze the depth map to extract a set of depth cues. Then, based on these depth cues, we process the color reference video, using its texture, motion, luminance, and chrominance content, to extract the depth map. Each channel in the YCbCr color space is processed separately. We tested this approach on video sequences with different monocular properties. Our simulation results show that the extracted depth maps generate 3D video with quality close to that of video rendered using the ground-truth depth map. We report objective results using 3VQM and a subjective analysis via comparison of rendered images. Furthermore, we analyze the bitrate savings that result from eliminating the need for two video codecs, one for the reference color video and one for the depth map; in this case, only the depth cues are sent as side information alongside the color video.
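The per-channel processing described above can be illustrated with a minimal sketch: convert an RGB frame to YCbCr (BT.601) and compute a simple blockwise-variance texture measure on each channel independently. This is only an assumed, illustrative stand-in for a depth cue; the function names (`rgb_to_ycbcr`, `block_variance`, `channel_cues`) and the choice of variance as the cue are not from the paper.

```python
import numpy as np

def rgb_to_ycbcr(rgb):
    """Convert an HxWx3 uint8 RGB image to float YCbCr (BT.601 full range)."""
    rgb = rgb.astype(np.float64)
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    y  = 0.299 * r + 0.587 * g + 0.114 * b
    cb = 128.0 - 0.168736 * r - 0.331264 * g + 0.5 * b
    cr = 128.0 + 0.5 * r - 0.418688 * g - 0.081312 * b
    return np.stack([y, cb, cr], axis=-1)

def block_variance(channel, block=8):
    """Per-block variance of one channel: a crude, illustrative texture cue."""
    h, w = channel.shape
    h2, w2 = (h // block) * block, (w // block) * block
    tiles = channel[:h2, :w2].reshape(h2 // block, block, w2 // block, block)
    return tiles.var(axis=(1, 3))

def channel_cues(rgb, block=8):
    """Process each YCbCr channel separately, returning one cue map per channel."""
    ycbcr = rgb_to_ycbcr(rgb)
    return {name: block_variance(ycbcr[..., i], block)
            for i, name in enumerate(("Y", "Cb", "Cr"))}
```

In a pipeline like the one the abstract describes, the resulting per-channel cue maps would then be fused (with motion cues across frames) into a single depth estimate; the fusion itself is method-specific and not shown here.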