Paper
Analysis of motion-compensated temporal filtering versus motion-compensated prediction
Yongjun Wu and John W. Woods
Proceedings Volume 5685, Image and Video Communications and Processing 2005; (14 March 2005); https://doi.org/10.1117/12.584175
Event: Electronic Imaging 2005, San Jose, California, United States
Abstract
In previous work, a performance bound for multi-hypothesis motion-compensated prediction (MCP) was derived from a video signal model with independent Gaussian displacement errors; here we derive a simplified form of that result. A performance bound for optimal motion-compensated temporal filtering (MCTF) has also been proposed, based on a signal model with correlated Gaussian displacement errors, and under that analysis the optimal MCTF (a Karhunen-Loève transform, KLT) performed better than one-hypothesis MCP but not better than infinite-hypothesis MCP. In this work, we re-derive the performance of multi-hypothesis MCP using the same correlated Gaussian displacement-error model and find that optimal MCTF then matches the performance of infinite-hypothesis MCP.
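To make the kind of signal model referenced above concrete, the following is a minimal sketch of the classic single-hypothesis displacement-error analysis (in the spirit of Girod's framework), not a reproduction of the paper's derivation; the notation and the isotropic zero-mean Gaussian assumption on the displacement error are illustrative assumptions.

% Illustrative displacement-error model (assumed notation, not the paper's).
% s: wide-sense stationary frame signal; Delta: random displacement error;
% n: residual noise; Phi_xx: power spectral densities; P: characteristic function of Delta.
\begin{align}
  e(\mathbf{x}) &= s(\mathbf{x}) - s(\mathbf{x} - \boldsymbol{\Delta}) - n(\mathbf{x}) \\
  \Phi_{ee}(\boldsymbol{\omega}) &= 2\,\Phi_{ss}(\boldsymbol{\omega})\left[1 - \operatorname{Re} P(\boldsymbol{\omega})\right] + \Phi_{nn}(\boldsymbol{\omega}) \\
  P(\boldsymbol{\omega}) &= \mathrm{E}\!\left[e^{-j\boldsymbol{\omega}^{\mathsf T}\boldsymbol{\Delta}}\right] = e^{-\frac{1}{2}\sigma_{\Delta}^{2}\lVert\boldsymbol{\omega}\rVert^{2}} \quad \text{(zero-mean isotropic Gaussian displacement error)}
\end{align}

In the multi-hypothesis extension of this model (averaging N motion-compensated references with independent, identically distributed displacement errors), the motion-induced term of the error spectrum decreases with N and approaches \(\Phi_{ss}(\boldsymbol{\omega})\left[1 - P(\boldsymbol{\omega})\right]^{2}\) as \(N \to \infty\). The abstract's point concerns the corresponding analysis when the displacement errors are correlated, under which optimal MCTF and infinite-hypothesis MCP attain the same performance.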
© (2005) COPYRIGHT Society of Photo-Optical Instrumentation Engineers (SPIE). Downloading of the abstract is permitted for personal use only.
Yongjun Wu and John W. Woods "Analysis of motion-compensated temporal filtering versus motion-compensated prediction", Proc. SPIE 5685, Image and Video Communications and Processing 2005, (14 March 2005); https://doi.org/10.1117/12.584175
KEYWORDS: Microchannel plates, Performance modeling, Motion models, Filtering (signal processing), Video, Error analysis, Statistical analysis
