A zoom lens calibration consists of hundreds of monofocal calibrations, each of which takes considerable time and effort with conventional methods. We present a practical calibration method that consists of two separate procedures: zoom calibration and focus calibration. The zoom calibration regards each zoom setting as a monofocal camera and takes advantage of both the pattern-based and the rotation-based approaches. A rotation sensor is utilized to overcome the ill-posedness caused by the large number of parameter dimensions. The zoom calibration is followed by the focus calibration, which is fully automatic and works even at defocused settings where pattern detection is not possible. The focus calibration drastically reduces the number of required manual calibrations, from N² to N. The experimental results are compared with the lens data sheets provided by the lens manufacturer. The overall calibration procedure is very quick compared to conventional methods, owing to the proposed zoom and focus calibrations, and shows sufficiently small parameter errors over the effective zoom-focus range.
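The N² → N reduction can be illustrated with a toy sketch. Everything below is an assumption for illustration: the focal-length values, the linear "focus breathing" factor standing in for the automatic focus calibration, and the variable names are all hypothetical, not the paper's model.

```python
import numpy as np

# Hypothetical sketch: a zoom lens with N zoom and N focus settings would
# naively require N*N monofocal calibrations. Here only N manual zoom
# calibrations are performed (one per zoom setting at a reference focus);
# the remaining entries are filled in automatically, with a simple linear
# correction standing in for the paper's automatic focus calibration.

N = 5  # number of zoom (and focus) settings; illustrative only

# Manual zoom calibration: one focal-length estimate per zoom setting
# (values are made up for illustration).
zoom_focal = np.linspace(10.0, 100.0, N)  # mm, at a reference focus

# Automatic focus calibration: a per-focus correction factor
# (an assumed "focus breathing" model, not the paper's).
focus_scale = np.linspace(1.00, 1.05, N)

# Full N x N parameter table obtained from only N manual calibrations.
table = np.outer(zoom_focal, focus_scale)  # shape (N, N)
```

The point of the sketch is only the bookkeeping: the manual effort scales with the first axis of `table`, while the second axis is populated automatically.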
We present a novel method for automatically correcting the radial lens distortion in a zoom lens video camera system. We first define the zoom lens distortion model using an inherent characteristic of the zoom lens. Next, we sample video frames with different focal lengths and estimate their radial distortion parameters and focal lengths. We then optimize the zoom lens distortion model over the pre-estimated parameter pairs using the least-squares method. For more robust optimization, we divide the sample images into two groups according to distortion type (i.e., barrel and pincushion) and optimize the zoom lens distortion model separately for each group. Our results show that the zoom lens distortion model can accurately represent the radial distortion of a zoom lens.
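The fitting step described above can be sketched as follows. This is a minimal stand-in, not the paper's model: it assumes the standard one-parameter radial model x_d = x·(1 + k₁r²) and a hypothetical quadratic-in-1/f relation between the coefficient k₁ and the focal length f; the (f, k₁) pairs are invented values playing the role of the per-frame estimates.

```python
import numpy as np

# Pre-estimated (focal length, k1) pairs -- illustrative values only,
# ordered from barrel (negative k1) toward pincushion (positive k1).
f  = np.array([10.0, 20.0, 35.0, 50.0, 85.0])
k1 = np.array([-3e-7, -1e-7, -2e-8, 1e-8, 4e-8])

# Least-squares fit of k1 as a quadratic in 1/f (an assumed model form).
A = np.stack([np.ones_like(f), 1.0 / f, 1.0 / f**2], axis=1)
coeffs, *_ = np.linalg.lstsq(A, k1, rcond=None)

def k1_of_f(focal):
    """Predict the radial distortion coefficient at any focal length."""
    return coeffs[0] + coeffs[1] / focal + coeffs[2] / focal**2
```

Splitting the samples into barrel and pincushion groups, as the abstract describes, would simply mean running this fit twice on the two subsets of (f, k₁) pairs.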
KEYWORDS: Video, 3D video compression, Visualization, Video compression, Cameras, 3D acquisition, 3D displays, Image quality, Quality measurement, Televisions
In this paper, we propose a depth map quality metric for three-dimensional videos, including stereoscopic and autostereoscopic videos. Recently, a number of studies have investigated the relationship between perceptual quality and video impairment caused by various compression methods. In contrast, we consider non-compression issues that arise during acquisition and display. For instance, a multiple-camera setup may introduce impairments such as misalignment. We demonstrate that the depth map is a useful tool for detecting such implied impairments. The proposed depth-map-based quality metrics are depth range, vertical misalignment, and temporal consistency. The depth map is obtained by solving the correspondence problem between the stereoscopic views, widely known as disparity estimation. After disparity estimation, the proposed metrics are computed and integrated into a single value that indicates estimated visual fatigue, based on the results of subjective assessment. We measure the correlation between the objective quality metrics and the subjective quality results to validate our metrics.
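The pipeline in this abstract can be sketched on synthetic data. Everything here is an illustrative stand-in, not the paper's implementation: disparity is estimated by basic per-row block matching, the depth range metric is taken as the disparity spread, and vertical misalignment is recovered as the row offset that best re-aligns the views.

```python
import numpy as np

rng = np.random.default_rng(0)
left = rng.random((32, 64))
right = np.roll(left, -4, axis=1)  # synthetic pair: constant 4-px shift

def block_match(left, right, max_disp=8, win=4):
    """Per-pixel horizontal disparity via sum-of-absolute-differences."""
    h, w = left.shape
    disp = np.zeros((h, w - max_disp - win), dtype=int)
    for y in range(h):
        for i in range(disp.shape[1]):
            x = i + max_disp  # left-image column
            patch = left[y, x:x + win]
            costs = [np.abs(patch - right[y, x - d:x - d + win]).sum()
                     for d in range(max_disp + 1)]
            disp[y, i] = int(np.argmin(costs))
    return disp

disp = block_match(left, right)

# Depth range metric: spread of disparities (proxy for perceived depth).
# Zero here, since the synthetic scene is a single fronto-parallel plane.
depth_range = disp.max() - disp.min()

# Vertical misalignment: the row offset that best re-aligns the views,
# shown on a second synthetic pair with a known 1-row vertical shift.
right_v = np.roll(left, 1, axis=0)
offsets = range(-3, 4)
costs = [np.abs(left - np.roll(right_v, v, axis=0)).mean() for v in offsets]
vertical_misalignment = list(offsets)[int(np.argmin(costs))]
```

Temporal consistency, the third proposed metric, would extend this per-frame computation across time (e.g., frame-to-frame disparity variation), which the sketch omits.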