This course explains mobile computational photography and how it differs from traditional offline computational photography. In mobile computational photography, the camera sensor, the display, a computational unit, and peripheral devices (such as the lens, the flash, and sensors such as accelerometers and gyroscopes) work jointly with the user in an interactive loop, in which several images are captured in quick succession with varying camera parameters. From such a burst, a mobile computational photography system creates, in real time or near real time, new images that could not have been obtained with a single shot. Examples include high dynamic range (HDR) and panoramic photography.
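To make the burst idea concrete, the sketch below merges an exposure bracket into a relative radiance map, the core of classic HDR capture. It is a minimal example rather than production code: we assume linear grayscale sensor values in [0, 1] and known exposure times, and the Frame type, weight function, and mergeHDR are our own illustrative names, not part of any camera API.

    #include <cmath>
    #include <cstddef>
    #include <vector>

    // One linear grayscale frame from an exposure bracket (illustrative type).
    struct Frame {
        std::vector<float> pixels;  // linear intensities in [0, 1]
        float exposureSeconds;      // exposure time used for this frame
    };

    // Hat-shaped weight: trust mid-tones, distrust pixels near the noise
    // floor or near saturation.
    static float weight(float v) {
        return 1.0f - std::fabs(2.0f * v - 1.0f);
    }

    // Merge the bracket into a relative radiance map: each pixel becomes a
    // weighted average of (value / exposure time) over all frames.
    std::vector<float> mergeHDR(const std::vector<Frame>& bracket) {
        const std::size_t n = bracket.front().pixels.size();
        std::vector<float> radiance(n, 0.0f), weightSum(n, 0.0f);
        for (const Frame& f : bracket) {
            for (std::size_t i = 0; i < n; ++i) {
                const float v = f.pixels[i];
                const float w = weight(v);
                radiance[i] += w * (v / f.exposureSeconds);
                weightSum[i] += w;
            }
        }
        for (std::size_t i = 0; i < n; ++i) {
            if (weightSum[i] > 0.0f) radiance[i] /= weightSum[i];
        }
        return radiance;
    }

Dividing each pixel value by its exposure time puts all frames on a common radiance scale, so well-exposed pixels from short exposures fill in highlights that longer exposures clip, and vice versa for the shadows.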
In this course we explain the typical image processing pipeline from the image sensor, through the image signal processor (ISP), to images that can be displayed and stored. We also present the FCam architecture for precise, deterministic, and fast camera control, which is key to many computational photography tasks. We describe several algorithms, such as denoising, demosaicking, autofocus, auto-exposure, and auto-white-balance, and how some of them need to be modified for burst photography, since they were originally developed for single-image photography. Finally, we discuss various tools available for image processing, both libraries and domain-specific programming languages.
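As a taste of what FCam-style control looks like, here is a sketch of requesting a three-shot exposure bracket, loosely following the FCam API published with the Frankencamera project; the N900 backend, the resolution, the UYVY format, and the exact accessor names are assumptions that vary by platform and library version.

    #include <vector>
    #include <FCam/N900.h>  // platform backend assumed here; others exist

    int main() {
        FCam::N900::Sensor sensor;

        // Three shots with exposure doubling from 10 ms to 40 ms.
        std::vector<FCam::Shot> bracket(3);
        int exposure = 10000;  // microseconds
        for (FCam::Shot& shot : bracket) {
            shot.exposure = exposure;
            shot.gain = 1.0f;
            shot.image = FCam::Image(640, 480, FCam::UYVY);
            exposure *= 2;
        }

        // Queue the whole burst at once; the sensor streams the shots
        // back to back instead of renegotiating state between them.
        sensor.capture(bracket);

        // Each returned frame carries the parameters the hardware actually
        // applied, which is what makes the control deterministic.
        while (sensor.shotsPending() > 0) {
            FCam::Frame frame = sensor.getFrame();
            // frame.exposure() reports the applied exposure; these frames
            // could feed an HDR merge like the one sketched above.
        }
        return 0;
    }

Queuing the burst as a unit is the key difference from a conventional camera API: requested parameters are bound to specific frames, so downstream merging can trust the per-frame metadata instead of guessing which settings produced which image.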