In this paper, we present a CMOS digital intra-oral sensor for X-ray radiography. The sensor system consists of a custom
CMOS imager, custom scintillator/fiber optics plate, camera timing and digital control electronics, and direct USB
communication. The CMOS imager contains 1700 x 1346 pixels. The pixel size is 19.5um x 19.5um. The imager was
fabricated with a 0.18um CMOS imaging process. The sensor and CMOS imager design features chamfered corners for
patient comfort. All camera functions were integrated within the sensor housing and a standard USB cable was used to
directly connect the intra-oral sensor to the host computer. The sensor demonstrated a wide dynamic range, from 5uGy to
1300uGy, and high image quality, with an SNR of greater than 160 at a 400uGy dose. The sensor has a spatial resolution
of more than 20 lp/mm.
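As a quick back-of-the-envelope check on the figures quoted above (a sketch, not part of the paper's analysis), the dose range and SNR translate into decibels as follows:

```python
import math

# Dose limits and SNR quoted in the abstract above.
dose_min_uGy, dose_max_uGy = 5.0, 1300.0
snr_at_400uGy = 160

ratio = dose_max_uGy / dose_min_uGy
print(f"dynamic range: {ratio:.0f}:1 = {20 * math.log10(ratio):.1f} dB")  # 260:1, ~48.3 dB
print(f"SNR at 400 uGy: > {20 * math.log10(snr_at_400uGy):.1f} dB")       # ~44.1 dB
```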
As the number of imaging pixels in camera phones increases, users expect camera phone image quality to be comparable to that of digital still cameras. The mobile imaging industry is aware, however, that simply packing more pixels into the very limited camera module size need not improve image quality. When the size of a sensor array is fixed, increasing the number of imaging pixels decreases pixel size and thus photon count. Attempts to compensate for the reduction in light sensitivity by increasing exposure duration increase the amount of handheld camera motion blur, which effectively reduces spatial resolution. Perversely, what started as an attempt to increase spatial resolution by increasing the number of imaging pixels may result in a reduction of effective spatial resolution. In this paper, we evaluate how the performance of mobile imaging systems changes with shrinking pixel size, and we propose to replace the widely misused "physical pixel count" with a new metric that we refer to as the "effective pixel count" (EPC). We use this new metric to analyze design tradeoffs for four different pixel sizes (2.8um, 2.2um, 1.75um and 1.4um) and two different imaging arrays (1/3.2 and 1/8 inch). We show that optical diffraction and camera motion make 1.4um pixels less perceptually effective than larger pixels and that this problem is exacerbated by the introduction of zoom optics. Image stabilization optics can increase the effective pixel count and are, therefore, an important feature to include in a mobile imaging system.
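To illustrate the diffraction side of this argument, here is a minimal sketch assuming a typical camera-phone aperture of f/2.8 and 550nm (green) light; both values are our assumptions, not figures from the paper:

```python
# Assumed (not from the paper): f/2.8 aperture, 550 nm green light.
wavelength_um = 0.55
f_number = 2.8

# Diameter of the Airy disk (first zero of the diffraction pattern).
airy_um = 2.44 * wavelength_um * f_number
print(f"Airy disk diameter: {airy_um:.2f} um")  # ~3.76 um

for pixel_um in (2.8, 2.2, 1.75, 1.4):
    # Once the diffraction blur spans several pixels, adding more pixels
    # no longer adds effective resolution.
    print(f"{pixel_um:>4} um pixel: blur spans {airy_um / pixel_um:.1f} pixel widths")
```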
KEYWORDS: Cameras, Digital cameras, Motion models, Photography, Digital photography, Imaging systems, Motion estimation, Digital imaging, Signal to noise ratio, Sensors
Due to the demanding size and cost constraints of camera phones, the mobile imaging industry needs to address several key challenges in order to achieve the quality of a digital still camera. Minimizing image blur introduced by camera motion is one of them. Film photographers have long used a rule of thumb that a hand-held 35mm format film camera should have an exposure in seconds no longer than the inverse of the focal length in millimeters. Due to the lack of scientific studies of camera motion, it remains an open question how to generalize this rule of thumb to digital still cameras and camera phones. In this paper, we first propose a generalized rule of thumb, with the original rule as a special case in which camera motion can be approximated by linear motion at 1.667 °/sec. We then use a gyroscope-based system to measure camera-motion patterns for two camera phones (one held in one hand and the other held in two hands) and one digital still camera. The results show that the effective camera-motion function can be approximated very well by a linear function for exposure durations of less than 100ms. While the effective camera-motion speed for the camera phones (5.95 °/sec and 4.39 °/sec, respectively) is significantly higher than that of the digital still camera (2.18 °/sec), holding a camera phone in two hands while taking pictures does reduce the amount of camera motion. We also found that camera motion varies significantly not only across subjects but also across captures by the same subject. Since camera phones exhibit significantly more motion and use longer exposure durations than 35mm format film cameras and most digital still cameras, many of the pictures taken by camera phones today are not expected to meet the sharpness criterion used in 35mm film prints. The mobile imaging industry is aggressively pursuing smaller and smaller pixel sizes in order to match the total pixel count of digital still cameras while retaining the small module size that mobile devices require. This makes it increasingly important to address the camera-motion challenge associated with smaller pixel sizes.
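A minimal sketch of the blur arithmetic behind this generalization, assuming the conventional 35mm-format circle of confusion of about 0.030mm (our assumption; the paper's exact sharpness criterion may differ):

```python
import math

# Angular camera motion at speed omega (rad/s) during exposure t smears a
# point on a 35mm frame by roughly blur = f * omega * t (f in mm, small-angle
# approximation). Requiring blur <= CoC gives t_max = CoC / (f * omega).
# CoC = 0.030 mm is the conventional 35mm criterion (an assumption here).
coc_mm = 0.030

def max_exposure_s(focal_mm: float, omega_deg_per_s: float) -> float:
    omega_rad = math.radians(omega_deg_per_s)
    return coc_mm / (focal_mm * omega_rad)

# At 1.667 deg/s the rule collapses to the classic "1/focal length":
print(max_exposure_s(50, 1.667) * 50)  # ~1.03, i.e. t_max ~ 1/f seconds
# A camera phone held in one hand (5.95 deg/s) tolerates ~3.6x less exposure:
print(max_exposure_s(50, 5.95) * 50)   # ~0.29
```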
When the size of a CMOS imaging sensor array is fixed, the only way to increase sampling density and spatial resolution is to reduce pixel size. But reducing pixel size reduces light sensitivity. Hence, under these constraints, there is a tradeoff between spatial resolution and light sensitivity. Because this tradeoff involves the interaction of many different system components, we used a full system simulation to characterize performance. This paper describes system simulations that predict the output of imaging sensors with the same die size but different pixel sizes, and presents metrics that quantify the spatial resolution and light sensitivity of these different imaging sensors.
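A toy model of this tradeoff (illustrative only; the paper uses a full system simulation rather than this back-of-the-envelope scaling) shows how photon count and shot-noise-limited SNR fall with pixel area for a fixed die:

```python
import math

# Assumed example values: ~1/3.2-inch die width and an arbitrary photon
# flux; only the scaling with pixel area matters here.
die_width_mm = 4.5
photons_per_um2 = 100.0

for pixel_um in (2.8, 2.2, 1.75, 1.4):
    pixels_across = die_width_mm * 1000 / pixel_um  # resolution goes up...
    photons = photons_per_um2 * pixel_um ** 2       # ...photon count goes down
    snr_db = 20 * math.log10(math.sqrt(photons))    # shot-noise-limited SNR
    print(f"{pixel_um} um: {pixels_across:.0f} pixels across, "
          f"{photons:.0f} photons/pixel, SNR {snr_db:.1f} dB")
```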
KEYWORDS: Cameras, Manufacturing, RGB color model, Colorimetry, Digital cameras, Color reproduction, Image quality, Sensors, Visual process modeling, Data analysis
As the fastest-growing consumer electronics device in history, the camera phone has evolved from a toy into a real camera that competes with the compact digital camera in image quality. Due to severe constraints on cost and size, one key question remains unanswered for camera phones: how good does the image quality need to be so that resources can be allocated most efficiently? In this paper, we investigate color-processing tolerances through a study of 24 digital cameras from six manufacturers under five different light sources. We measured both the inter-brand (across manufacturers) and intra-brand (within manufacturers) mean and standard deviation for white balance and color reproduction. The white balance results showed that most cameras did not follow the complete white balance model. The difference between the captured white patch and the display white point increased as the correlated color temperature (CCT) of the illuminant moved further from 6500K. The standard deviation of the red/green and blue/green ratios for the white patch also increased as the illuminant moved further from 6500K. The color reproduction results revealed a similar trend for the inter-brand and intra-brand chromatic difference of the color patches. The average inter-brand chromatic difference increased from 3.87 ΔE units for the D65 light (6500K) to 10.13 ΔE units for the Horizon light (2300K).
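For reference, the chromatic differences reported above are CIELAB ΔE*ab values, i.e. Euclidean distances in L*a*b*; a minimal sketch with invented patch values:

```python
import math

# Delta E*ab is the Euclidean distance between two CIELAB values.
def delta_e_ab(lab1, lab2):
    return math.dist(lab1, lab2)

# Hypothetical renderings of the same patch by two cameras (invented values).
camera_a = (52.0, 21.5, 14.0)
camera_b = (54.5, 18.0, 10.5)
print(f"Delta E*ab = {delta_e_ab(camera_a, camera_b):.2f}")
```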
KEYWORDS: Signal to noise ratio, LCDs, Digital Light Processing, Visibility, Image quality, Image processing, Cameras, High dynamic range imaging, Digital cameras, Sensors
In many imaging applications, there is a tradeoff between sensor spatial resolution and dynamic range. Increasing sampling density by reducing pixel size decreases the number of photons each pixel can capture before saturation. Hence, imagers with small pixels operate at levels where photon noise limits image quality. To understand the impact of these noise sources on image quality, we conducted a series of psychophysical experiments. The data revealed two general principles. First, the luminance amplitude of the noise standard deviation predicts threshold, independent of color. Second, this threshold is 3-5% of the mean background luminance across a wide range of background luminance levels (from 8 cd/m2 to 5594 cd/m2). The relatively constant noise threshold across a wide range of conditions has specific implications for imaging sensor design and the image-processing pipeline. An ideal image capture device, limited only by photon noise, must capture at least 1000 photons/pixel (1/sqrt(10^3) ≈ 3%) to render photon noise invisible. The ideal capture device should also achieve this SNR or higher across the whole dynamic range.
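The 1000-photon figure follows directly from Poisson statistics; a short check (the ~3% threshold is the paper's result, the rest is arithmetic):

```python
import math

# Shot noise is Poisson: a mean of N photons has standard deviation sqrt(N),
# so the relative noise contrast is 1/sqrt(N).
threshold = 0.03                                            # ~3% visibility threshold
print(f"photons/pixel needed: {(1 / threshold) ** 2:.0f}")  # ~1111, i.e. ~10^3

for n in (100, 1000, 10000):
    print(f"{n:>6} photons -> relative noise {1 / math.sqrt(n):.1%}")
```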
The Image Systems Evaluation Toolkit (ISET) is an integrated suite of software routines that simulate the capture and processing of visual scenes. ISET includes a graphical user interface (GUI) that lets users control the physical characteristics of the scene and many parameters of the optics, sensor electronics, and image-processing pipeline. ISET also includes color tools and metrics based on international standards (chromaticity coordinates, CIELAB, and others) that assist the engineer in evaluating the color accuracy and quality of the rendered image.
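The sketch below outlines, in Python, the scene-to-image stages such a simulator chains together. ISET itself is a MATLAB toolbox, and none of the names below are its actual interface; every function here is an illustrative stand-in:

```python
import numpy as np

rng = np.random.default_rng(0)

# A synthetic "scene": mean photon count per pixel (arbitrary units).
scene = rng.uniform(100.0, 1000.0, size=(64, 64))

def optics(radiance):
    # Stand-in for optical blur: a 3x3 box filter.
    h, w = radiance.shape
    padded = np.pad(radiance, 1, mode="edge")
    return sum(padded[i:i + h, j:j + w] for i in range(3) for j in range(3)) / 9.0

def sensor(irradiance):
    # Photon (shot) noise plus a small Gaussian read-noise floor.
    return rng.poisson(irradiance) + rng.normal(0.0, 2.0, irradiance.shape)

def processing(raw):
    # Minimal rendering: normalize and gamma-encode.
    return np.clip(raw / raw.max(), 0.0, 1.0) ** (1.0 / 2.2)

image = processing(sensor(optics(scene)))
print(image.shape, float(image.min()), float(image.max()))
```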
KEYWORDS: RGB color model, Sensors, Cameras, Space sensors, Calibration, Visualization, Tungsten, Light sources and illumination, Digital photography, Photography
When rendering photographs, it is important to preserve gray tones despite variations in the ambient illumination. When the illuminant is known, white balancing that preserves gray tones can be performed in many different color spaces; the choice of color space influences the rendering of other colors. In this behavioral study, we ask whether users prefer a particular color space for white balancing. Subjects compared images processed with a white-balancing transformation that preserved gray tones, applied in one of four different color spaces: XYZ, Bradford, a camera sensor RGB, and a sharpened RGB color space. We used six scene types (four portraits, fruit, and toys) acquired under three calibrated illumination environments (fluorescent, tungsten, and flash). For all subjects, transformations applied in XYZ and sharpened RGB were preferred to those applied in Bradford and the device color space.
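A minimal sketch of such a gray-preserving (von Kries style) white balance applied in a chosen working space; the Bradford matrix and white points are standard published values, while the pixel value is invented for illustration:

```python
import numpy as np

# Standard Bradford chromatic adaptation matrix (XYZ -> Bradford space).
BRADFORD = np.array([[ 0.8951,  0.2664, -0.1614],
                     [-0.7502,  1.7135,  0.0367],
                     [ 0.0389, -0.0685,  1.0296]])

def white_balance(xyz, illuminant_white_xyz, M=BRADFORD):
    # Transform into the working space, scale each channel so the scene
    # illuminant maps to the D65 white point, transform back. Grays are
    # preserved by construction; other colors shift differently per space.
    src = M @ illuminant_white_xyz
    dst = M @ np.array([0.9505, 1.0, 1.0891])  # D65 white point (XYZ)
    D = np.diag(dst / src)
    return np.linalg.inv(M) @ D @ M @ xyz

# Example: an invented pixel captured under tungsten-like (CIE A) light.
pixel = np.array([0.5, 0.4, 0.2])
tungsten_white = np.array([1.0985, 1.0, 0.3558])  # CIE illuminant A (XYZ)
print(white_balance(pixel, tungsten_white))
```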
With the development of high-speed CMOS imagers, it is possible to acquire and process multiple images within the imager prior to output. We refer to an imaging architecture that acquires a collection of images and produces a single result as multiple capture single image (MCSI). In this paper we describe some applications of the MCSI architecture using a monochrome sensor and modulated light sources. By using active light sources, it is possible to measure object information in a manner that is independent of the passive illuminant. To study this architecture, we implemented a test system using a monochrome CMOS sensor and several arrays of color LEDs whose temporal modulation can be precisely controlled. First, we report experimental measurements that evaluate how well the active and passive illuminants can be separated as a function of experimental variables, including passive illuminant intensity, temporal sampling rate, and modulation amplitude. Second, we describe two applications of this technique: (a) creating a color image from a monochrome sensor, and (b) measuring the spatial distribution of the passive illuminant.
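A minimal sketch of the separation idea, using a lock-in style correlation against a known on/off LED modulation; all signal levels are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(1)

# Capture a burst of frames while toggling the active LED on and off.
n_frames = 16
modulation = np.arange(n_frames) % 2     # known LED on/off pattern

passive = 50.0                           # steady ambient contribution (invented)
active = 20.0                            # LED contribution when on (invented)
frames = passive + active * modulation + rng.normal(0.0, 1.0, n_frames)

# Correlate against the zero-mean modulation reference: the steady passive
# term cancels, leaving an estimate of the active component.
ref = modulation - modulation.mean()
active_est = frames @ ref / (ref @ ref)
passive_est = frames.mean() - active_est * modulation.mean()
print(f"active ~ {active_est:.1f}, passive ~ {passive_est:.1f}")
```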