Recording fast physical phenomena that occur on microscopic time scales requires an imaging system that can accurately dissect the event and provide a spatially and temporally resolved record that allows critical interrogation. The luxury of having a dedicated photographer to capture the type of event that necessitates a high-speed camera has passed into history. Consequently, the imaging system is regarded as a peripheral of the experimental procedure and needs to be user friendly in its operation. This, allied to the modern researcher's expectations, dictates that it must be computer controlled and produce records that can be analyzed using software that readily provides quantitative data. To satisfy a wide range of research conditions, the camera has to be immune to external influences and operate in widely diverse environmental conditions. To accommodate the wide spectrum of applications, the system must be flexible, reliable and produce trustworthy results in reasonable timescales.
For medical applications, we are interested in estimating optical flow on the face, particularly in the area around the eyes. Among the methods of optical flow estimation, gradient-based estimation and block matching are the main ones. However, the gradient-based approach can only be applied to small displacements, and block matching generally gives good results only if the search strategy is judiciously selected. Our approach is based on a Markov random field model combined with a block matching algorithm in a multiresolution scheme. The multiresolution approach allows a large range of displacement amplitudes to be detected: large displacements are detected at the coarse scales and the small ones are detected successively at finer scales. The tracking of motion is achieved by a block matching algorithm. This method yields the optical flow whatever the amplitude of the motion, provided it lies within the range defined by the multiresolution approach. The results clearly show the complementarity of Markov random field estimation and block matching across the scales.
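The paper's MRF-regularized implementation is not reproduced here; the following is only a minimal sketch of coarse-to-fine block matching on an image pyramid (the Markov random field term is omitted). All names and parameters (`block`, `radius`, `levels`) are hypothetical, and image dimensions are assumed divisible by `block * 2**(levels-1)`.

```python
import numpy as np

def block_match(prev, curr, init_flow, block=8, radius=4):
    """Exhaustive block matching (SAD criterion), searching around an initial guess per block."""
    H, W = prev.shape
    n_by, n_bx = H // block, W // block
    flow = np.zeros((n_by, n_bx, 2), dtype=int)
    for by in range(n_by):
        for bx in range(n_bx):
            y, x = by * block, bx * block
            ref = prev[y:y + block, x:x + block]
            dy0, dx0 = int(init_flow[by, bx, 0]), int(init_flow[by, bx, 1])
            best, best_d = np.inf, (dy0, dx0)
            for dy in range(dy0 - radius, dy0 + radius + 1):
                for dx in range(dx0 - radius, dx0 + radius + 1):
                    yy, xx = y + dy, x + dx
                    if 0 <= yy <= H - block and 0 <= xx <= W - block:
                        sad = np.abs(ref - curr[yy:yy + block, xx:xx + block]).sum()
                        if sad < best:
                            best, best_d = sad, (dy, dx)
            flow[by, bx] = best_d
    return flow

def downsample(img):
    """2x2 mean decimation, standing in for a proper Gaussian pyramid level."""
    H, W = img.shape
    return img[:H - H % 2, :W - W % 2].reshape(H // 2, 2, W // 2, 2).mean(axis=(1, 3))

def coarse_to_fine_flow(prev, curr, levels=3, block=8, radius=4):
    """Large displacements are found at the coarsest scale; each finer scale only
    refines the projected estimate by a small residual search."""
    pyr = [(prev.astype(float), curr.astype(float))]
    for _ in range(levels - 1):
        pyr.append((downsample(pyr[-1][0]), downsample(pyr[-1][1])))
    flow = None
    for p, c in reversed(pyr):                        # coarsest level first
        n_by, n_bx = p.shape[0] // block, p.shape[1] // block
        if flow is None:
            init = np.zeros((n_by, n_bx, 2), dtype=int)
        else:
            # going up one level: each coarse block covers 2x2 finer blocks and
            # its displacement doubles with the resolution
            init = 2 * np.repeat(np.repeat(flow, 2, axis=0), 2, axis=1)[:n_by, :n_bx]
        flow = block_match(p, c, init, block, radius)
    return flow
```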
Target detection can be carried out with a statistical matched filter. Constructing the matched filter requires information on the background clutter statistics as well as on the shape of the target. In military IR search and track systems, the intensity of the target is usually assumed to have a 2D Gaussian shape. This Gaussian assumption is valid only for a head-on approaching missile at far distance. More often than not, however, the homing enemy missile follows a 'lead angle' trajectory, and the IR target shape may be an ellipse with time-varying eccentricity. In such a case, a matched filter tuned to a Gaussian-shaped target either fails to detect the target or results in a high false alarm rate. To overcome this difficulty, we propose a new extended-target detection algorithm which can adapt to time-varying target shapes. We estimate the attitude of the target from a sequence of image frames using an extended Kalman filter. The estimated target attitude is then used to predict the projected shape of the target image. Using the predicted target shape, we can construct a better-tuned matched filter for detecting the target in the next image frame. The proposed algorithm has been tested with 8-12 micrometer IR image frames, and we observe that the false alarm rate is reduced by an order of magnitude in comparison with the simple matched filtering method based on the Gaussian-shaped target assumption.
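As a rough illustration of the shape-adaptive matched filter idea (not the authors' implementation), the sketch below builds an elliptical Gaussian template from a hypothetical major/minor width and orientation, which in the proposed scheme would be driven by the EKF attitude prediction, and correlates it with a frame; `scipy.signal.correlate2d` is assumed available.

```python
import numpy as np
from scipy.signal import correlate2d   # an FFT-based correlation would also work

def elliptical_gaussian_template(size, sigma_major, sigma_minor, angle):
    """2D Gaussian target template with adjustable eccentricity and orientation."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    c, s = np.cos(angle), np.sin(angle)
    u = c * x + s * y            # coordinate along the major axis
    v = -s * x + c * y           # coordinate along the minor axis
    t = np.exp(-0.5 * ((u / sigma_major) ** 2 + (v / sigma_minor) ** 2))
    t -= t.mean()                # zero-mean so a flat background gives no response
    return t / np.linalg.norm(t)

def matched_filter_detect(frame, template, k=5.0):
    """Correlate the frame with the template and flag responses above k standard deviations."""
    response = correlate2d(frame - frame.mean(), template, mode='same')
    return response > k * response.std()

# In the adaptive scheme, sigma_major, sigma_minor and angle would be updated each
# frame from the attitude predicted by the EKF; fixed placeholder values are used here.
template = elliptical_gaussian_template(size=15, sigma_major=4.0, sigma_minor=2.0, angle=0.5)
```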
Standard CMOS technologies offer great flexibility in the design of image sensors, which is a big advantage, especially for high-frame-rate systems. For this application we have integrated an active pixel sensor with 256 x 256 pixels using a standard 0.5 micrometer CMOS technology. With 16 analog outputs and a clock rate of 25-30 MHz per output, a continuous frame rate of more than 50,000 Hz is achieved. A global synchronous shutter is provided, but it requires a more complex pixel circuit of five transistors and a special pixel layout to obtain a good optical fill factor. The active area of the photodiode is 9 x 9 micrometers. These square diodes are arranged in a chess pattern, while the remaining space is used for the electronic circuit; the fill factor is nearly 50 percent. The sensor is embedded in a high-speed camera system with 16 ADCs, 256 MByte of dynamic RAM, FPGAs for high-speed real-time image processing, and a PC for the user interface, data archiving and network operation. Fixed pattern noise, which is always a problem with CMOS sensors, and the mismatch of the 16 analog channels are removed by a pixelwise gain-offset correction. After this, the chess pattern requires reconstruction of all the 'missing' pixels, which can be done by a special edge-sensitive algorithm. In this way a high-quality 512 x 256 image with low residual noise can be displayed. The sensor, architecture and processing are also suitable for color imaging.
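A minimal sketch of what pixelwise gain-offset correction and chess-pattern filling might look like; the camera's actual edge-sensitive reconstruction is not described in detail, so the interpolation below is just a plain 4-neighbour average, and all names are hypothetical.

```python
import numpy as np

def calibrate(dark_frames, flat_frames):
    """Per-pixel offset from dark frames and gain from uniformly illuminated flat frames."""
    offset = dark_frames.mean(axis=0)
    gain = flat_frames.mean() / np.maximum(flat_frames.mean(axis=0) - offset, 1e-6)
    return gain, offset

def correct(raw, gain, offset):
    """Pixelwise gain-offset correction removes fixed pattern noise and channel mismatch."""
    return (raw - offset) * gain

def fill_chess_pattern(img, mask):
    """Fill the 'missing' chess-pattern pixels; here simply the mean of the 4 neighbours
    (an edge-sensitive interpolator would preserve detail better)."""
    padded = np.pad(img, 1, mode='edge')
    neigh = (padded[:-2, 1:-1] + padded[2:, 1:-1] +
             padded[1:-1, :-2] + padded[1:-1, 2:]) / 4.0
    out = img.copy()
    out[~mask] = neigh[~mask]      # mask is True where a photodiode actually exists
    return out
```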
Electronic charge-coupled device (CCD) cameras equipped with image intensifiers are increasingly being used for radiographic applications. These systems may be used to replace film recording for static imaging, or CCDs coupled with electro-optical shutters may be used for static or dynamic radiography. Image intensifiers provide precise shuttering and signal gain. We have developed a set of performance measures to calibrate systems, compare one system to another, and predict experimental performance. The performance measures discussed in this paper concern image quality parameters that relate to resolution and signal-to-noise ratio.
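As one hedged example of such a measure (a generic illustration, not one of the specific measures defined in the paper), a per-pixel signal-to-noise ratio can be estimated from a stack of nominally identical frames:

```python
import numpy as np

def temporal_snr(frames):
    """Per-pixel SNR from repeated exposures of a static scene: mean / temporal noise."""
    frames = np.asarray(frames, dtype=float)
    mean = frames.mean(axis=0)
    noise = frames.std(axis=0, ddof=1)
    return mean / np.maximum(noise, 1e-9)
```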
An integrated high-speed photographic system combining a high-repetition-rate pulsed ruby laser and a high-framing-rate CCD camera has been demonstrated. Individually, the laser and camera have been discussed previously, and each was developed under Small Business Innovative Research sponsorship through the Air Force Research Lab. This paper presents for the first time digital images of dynamic ballistic events captured at 500 kHz using the two elements integrated as a high-speed digital imaging system.
We propose a real-time computation of rotation and heading direction from the deformation of an active contour in the image plane as a basis for recovering the epipolar geometry. The method works on sequences recorded by a freely moving calibrated camera and also handles independent motions in the scene. It differs from the common techniques that compute displacement or velocity fields as the unique basis for further computation. In fact, the difficulties in determining displacement or velocity fields make the new approach presented in this paper a very attractive alternative.
We show how to extract various important information about the motion of a target from a sequence of its image frames obtained with a single staring imaging sensor mounted on an aircraft. Specifically, we present an algorithm which estimates the following 15 parameters of an enemy aircraft from its image sequence: position, linear velocity, attitude, instantaneous angular velocity, and acceleration. The attitude information of the aircraft is obtained through an image matching operation. In simulations, we show that the proposed algorithm is superior to conventional algorithms which do not use the attitude information.
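A minimal sketch of one plausible way to organize such a 15-element state (position, velocity, acceleration, attitude, angular velocity) and propagate it between frames; the paper's actual filter and measurement model are not shown, and the layout is an assumption.

```python
import numpy as np

# Hypothetical 15-element state layout: position (3), velocity (3), acceleration (3),
# attitude as Euler angles (3), angular velocity (3).
POS, VEL, ACC, ATT, OMEGA = slice(0, 3), slice(3, 6), slice(6, 9), slice(9, 12), slice(12, 15)

def propagate(x, dt):
    """Constant-acceleration / constant-angular-rate propagation of the target state."""
    x = x.copy()
    x[POS] += x[VEL] * dt + 0.5 * x[ACC] * dt ** 2
    x[VEL] += x[ACC] * dt
    x[ATT] += x[OMEGA] * dt          # small-angle Euler-angle update
    return x
```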
This paper presents an approach for recovering depth from multiple camera images based on directly estimating the scene in 3D space. We provide a framework that can accommodate any arrangement of camera system, ranging from isolated sensors of very few light detectors to arrays of conventional pixellated cameras. We consider the resolution limits achievable by different camera configurations. Incorporating prior information about the 3D world improves our surface estimates and allows us to reconstruct parts of a viewed scene which are partially occluded. Results for real imagery are presented.
The imagers in tethered digital video cameras usually produce color interlaced data output in a line-by-line format. To transfer the data to the computer and convert it into a meaningful format that software applications can handle, various algorithms and implementations have been employed. This paper discusses the tradeoff between data throughput and image quality in tethered camera design, and proposes a system structure and a compression algorithm for low-end PC video conferencing cameras.
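As a hedged illustration of the line-by-line conversion step only (the paper's compression algorithm is not reproduced), two interleaved fields delivered by the imager can be woven back into a full frame as follows; the function name and field layout are assumptions.

```python
import numpy as np

def weave_fields(even_field, odd_field):
    """Interleave two fields, delivered line by line, into one full progressive frame."""
    H = even_field.shape[0] + odd_field.shape[0]
    frame = np.empty((H,) + even_field.shape[1:], dtype=even_field.dtype)
    frame[0::2] = even_field    # lines 0, 2, 4, ...
    frame[1::2] = odd_field     # lines 1, 3, 5, ...
    return frame
```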
This paper presents the results of a new approach to analyzing and classifying test images, focusing on the differences among the existing spatial frame sequence models obtained from each region candidate or class. The combination of tools used to analyze and classify the mosaic images consists of a bank of Gabor filters for decomposing the image, Gaussian filters for building a multi-resolution image representation from the filter bank outputs, and two classifiers: a Bayesian classifier and a low-resolution Bhattacharyya-distance RCE neural network classifier. The training set of textures consists of Brodatz and synthetic patterns.
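A minimal sketch of a Gabor filter bank of the kind described, with hypothetical wavelengths, orientations and envelope width; the paper's Gaussian multiresolution stage and the Bayesian/RCE classifiers are not shown, and `scipy.signal.fftconvolve` is assumed available.

```python
import numpy as np
from scipy.signal import fftconvolve

def gabor_kernel(size, wavelength, theta, sigma):
    """Real (even) Gabor kernel: a cosine grating of the given wavelength and
    orientation under an isotropic Gaussian envelope."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)
    envelope = np.exp(-(x ** 2 + y ** 2) / (2 * sigma ** 2))
    return envelope * np.cos(2 * np.pi * xr / wavelength)

def gabor_features(img, wavelengths=(4, 8, 16), orientations=4, size=15, sigma=4.0):
    """Filter-bank magnitude features: one response map per (wavelength, orientation) pair."""
    feats = []
    for lam in wavelengths:
        for k in range(orientations):
            theta = k * np.pi / orientations
            resp = fftconvolve(img, gabor_kernel(size, lam, theta, sigma), mode='same')
            feats.append(np.abs(resp))
    return np.stack(feats, axis=-1)     # H x W x (len(wavelengths) * orientations)
```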
If a display system uses pulse-width modulation (PWM) to create grayscale, presentation of the RGB value of a pixel may occupy a substantial portion of the refresh period and thus attenuate high temporal frequencies in the displayed image. When the image depicts object motion or self-motion, some of the attenuated spatiotemporal frequency components are likely to be part of the spectrum of the original image. Prior research suggests that, during tracking of a moving target, modification of the original-image spectrum will result in nonveridical perception of the chromoluminance pattern that defines that target. We developed a procedure for predicting an observer's spatial percept while tracking the target in a pulse-width-modulated display of a constant-velocity motion sequence. The results of a target identification task indicate that displayable versions of such patterns were well matched to observers' percepts and provide strong support for the view that observers perceive the spatial pattern that moves in accord with the sampled velocity rather than the spatial pattern of the target. The perceived pattern is equivalent to that which would be repetitively painted on the retina if the velocity of smooth pursuit were exactly matched to the sampled velocity.
The set and makeup play important roles in achieving a realistic feel in television dramas. In historical dramas set in the days of the samurai, the degree to which wigs appear natural can particularly affect the overall quality of a program. This is an issue of special concern in the production of high-definition TV programs: the detailed, true-to-life reproduction that is the special feature of HDTV actually makes a natural look for wigs more difficult to achieve. This problem is currently addressed by meticulous work on the wigs that requires much effort and expense and, in some cases, restricts the freedom of the actor's performance. In response to this situation, the authors have been investigating a technique for correcting places in the video image that look unnatural by processing the image after the source video has been recorded. This paper reports on the favorable results that have been obtained in computer simulations of the technique.
An image sequence analysis system for analyzing object movement from film rolls or tapes has been developed in our lab. The system hardware consists of a film winding apparatus, a CCD camera with an image capture card, and a PC. The main features of the software include correlation tracking, several pattern recognition tracking methods, trajectory estimation, and lens distortion calibration.
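A minimal sketch of correlation tracking as a normalized cross-correlation template search; this brute-force version is only an illustration under assumed names, not the system's implementation.

```python
import numpy as np

def track_template(frame, template):
    """Normalized cross-correlation search: return the (row, col) of the best match."""
    th, tw = template.shape
    t = template - template.mean()
    t_norm = np.linalg.norm(t)
    best_score, best_pos = -np.inf, (0, 0)
    for y in range(frame.shape[0] - th + 1):
        for x in range(frame.shape[1] - tw + 1):
            win = frame[y:y + th, x:x + tw]
            w = win - win.mean()
            denom = np.linalg.norm(w) * t_norm
            if denom > 0:
                score = float((w * t).sum() / denom)
                if score > best_score:
                    best_score, best_pos = score, (y, x)
    return best_pos, best_score
```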
Speckle noise and phase errors are two major sources of quality degradation in synthetic aperture radar imagery. In this work, we address this problem by proposing a spatio-temporal metric to benchmark these degradations through analysis of an azimuthal image sequence. Preliminary results of the metric, with and without a multiresolution formulation, are reported.
Image processing in the transform domain has attracted much interest recently, because compressed image and video data are not only becoming widely available in formats such as MPEG and JPEG, but manipulation in the transform domain also requires smaller data quantities and lower computational complexity than processing in the spatial domain. However, processing in the compressed domain can suffer from local effects such as blocking artifacts. In this paper, image processing is performed by weighting coefficients in the compressed domain, i.e., filtering coefficients are selected appropriately for the desired processing. Since we find factors appropriate for global image enhancement, blocking artifacts between blocks are reduced. Experimental results show that the proposed technique has the advantages of simple computation and easy implementation.
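A hedged sketch of coefficient weighting in the 8x8 DCT domain: each block's coefficients are multiplied by a single global weight matrix, which is one way such filtering can stay consistent across blocks. The paper's actual weight selection is not specified here, and `scipy.fftpack` is assumed available.

```python
import numpy as np
from scipy.fftpack import dct, idct

def dct2(block):
    return dct(dct(block, axis=0, norm='ortho'), axis=1, norm='ortho')

def idct2(coef):
    return idct(idct(coef, axis=0, norm='ortho'), axis=1, norm='ortho')

def enhance_in_dct_domain(img, weights, block=8):
    """Weight the DCT coefficients of each block with one global weight matrix."""
    H, W = img.shape
    out = np.zeros_like(img, dtype=float)
    for y in range(0, H - H % block, block):
        for x in range(0, W - W % block, block):
            coef = dct2(img[y:y + block, x:x + block].astype(float))
            out[y:y + block, x:x + block] = idct2(coef * weights)
    return out

# Example weight matrix: mildly boost the AC coefficients (a gentle sharpening).
w = np.ones((8, 8))
w[1:, 1:] *= 1.2
```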