KEYWORDS: 3D metrology, 3D image processing, Cameras, Forensic science, Image segmentation, 3D modeling, Photogrammetry, Image processing, Retina, Video
In forensic applications of photogrammetric techniques, any measurable information is a precious commodity. In this paper, we describe the development of a new geometrical approach to photogrammetric measurements. While classical photogrammetry requires knowledge of point measurements, in this approach we exploit other geometrical constraints that are often available in photo and video forensic evidence. In this Part I paper, the first in the series, we introduce line constraints and demonstrate algorithms that combine point and line measurements.
We present the technical steps involved in a new method of photogrammetry that requires far fewer measurements in the observed scene, no access to the original camera's calibration, and no prior knowledge of its position and orientation. The practical setup involves minimal human measurement intervention. This largely automatic configuration yields accurate length, angle, and area measurements in the original scene without surveying the 3-D Cartesian coordinates of known points. The crucial observation is that the two snapshots, even if not simultaneous and from different cameras, provide a stereo system. Therefore the correspondence of eight points across the views gives the epipolar geometry of the stereo setup, and since one of the cameras is calibrated, the calibration can be "transferred" to the other camera through the epipolar matrix. This transfer yields a calibration of the original camera (internal parameters and position in the scene) even if it is no longer available, its settings have changed, its orientation is different, or it has been moved. Thus we replace the technically difficult, time-consuming, and potentially error-prone data collection with epipolar registration and some rudimentary scene measurements. This new technique can also be applied to the task of photo-comparison.
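As a hedged illustration of the epipolar-transfer idea (not the authors' implementation: the point arrays, the shared-intrinsics simplification, and the use of OpenCV are assumptions of this sketch), the eight-point estimation and calibration transfer might look as follows:

```python
# Minimal sketch (not the authors' implementation): estimate the epipolar
# geometry from 8 point correspondences between an evidence photo and a
# reference photo taken with a calibrated camera, then transfer that
# calibration to recover the evidence camera's relative pose.
import numpy as np
import cv2

# pts_ref, pts_evidence: Nx2 arrays (N >= 8) of corresponding image points,
# assumed to have been marked or matched beforehand (placeholder data here).
pts_ref = np.random.rand(8, 2).astype(np.float32) * 640
pts_evidence = np.random.rand(8, 2).astype(np.float32) * 640

# Fundamental matrix from the classical 8-point algorithm (the "epipolar matrix").
F, _ = cv2.findFundamentalMat(pts_ref, pts_evidence, cv2.FM_8POINT)

# K_known: intrinsic matrix of the calibrated reference camera (assumed values).
K_known = np.array([[800.0, 0.0, 320.0],
                    [0.0, 800.0, 240.0],
                    [0.0, 0.0, 1.0]])

# Simplifying assumption for this sketch: the evidence camera shares the same
# intrinsics, so the essential matrix is E = K^T F K, from which the relative
# rotation R and translation direction t of the evidence camera follow.
E = K_known.T @ F @ K_known
_, R, t, _ = cv2.recoverPose(E, pts_ref, pts_evidence, K_known)
print("Relative rotation:\n", R, "\nTranslation direction:\n", t.ravel())
```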
KEYWORDS: Video, Video surveillance, Analog electronics, Composites, Signal to noise ratio, Image processing, Image quality standards, Fourier transforms, Quantization, Video processing
Recently, the law enforcement community with professional interests in applications of image/video processing technology has been exposed to scientifically flawed sales assertions regarding the advantages and disadvantages of various hardware image acquisition devices (video digitizing cards). These assertions claim that the SMPTE/CCIR-601 standard must be used when digitizing NTSC composite video signals from surveillance videotapes; in particular, they imply that a pixel sampling resolution of 720×486 is absolutely required to capture all the video information encoded in the composite signal. Fortunately, these erroneous statements can be analyzed directly within the strict mathematical context of Shannon's sampling theory. Here we apply the classical Shannon-Nyquist results to the process of digitizing composite analog video from videotapes to dispel these theoretically unfounded assertions.
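As a back-of-the-envelope illustration of the Shannon-Nyquist argument (the nominal figures below, a luminance bandwidth of about 4.2 MHz and an active line time of about 52.6 µs, are standard NTSC values assumed for this sketch rather than numbers taken from the paper), the minimum horizontal sampling for a band-limited composite line is

$$ f_s \ge 2B = 2 \times 4.2~\mathrm{MHz} = 8.4~\mathrm{MHz}, \qquad N_{\min} = f_s\,T_{\mathrm{active}} \approx 8.4~\mathrm{MHz} \times 52.6~\mu\mathrm{s} \approx 442 \ \text{samples per line}, $$

well below the 720 samples per active line of CCIR-601 (13.5 MHz luma sampling). For VHS-class surveillance tape, whose luminance bandwidth is commonly quoted near 3 MHz, the requirement drops further, to roughly 315 samples per line; the vertical dimension is already discrete, fixed by the roughly 486 active scan lines of the NTSC raster.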
KEYWORDS: Video, Video surveillance, Cameras, Multiplexing, Multiplexers, Information security, Algorithm development, Error analysis, Image processing algorithms and systems, Signal generators
Software-based grouping of multiplexed video based on video content, as opposed to the signal generated by multiplexers, is described. The method is based on an energy-minimization approach. The algorithm automatically determines the number of multiplexed camera views, and the frames are then grouped by camera view. The algorithm requires no thresholds on the differences between camera views and does not depend on the presence of quiet zones. The method also compensates for interference noise, local and global motion, and contrast changes.
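The paper's exact energy functional is not reproduced in the abstract; the sketch below is a generic stand-in in the same spirit, grouping frames by low-resolution content signatures and choosing the number of camera views by minimizing a within-group energy plus a per-view complexity penalty (the signature, the k-means grouping, and the penalty are all assumptions of this illustration):

```python
# Generic sketch of content-based demultiplexing (not the paper's exact energy):
# frames are reduced to low-resolution grayscale signatures and grouped, with the
# number of camera views chosen by an energy-plus-complexity criterion.
import numpy as np

def frame_signature(frame, size=16):
    """Downsample a grayscale frame to a size*size feature vector by block averaging."""
    h, w = frame.shape
    ys = (np.arange(size + 1) * h) // size
    xs = (np.arange(size + 1) * w) // size
    sig = np.empty((size, size))
    for i in range(size):
        for j in range(size):
            sig[i, j] = frame[ys[i]:ys[i + 1], xs[j]:xs[j + 1]].mean()
    return sig.ravel()

def kmeans(X, k, iters=50, seed=0):
    """Plain k-means on rows of X; returns labels and within-cluster energy."""
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), k, replace=False)]
    for _ in range(iters):
        d = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
        labels = d.argmin(axis=1)
        for c in range(k):
            if np.any(labels == c):
                centers[c] = X[labels == c].mean(axis=0)
    energy = ((X - centers[labels]) ** 2).sum()
    return labels, energy

def group_multiplexed(frames, max_views=8, penalty=None):
    """Group frames by camera view; the view count K minimizes energy + penalty*K."""
    X = np.stack([frame_signature(f) for f in frames]).astype(float)
    if penalty is None:
        penalty = X.var() * X.shape[1]      # heuristic per-view cost (assumption)
    best = None
    for k in range(1, min(max_views, len(frames)) + 1):
        labels, energy = kmeans(X, k)
        total = energy + penalty * k
        if best is None or total < best[0]:
            best = (total, k, labels)
    return best[1], best[2]                  # estimated view count, frame labels

# Illustrative use with synthetic frames alternating between two "cameras".
frames = [np.full((120, 160), 40.0) + np.random.randn(120, 160) if i % 2
          else np.full((120, 160), 200.0) + np.random.randn(120, 160)
          for i in range(20)]
n_views, labels = group_multiplexed(frames)
print("Estimated camera views:", n_views)
```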
A parameter-free and a-priori-information-free preprocessing of sonar images is proposed, which permits a ranking of local extrema in the image according to their likelihood of being mine-like objects. It is shown that an acceptable fully automatic detection algorithm can be built on a variational method that estimates shape information of the possible mines. This algorithm does not use any a priori information on the mine type, range distance, or background type, and works without any change on both sonar databases we had available. It can therefore be used as a detection algorithm without requesting any information from the user or designer. Its results could be fed into a classification algorithm like the one proposed. We also think that the features computed by this variational method could serve for both the detection step and the classification step, thus reducing the number of designer parameters and opening the way to a parameter-free detection-classification algorithm.
We define a variational method to perform frame fusion. The process consists of three steps. We first estimate the velocities and occlusions using optical flow and a spatial constraint on the velocities based on the L1 norm of the divergence. We then collect non-occluded points from the sequence and estimate their locations at a chosen time, at which we perform the fusion. From this list of points, we reconstruct the super-frame by minimizing a total variation energy, which forces the super-frame to look like each frame of the sequence (after shifting) and selects among the least oscillatory solutions. We display some examples.
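A minimal sketch of the final fusion step, assuming the frames have already been motion-compensated to the chosen time; the quadratic data term, the smoothed total variation, and the plain gradient descent below are simplifications standing in for the paper's exact energy:

```python
# Minimal sketch of the fusion step (simplified relative to the paper's energy):
# minimize sum_k ||u - f_k||^2 + lam * TV(u) by gradient descent, where f_k are
# frames already motion-compensated to the chosen reference time.
import numpy as np

def tv_gradient(u, eps=1e-3):
    """Gradient of the smoothed total variation: -div(grad u / |grad u|)."""
    ux = np.gradient(u, axis=1)
    uy = np.gradient(u, axis=0)
    mag = np.sqrt(ux ** 2 + uy ** 2 + eps)
    return -(np.gradient(ux / mag, axis=1) + np.gradient(uy / mag, axis=0))

def fuse_frames(frames, lam=0.1, step=0.05, iters=200):
    """Fuse aligned frames into one 'super-frame' by minimizing a data + TV energy."""
    f = np.stack(frames).astype(float)
    u = f.mean(axis=0)                      # least-squares start: the frame average
    for _ in range(iters):
        data_grad = 2.0 * (len(frames) * u - f.sum(axis=0))
        u -= step * (data_grad / len(frames) + lam * tv_gradient(u))
    return u

# Illustrative use: five noisy, already-aligned copies of a synthetic scene.
truth = np.zeros((64, 64)); truth[20:44, 20:44] = 1.0
frames = [truth + 0.2 * np.random.randn(64, 64) for _ in range(5)]
fused = fuse_frames(frames)
print("noise std before:", np.std(frames[0] - truth).round(3),
      "after:", np.std(fused - truth).round(3))
```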
The development and use of Cognitech's Measure package is described. The Measure program allows the user to enter the locations and dimensions of known points in an image and, through the established principles of photogrammetry, recover the dimensions and positions of unknown objects. Several cases in which Measure was employed in the investigative process are discussed.
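Measure is a commercial package whose internals are not described in the abstract; the sketch below illustrates the underlying photogrammetric principle for the special, planar case only, using a homography fitted from four reference points with known world coordinates (all coordinates and the planarity assumption are illustrative):

```python
# Generic single-view sketch of the planar case (not Cognitech's implementation):
# four reference points with known real-world coordinates on a plane define a
# homography, which then maps any image point on that plane to world units.
import numpy as np
import cv2

# Pixel coordinates of four reference marks (e.g. a door frame) and their known
# real-world coordinates on the same plane, in centimetres. Placeholder values.
img_pts = np.array([[102, 387], [415, 392], [420, 60], [98, 55]], dtype=np.float32)
world_pts = np.array([[0, 0], [91, 0], [91, 203], [0, 203]], dtype=np.float32)

H, _ = cv2.findHomography(img_pts, world_pts)

def image_to_world(points_px):
    """Map pixel coordinates lying on the reference plane to world coordinates."""
    pts = np.asarray(points_px, dtype=np.float32).reshape(-1, 1, 2)
    return cv2.perspectiveTransform(pts, H).reshape(-1, 2)

# Measure the distance between two unknown points on the plane in world units.
p1, p2 = image_to_world([[250, 300], [250, 120]])
print("distance (cm):", np.linalg.norm(p1 - p2).round(1))
```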
A new language, compiler, and user interface have been developed to facilitate the research and development of image processing algorithms. Algorithms are written in a high-level language specifically designed for image processing and are compiled into machine code that performs as well as algorithms hand-coded in the C language. The new language and compiler are introduced, and many examples are presented. Directions for future work relating to the user interface and support for parallel processing are proposed.
Recent work has proven the feasibility and utility of 2-Normal segmentation as an image analysis tool. This technique, and the algorithm on which it is based, have shown dramatic and promising results for a number of image processing problems, such as clutter suppression, texture segmentation, motion analysis, and object tracking. These capabilities will be useful for clutter suppression, target location and classification, platform motion compensation, target tracking, vehicle guidance, battlefield mapping, and target damage assessment. These and all applications of 2-Normal segmentation could easily employ and integrate multi-sensor data and/or multi-scale image preprocessing for analysis in two dimensions, three dimensions, or time sequences.
This paper tests a new, fully automated image segmentation algorithm and compares its results with conventional threshold-based edge detection techniques. A CT phantom-based method is used to measure the precision and accuracy of the new algorithm in comparison to two edge detection variants. These algorithms offer a high degree of immunity to noise and differential lighting and accommodate multi-channel image data, making them ideal candidates for multi-echo MRI sequences. The algorithm considered in this paper employs a fast numerical method for energy minimization of the free boundary problem that can incorporate regional image characteristics such as texture or other scale-specific features. It relies on a recursive region-merge operation, thus providing a series of nested segmentations. In addition to the phantom testing, we discuss the results of this fast, multiscale, pyramidal segmentation algorithm applied to MRI images. The CT phantom segmentation is evaluated by the geometric fidelity of the extracted measurements to the geometry of the original bone components. The algorithm performed well in the phantom experiments, demonstrating an average four-fold reduction in the error associated with estimating the radius of a small bone, although the standard deviation of the estimate was almost twice that of the edge detection techniques. Modifications are proposed which further improve the geometric measurements. Finally, the results on soft-tissue discrimination are promising, and we are continuing to enhance the core formulation to improve the segmentation of complex-shaped regions.
Two new filters for image enhancement are developed, extending the authors' earlier work. One filter uses a new nonlinear time-dependent partial differential equation and its discretization; the second uses a discretization that constrains the backwards heat equation and keeps its variation bounded. The evolution of the initial image through U(x,y,t) as t increases is the filtering process. The processed image is piecewise smooth, nonoscillatory, and an apparently accurate reconstruction. The algorithms are fast and easy to program.
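In the abstract's notation, both filters share the same evolution structure; the generic form below is implied by the text, while the specific right-hand side quoted afterwards is only a well-known illustration of such a nonlinear equation, not necessarily the paper's exact filter:

$$ \frac{\partial U}{\partial t} = F\!\left(U, \nabla U, \nabla^{2} U\right), \qquad U(x,y,0) = U_0(x,y), $$

where $U_0$ is the observed image and the processed image is $U(x,y,T)$ at a chosen stopping time $T$. A representative nonlinear example of this type is the shock filter $U_t = -\,|\nabla U|\,\operatorname{sign}(\Delta U)$, which sharpens edges while keeping the solution nonoscillatory.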