Geometrically aligning two images is a fundamental operation in machine vision. In general, the geometric transformation model is chosen according to the complexity of the transformation between the images, while the type of camera that acquired them is ignored. However, the imaging mechanisms of a line-scan camera and an area-array camera differ fundamentally, so the geometric transformation model of area-array images does not describe the geometric transformation between line-scan images. We therefore derive a geometric transformation model for line-scan images from the imaging model of the line-scan camera. A line-scan image acquisition system was built, and planar objects were imaged under different line-scan camera poses. Using the proposed model as the geometric transformation model, both the line-scan images collected in this work and line-scan images from a real application were registered. As a comparison experiment, the same images were registered again using the homography of area-array images as the geometric transformation model. The comparison of registration results verifies the correctness of the proposed geometric transformation model for line-scan images and shows improved accuracy of line-scan image registration.
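The abstract does not give the line-scan transformation model itself, so the sketch below only illustrates the area-array baseline it compares against: registering two images with a homography estimated from matched keypoints. It uses OpenCV; the file names, feature detector, and RANSAC threshold are illustrative placeholders, not the authors' experimental setup.

```python
# Hedged sketch of the area-array baseline: homography-based registration.
import cv2
import numpy as np

ref = cv2.imread("reference.png", cv2.IMREAD_GRAYSCALE)   # placeholder file names
mov = cv2.imread("moving.png", cv2.IMREAD_GRAYSCALE)

# Detect and match keypoints (ORB chosen only for illustration).
orb = cv2.ORB_create(2000)
kp1, des1 = orb.detectAndCompute(ref, None)
kp2, des2 = orb.detectAndCompute(mov, None)
matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
matches = sorted(matcher.match(des1, des2), key=lambda m: m.distance)

src = np.float32([kp2[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
dst = np.float32([kp1[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)

# RANSAC-estimated 3x3 homography; valid for area-array (perspective) imaging,
# which is exactly the assumption the paper argues breaks down for line-scan images.
H, mask = cv2.findHomography(src, dst, cv2.RANSAC, 3.0)
aligned = cv2.warpPerspective(mov, H, (ref.shape[1], ref.shape[0]))
```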
In this paper, we consider the direct image registration problem, which estimates the geometric and photometric transformations between two images. The efficient second-order minimization method (ESM) is based on a second-order Taylor expansion of the image differences that avoids computing the Hessian, under the brightness-constancy assumption. This is possible because the considered geometric transformations form a Lie group and can be parameterized by its Lie algebra. To cope with lighting changes, we extend ESM to the compositional dual efficient second-order minimization method (CDESM). In our approach, the photometric transformations are parameterized by their Lie algebra with a compositional operation, analogous to the treatment of the geometric transformations. The algorithm yields a second-order approximation of the image differences with respect to the geometric and photometric parameters, which are estimated simultaneously by non-linear least-squares optimization. Our algorithm preserves the advantages of the original ESM method, namely a high convergence rate and a large capture radius. Experimental results show that our algorithm is more robust to lighting changes and achieves higher registration accuracy than previous algorithms.
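As a minimal illustration of the Lie-algebra parameterization and compositional updates that ESM-style methods rely on, the sketch below generates a homography in SL(3) from sl(3) coordinates via the matrix exponential and composes a simple gain/bias photometric model the same way. The increment computation itself (the second-order normal equations of ESM/CDESM) is omitted; the generator basis, the gain/bias model, and all names here are illustrative assumptions, not the paper's CDESM implementation.

```python
# Hedged sketch: Lie-algebra parameterization of geometric and photometric updates.
import numpy as np
from scipy.linalg import expm

def sl3_generators():
    """Eight traceless 3x3 matrices spanning sl(3), the Lie algebra of SL(3)."""
    G = np.zeros((8, 3, 3))
    G[0, 0, 2] = 1.0                      # x translation
    G[1, 1, 2] = 1.0                      # y translation
    G[2, 0, 1] = 1.0                      # shear
    G[3, 1, 0] = 1.0                      # shear
    G[4, 0, 0], G[4, 1, 1] = 1.0, -1.0    # anisotropic scale
    G[5, 1, 1], G[5, 2, 2] = 1.0, -1.0    # scale vs. perspective
    G[6, 2, 0] = 1.0                      # perspective x
    G[7, 2, 1] = 1.0                      # perspective y
    return G

def homography_from_lie(x):
    """Map sl(3) coordinates x (8-vector) to a homography via the matrix exponential."""
    return expm(np.tensordot(x, sl3_generators(), axes=1))

def compose_photometric(params, delta):
    """Compositional update of a gain/bias model I -> exp(a) * I + b.
    (exp(a), b) o (exp(da), db) gives exp(a + da) * I + exp(a) * db + b."""
    a, b = params
    da, db = delta
    return a + da, np.exp(a) * db + b

# Compositional geometric update: the current estimate is refined by
# right-multiplying with the exponential of the estimated increment.
H = np.eye(3)
dx = np.zeros(8)          # in ESM/CDESM this comes from the second-order least-squares step
H = H @ homography_from_lie(dx)
```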
Shape matching and recognition is a challenging task due to geometric distortions and occlusions. A novel shape matching approach based on the Grassmann manifold is proposed in which both affine transformations and partial occlusions are considered. An affine-invariant Grassmann shape descriptor is employed, which projects a shape contour to a point on the Grassmann manifold and measures the similarity between two contours by the geodesic distance on the manifold. First, shape contours are parameterized by affine length and, according to their curvature scale space images, divided into local affine-invariant shape segments, which are represented by the Grassmann shape descriptor. Then the Smith-Waterman algorithm is employed to find the common parts of the two shapes' segment sequences and to obtain the local similarity of the shapes. The global similarity is computed from the found common parts, and shape recognition is finally accomplished by a weighted sum of the local and global similarities. The robustness of the Grassmann shape descriptor is analyzed using subspace perturbation theory. Retrieval experiments show that the approach is effective and robust under affine transformations and partial occlusions.
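The sketch below illustrates the core idea of an affine-invariant Grassmann representation: a contour of N points spans a low-dimensional subspace of R^N, an affine map of the contour only re-mixes the spanning columns and so leaves that subspace unchanged, and two contours are compared by the Grassmann geodesic distance computed from principal angles. It is a simplified illustration under the assumption that both contours are resampled to the same number of points with consistent ordering; it omits the paper's affine-length parameterization, curvature-scale-space segmentation, and Smith-Waterman matching.

```python
# Hedged sketch: affine-invariant Grassmann contour descriptor and geodesic distance.
import numpy as np
from scipy.linalg import subspace_angles

def grassmann_point(contour):
    """contour: (N, 2) array of points sampled along a closed contour.
    Returns an orthonormal basis of the column space of [x, y, 1] (an N x 3 subspace)."""
    pts = np.column_stack([contour, np.ones(len(contour))])
    q, _ = np.linalg.qr(pts)
    return q

def grassmann_geodesic(c1, c2):
    """Geodesic distance on the Grassmann manifold: the norm of the principal angles
    between the two contour subspaces."""
    theta = subspace_angles(grassmann_point(c1), grassmann_point(c2))
    return np.linalg.norm(theta)

# Example: an ellipse and its affinely transformed copy have (near-)zero distance,
# because an invertible affine map does not change the spanned subspace.
t = np.linspace(0, 2 * np.pi, 64, endpoint=False)
contour = np.column_stack([np.cos(t), 0.5 * np.sin(t)])
A = np.array([[2.0, 0.3], [0.1, 0.8]])
contour_affine = contour @ A.T + np.array([5.0, -2.0])
print(grassmann_geodesic(contour, contour_affine))   # ~0 up to numerical error
```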