Open Access | 4 July 2022

Measurement method and recent progress of vision-based deflection measurement of bridges: a technical review

Jinke Huang, Xinxing Shao, Fujun Yang, Jianguo Zhu, Xiaoyuan He
Abstract

The deflection of a bridge, which reflects its vertical stiffness, plays an important role in structural evaluation and health monitoring. Over the past 20 years, bridge deflection measurement methods based on computer vision and photogrammetry have been gradually applied to field measurement owing to their advantages of noncontact operation, simple experimental setup, and easy installation. This review reports the technical research progress of vision-based bridge deflection measurement from four aspects: basic principles, measurement methods, influencing factors, and applications. The basic principles mainly include camera calibration, three-dimensional (3D) stereo vision, photogrammetry, and feature detection and matching. For measurement methods, single-camera two-dimensional (2D) measurement, dual-camera 3D measurement, quasistatic measurement based on photogrammetry, multipoint dynamic measurement based on displacement-relay videometrics, and deflection measurement based on an unmanned aerial vehicle (UAV) platform are introduced. The section on influencing factors summarizes the work of many researchers on the effects of camera imaging factors, calibration factors, algorithm factors, and environmental factors on measurement results. Field measurement results at different measurement distances, together with the corresponding measurement accuracy, are presented in the section on applications. Finally, future development trends of vision-based bridge deflection measurement are discussed.

1.

Introduction

In the acceptance of new bridges and the health monitoring of bridges in service, the deflection is usually considered a basic parameter that must be measured because it is closely related to the bearing capacity of the bridge. Existing bridge deflection measurement methods mainly rely on displacement transducers, dial gauges, communicating pipes, precision levels, digital levels, inclinometers, total stations, microwave radars, the global positioning system (GPS), and other measuring equipment, as shown in Fig. 1. Table 1 summarizes the typical application scenarios, advantages, and limitations of these deflection measurement methods. It can be seen from Table 1 that the traditional contact measurement methods have certain limitations, making it difficult to achieve dynamic deflection measurement and to meet the engineering requirements of real-time measurement and long-term monitoring. In addition, although accelerometers can also measure the dynamic deflection of a bridge, obtaining the displacement by integrating the acceleration twice introduces large errors.1,2

Fig. 1

Several commonly used types of bridge deflection measurement equipment.


Table 1

Commonly used bridge deflection measurement methods.

| Measuring equipment | Typical application scenarios | Advantages | Limitations |
| --- | --- | --- | --- |
| Displacement transducers | A bridge under which a fixed platform may be built | High accuracy; both static and dynamic deflection can be measured | Cannot be applied to bridges across rivers or deep valleys |
| Dial gauges | A bridge under which a fixed platform may be built | High accuracy | Dynamic deflection cannot be measured; cannot be applied to bridges across rivers or deep valleys |
| Communicating pipes | Deflection measurement of medium- and short-span bridges with a small deflection range | High accuracy; simple principle; automatic measurement can be realized | Cannot be applied to bridges with a large longitudinal slope or large deflection variation because of the limited measurement range |
| Precision levels | Static deflection measurement of medium- and short-span bridges with a small deflection range | High accuracy | Dynamic deflection cannot be measured |
| Digital levels | Static deflection measurement of medium- and short-span bridges with a small deflection range | High accuracy; high efficiency; no artificial reading errors; both static and dynamic deflection can be measured | Not suitable for long-term monitoring due to the influence of light; high cost |
| Inclinometers | Long-term deflection monitoring of medium- and short-span bridges | Low cost; high accuracy; no fixed reference points required; not affected by the weather; both static and dynamic deflection can be measured | High requirements for transient response and zero drift in dynamic deflection measurement; cannot be applied to large-span bridges with a wide angle-variation range (e.g., large suspension and cable-stayed bridges) due to the limited measurement range |
| Manual total stations | Static deflection measurement of medium- and short-span bridges | High accuracy | Dynamic deflection cannot be measured |
| Automatic total stations | Long-term deflection monitoring of medium- and short-span bridges | High accuracy; both static and dynamic deflection can be measured | High cost |
| Microwave radars | Real-time measurement of bridge deflection | High accuracy; less affected by the weather; real-time contactless measurement | High cost |
| GPS | Deflection measurement of extra-large-span suspension and cable-stayed bridges with large deflection deformation | Large measurement range; the distance between reference point and measuring point is not restricted; real-time contactless measurement | Low accuracy |

For some noncontact measuring equipment, such as the laser Doppler vibrometer,3,4 GPS,5,6 and radar interferometry,7 although real-time measurement can be achieved, the total cost of the measurement system is often very high. Besides, the low measurement accuracy of GPS prevents it from being applied to bridges with large stiffness and small deflection.

To break through the limitations of the existing bridge deflection measurement methods, approaches based on computer vision and photogrammetry have gradually been applied to bridge deflection measurement.8 By detecting and tracking the corresponding points in the two images captured before and after deformation, the bridge deflection can be determined.

In past years, several researchers have reviewed the methods and applications of vision-based structural health monitoring (SHM) and condition assessment for civil infrastructure. Xu and Brownjohn9 primarily reviewed vision-based displacement measurement methods from the perspective of video-processing procedures, which are composed of camera calibration, target tracking, and structural displacement calculation. Ye et al.10 provided a review of the applications of machine-vision-based technology in the field of SHM of civil infrastructure. Feng et al.11 mainly summarized the general principles of vision-based sensor systems and discussed the measurement error sources and mitigation methods in laboratory and field experimentation. In addition, Jiang et al.12 reviewed only the development and applications of close-range photogrammetry in bridge deformation and geometry measurement. However, these reviews9–12 were mainly organized by application field, gave only a brief introduction to the different measurement methods, and did not compare or discuss the advantages and disadvantages of these methods. The purpose of this paper is to fill this gap and give a technical review of vision-based bridge deflection measurement from the perspectives of basic principles, measurement methods, influencing factors, and applications.

2.

Basic Principles

The basic components of the vision-based bridge deflection measurement system are shown in Fig. 2, which include the measuring equipment, software algorithms, data processing, and data transmission. Data processing is generally conducted by computers, and data transmission can be divided into wired and wireless transmission. The measuring equipment differs among measurement methods, and the software algorithms are mainly composed of camera calibration, three-dimensional (3D) stereo vision, photogrammetry, feature detection, and feature matching. In the software algorithm part, camera calibration and feature detection and matching are the two most important components, because the relationship between the image coordinate system and the world coordinate system can be established only after accurate camera calibration,13–16 and the calculation of bridge deflection must be based on correct detection and matching of bridge surface features.17

Fig. 2

Basic components of vision-based deflection measurement of bridges.


2.1.

Camera Calibration

Camera calibration is the process of determining the corresponding relation between the image coordinate system and the world coordinate system and correcting distorted images. For industrial camera lenses, the original image will be distorted due to manufacturing error, assembly deviation, and other factors, so the measurement accuracy cannot be guaranteed without correction. Radial distortion and tangential distortion are usually considered in camera lens distortion.18 As shown in Figs. 3 and 4, radial distortion causes pixel points to move toward or away from the center of the image plane, and tangential distortion causes tangential deviation of pixel points. Usually, lens distortion can be calibrated at the same time as the camera's intrinsic parameters.19–21 For complex lens distortion that cannot be described by a distortion model, the lens distortion can be calibrated separately in advance.22,23 To correct lens distortion, distortion coefficients are defined to describe the mapping between the distorted and undistorted positions of a point. After the calibration of lens distortion, the undistorted pixel position can be derived from the distorted pixel position according to this mapping, and then the displacement and deformation without distortion can be calculated. For dual-camera 3D measurement, the distortion-corrected pixel positions must also be used for 3D reconstruction.

Fig. 3

Radial distortions: (a) barrel distortion; (b) pincushion distortion.


Fig. 4

Tangential distortions. Solid lines: no distortion; dashed lines: with tangential distortion.18


2.1.1.

Single camera calibration of a simplified model

Due to the large field of view in bridge deflection measurement, a calibration board cannot be used to calibrate the extrinsic parameters. Therefore, a single-camera calibration method based on a simplified model is often used. Figure 5 is the schematic diagram of the simplified pinhole camera model. When the optical axis of the camera is perpendicular to the target plane, then

Eq. (1)

$$l' = \frac{f}{S}\, l,$$

where f is the image distance; S is the object distance; l is the displacement of a point in the target plane; and l′ is the displacement of the corresponding point on the image plane. For long-distance measurement of bridge deflection, the image distance is approximately equal to the focal length of the lens, and the object distance can be measured by a laser rangefinder.
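As a minimal numerical illustration of Eq. (1), the sketch below converts a measured image-plane displacement into a physical deflection; the function name and all numbers (lens, object distance, pixel pitch) are hypothetical.

```python
def pixel_to_deflection(v_pixels, S, f, pixel_pitch):
    """Convert an image-plane displacement to a physical deflection via Eq. (1).

    v_pixels    -- measured displacement on the image plane, in pixels
    S           -- object distance (m), e.g., from a laser rangefinder
    f           -- image distance (m); for long-range work, ~ the focal length
    pixel_pitch -- physical size of one pixel on the sensor (m)
    """
    l_image = v_pixels * pixel_pitch   # displacement on the image plane (m)
    return l_image * S / f             # displacement in the target plane (m)

# Hypothetical numbers: 50-mm lens, 100-m object distance, 3.45-um pixels
deflection = pixel_to_deflection(v_pixels=0.8, S=100.0, f=0.050,
                                 pixel_pitch=3.45e-6)
print(f"deflection = {deflection * 1e3:.2f} mm")  # ~5.52 mm
```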

Fig. 5

Simplified pinhole camera model.


2.1.2.

Single-camera calibration

The main purpose of single-camera calibration is to calibrate the camera’s intrinsic parameters and lens distortion. Figure 6 is the schematic diagram of the pinhole camera model.

Fig. 6

Pinhole camera model.24


The coordinate transformations shown in Fig. 6 represent the process from the world coordinate system to the camera coordinate system, and finally to the image coordinate system. If a point M in the world coordinate system is expressed as $[X, Y, Z, 1]^T$ and the corresponding point m in the image coordinate system is expressed as $[x, y, 1]^T$, the relationship between the two points can be expressed as follows:

Eq. (2)

$$s \begin{pmatrix} x \\ y \\ 1 \end{pmatrix} = P \begin{bmatrix} X \\ Y \\ Z \\ 1 \end{bmatrix} = A\,[R \mid t] \begin{bmatrix} X \\ Y \\ Z \\ 1 \end{bmatrix} = A \begin{bmatrix} R_{11} & R_{12} & R_{13} & t_x \\ R_{21} & R_{22} & R_{23} & t_y \\ R_{31} & R_{32} & R_{33} & t_z \end{bmatrix} \begin{bmatrix} X \\ Y \\ Z \\ 1 \end{bmatrix},$$

where s is a scale factor; P is the camera projection matrix; R and t are the extrinsic parameters of the camera, which respectively represent the rotation and translation of the world coordinate system relative to the camera coordinate system; and A is the camera intrinsic parameter matrix, which can be expressed as

Eq. (3)

$$A = \begin{bmatrix} f_x & f_s & c_x \\ 0 & f_y & c_y \\ 0 & 0 & 1 \end{bmatrix},$$

where $f_x$ and $f_y$ respectively represent the equivalent focal length of the lens along the x and y axes of the image, $f_s$ represents the degree of skewness between the x and y axes of the image, and $c_x$ and $c_y$ represent the coordinates of the principal point.

For the calibration of camera intrinsic parameters, the checkerboard calibration method proposed by Zhang25 is often used. Since it is a planar calibration (Z = 0), Eq. (2) can be rewritten as

Eq. (4)

$$s \begin{pmatrix} x \\ y \\ 1 \end{pmatrix} = H \begin{bmatrix} X \\ Y \\ 1 \end{bmatrix} = \begin{bmatrix} h_{11} & h_{12} & h_{13} \\ h_{21} & h_{22} & h_{23} \\ h_{31} & h_{32} & h_{33} \end{bmatrix} \begin{bmatrix} X \\ Y \\ 1 \end{bmatrix}.$$

In this formula, H is called the homography matrix. Although the matrix has nine parameters, only eight independent parameters need to be solved if the matrix is normalized by $h_{33}$.

To solve the homography matrix, at least four pairs of corresponding points between the image coordinate system and the world coordinate system must be known for each calibration board position. If the calibration board is imaged in at least three attitudes, the camera intrinsic parameters and the relative extrinsic parameters between the camera coordinate system and each calibration board coordinate system can be solved.

Lens distortion is usually calculated simultaneously with the extrinsic parameters of the calibration board at different attitudes and the intrinsic parameters of the camera by nonlinear optimization. In addition to known-pattern calibration, self-calibration methods26–28 can also be used for camera intrinsic parameter calibration. It should be noted that the camera's intrinsic parameters are generally considered to remain almost unchanged after calibration, so a camera with calibrated intrinsic parameters can be brought to the experimental site for measurement.
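For readers who want to reproduce this step, the following is a minimal sketch of Zhang's checkerboard calibration using OpenCV; the board geometry and image file names are assumptions, not values from the original work.

```python
import glob
import cv2
import numpy as np

# Assumed 9x6 inner-corner checkerboard with 25-mm squares (hypothetical)
pattern, square = (9, 6), 0.025
world = np.zeros((pattern[0] * pattern[1], 3), np.float32)
world[:, :2] = np.mgrid[0:pattern[0], 0:pattern[1]].T.reshape(-1, 2) * square

obj_pts, img_pts = [], []
for path in glob.glob("calib_*.png"):          # hypothetical image names
    gray = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    found, corners = cv2.findChessboardCorners(gray, pattern)
    if found:
        corners = cv2.cornerSubPix(
            gray, corners, (11, 11), (-1, -1),
            (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 30, 1e-3))
        obj_pts.append(world)
        img_pts.append(corners)

# Nonlinear optimization jointly refines A (intrinsics), the distortion
# coefficients, and the per-view extrinsic parameters, as in Sec. 2.1.2.
rms, A, dist, rvecs, tvecs = cv2.calibrateCamera(
    obj_pts, img_pts, gray.shape[::-1], None, None)
print("reprojection RMS (px):", rms)
undistorted = cv2.undistort(gray, A, dist)  # distortion-corrected image
```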

2.1.3.

Extrinsic parameter calibration of two cameras

If the intrinsic parameters of the two cameras have been calibrated and the image distortions have been corrected, it can be seen from Eq. (2) that $s_1 m_1 = P_1 M$ and $s_2 m_2 = P_2 M$ for camera 1 and camera 2, respectively. According to the epipolar constraint, $m_1$ and $m_2$ satisfy the following correspondence:

Eq. (5)

$$m_2^T F\, m_1 = 0,$$

where F is a 3×3 fundamental matrix. If the world coordinate system is aligned with the camera coordinate system of camera 1, then $P_1 = A_1 [I \mid 0]$. Denote $P_2 = A_2 [R \mid t]$, $t = [t_1, t_2, t_3]^T$, and

$$S = \begin{bmatrix} 0 & -t_3 & t_2 \\ t_3 & 0 & -t_1 \\ -t_2 & t_1 & 0 \end{bmatrix}.$$

Then the fundamental matrix F satisfies

Eq. (6)

$$F = A_2^{-T} E A_1^{-1} = A_2^{-T} R S A_1^{-1}.$$

After the calibration of the camera intrinsic parameters, the essential matrix E can be calculated by solving the fundamental matrix F, and then the relative extrinsic parameters of camera 2 with respect to camera 1 can be calculated using singular value decomposition. Nistér pointed out that the essential matrix can be solved from at least five pairs of corresponding points in the image coordinate systems of the two cameras.29 In Ref. 30, it was proposed that higher solution accuracy can be achieved through nonlinear iterative optimization of the relative extrinsic parameters between the two cameras solved by speckle matching and the coplanarity equation. After the relative extrinsic parameters are calculated, the scale information of the translation vector can be determined by a scale factor.
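A possible implementation of this extrinsic calibration step with OpenCV is sketched below, assuming matched, distortion-corrected image points and a shared, precalibrated intrinsic matrix; note that the recovered translation has unit norm, so the scale factor mentioned above is still required.

```python
import cv2
import numpy as np

def relative_pose(pts1, pts2, A):
    """Recover R, t of camera 2 relative to camera 1 from >= 5 matches.

    pts1, pts2 -- Nx2 arrays of matched, distortion-corrected image points
    A          -- 3x3 intrinsic matrix shared by both cameras (assumption)
    """
    # Five-point algorithm with RANSAC outlier rejection (Sec. 2.1.3)
    E, inliers = cv2.findEssentialMat(pts1, pts2, A,
                                      method=cv2.RANSAC, threshold=1.0)
    # SVD-based decomposition of E into R and t (t has unit norm)
    _, R, t, _ = cv2.recoverPose(E, pts1, pts2, A, mask=inliers)
    return R, t

# The metric scale of t must come from outside information, e.g., a
# laser-rangefinder distance or two points with known separation.
```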

2.2.

Three-Dimensional Stereo Vision

After the image distortion is corrected, according to Eq. (2), the relationship between a point in the world coordinate system and its corresponding image coordinates can be expressed by two linear equations. (If the distortion model is defined on distorted coordinates, the inverse mapping can be used to correct the image distortion.) If a point is seen by two cameras at the same time and the world coordinate system is established on the camera coordinate system of the left camera, the relationship between the point and the corresponding points in the image coordinate systems of the two cameras can be expressed by the four equations shown in Eq. (7), where the subscripts 1 and 2 represent camera 1 and camera 2, respectively. In Eq. (7), the intrinsic and extrinsic parameters of the cameras are determined by calibration, and the coordinates of the image points are obtained by matching the images of the two cameras. Therefore, the 3D coordinates of the point can be directly solved from the four equations

Eq. (7)

$$\begin{bmatrix} f_{x1} & f_{s1} & c_{x1} - x_1 \\ 0 & f_{y1} & c_{y1} - y_1 \\ R_{11} f_{x2} + R_{21} f_{s2} + R_{31}(c_{x2} - x_2) & R_{12} f_{x2} + R_{22} f_{s2} + R_{32}(c_{x2} - x_2) & R_{13} f_{x2} + R_{23} f_{s2} + R_{33}(c_{x2} - x_2) \\ R_{21} f_{y2} + R_{31}(c_{y2} - y_2) & R_{22} f_{y2} + R_{32}(c_{y2} - y_2) & R_{23} f_{y2} + R_{33}(c_{y2} - y_2) \end{bmatrix} \begin{bmatrix} X \\ Y \\ Z \end{bmatrix} = \begin{bmatrix} 0 \\ 0 \\ -\left[t_x f_{x2} + t_y f_{s2} + t_z (c_{x2} - x_2)\right] \\ -\left[t_y f_{y2} + t_z (c_{y2} - y_2)\right] \end{bmatrix},$$

where $(c_x, c_y)$ are the coordinates of the principal point in the image, $(f_x, f_y)$ are the scale factors along the image axes, $f_s$ describes the skewness of the two image axes, $[t_x, t_y, t_z]^T$ is the translation vector and R is the rotation matrix from the left camera coordinate system to the right camera coordinate system, the subscripts 1 and 2 denote the left and right cameras, $[x_1, y_1]^T$ and $[x_2, y_2]^T$ are the image coordinates of the matched points, and $[X, Y, Z]^T$ are the reconstructed 3D coordinates in the left camera coordinate system.
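In practice, the overdetermined system of Eq. (7) is solved in a least-squares sense. The sketch below uses the equivalent direct-linear-transform formulation built from the two projection matrices ($P_1 = A_1[I \mid 0]$ and $P_2 = A_2[R \mid t]$); the function and variable names are illustrative.

```python
import numpy as np

def triangulate(P1, P2, m1, m2):
    """Least-squares triangulation of one point seen by two cameras.

    P1, P2 -- 3x4 projection matrices (intrinsics x extrinsics)
    m1, m2 -- (x, y) distortion-corrected image coordinates in each camera
    Returns the 3D point in the world (here, left-camera) coordinate system.
    """
    # Each view contributes two linear equations in (X, Y, Z, 1),
    # matching the four-equation system of Eq. (7).
    rows = []
    for P, (x, y) in ((P1, m1), (P2, m2)):
        rows.append(x * P[2] - P[0])
        rows.append(y * P[2] - P[1])
    Amat = np.asarray(rows)
    # Homogeneous least squares: smallest right singular vector of A
    _, _, Vt = np.linalg.svd(Amat)
    Xh = Vt[-1]
    return Xh[:3] / Xh[3]
```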

2.3.

Monocular Photogrammetry

Monocular photogrammetry can reconstruct the 3D coordinates of marker points.31 It accurately determines the position of a target in 3D space by using digital image processing and photogrammetry to process images of the marker points taken from different positions and angles. Its main steps include image preprocessing, relative orientation, and bundle adjustment.

A monocular photogrammetry system is generally composed of a digital camera, marker points, optical reference scales, wireless transmission devices, and computing software. As shown in Fig. 7, based on the principles of multiview vision, the control points in different views can be matched with the help of the epipolar constraint and then reconstructed. The nominal parameters of the digital camera and lens can be taken as the initial values of the camera intrinsic parameters. Based on the extrinsic parameter calibration method in Sec. 2.1.3, the relative extrinsic parameters between the camera coordinate systems at different angles can be determined if some coded marker points are fixed on the target surface and some of the same marker points are visible in a series of images taken at arbitrary camera positions and angles. With the extrinsic parameters and the nominal intrinsic parameters of the camera, the 3D coordinates of the marker points can be reconstructed. In addition, using the self-calibrating bundle adjustment method, with each bundle as the basic adjustment unit and the image point coordinates as the observations, the optimization objective function can be written as follows according to the collinearity condition equation:

Eq. (8)

$$\min \sum_{i,j} d\left[ m_{i,j},\, \hat{m}(A, D, R_j, t_j, M_i) \right]^2,$$

where i indexes the coded marker points; j indexes the camera viewing angles; $m_{i,j}$ is the measured image coordinate of the i'th coded marker point under the j'th viewing angle; and $\hat{m}(A, D, R_j, t_j, M_i)$ is the image coordinate of the coded marker point obtained through the projection relationship, with D denoting the lens distortion parameters. The 3D coordinates of the spatial points can be obtained with high precision after the adjustment is carried out over the whole region, that is, by jointly optimizing the intrinsic and extrinsic parameters of the camera and the coordinates of the spatial points.
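A compact sketch of the bundle adjustment objective of Eq. (8) using SciPy is given below; it assumes an undistorted pinhole projection (i.e., it omits D) and a known visibility list, and a production photogrammetry system would additionally exploit the sparsity of the Jacobian.

```python
import numpy as np
from scipy.optimize import least_squares
from scipy.spatial.transform import Rotation

def residuals(params, A, obs, n_views, n_pts):
    """Reprojection residuals for the objective of Eq. (8).

    params -- packed [rvec_j, t_j] per view followed by M_i per point
    obs    -- list of (i, j, x, y): point i observed at (x, y) in view j
    """
    poses = params[:6 * n_views].reshape(n_views, 6)
    pts = params[6 * n_views:].reshape(n_pts, 3)
    res = []
    for i, j, x, y in obs:
        Rj = Rotation.from_rotvec(poses[j, :3]).as_matrix()
        Xc = Rj @ pts[i] + poses[j, 3:]          # camera coordinates
        m = A @ (Xc / Xc[2])                     # pinhole projection
        res.extend([m[0] - x, m[1] - y])
    return np.asarray(res)

# x0: initial poses from relative orientation, points from triangulation
# sol = least_squares(residuals, x0, args=(A, obs, n_views, n_pts))
```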

Fig. 7

Principles of multiview vision.


2.4.

Feature Detection and Matching

Feature detection and matching in images are the key to realizing deflection measurement. Commonly used image features include gray features, feature points, gradient features, and geometric features. For geometric features, some scholars have proposed deflection measurement methods based on the sampling Moiré pattern.32,33 Gray features place certain requirements on the gray-level information of the measured target surface. The most representative technology using gray features is digital image correlation (DIC), which has the advantages of subpixel displacement positioning and high accuracy. Compared with gray features, feature points have lower requirements on the gray-level information of the object surface and have the advantages of scale and rotation invariance, so they perform well in the measurement of complex deformation. By detecting image features and matching the corresponding features in each frame, the deflection can be calculated. For field measurement of bridge deflection, the pixel displacement will not be too large, but feature detection and matching in actual measurement should also consider factors such as occlusion by rain, snow, and fog, and airflow disturbance.

2.4.1.

Grayscale features and matching based on digital image correlation

At present, the commonly used feature detection and matching algorithm is template matching, whose main steps are to select a matching region in the reference image and then use a correlation function to match the template with the target region in the deformed image. Tong34 compared several correlation functions and pointed out that the zero-normalized sum of squared difference (ZNSSD) correlation function has the best robustness and reliability:

Eq. (9)

$$C_{\mathrm{ZNSSD}} = \sum_{y=-M}^{M} \sum_{x=-M}^{M} \left[ \frac{f(W(x,y;\Delta p)) - f_m}{f_s} - \frac{g(W(x,y;p)) - g_m}{g_s} \right]^2,$$

$$f_s = \sqrt{\sum_{y=-M}^{M} \sum_{x=-M}^{M} \left[ f(W(x,y;0)) - f_m \right]^2}, \qquad g_s = \sqrt{\sum_{y=-M}^{M} \sum_{x=-M}^{M} \left[ g(W(x,y;p)) - g_m \right]^2},$$

where 2M + 1 is the width of the template; f and g represent the gray values of pixel points (x, y) in the reference and deformed image templates, respectively; $f_m$ and $g_m$ represent the average gray values of the reference and deformed image templates, respectively; and W(x, y; p) is the shape function of the deformed template relative to the reference template. For bridge deflection measurement, a zero- or first-order shape function is sufficient to meet the measurement requirements.35

Matching can be divided into integer-pixel matching and subpixel matching. For bridge deflection measurement, the displacement between adjacent images is usually not particularly large, so a conventional integer-pixel matching algorithm can provide an accurate initial guess for subpixel registration. Pan et al.17 compared several commonly used subpixel matching methods and pointed out that the Newton–Raphson iterative method has high precision. Owing to its computational efficiency and robustness, the inverse compositional Gauss–Newton (IC-GN) algorithm is generally used for matching.36–38
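The sketch below illustrates the two-stage idea: integer-pixel template matching with a zero-normalized correlation criterion, followed by a simple parabolic subpixel refinement. It is a simplified stand-in for the IC-GN solver of Refs. 36–38, with hypothetical function and variable names.

```python
import cv2
import numpy as np

def track_template(ref_img, cur_img, x0, y0, half=15):
    """Track a (2*half+1)-pixel square template from ref_img into cur_img.

    Returns the (dx, dy) displacement with parabolic subpixel refinement.
    """
    tmpl = ref_img[y0 - half:y0 + half + 1, x0 - half:x0 + half + 1]
    # Zero-normalized cross correlation for brightness robustness
    score = cv2.matchTemplate(cur_img, tmpl, cv2.TM_CCOEFF_NORMED)
    _, _, _, (px, py) = cv2.minMaxLoc(score)  # integer-pixel best match

    def parabolic(s_m, s_0, s_p):
        # Vertex of the parabola through three neighboring scores
        denom = s_m - 2.0 * s_0 + s_p
        return 0.0 if denom == 0 else 0.5 * (s_m - s_p) / denom

    dx = parabolic(score[py, px - 1], score[py, px], score[py, px + 1])
    dy = parabolic(score[py - 1, px], score[py, px], score[py + 1, px])
    # Displacement of the template's top-left corner between the two frames
    return (px + dx) - (x0 - half), (py + dy) - (y0 - half)
```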

2.4.2.

Feature detection and matching based on feature points

Harris,39 SIFT,40 and SURF41 are widely used in feature detection. After the feature points are detected, the Euclidean distance is used to measure similarity:40 the smaller the Euclidean distance, the higher the similarity. The matching process can select the optimal feature point by traversing all feature points, but this is slow. To improve the matching speed, it is common to extract the nearest-neighbor points first and then select the optimal matching point among them.40 However, abnormal matching results often remain and need to be removed; the random sample consensus (RANSAC) method42 is commonly used to eliminate them.
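A typical realization of this pipeline with OpenCV is sketched below, assuming two grayscale frames ref_img and cur_img; the ratio-test threshold and the RANSAC reprojection tolerance are common defaults, not values prescribed by the cited works.

```python
import cv2
import numpy as np

# SIFT keypoints + descriptors in the reference and current frames
sift = cv2.SIFT_create()
kp1, des1 = sift.detectAndCompute(ref_img, None)   # ref_img, cur_img assumed
kp2, des2 = sift.detectAndCompute(cur_img, None)

# k-nearest-neighbor matching with Lowe's ratio test on Euclidean distance
matcher = cv2.BFMatcher(cv2.NORM_L2)
good = [m for m, n in matcher.knnMatch(des1, des2, k=2)
        if m.distance < 0.75 * n.distance]

# RANSAC homography fit rejects the remaining abnormal matches
src = np.float32([kp1[m.queryIdx].pt for m in good]).reshape(-1, 1, 2)
dst = np.float32([kp2[m.trainIdx].pt for m in good]).reshape(-1, 1, 2)
H, inlier_mask = cv2.findHomography(src, dst, cv2.RANSAC, 3.0)
```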

2.4.3.

Orientation code matching based on gradient features

Orientation code matching (OCM)33 matches gradient information by computing an orientation code for each pixel and has been shown to be invariant to rotation and brightness changes. Assuming that the gray value of each point in an image is f(x, y) and its partial derivatives are $\nabla f_x = \partial f / \partial x$ and $\nabla f_y = \partial f / \partial y$, the gradient angle at the pixel (x, y) is $\theta_{x,y} = \tan^{-1}(\nabla f_y / \nabla f_x)$, and its orientation code is defined as follows:

Eq. (10)

$$c_f(x,y) = \begin{cases} \left\lfloor \dfrac{\theta_{x,y}}{\Delta\theta} \right\rfloor, & |\nabla f_x| + |\nabla f_y| \ge \Gamma, \\[2mm] N = \dfrac{2\pi}{\Delta\theta}, & \text{otherwise}, \end{cases}$$

where $\Delta\theta$ is the preset sector width, generally $\pi/8$; and $\Gamma$ is a threshold used to ignore low-contrast regions, whose purpose is to suppress their interference with matching. However, if $\Gamma$ is too large, gradient feature information will be lost, so its range must be controlled reasonably.

After the orientation codes are calculated, histograms of the orientation codes are employed to approximate the rotation angle of the object, and the object template is then rotated by the estimated angle to realize template matching. To reduce the error, a bilinear interpolation of the obtained gradient angles was also proposed to achieve subpixel accuracy.43
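A minimal NumPy sketch of the orientation-code computation in Eq. (10) is given below; the threshold value is a hypothetical placeholder.

```python
import numpy as np

def orientation_codes(img, delta=np.pi / 8, gamma=10.0):
    """Compute the orientation code of Eq. (10) for each pixel.

    delta -- sector width (pi/8 gives N = 16 codes)
    gamma -- gradient-magnitude threshold suppressing low-contrast pixels
    """
    img = img.astype(np.float64)
    fy, fx = np.gradient(img)                   # partial derivatives
    theta = np.arctan2(fy, fx) % (2.0 * np.pi)  # gradient angle in [0, 2*pi)
    codes = np.floor(theta / delta).astype(int)
    n_codes = int(round(2.0 * np.pi / delta))
    codes[np.abs(fx) + np.abs(fy) < gamma] = n_codes  # low-contrast code N
    return codes, n_codes
```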

2.4.4.

Feature detection and matching based on geometric feature

The sampling Moiré method has been extensively used for displacement measurement of railway bridges, and the displacement information can be further used to evaluate the deflection.32 Using 2D grids, the displacement of the object under study can be measured in two directions simultaneously. With the help of high-speed cameras, dynamic displacement curves over time can be obtained by this method. Besides, using the DIC-aided sampling Moiré method,33 displacements exceeding half of the grating pitch can also be measured correctly. In addition to the sampling Moiré method, circular marker localization, corner detection, and cross detection can also be used.

3.

Measurement Methods

3.1.

Two-Dimensional Measurement with a Single Camera

After calibration of the lens distortion, vision-based measurement of bridge deflection can be achieved by matching image features based on the calibration method in Sec. 2.1.1. This is a widely used visual bridge deflection measurement method owing to its simplicity and practicality.44–48 For night measurement, the street lights on the bridge can be used as feature matching points. To reduce the impact of ambient light, high-brightness red LED lamps can be installed on the bridge, enabling deflection measurement with active illumination when a matched bandpass optical filter is mounted in front of the lens.33 The measurement resolution of this method is mainly limited by the measurement field of view and the imaging resolution. For long-span bridges, multiple single-camera systems can be used for segmental measurement to ensure the measurement resolution. Figure 8 shows multiple single-camera 2D measurement systems measuring the overall deflection of a long-span bridge in Jiangxi Province.

Fig. 8

Deflection measurement of long-span bridge using multicamera 2D measuring systems.


In the field measurement of bridge deflection, it is difficult to ensure that the optical axis of the camera is perpendicular to the side surface of the bridge to be measured. Due to the particularity of bridge deflection measurement (only the vertical displacement needs to be measured), the yaw angle has no influence on the measured results, and the influence of the roll angle can be eliminated by calculating the total displacement of the pixels. Therefore, only the influence of the pitch angle needs to be considered. The correction is given as follows:49

Eq. (11)

$$V \approx \frac{L}{\sqrt{\left[(x - x_c)^2 + (y - y_c)^2\right] l_{ps}^2 + f^2}} \cdot \frac{v\, l_{ps}}{\cos\beta},$$

where $\beta$ is the pitch angle of the camera; (x, y) are the coordinates of the pixel point; $(x_c, y_c)$ are the coordinates of the principal point; L is the distance between the camera and the side surface of the bridge to be measured; f is the focal length of the lens; v is the pixel displacement; $l_{ps}$ is the actual physical size of a single pixel; and V is the deflection of the point to be measured. Since the horizontal displacement of the measured point on the bridge is a higher-order small quantity relative to the vertical displacement, the pixel displacement is assumed to be caused only by the bridge deflection.
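The following is a direct transcription of Eq. (11) as reconstructed above, assuming consistent metric units for the focal length, pixel size, and object distance; the function name is illustrative.

```python
import numpy as np

def deflection_from_pixel(v, x, y, xc, yc, f, L, lps, beta):
    """Pitch-angle-corrected deflection per Eq. (11).

    v      -- vertical image displacement of the point (pixels)
    (x, y) -- pixel coordinates of the point; (xc, yc) -- principal point
    f      -- focal length (m); L -- camera-to-bridge distance (m)
    lps    -- physical size of a single pixel (m); beta -- pitch angle (rad)
    """
    r2 = (x - xc) ** 2 + (y - yc) ** 2
    scale = L / np.sqrt(r2 * lps ** 2 + f ** 2)  # ray-dependent image scale
    return scale * v * lps / np.cos(beta)
```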

Although the single-camera two-dimensional (2D) deflection measurement method is simple, practical, and quick, its application in a large field of view is limited when there are multiple measurement points in the field of view or when full-field measurement is required. There are two reasons for this. First, most off-axis single-camera measurement methods can only calculate a limited number of points and thus cannot measure the deflection of the entire bridge. Second, the object distance of a single point measured by a laser rangefinder cannot be applied to the whole field of view, so point-by-point distance measurements are required to obtain accurate deflection information; the preparation work is therefore tedious when there are many measuring points. Although Tian et al.50 proposed a full-field deflection measurement method under the essential assumption that all points in the region of interest (ROI) of the bridge span lie on a spatial straight line, the measurement area can only be limited to a narrow band.

To solve the above problems, a multipoint single-camera bridge deflection measurement method with self-calibration of full-field scale information30,51 was proposed, as shown in Fig. 9. Assuming that the intrinsic parameters of the camera have been calibrated in advance,30,31 the camera shoots the same area to be measured at multiple calibration positions to collect calibration images, and at the measuring position to collect the reference image and deformation images. Similar to the principle of extrinsic parameter calibration of two cameras in Sec. 2.1.3, self-calibration can be accomplished by matching the corresponding fixed points on the reference image and the calibration images. Then, the relative extrinsic parameters between the camera coordinate system at the measurement position and the world coordinate system of the bridge surface can be solved. The scale factor representing the scale information of the translation vector can be obtained from two points with a known distance in the field of view or by measuring the distance from the camera to a point on the bridge with a laser rangefinder. The intrinsic and extrinsic parameters can then be used to calculate the mapping between the image coordinate system and the world coordinate system of the bridge. Finally, the full-field deflections of the bridge can be calculated from the changes in the image coordinates of the measuring points and the solved mapping function.

Fig. 9

Camera position arrangement of the multipoint single-camera bridge deflection measurement method with self-calibration of full-field scale information.51


3.2.

Three-Dimensional Measurement with Dual Cameras

Based on the principle of stereo vision measurement, two cameras shooting from different angles can be used to measure the 3D displacement of the object surface. The biggest difference between bridge deflection measurement and traditional binocular 3D measurement is the field of view to be measured. The calibration of the dual-camera system cannot be carried out with the traditional planar calibration method because the large bridge structure results in a large field of view. To calibrate the 3D system over a large field of view, the extrinsic parameter calibration method based on the epipolar constraint in Sec. 2.1.3 can be used once the intrinsic parameters have been calibrated. By using monocular photogrammetry to reconstruct marker points installed on a wall, the large wall can be used directly as a calibration board for camera intrinsic parameter calibration.30,31 As shown in Fig. 10, for camera extrinsic parameter calibration during field measurement, considering the geometric characteristics of the bridge, marker points carried by unmanned aerial vehicles (UAVs) can even be used as control points in the calibration process if the control point information in the field of view is insufficient.52

Fig. 10

Binocular vision 3D deformation measurement with large field of view.52


For 3D measurement with two cameras, it is necessary to align the vertical axis of the 3D reconstruction coordinate system with the direction of the bridge deflection, so that the measured vertical displacement is the deflection value of the bridge. As with 2D measurement using a single camera, the measurement resolution of 3D measurement with two cameras is mainly limited by the field of view to be measured and the imaging resolution, and the resolution is usually low for deflection measurement under a large field of view. Compared with 2D measurement using a single camera, 3D measurement with two cameras can measure the displacements in three directions, which makes the measurement results more informative.

3.3.

Quasistatic Deflection Measurement Based on Monocular Photogrammetry

Based on the principle of monocular photogrammetry, the deflection of multiple points at different moments can also be measured by installing marker points on the bridge and carrying out 3D reconstruction of the marker points before and after the bridge deforms,53 as shown in Fig. 11. The method requires an undeformed reference coordinate system in which the 3D coordinates at different moments are established, so as to achieve high-accuracy deflection measurement. For convenience, the reference coordinate system can be built on a pier, with the vertical axis of the coordinate system aligned with the direction of the bridge deflection. At the same time, it is necessary to place calibration scale bars in the field of view to determine the measurement scale information. This method has the advantages of simple equipment, multipoint measurement, and high resolution, but it cannot realize dynamic deflection measurement of the bridge.

Fig. 11

Quasistatic deflection measurement based on close-range photogrammetry.48


3.4.

Multipoint Dynamic Measurement Based on the Displacement-Relay Videometrics

To realize multipoint, high-resolution, and dynamic measurement of bridge deflection, Yu et al.54 proposed a measurement method called displacement-relay videometrics with a series camera network. Figure 12 shows the system configuration of this measurement method, where C is a double-headed camera; M and S are cooperative marker points; and the subscripts 0, i, and n number the units from left to right. Assuming that h is the vertical displacement of a cooperative marker point in the image, k is the magnification, $\Delta y$ is the vertical displacement of a cooperative marker point or double-headed camera, d is the distance between the camera and the cooperative marker point, and $\theta_{C_i}$ is the change in the pitch angle of the double-headed camera numbered $C_i$, the following equations can be written considering only vertical displacements and pitch angles:

Eq. (12)

$$\begin{cases} h_{M_i C_i} = k_{M_i C_i} \left( \Delta y_{M_i} - \Delta y_{C_i} - d_{M_i C_i} \sin\theta_{C_i} \right), \\ h_{S_i C_i} = k_{S_i C_i} \left( \Delta y_{S_i} - \Delta y_{C_i} - d_{S_i C_i} \sin\theta_{C_i} \right), \\ h_{M_{i+1} C_i} = k_{M_{i+1} C_i} \left( \Delta y_{M_{i+1}} - \Delta y_{C_i} - d_{M_{i+1} C_i} \sin\theta_{C_i} \right), \\ h_{S_{i+1} C_i} = k_{S_{i+1} C_i} \left( \Delta y_{S_{i+1}} - \Delta y_{C_i} - d_{S_{i+1} C_i} \sin\theta_{C_i} \right), \end{cases}$$

where h is obtained from the change in the image coordinates of the marker points. According to these formulas, 4n equations can be listed, and there are 4n + 2 unknowns. If two control points in the series network are strictly stable or have known subsidence, the vertical displacements of the 2n + 2 marker points and the n double-headed cameras, together with the pitch-angle changes of the n double-headed cameras, can be calculated exactly by principal component analysis.55,56
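To make the solution procedure concrete, the sketch below assembles Eq. (12) for a chain of n units into a linear least-squares problem; the marker indexing and the way the two control-point constraints are imposed are simplifying assumptions, and Refs. 55 and 56 solve the network by principal component analysis instead.

```python
import numpy as np

def solve_relay(h, k, d, n, fixed):
    """Least-squares solution of the relay network of Eq. (12).

    h, k, d -- (n, 4) arrays: image displacements h, magnifications k, and
               camera-marker distances d, columns (M_i, S_i, M_{i+1}, S_{i+1})
    fixed   -- [(marker_index, known_dy), ...]; e.g., [(0, 0.0), (n, 0.0)]
               if markers M_0 and M_n sit on stable piers
    Unknowns: dy of markers M_0..M_n, S_0..S_n, then dy and sin(theta) of
              cameras C_0..C_{n-1} (4n + 2 unknowns in total).
    """
    n_mark = 2 * (n + 1)
    n_unk = n_mark + 2 * n
    rows, rhs = [], []
    for i in range(n):
        # Camera C_i observes M_i, S_i, M_{i+1}, S_{i+1}
        for slot, m in enumerate((i, n + 1 + i, i + 1, n + 2 + i)):
            row = np.zeros(n_unk)
            row[m] = k[i, slot]                             # marker dy term
            row[n_mark + i] = -k[i, slot]                   # camera dy term
            row[n_mark + n + i] = -k[i, slot] * d[i, slot]  # sin(theta) term
            rows.append(row)
            rhs.append(h[i, slot])
    for idx, value in fixed:                                # control points
        row = np.zeros(n_unk)
        row[idx] = 1.0
        rows.append(row)
        rhs.append(value)
    sol, *_ = np.linalg.lstsq(np.asarray(rows), np.asarray(rhs), rcond=None)
    return sol  # [marker dys, camera dys, camera sin(theta)s]
```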

Fig. 12

System configuration of the displacement-relay videometrics with series camera network.50


Figure 13 shows the dynamic bridge deflection measurement system developed by our research group based on the theory of displacement-relay videometrics with a series camera network, which has been applied to real-time measurement of the multipoint dynamic deflection of the Nanjing Yangtze River Bridge. Figures 13(b) and 13(c) show the measured deflection of a span on the north side of the bridge. The total length of the span is 128 m, with nine measuring points arranged at 16-m intervals. The experimental results show that the system performs well in measuring the real-time deflection of the bridge when a train passes, and the field measurement system noise is <0.5 mm. The system has great application prospects in multipoint dynamic measurement of bridge deflection owing to its high resolution and dynamic measurement capability.

Fig. 13

Deflection measurement of the Yangtze River bridge with a series camera network: (a) experimental site, (b) measured deflection as a train passes at 14:44, and (c) measured deflection as a train passes at 15:18.


3.5.

Deflection Measurement Based on UAV Platform

Although vision-based bridge deflection measurement methods can achieve long-distance measurement, they still require a fixed platform on which to place the camera. In addition, if the fixed camera is too far from the target, the measurement accuracy will be greatly affected by atmospheric disturbance and camera vibration. It is therefore very difficult to find a suitable place to fix the camera for bridges that cross rivers and valleys.

To overcome this shortcoming, deflection measurement methods based on a UAV platform equipped with a camera were proposed to improve the flexibility and applicability of vision-based bridge deflection measurement. Deflection measurement based on a UAV platform uses the same measurement principle as that based on a fixed platform. The key difference is that the former needs to eliminate the influence of UAV position and attitude errors, because the UAV is not stationary during the measurement.

The commonly used approach is to estimate the motion of the camera on the UAV from a fixed reference target. Yoon et al.57 estimated the camera motion by tracking fixed artificial feature points in the background to recover the absolute structural displacement. Perry and Guo58 integrated optical and IR cameras with a UAV platform to measure three-component dynamic structural displacement, again using a fixed reference target and reference plane in the background. Chen et al.59 achieved geometric correction of images by establishing the planar homography between the reference image and the image to be corrected using fixed points, so as to obtain the real displacements of a bridge model. Besides, Wu et al.60 took four corner points on a fixed object plane as reference points, estimated the projection matrix between the bridge plane and each frame's image plane through UAV camera calibration, and then recovered the 3D world coordinates of the target points on the bridge model.

However, the above methods require the measured target and the reference target to be visible in the same field of view. Therefore, the field of view must be enlarged when measuring long-span bridges, which reduces the resolution of the target in the image and thus increases the measurement error. Zhuge et al.61 developed a noncontact deflection measurement for bridges using a multi-UAV system. In this method, multiple UAVs equipped with cameras observe the position to be measured and a fixed position on the bridge, respectively. According to the collinearity of the spots projected onto the planes by a coplanar laser designator, as shown in Fig. 14, the motion of the UAVs can be eliminated and the vertical displacement of the measured position relative to the bridge pier can be calculated. The change of deflection $\Delta h_i^{i+1}$ from $t_i$ to $t_{i+1}$ can be expressed as

Eq. (13)

$$\Delta h_i^{i+1} = Y_C^{i+1} - Y_C^{i},$$

Eq. (14)

$$Y_C^{i} = \frac{y_B^{i} + Y_B - y_A^{i}}{x_B^{i} + X_B - x_A^{i}} \left( x_C^{i} + X_C - x_A^{i} \right) + y_A^{i} - y_C^{i},$$

Eq. (15)

$$Y_C^{i+1} = \frac{y_B^{i+1} + Y_B - y_A^{i+1}}{x_B^{i+1} + X_B - x_A^{i+1}} \left( x_C^{i+1} + X_C - x_A^{i+1} \right) + y_A^{i+1} - y_C^{i+1},$$

where coordinate system A ($\mathrm{CS}_A$) is set as the coordinate system of the bridge ($\mathrm{CS}_{\mathrm{bridge}}$). The vertical displacement of $\mathrm{CS}_C$ relative to $\mathrm{CS}_{\mathrm{bridge}}$ is $Y_C^{i}$ at $t_i$ and changes to $Y_C^{i+1}$ at $t_{i+1}$. The coordinates of the laser spots in the local coordinate systems at $t_i$ are $(x_A^{i}, y_A^{i})$, $(x_B^{i}, y_B^{i})$, and $(x_C^{i}, y_C^{i})$. The offsets of $\mathrm{CS}_B$ and $\mathrm{CS}_C$ relative to $\mathrm{CS}_A$ are $(X_B, Y_B)$ and $(X_C, Y_C)$.
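A minimal transcription of Eqs. (13)–(15) as reconstructed above; the argument names are illustrative.

```python
def vertical_offset(A, B, C, XB, YB, XC):
    """Vertical offset Y_C of frame C relative to the bridge frame, Eq. (14).

    A, B, C  -- (x, y) laser-spot coordinates in the local frames A, B, C
    (XB, YB) -- known offset of frame B relative to frame A (bridge frame)
    XC       -- known horizontal offset of frame C relative to frame A
    """
    (xA, yA), (xB, yB), (xC, yC) = A, B, C
    slope = (yB + YB - yA) / (xB + XB - xA)  # line through spots seen at A, B
    return slope * (xC + XC - xA) + yA - yC

# Deflection change between t_i and t_{i+1}, per Eq. (13):
# dh = (vertical_offset(A1, B1, C1, XB, YB, XC)
#       - vertical_offset(A0, B0, C0, XB, YB, XC))
```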

Fig. 14

Deflection estimation for multi-UAV system.61


3.6.

Advantages and Limitations of Different Measurement Methods

Table 2 shows the advantages and limitations of the different measurement methods; each method has its pros and cons, and different methods can be selected according to the application scenario. For quasistatic deflection measurement, the method based on monocular photogrammetry is highly recommended. For long-term, high-resolution dynamic monitoring, the method based on displacement-relay videometrics should be chosen. For short-term inspection, the 2D measurement method using a single camera is more flexible. If the multidimensional motion caused by wind load is of interest, the 3D measurement with dual cameras may be more helpful.

Table 2

Advantages and limitations of different measurement methods.

| Measuring method | Advantages | Limitations |
| --- | --- | --- |
| 2D measurement with a single camera | Simple setup; easy calibration; low cost; dynamic measurement | Local scale calibration; measurement resolution depends on the field of view |
| 3D measurement with dual cameras | Full-field scale calibration; 3D information; dynamic measurement | Camera synchronization; complex calibration; measurement resolution depends on the field of view |
| Quasistatic deflection measurement based on monocular photogrammetry | Simple setup; easy calibration; low cost | Cannot be applied to dynamic measurement |
| Multipoint dynamic measurement based on displacement-relay videometrics | Easy calibration; high resolution; dynamic measurement | Relatively high cost; needs to be set up on the bridge |
| Deflection measurement based on UAV platform | Flexibility; high resolution; dynamic measurement | Relatively high cost; cannot be applied to long-term measurement (limited power supply) |

4.

Influencing Factors

4.1.

Camera Factors

The errors caused by cameras mainly arise from two aspects. (1) Image noise. The camera produces noise in the process of transforming optical signals into electrical signals and forming images. To reduce this error, the following methods can be adopted: first, select a camera with a high signal-to-noise ratio for image acquisition; second, average multiple images to reduce the displacement measurement error caused by image noise; third, select reasonable calculation parameters to resist the influence of noise.62 (2) Camera self-heating. The temperature of the electronics inside the camera rises during operation, which causes a slight change in image distance and leads to virtual deformation. Ma et al.63,64 studied the systematic errors of DIC caused by camera self-heating. Several techniques can be used to eliminate this error: first, preheat the camera for 1 to 2 h before measurement to reach the thermal-balance stage; second, if preheating is impossible, use the corresponding temperature-versus-strain influence curve to correct the measured results; third, simultaneously observe a fixed point without thermal deformation near the measuring area for temperature compensation.

4.2.

Calibration Factors

The errors caused by camera calibration factors mainly come from two aspects. (1) Lens distortion. For 2D and 3D measurement, uncalibrated lens distortion will cause measurement error. Distortion coefficients can be introduced during lens distortion calibration to correct the points in the image coordinate system and reduce this error.18,19 (2) Failure of the camera calibration parameters caused by camera motion. After extrinsic parameter calibration, the calibrated extrinsic parameters are usually used directly for subsequent measurement, that is, they are assumed to remain unchanged during the measurement process. However, in field measurement, the camera itself is affected by wind or ground vibration, which invalidates the extrinsic parameters. The general solution is to observe fixed points (such as piers) while ensuring that the target area remains within the camera's field of view; the influence of camera motion can then be eliminated by subtracting the displacement of the measured fixed point from the displacement of the points in the area to be measured.65 Besides, it is also common to calculate displacements from images averaged over redundant measurements,66,67 but this cannot be used for dynamic measurement. For the relative extrinsic parameters of the two cameras in a binocular system, the method in Sec. 2.1.3 can be used to calculate the real-time extrinsic parameters.
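As a tiny illustration of this fixed-point compensation, with hypothetical tracking outputs:

```python
import numpy as np

# Hypothetical per-frame vertical displacements (pixels) from tracking:
# 'target' lies on the girder; 'fixed' lies on a pier assumed motionless,
# so its apparent motion is attributed to camera motion.
target = np.array([0.12, 0.48, 1.35, 2.10])
fixed = np.array([0.10, 0.09, 0.11, 0.12])

corrected = target - fixed  # camera-motion-compensated displacement (pixels)
```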

4.3.

Algorithm Factors

The influence of algorithm factors mainly refers to errors in the feature matching process, which include two aspects. (1) Feature matching algorithm. Different feature matching algorithms have different accuracies. Compared with matching algorithms based on feature points, subpixel matching algorithms based on gray features can usually achieve higher accuracy, but the former perform better in estimating initial values under large deformation and large rotation. Therefore, the result of feature point matching can be taken as the initial value for template matching in DIC.68 (2) Interpolation error of the gray-level-based subpixel matching algorithm. The following methods can be used to reduce this error: higher-order interpolation,69 image prefiltering,70 or other interpolation-error elimination algorithms.71–73

4.4.

Environmental Factors

The most difficult challenge in field measurement of bridge deflection is the influence of environmental factors, which mainly come from five aspects. (1) Environmental temperature. According to the analysis of 2-h deflection measurement results in Ref. 58, the environmental temperature has little influence on the deflection measurement values over a short time and can basically be ignored. However, through more than half a year of intermittent measurement, Zhou et al.74 found that the displacement error caused by environmental temperature fluctuated daily and showed a cumulative trend over time. (2) Heat-flow disturbance. The air between the camera and the target flows because of uneven temperature, which distorts the image acquired by the camera. To maintain the sharpness of the original pixel grid after averaging multiple images, Joshi and Cohen75 proposed a novel local weighted averaging method based on ideas from "lucky imaging" that minimizes blur, resampling, and alignment errors, as well as the effects of sensor dust. Anantrasirichai et al.76 extracted accurate detail about objects behind the distorting layer by selecting informative ROIs only from good-quality frames and solved the space-varying distortion problem using region-level fusion based on the dual-tree complex wavelet transform (DT-CWT). Luo and Feng77 filtered the heat-haze distortion by establishing a distortion basis to match the most similar sample image in terms of the shortest Euclidean distance. (3) Influence of rain, snow, and fog. If there is rain, snow, or fog between the camera and the measured object during measurement, the image will appear blurred or even erroneous, which makes feature detection and matching difficult.78 (4) Luminance fluctuation. Luminance fluctuation of the measuring environment affects the quality of the collected images and the results of feature detection and matching; it can be reduced by using a feature detection algorithm with brightness invariance49 or by adding a light source.47 (5) Influence of strong wind or ground vibration. In outdoor measurement, where the object distance between the camera and the bridge is often large, the shaking and swaying of the camera caused by wind and ground vibration often result in large errors because of the optical-lever effect.

5.

Applications and Accuracy

With improvements in accuracy and computational efficiency and the development of measuring equipment, vision-based bridge deflection measurement methods perform well in the static and dynamic deflection measurement of various bridges, and their measurement distance is constantly increasing. In addition, based on the measured deflection data, the strain of the bridge surface can also be obtained and the dynamic parameters of the bridge can be further identified, which can be used for structural health evaluation. The following describes the application and analysis of vision-based methods at different measuring distances. To facilitate classification, measuring distances within 10 m, between 10 and 100 m, and above 100 m are called short distance, medium distance, and long distance, respectively.

5.1.

Short Distance Measurement

Dhanasekar et al.79 measured the deflections and strains of two aged masonry arch bridges with internal span lengths of 7.85 and 13.11 m, respectively. The distances from the camera to three key regions (crown, support, and quarter-point) were about 4 m. With a noise amplitude of 0.05 pixels and a corresponding measurement uncertainty of ±0.025 mm after applying the Savitzky–Golay filter, the measured maximum deflection and strain were 0.5 mm and 110 microstrain, respectively, which were validated through a 3D finite-element model.

Ngeljaratan and Moustafa80 used two cameras to monitor the 3D displacement of targets (circular black-and-white stickers) attached to a 27-m footbridge under pedestrian dynamic loads at a 100-Hz sampling rate; the cameras were located about 7.9 m from the targets and separated by 1.56 m. Figure 15 shows the monitored footbridge and the locations of the monitoring equipment, sensors, and pedestrian loads. Experimental results showed that bridge displacements of less than 0.1 in. (2.54 mm) under pedestrian impact load could be captured. Based on these data, the vibration frequencies of the full bridge were determined and compared well with the values from a SAP2000 analytical model.

Fig. 15

View of the monitored footbridge and locations of monitoring equipment, sensor locations, and pedestrian loads.73


Jáuregui et al.81 measured vertical deflections of bridges using digital close-range terrestrial photogrammetry in a laboratory and in two field experiments. Results from laboratory testing of a steel beam showed an accuracy ranging from 0.51 to 1.3 mm. Field evaluation of a prestressed concrete bridge showed an average difference of 3.2 mm compared with elevation measurements made with a total station. Based on the conventional control-point method used by Jáuregui, Jiang53 proposed the refined distance constraint (RDC) approach to make the measurement more convenient for engineers. Compared with the laboratory and field measurement results of a dial gauge and a differential level, the proposed method differed within 1 mm from the gauge measurements and within 2 mm from the level readings, respectively.

5.2.

Medium Distance Measurement

Pan et al.46 measured the deflections of the midspan point and a point near the pier of a 60-m three-span railway bridge at object distances of 22.825 and 22.438 m, respectively, when a freight train passed at a speed of 80 km/h. The results show that the average deflection and amplitude of the former point were 4.2 and 3 mm, respectively, while those of the latter point were 0.9 and 0.75 mm, respectively. Moreover, Fourier analysis of the vertical displacements of the two points indicates that the first-order natural frequency of the test bridge is 1.0250 Hz, which is equal to that measured by LVDTs.

Alipour et al.82 measured the midspan deflection of the Hampton Roads Bridge–Tunnel with a low-cost consumer-grade imaging system mounted at the pier cap directly beneath each girder. The span under study was 22.86-m long and consisted of seven girders. The average accuracy was consistently <0.2  mm by using the targets speckled with a random dot pattern for the correlation analysis.

Lee et al.83 conducted a feasibility test on a 120-m-long pedestrian suspension bridge with stiffened steel girders, as shown in Fig. 16, to check the applicability of the vision-based method to a suspension bridge. The distance between the camera, placed on the ground near an abutment, and the target, placed at the center point of the midspan, is about 70 m. Compared with the first-order natural frequency of 1.83 Hz measured by an accelerometer, the first-order natural frequency obtained by image processing was 1.82 Hz.

Fig. 16

A pedestrian suspension bridge.75


5.3.

Long Distance Measurement

Tian et al.47 measured the deflection-time curves of six measurement points on the Wuhan Yangtze River Bridge under static loading by using actively illuminated LED targets. With a minimum distance of 107.3 m and a maximum distance of 288.9 m between the camera sensor and the LED targets, the mean errors fluctuated randomly around zero, but the standard deviation of the errors increased with measuring distance, reaching 0.57 mm at the maximum distance.

Fukuda et al.43 monitored the deflection of the Vincent Thomas Bridge, a 1500-ft-long suspension bridge, by tracking existing features without using a target panel. The cameras were placed at a stationary location >300 m from the midspan of the main span, as shown in Fig. 17. Experiments showed that the average standard deviation between the displacements measured with and without the target panel was 6 mm, and the dominant frequency calculated from the measured displacements is consistent with the bridge's fundamental frequency measured by accelerometers installed on the bridge.

Fig. 17

Field test at long-span bridge. (a) Vincent Thomas Bridge (Los Angeles, CA). (b) Satellite image of field test and position relation of vision-based system and target position to be measured.40


6.

Conclusion

Starting from the necessity of bridge deflection measurement, this paper introduces the principles of vision-based deflection measurement, including camera calibration, 3D stereo vision, monocular photogrammetry, and feature detection and matching. Besides, the paper analyzes the advantages and disadvantages of single-camera 2D measurement, dual-camera 3D measurement, quasistatic measurement based on photogrammetry, multipoint dynamic measurement based on displacement-relay videometrics with a series camera network, and deflection measurement based on a UAV platform. Moreover, the paper expounds the influencing factors and applications of bridge deflection measurement and summarizes the research results of relevant scholars. We hope that this study offers some reference value to the optics and civil engineering communities in selecting a proper bridge deflection measurement method for a given application.

Vision-based bridge deflection measurement is still not fully mature. How to reduce the errors caused by the various influencing factors is the main direction of future research. How to improve the computation rate while ensuring accuracy, so as to realize real-time monitoring in engineering, is also a focus of subsequent research. By reducing measurement errors caused by various influencing factors and further improving measurement accuracy and efficiency, long-term vision-based bridge deflection monitoring will play an increasingly significant role in bridge health monitoring.

Acknowledgments

This work was supported by the National Natural Science Foundation of China (under Grant Nos. 11827801 and 11332012).

References

1. P. Paultre, J. Proulx and M. Talbot, "Dynamic testing procedures for highway bridges using traffic loads," J. Struct. Eng., 121(2), 362–376 (1995). https://doi.org/10.1061/(ASCE)0733-9445(1995)121:2(362)

2. K. Park et al., "The determination of bridge displacement using measured acceleration," Eng. Struct., 27(3), 371–378 (2004). https://doi.org/10.1016/j.engstruct.2004.10.013

3. H. Nassif, M. Gindy and J. Davis, "Comparison of laser Doppler vibrometer with contact sensors for monitoring bridge deflection and vibration," NDT & E Int., 38(3), 213–218 (2004). https://doi.org/10.1016/j.ndteint.2004.06.012

4. H. Xia et al., "Experimental analysis of a high-speed railway bridge under Thalys trains," J. Sound Vibr., 268(1), 103–113 (2003). https://doi.org/10.1016/S0022-460X(03)00202-5

5. C. J. Brown et al., "Monitoring of structures using the global positioning system," Proc. Inst. Civil Eng.-Struct. Build., 134(1), 97–105 (1999). https://doi.org/10.1680/istbu.1999.31257

6. T. Yi, H. Li and M. Gu, "Recent research and applications of GPS based technology for bridge health monitoring," Sci. China Technol. Sci., 53(10), 2597–2610 (2010). https://doi.org/10.1007/s11431-010-4076-3

7. M. Pieraccini et al., "Static and dynamic testing of bridges through microwave interferometry," NDT & E Int., 40(3), 208–214 (2007). https://doi.org/10.1016/j.ndteint.2006.10.007

8. Q. Yu and Y. Shang, Image-Based Precise Measurement and Motion Measurement, Science Press, Beijing (2009).

9. Y. Xu and J. M. W. Brownjohn, "Review of machine-vision based methodologies for displacement measurement in civil structures," J. Civil Struct. Health Monit., 8(1), 91–110 (2018). https://doi.org/10.1007/s13349-017-0261-4

10. X. W. Ye, C. Z. Dong and T. Liu, "A review of machine vision-based structural health monitoring: methodologies and applications," J. Sens., 2016, 7103039 (2016). https://doi.org/10.1155/2016/7103039

11. D. Feng and M. Q. Feng, "Computer vision for SHM of civil infrastructure: from dynamic response measurement to damage detection–a review," Eng. Struct., 156, 105–117 (2018). https://doi.org/10.1016/j.engstruct.2017.11.018

12. R. Jiang, D. V. Jáuregui and K. R. White, "Close-range photogrammetry applications in bridge measurement: literature review," Measurement, 41(8), 823–834 (2008). https://doi.org/10.1016/j.measurement.2007.12.005

13. M. Fazzini et al., "Study of image characteristics on digital image correlation error assessment," Opt. Lasers Eng., 48(3), 335–339 (2010). https://doi.org/10.1016/j.optlaseng.2009.10.012

14. T. Siebert et al., "High-speed digital image correlation: error estimations and applications," Opt. Eng., 46(5), 051004 (2007). https://doi.org/10.1117/1.2741217

15. P. L. Reu, "A study of the influence of calibration uncertainty on the global uncertainty for digital image correlation using a Monte Carlo approach," Exp. Mech., 53(9), 1661–1680 (2013). https://doi.org/10.1007/s11340-013-9746-1

16. R. Balcaen et al., "Influence of camera rotation on stereo-DIC and compensation methods," Exp. Mech., 58(7), 1101–1114 (2018). https://doi.org/10.1007/s11340-017-0368-x

17. B. Pan et al., "Performance of sub-pixel registration algorithms in digital image correlation," Meas. Sci. Technol., 17(6), 1615–1621 (2006). https://doi.org/10.1088/0957-0233/17/6/045

18. J. Weng and P. Cohen, "Camera calibration with distortion models and accuracy evaluation," IEEE Trans. Pattern Anal. Mach. Intell., 14(10), 965–980 (1992). https://doi.org/10.1109/34.159901

19. J. Wang et al., "A new calibration model of camera lens distortion," Pattern Recognit., 41(2), 607–615 (2008). https://doi.org/10.1016/j.patcog.2007.06.012

20. Z. Zhuang et al., "A single-image linear calibration method for camera," Measurement, 130, 298–305 (2018). https://doi.org/10.1016/j.measurement.2018.07.085

21. R. Galego et al., "Uncertainty analysis of the DLT-Lines calibration algorithm for cameras with radial distortion," Comput. Vision Image Understanding, 140, 115–126 (2015). https://doi.org/10.1016/j.cviu.2015.05.015

22. F. Devernay and O. Faugeras, "Straight lines have to be straight," Mach. Vision Appl., 13(1), 14–24 (2001). https://doi.org/10.1007/PL00013269

23. M. Ahmed and A. Farag, "Nonmetric calibration of camera lens distortion: differential methods and robust estimation," IEEE Trans. Image Process., 14(8), 1215–1230 (2005). https://doi.org/10.1109/TIP.2005.846025

24. M. A. Sutton, J. J. Orteu and H. Schreier, Image Correlation for Shape, Motion and Deformation Measurements: Basic Concepts, Theory and Applications, pp. 27–28, Springer Science & Business Media, New York (2009).

25. Z. Zhang, "A flexible new technique for camera calibration," IEEE Trans. Pattern Anal. Mach. Intell., 22(11), 1330–1334 (2000). https://doi.org/10.1109/34.888718

26. E. E. Hemayed, "A survey of camera self-calibration," in Proc. IEEE Conf. Adv. Video and Signal Based Surveill., 351–357 (2003). https://doi.org/10.1109/AVSS.2003.1217942

27. Q. Sun et al., "Camera self-calibration with lens distortion," Optik, 127(10), 4506–4513 (2016). https://doi.org/10.1016/j.ijleo.2016.01.123

28. W. Liu et al., "Calibration method based on the image of the absolute quadratic curve," IEEE Access, 7, 29856–29868 (2019). https://doi.org/10.1109/ACCESS.2019.2893660

29. D. Nistér, "An efficient solution to the five-point relative pose problem," IEEE Trans. Pattern Anal. Mach. Intell., 26(6), 756–770 (2004). https://doi.org/10.1109/TPAMI.2004.17

30. X. Shao et al., "Calibration of stereo-digital image correlation for deformation measurement of large engineering components," Meas. Sci. Technol., 27(12), 125010 (2016). https://doi.org/10.1088/0957-0233/27/12/125010

31. S. Dong et al., "Extrinsic calibration of a non-overlapping camera network based on close-range photogrammetry," Appl. Opt., 55(23), 6363–6370 (2016). https://doi.org/10.1364/AO.55.006363

32. S. Ri et al., "Dynamic deformation measurement by the sampling Moiré method from video recording and its application to bridge engineering," Exp. Tech., 44, 313 (2020). https://doi.org/10.1007/s40799-019-00358-4

33. C. Chen, F. Mao and J. Yu, "A digital image correlation-aided sampling Moiré method for high-accurate in-plane displacement measurements," Measurement, 182, 109699 (2021). https://doi.org/10.1016/j.measurement.2021.109699

34. W. Tong, "An evaluation of digital image correlation criteria for strain mapping applications," Strain, 41(4), 167–175 (2005). https://doi.org/10.1111/j.1475-1305.2005.00227.x

35. L. Tian et al., "Application of digital image correlation for long-distance bridge deflection measurement," Proc. SPIE, 8769, 87692V (2013). https://doi.org/10.1117/12.2020139

36. B. Pan, K. Li and W. Tong, "Fast, robust and accurate digital image correlation calculation without redundant computations," Exp. Mech., 53(7), 1277–1289 (2013). https://doi.org/10.1007/s11340-013-9717-6

37. Y. Gao et al., "High-efficiency and high-accuracy digital image correlation for three-dimensional measurement," Opt. Lasers Eng., 65, 73–80 (2015). https://doi.org/10.1016/j.optlaseng.2014.05.013

38. X. Shao, X. Dai and X. He, "Noise robustness and parallel computation of the inverse compositional Gauss–Newton algorithm in digital image correlation," Opt. Lasers Eng., 71, 9–19 (2015). https://doi.org/10.1016/j.optlaseng.2015.03.005

39. C. Harris and M. Stephens, "A combined corner and edge detector," in Alvey Vision Conf., 147–151 (1988). https://doi.org/10.5244/c.2.23

40. D. G. Lowe, "Distinctive image features from scale-invariant keypoints," Int. J. Comput. Vision, 60(2), 91–110 (2004). https://doi.org/10.1023/B:VISI.0000029664.99615.94

41. H. Bay et al., "Speeded-up robust features (SURF)," Comput. Vision Image Understanding, 110(3), 346–359 (2008). https://doi.org/10.1016/j.cviu.2007.09.014

42. R. Schnabel, R. Wahl and R. Klein, "Efficient RANSAC for point-cloud shape detection," Comput. Graphics Forum, 26(2), 214–226 (2007). https://doi.org/10.1111/j.1467-8659.2007.01016.x

43. Y. Fukuda et al., "Vision-based displacement sensor for monitoring dynamic response using robust object search algorithm," IEEE Sens. J., 13(12), 4725–4732 (2013). https://doi.org/10.1109/JSEN.2013.2273309

44. S. Yoneyama et al., "Bridge deflection measurement using digital image correlation," Exp. Tech., 31(1), 34–40 (2007). https://doi.org/10.1111/j.1747-1567.2006.00132.x

45. M. Feng et al., "Nontarget vision sensor for remote measurement of bridge dynamic response," J. Bridge Eng., 20(12), 04015023 (2015). https://doi.org/10.1061/(ASCE)BE.1943-5592.0000747

46. B. Pan, L. Tian and X. Song, "Real-time, non-contact and targetless measurement of vertical deflection of bridges using off-axis digital image correlation," NDT & E Int., 79, 73–80 (2016). https://doi.org/10.1016/j.ndteint.2015.12.006

47. L. Tian and B. Pan, "Remote bridge deflection measurement using an advanced video deflectometer and actively illuminated LED targets," Sensors (Basel), 16(9), 1344 (2016). https://doi.org/10.3390/s16091344

48. S. Yu, J. Zhang and X. He, "An advanced vision-based deformation measurement method and application on a long-span cable-stayed bridge," Meas. Sci. Technol., 31, 065201 (2020). https://doi.org/10.1088/1361-6501/ab72c8

49. F. Ullah and S. Kaneko, "Using orientation codes for rotation-invariant template matching," Pattern Recognit., 37(2), 201–209 (2004). https://doi.org/10.1016/S0031-3203(03)00184-5

50. L. Tian et al., "Full-field bridge deflection monitoring with off-axis digital image correlation," Sensors, 21(15), 5058 (2021). https://doi.org/10.3390/s21155058

51. J. Huang et al., "Multi-point single-camera bridge deflection measurement method with self-calibration of full-field scale information,"

52. W. Feng et al., "Unmanned aerial vehicle-aided stereo camera calibration for outdoor applications," Opt. Eng., 59(1), 014110 (2020). https://doi.org/10.1117/1.OE.59.1.014110

53. R. Jiang and D. Jauregui, "Development of a digital close-range photogrammetric bridge deflection measurement system," Measurement, 43, 1431–1438 (2010). https://doi.org/10.1016/j.measurement.2010.08.015

54. Q. Yu et al., "A displacement-relay videometric method for surface subsidence surveillance in unstable areas," Sci. China Technol. Sci., 58(6), 1105–1111 (2015). https://doi.org/10.1007/s11431-015-5811-6

55. H. Gao et al., "Robust principal component analysis-based four-dimensional computed tomography," Phys. Med. Biol., 56(11), 3181 (2011). https://doi.org/10.1088/0031-9155/56/11/002

56. E. J. Candès et al., "Robust principal component analysis?," J. ACM, 58(3), 1–37 (2011). https://doi.org/10.1145/1970392.1970395

57. H. Yoon, J. Shin and B. F. Spencer Jr., "Structural displacement measurement using an unmanned aerial system," Comput.-Aided Civil Infrastruct. Eng., 33(3), 183–192 (2018). https://doi.org/10.1111/mice.12338

58. B. J. Perry and Y. Guo, "A portable three-component displacement measurement technique using an unmanned aerial vehicle (UAV) and computer vision: a proof of concept," Measurement, 176, 109222 (2021). https://doi.org/10.1016/j.measurement.2021.109222

59. G. Chen et al., "Homography-based measurement of bridge vibration using UAV and DIC method," Measurement, 170, 108683 (2021). https://doi.org/10.1016/j.measurement.2020.108683

60. Z. Wu et al., "Three-dimensional reconstruction-based vibration measurement of bridge model using UAVs," Appl. Sci., 11(11), 5111 (2021). https://doi.org/10.3390/app11115111

61. S. Zhuge et al., "Noncontact deflection measurement for bridge through a multi-UAVs system," Comput.-Aided Civil Infrastruct. Eng., 37(6), 746–761 (2022). https://doi.org/10.1111/mice.12771

62. Z. Wang et al., "Statistical analysis of the effect of intensity pattern noise on the displacement measurement precision of digital image correlation using self-correlated images," Exp. Mech., 47(5), 701–707 (2007). https://doi.org/10.1007/s11340-006-9005-9

63. S. Ma, J. Pang and Q. Ma, "The systematic error in digital image correlation induced by self-heating of a digital camera," Meas. Sci. Technol., 23(2), 025403 (2012). https://doi.org/10.1088/0957-0233/23/2/025403

64. Q. Ma and S. Ma, "Experimental investigation of the systematic error on photomechanic methods induced by camera self-heating," Opt. Express, 21(6), 7686–7698 (2013). https://doi.org/10.1364/OE.21.007686

65. S. Yoneyama and H. Ueda, "Bridge deflection measurement using digital image correlation with camera movement correction," Mater. Trans., 53(2), 285–290 (2012). https://doi.org/10.2320/matertrans.I-M2011843

66. M. A. Sutton et al., "Effects of subpixel image restoration on digital correlation error estimates," Opt. Eng., 27(10), 271070 (1988). https://doi.org/10.1117/12.7976778

67. G. Vendroux and W. G. Knauss, "Submicron deformation field measurements: Part 2. Improved digital image correlation," Exp. Mech., 38(2), 86–92 (1998). https://doi.org/10.1007/BF02321649

68. Z. Wang et al., "Automated fast initial guess in digital image correlation," Strain, 50(1), 28–36 (2014). https://doi.org/10.1111/str.12063

69. H. Schreier, J. Braasch and M. Sutton, "Systematic errors in digital image correlation caused by intensity interpolation," Opt. Eng., 39(11), 2915–2921 (2000). https://doi.org/10.1117/1.1314593

70. B. Pan, "Bias error reduction of digital image correlation using Gaussian pre-filtering," Opt. Lasers Eng., 51(10), 1161–1167 (2013). https://doi.org/10.1016/j.optlaseng.2013.04.009

71. Y. Su et al., "Elimination of systematic error in digital image correlation caused by intensity interpolation by introducing position randomness to subset points," Opt. Lasers Eng., 114, 60–75 (2019). https://doi.org/10.1016/j.optlaseng.2018.10.012

72. D. Wang et al., "Bias reduction in sub-pixel image registration based on the anti-symmetric feature," Meas. Sci. Technol., 27(3), 035206 (2016). https://doi.org/10.1088/0957-0233/27/3/035206

73. W. Heng et al., "Digital image correlation with reduced bias error based on digital signal upsampling theory," Appl. Opt., 58(15), 3962–3973 (2019). https://doi.org/10.1364/AO.58.003962

74. H. Zhou et al., "Performance of videogrammetric displacement monitoring technique under varying ambient temperature," Adv. Struct. Eng., 22(16), 3371–3384 (2019). https://doi.org/10.1177/1369433218822089

75. N. Joshi and M. F. Cohen, "Seeing Mt. Rainier: lucky imaging for multi-image denoising, sharpening, and haze removal," in IEEE Int. Conf. Comput. Photogr. (ICCP), 1–8 (2010). https://doi.org/10.1109/ICCPHOT.2010.5585096

76. N. Anantrasirichai et al., "Atmospheric turbulence mitigation using complex wavelet-based fusion," IEEE Trans. Image Process., 22(6), 2398–2408 (2013). https://doi.org/10.1109/TIP.2013.2249078

77. L. Luo and M. Q. Feng, "Vision based displacement sensor with heat haze filtering capability," in Int. Workshop of Struct. Health Monit., 3255–3262 (2017).

78. X. Ye et al., "Vision-based structural displacement measurement: system performance evaluation and influence factor analysis," Measurement, 88, 372–384 (2016). https://doi.org/10.1016/j.measurement.2016.01.024

79. M. Dhanasekar et al., "Serviceability assessment of masonry arch bridges using digital image correlation," J. Bridge Eng., 24(2), 04018120 (2019). https://doi.org/10.1061/(ASCE)BE.1943-5592.0001341

80. L. Ngeljaratan and M. A. Moustafa, "Structural health monitoring and seismic response assessment of bridge structures using target-tracking digital image correlation," Eng. Struct., 213, 110551 (2020). https://doi.org/10.1016/j.engstruct.2020.110551

81. D. V. Jáuregui et al., "Noncontact photogrammetric measurement of vertical bridge deflection," J. Bridge Eng., 8(4), 212–222 (2003). https://doi.org/10.1061/(ASCE)1084-0702(2003)8:4(212)

82. M. Alipour, S. J. Washlesky and D. K. Harris, "Field deployment and laboratory evaluation of 2D digital image correlation for deflection sensing in complex environments," J. Bridge Eng., 24(4), 04019010 (2019). https://doi.org/10.1061/(ASCE)BE.1943-5592.0001363

83. J. J. Lee and M. Shinozuka, "Real-time displacement measurement of a flexible bridge using digital image processing techniques," Exp. Mech., 46(1), 105–114 (2006). https://doi.org/10.1007/s11340-006-6124-2

Biography

Xinxing Shao is an assistant professor in the School of Civil Engineering at Southeast University. His current work focuses on real-time, high-resolution, and fully automatic deformation measurement; the development of scientific instruments; and experimental fracture mechanics. He is a member of SPIE and Optica (formerly OSA) and serves as a reviewer for more than 20 international journals. He has published more than 60 journal and conference papers in the field of optical deformation measurement.

Biographies of the other authors are not available.

© 2022 Society of Photo-Optical Instrumentation Engineers (SPIE)
Jinke Huang, Xinxing Shao, Fujun Yang, Jianguo Zhu, and Xiaoyuan He "Measurement method and recent progress of vision-based deflection measurement of bridges: a technical review," Optical Engineering 61(7), 070901 (4 July 2022). https://doi.org/10.1117/1.OE.61.7.070901
Received: 9 March 2022; Accepted: 17 June 2022; Published: 4 July 2022
KEYWORDS

Bridges, Cameras, Calibration, Distortion, Imaging systems, 3D metrology, Distance measurement
