1. Introduction

All range images begin with a series of range measurements, and the quality of the range image depends on the quality of each of those measurements. The quality of a range measurement depends on measurement uncertainty and measurement resolution; however, spatial uncertainty is also strongly affected by environmental factors, such as return signal intensity and relationship to a measurement’s immediate neighbors. One or more of these factors can be expressed as a metric representing the deviation of some quality attribute associated with the measurement from a predefined standard. Spatial measurement quality represents the degree of confidence one can place in how accurately a measurement represents the position of a real surface in the environment. Laser range scanners can also provide intensity information that may be used in representing the surface, so quality attributes relating to return signal intensity are useful. In this paper, contemporary approaches to evaluating measurement quality attributes are reviewed, including measurement uncertainty, return signal intensity, range, sampling density, and relationship to neighboring points. This review focuses particularly on measurement quality metrics for ground-based laser range scanners that can be adapted for automated systems. Within this context, measurement quality metrics provide a way to direct and terminate automated scanning procedures.

Perceptual quality metrics can be either objectively or subjectively defined;1 however, only objective quality metrics are useful for automating data acquisition. For this reason, only objective quality metrics are considered here. Objective quality metrics can be further classified as referenced or unreferenced.1 Unreferenced quality metrics use no benchmark; thus, automated processes can only evaluate the change in a quality attribute in response to some action. Referenced quality metrics can be evaluated in the same manner as unreferenced quality metrics, but the size of the deviation of an attribute from a reference can be used to evaluate whether the measurement should be either retained or ignored. For this reason, referenced quality metrics are preferred for automated systems in which thousands, or even millions, of measurements may be obtained. Referenced range measurement quality metrics quantify the relationship of a quality attribute to some previously established benchmark or reference. These metrics can then be used either to compare methods or systems, or in an iterative process to maximize some qualitative attribute of a range image.2 Quality metrics appear most often in the guise of a weighting parameter when merging measurements or data sets. Two important components of a referenced quality metric are a clearly defined quality benchmark against which to compare the current state of the range image, and a quality scale to indicate the degree to which the range measurement quality attribute deviates from the benchmark.

In this paper, metrics for quantifying the quality of measurements are reviewed. For purposes of discussion, these metrics have been classified as measurement uncertainty based, signal intensity based, range based, and neighborhood based. As will be demonstrated, considerable work remains to ensure that the quality of measurements and points used to construct virtual models is effectively and comprehensively defined.
2. Spatial Resolution

The spatial resolution of a laser range scanner measurement is dependent on the size of the laser spot that illuminates the surface at the point the measurement is obtained. For pulsed laser systems, the spatial resolution is also dependent on the pulse length of the system. The spatial resolution can be divided into range resolution and angular resolution. Angular resolution is the minimum angular distance between features such that they can be resolved as separate features. Range resolution is the minimum distance between angularly resolved features such that they can be distinguished as separate features.3 The angular resolution of a laser range scanner is defined by the Rayleigh criterion4 and represents the size of the smallest feature that can be angularly resolved.3, 5

The laser projects a spot onto the surface being scanned, and the region in which the surface intersects the laser spot is referred to as the beam footprint. Features within the footprint contribute to the return signal intensity, which is used to obtain the spatial measurement that approximates the position of a portion of the surface.6 The area covered by the beam footprint is generally not measured by laser range scanners; thus, it is approximated by a model of the area of the laser spot that illuminates the surface. Ideally, the laser spot area should be the same as the beam footprint area; however, environmental factors, such as spatial discontinuities7 or dense fog,8 can result in the beam footprint deviating from that predicted by the laser spot model. Moreover, if the surface normal is assumed to be oriented along the line of sight in the laser spot model, then surface angulation can result in a discrepancy between actual and predicted beam footprint areas. Quality metrics provide a way to predict by how much the beam footprint of a measurement might deviate from that predicted by the laser spot model.

The spatial resolution of a measurement can be represented by the instantaneous resolution, which assumes the footprint is stationary at the time the measurement is acquired, or the effective resolution, which takes into account the procession of the footprint over the surface during the acquisition process. When the term resolution is used in this paper, unless otherwise stated, it refers to the instantaneous resolution. The terms footprint and laser spot are also used interchangeably in this paper, although they are strictly equivalent only when the surface is continuous within the laser spot.

3. Measurement Uncertainty-Based Metrics

Measurement uncertainty is the most common attribute used to assess measurement quality. Range measurement uncertainty is generally modeled as an independent zero-mean Gaussian process added to the quantity returned by the range sensor; that is, $\tilde{x} = x + \nu$, where $x$ is the ground truth position or surface characteristic, $\tilde{x}$ is the quantity returned by the sensor, and $\nu$ is the additive zero-mean Gaussian noise process with measurement covariance $C$. This may not always be a valid assumption; environmental effects and nonlinear bias in the sensors may cause the observed measurement distribution to become distinctly non-Gaussian. In practice, Gaussian models provide the benefit of simplifying mathematical analyses and result in an approximation of how a system should behave under a broad range of circumstances.
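To make the additive zero-mean Gaussian measurement model above concrete, the following minimal Python sketch draws noisy scanner measurements about a ground-truth point given a measurement covariance; the covariance values are illustrative assumptions, not parameters of any particular scanner.

    import numpy as np

    def simulate_measurements(x_true, cov, n=1000, seed=0):
        """Draw n noisy measurements x_tilde = x + nu, with nu ~ N(0, cov)."""
        rng = np.random.default_rng(seed)
        noise = rng.multivariate_normal(np.zeros(len(x_true)), cov, size=n)
        return x_true + noise

    # Illustrative ground-truth point (m) and measurement covariance (m^2).
    x_true = np.array([2.0, 0.5, 1.2])
    cov = np.diag([1e-4, 1e-4, 4e-4])   # assumed: larger radial (z) variance

    samples = simulate_measurements(x_true, cov)
    print("sample mean:", samples.mean(axis=0))
    print("per-axis sample variance:", samples.var(axis=0))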
Non-Gaussian models, by contrast, are highly situation dependent and are therefore rarely used for predicting measurement uncertainty.

The uncertainty associated with the range sensor is referred to here as the radial error and is one attribute that can be used to evaluate measurement quality. Rotational or translational position errors are referred to here generically as positional error and represent two more attributes that can be employed to evaluate the quality of a measurement. Figure 1 shows one example of a triangulation laser range scanner system in which the angular position of the laser spot on a surface in the environment is controlled by two rotating mirrors. Similar dual-axis optical scanning configurations are used in time-of-flight (TOF) systems and other laser range systems by combining orthogonal galvanometers, rotating mirrors, or motors. As a result, the geometrical model and measurement uncertainties can be generalized to a variety of laser range scanning systems.

3.1. Measurement Uncertainty

Measurement uncertainty is represented by a covariance matrix, generally based on a model of the root-mean-square (rms) sensor error along each axis of motion employed by the scanner and on a model of the error associated with the range sensor. Sensor variance is often based on a model of the sensor error, rather than on the spread of repeated measurements acquired in situ, because it is often not practical to obtain a large enough set of repeated measurements to derive a situation-specific variance profile. These models are generally obtained under ideal conditions for specific materials and surface orientations. As a result, there can be a significant discrepancy between the model sensor variance and what would be observed using a repeated-measures approach in the field. For example, if the variance model of a system was based on white cardboard, then the model variance would significantly underestimate the variance resulting from black felt.9 This can be a significant issue where the type of material being scanned cannot be known a priori or where the object being scanned may consist of multiple types of material. In general, measurement uncertainty cannot be considered a sufficient quality metric on its own because it depends heavily on a variety of other attributes. In the following sections, various attributes that can result in true measurement uncertainty deviating from model-based measurement uncertainty are identified.

3.2. Positional Uncertainty

Assuming a fixed-viewpoint scanner, such as the one shown in Fig. 1, the positional uncertainty is a function of the mechanisms used to control the orientation of the laser and the photosensor.10 These mechanisms are typically precision galvanometers or rotating motors, and the positional uncertainty reflects the variation in real laser/sensor orientations when the galvanometer or motor indicates that it has achieved a given angular position. In the case of fixed pattern projection systems, positioning error is often governed by the stability of the optomechanical system.
The acquisition of range and angular position measurements is generally synchronized, but synchronization errors, or jitter, can result in the true angular position differing from the angular position recorded at the instant the range measurement is acquired.11 Although the laser is often modeled as originating either from the scanner viewpoint or from a fixed point near the viewpoint, its true origin may vary depending on the scanner geometry.12 Well-calibrated laser range scanner systems account for this complexity; however, the transformation between sensor data and spherical or Cartesian coordinates can introduce errors.13, 14 As a result, rotational uncertainty may not be constant, as is often assumed. A similar situation arises for laser range scanner systems using motor-controlled rotating bases. Thermal effects, wobble and jitter, and mirror nonplanarity can also cause the final reflection point position and output orientation to deviate from a Gaussian distribution.

3.3. Radial Uncertainty

Range measurement uncertainty depends on how the interaction of the laser with the surface is measured. In TOF systems, the range is determined by the time between the pulse being generated and being detected. In triangulation systems, the range measurement depends on the position of the signal peak on a photodetector array. In both cases, a significant portion of the range measurement uncertainty is the ambiguity of the location of the signal peak. Range uncertainty is typically assumed constant for TOF scanners, as shown in Fig. 2. Specifically, $\delta_r = \frac{c}{2}\,\delta_t$, where $\delta_r$ is the range measurement error, $c$ is the speed of light, and $\delta_t$ is the time measurement error. The last term represents the uncertainty in the temporal location of the signal peak. This is found by $\delta_t \approx T_r/\mathrm{SNR}$, where $T_r$ is the pulse rise time and SNR is the signal-to-noise ratio.15, 16 The range measurement error is determined by the signal bandwidth,15 amplitude of the return signal,17 thermal drift,17, 18 crosstalk between the transmitter and receiver,18 timing jitter,19 and nonuniformities and changes in the returning signal shape.15, 18, 19 For example, different surface materials can change the shape of the return signal, resulting in significantly different error distributions.10 Moreover, feedback within the sensor can result in a measurement being affected by the previous measurement, violating the assumption that there is no correlation among range measurements.

Laser motion while the signal is being emitted is negligible for pulsed TOF systems because the pulse duration is so short, but it can affect continuously modulated laser systems. This motion can distort the return signal and introduce ambiguity into the true measurement. Consider, as well, that the range measurement equation is given by $r = \frac{c\,\tau}{2}$, where $\tau$ is the propagation delay.16, 20, 21 This assumes that the TOF between the laser and the surface is equal to the TOF between the surface and the sensor.
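As a rough numerical illustration of the pulsed-TOF relations above, the sketch below evaluates the range from the propagation delay and the peak-timing uncertainty as reconstructed in Sec. 3.3; the rise time and SNR values are arbitrary assumptions chosen only to show the orders of magnitude involved.

    C = 299_792_458.0  # speed of light (m/s)

    def tof_range(tau):
        """Range from round-trip propagation delay tau (s): r = c*tau/2."""
        return 0.5 * C * tau

    def tof_range_uncertainty(rise_time, snr):
        """Range error delta_r = (c/2)*delta_t, with delta_t ~ rise_time/SNR."""
        delta_t = rise_time / snr
        return 0.5 * C * delta_t

    # Illustrative values (assumptions): 1 ns rise time, SNR of 100.
    print("range for tau = 66.7 ns:", tof_range(66.7e-9), "m")
    print("range uncertainty:", tof_range_uncertainty(1e-9, 100.0), "m")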
The assumption of equal out-and-back propagation times may not always hold, especially if the true origin of the laser pulse varies as a function of the mirror angles.

The peak uncertainty of a triangulation scanner is typically dominated by speckle noise.22 This can be modeled as $\delta_p \approx \frac{1}{\sqrt{2\pi}}\,\frac{\lambda f_0}{D\cos\beta}$, where $\lambda$ is the laser wavelength, $f_0$ is the focal length of the receiving lens, $D$ is the diameter of the collecting lens, and $\beta$ is the Scheimpflug angle of the photodetector array.23 Speckle noise arises when speckle elements on the surface illuminated by the laser spot are large when compared to the wavelength of the laser light.22, 24 Under this assumption, each speckle element becomes a point emitter with respect to the photodetector array. Interference patterns are generated when each speckle element reflects light from the laser onto the photodetector array,22, 25 as shown in Fig. 3. There, they constructively and destructively interact to form a speckle image on the photodetector array.22, 24, 25

Speckle noise is generally countered by integrating a single measurement over several intensity samples as the laser spot is moved over the surface being scanned.26 Figure 4 illustrates the reduction in speckle after integration. This is complicated by the need to minimize aliasing by ensuring that the measurements are, where possible, separated by a distance less than the radius of the laser spot (Fig. 4).7 Similarly, the range uncertainty in an amplitude-modulation continuous-wave scan can be decreased by increasing the sampling rate.27

3.4. Environmental Effects

The mechanical effects described in Sec. 3.3 can be included in a model of expected range and rotational uncertainty; however, many environmental factors, summarized in Table 1, can cause the true measurement uncertainty to deviate from the model. For example, measurement uncertainty can increase with increasing incidence angle,28, 29, 30, 31 a reduction in surface reflectivity,10, 32 and an increase in ambient lighting.6, 33

Table 1. Environmental factors affecting measurement uncertainty.
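The following sketch evaluates the speckle-limited peak-position uncertainty for a triangulation scanner as modeled in Sec. 3.3, together with the reduction obtained by averaging several intensity samples; the optical parameters are placeholder assumptions, and the 1/sqrt(N) gain assumes the averaged samples are independent.

    import math

    def speckle_peak_sigma(wavelength, focal_length, aperture, scheimpflug_rad):
        """Speckle-limited peak-position uncertainty on the photodetector array."""
        return (wavelength * focal_length) / (
            math.sqrt(2.0 * math.pi) * aperture * math.cos(scheimpflug_rad))

    def averaged_sigma(sigma, n_samples):
        """Uncertainty after averaging n independent intensity samples."""
        return sigma / math.sqrt(n_samples)

    # Placeholder optics: 660 nm laser, 25 mm focal length, 10 mm aperture, 30 deg tilt.
    sigma = speckle_peak_sigma(660e-9, 25e-3, 10e-3, math.radians(30.0))
    print("single-sample peak uncertainty:", sigma, "m")
    print("after averaging 16 samples:", averaged_sigma(sigma, 16), "m")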
The speckle model in Sec. 3.3 assumes that the size of the spot projected onto the photodetector array has not been distorted by occlusion, surface orientation, or other environmental effects. Figure 5 shows the effect of laser spot distortion arising from a surface discontinuity.7 In this case, the discontinuity occludes part of the laser spot so that the spot centroid no longer coincides with the signal peak. This introduces an error into the horizontal location of the signal peak, denoted here as $\delta_p$. This results in a range error $\delta_z$, which is compounded by the surface orientation with respect to the direction of the laser. The deviation of the surface normal from the laser path is denoted here as $\theta$. Sudden changes in surface height are not uncommon and represent a reduction in measurement quality that is not captured by model-based measurement uncertainty.

Different surface materials can also affect the accuracy of range measurements. Figure 6 demonstrates the effect of a partially translucent material, such as marble, in which the laser may penetrate part way into the surface before sufficient light is reflected to estimate the distance to the surface.7 In this case, the range measurement does not represent the surface of the material, and the actual range measurement obtained depends on the reflective and refractive qualities of the material. According to Beraldin et al.,34 translucent surfaces like marble change the shape of the laser spot on the photodetector array of a triangulation scanner, resulting in the range estimate being in error. As well, the nonhomogeneity of the material increases the range measurement uncertainty.16 Translucent nonhomogeneous materials can also feature a greater measurement uncertainty as well as a bias that increases with the distance between the scanner and the surface.35

Surface complexity is not limited to variations in the height and frequency of surface structures; transitions between areas of different surface reflectivity can affect the accuracy of a range measurement,7, 36 as illustrated in Fig. 7. Different materials with different reflectivity properties can also generate very different range measurement uncertainties.10 The change in reflectivity for different portions of the laser spot results in a shift in the signal peak that introduces an error into both the range measurement and return signal intensity, a topic discussed in Sec. 4. Moreover, a reduction in surface reflectivity can result in an increase in range measurement uncertainty.33 Increasing the surface orientation with respect to the line of sight of the scanner can result in an elongation of the laser spot, which increases peak detection uncertainty.31 This problem is most pronounced when the length of the baseline is significant with respect to the distance to the surface, as is the case with triangulation laser range scanners, even when operating in the far field. Moreover, increased surface orientation with respect to the line of projection of the laser increases the spot size on the surface, resulting in more speckle elements contributing to the spot projected onto the photodetector array. Because the range uncertainty of triangulation laser range scanners is dependent on the surface orientation, model-based range uncertainty is not sufficient to represent the quality of a range measurement.

3.5. Measurement Uncertainty as a Quality Metric

Measurement spatial uncertainty has often been used as a way to quantify the quality of the measurement.
For example, Sequeira et al.37, 38 and Sequeira and Goncalves39 used range sensor uncertainty as part of a reliability metric generated from the weighted sum of measurement attributes. They recognized that spatial uncertainty is not a sufficient metric and therefore combined it with other measurement quality metrics. The combining of quality metrics to generate a more holistic view of measurement quality is discussed in Sec. 7.

Some range sensors, such as the triangulation scanner shown in Fig. 2, have range measurement variance that increases with the square of the distance between the scanner and the surface.16, 20, 23, 40, 41 In this case, using range sensor uncertainty as a quality metric means that measurements closer to the scanner are considered to be of higher quality. If the measurements are being merged using a modified Kalman minimum variance estimator (MKMV) approach,42, 43 then the measurement variance becomes a function of the number of measurements that are merged to form a point in a virtual model. Moreover, the merged measurements could be obtained from different viewpoints; thus, range measurement uncertainty alone is insufficient as a quality metric. To counter this problem, the covariance matrix may be used as a multidimensional quality metric. For example, using the MKMV approach, two measurements $x_1$ and $x_2$, with covariances $C_1$ and $C_2$, are merged to form a point $p$ in the virtual model. The point is generated using the weighted sum $p = W_1 x_1 + W_2 x_2$, where $W_1 = C_2(C_1 + C_2)^{-1}$ and $W_2 = C_1(C_1 + C_2)^{-1}$ are the weighting factors. As a result, the position of $p$ is closest to the measurement with the smallest covariance. In effect, $W_1$, for example, becomes a quality metric for measurement $x_1$; the location of the point represents the integration of multiple measurements that maximizes the quality of the point from the perspective of measurement uncertainty.

One drawback of Sequeira’s weighting method is that it was only applied to radial uncertainty. Table 1 illustrates the reasoning behind considering only radial uncertainty: it is the attribute that is generally affected by environmental factors. In Sequeira’s case, the metric was only applied to range images and not to the merged data; thus, this approach was sufficient for the purpose for which it was designed. Rotational uncertainty could be assumed constant and, thus, ignored. The method, however, is not generalizable to data merged using the MKMV method. Consider that the covariance of $p$ is found by $C_p = C_1(C_1 + C_2)^{-1}C_2$; thus, the radial and rotational uncertainties of $p$ are less than the radial and rotational uncertainties of either $x_1$ or $x_2$. If only the radial uncertainty is considered, then the reduction in rotational uncertainty is never taken into account. Similar issues arise when combining data from multiple types of scanners, each of which may have different radial and rotational uncertainties.

The MKMV weighting factors, although effective quality metrics for measurement merger, are less effective for representing the quality of the measurement from the perspective of spatial measurement uncertainty. Ideally, an uncertainty metric should represent the uncertainty of a measurement as a scalar value so that the relative quality of measurements can be compared along a single axis rather than within a multidimensional space. On the other hand, reducing a multidimensional parameter to a single dimension risks losing potentially important information; therefore, the unidimensional representation must be chosen carefully.
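A minimal numpy sketch of the two-measurement minimum-variance merge discussed above; the measurement values and covariances are arbitrary assumptions, and the weighting follows the form W1 = C2(C1+C2)^-1, W2 = C1(C1+C2)^-1 reconstructed in Sec. 3.5.

    import numpy as np

    def merge_two(x1, C1, x2, C2):
        """Minimum-variance merge of two measurements with covariances C1, C2."""
        S_inv = np.linalg.inv(C1 + C2)
        W1 = C2 @ S_inv
        W2 = C1 @ S_inv
        p = W1 @ x1 + W2 @ x2
        Cp = C1 @ S_inv @ C2          # merged covariance
        return p, Cp

    x1 = np.array([1.00, 0.00, 2.00])
    x2 = np.array([1.02, 0.01, 1.97])
    C1 = np.diag([1e-4, 1e-4, 9e-4])  # assumed: larger radial variance
    C2 = np.diag([4e-4, 4e-4, 1e-4])

    p, Cp = merge_two(x1, C1, x2, C2)
    print("merged point:", p)
    print("merged covariance diagonal:", np.diag(Cp))

Note that, per axis, the merged point lies closer to whichever measurement has the smaller variance, and the merged variances are smaller than those of either input.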
Although the covariance matrix approach addresses the issue of ignoring potentially valuable information in the position uncertainty attribute, it does not address the issues of surface complexity and orientation increasing the effective measurement uncertainty above the level predicted by the model. Range uncertainty, and even measurement covariance, are useful quality metrics but are not sufficient by themselves. In particular, metrics evaluating surface spatial complexity, surface orientation, and changes in surface reflectivity need to be examined to augment measurement spatial uncertainty as a quality metric.

4. Signal Intensity-Based Metrics

It was noted in Sec. 3.4 that a decrease in surface reflectivity can result in an increase in measurement uncertainty. Surface reflectivity can be assessed by examining how the intensity of the received signal varies from what would be expected for a surface of known reflectivity; however, signal intensity measurements can vary significantly as a result of such factors as range,32 high incidence angles,28, 31, 32 low reflectivity,10, 18 atmospheric attenuation,44 sharp discontinuities,16, 45 and translucency of the material being scanned.16, 40 For example, the return signal intensity decreases with an increase in angle of incidence and decreases with an increase in distance between the scanner and the surface when the transmitted signal power remains constant.32 As a result, quality metrics provide a way to predict the extent to which the actual reflectivity of a surface might deviate from that predicted from the return signal intensity.

Figure 5 illustrated that surface discontinuities can result in range errors; however, a change in the shape of the signal intensity profile in a triangulation laser range scanner can also result in a reduction in return signal intensity. When the shape of the peak is sufficiently distorted, as is the case with mixed measurements, it becomes difficult to locate its centroid. Laser spots that cross edges can result in smeared or multiple return signals that produce ambiguous range measurements, what is referred to as mixed measurement error.32, 46 Mixed measurements are a result of receiving reflected energy from two surfaces within the laser spot and are often interpreted as a range measurement somewhere between the two surfaces.6, 33, 47 Hebert and Krotkov6 referred to the interdependence of measured range with signal intensity as range/intensity crosstalk. TOF systems calculate range by comparing the return signal to the transmitted signal and are thus more sensitive to signal intensity changes. Figure 8 shows that a discontinuity in surface reflectivity can also reduce the return signal intensity.7 As a result, quality metrics provide a way to predict the extent to which the spatial position of the measurement might be in error as a result of the return signal intensity deviating from that predicted using a model of the laser range scanner optics.
Some surfaces may be difficult, if not impossible, to scan because the return signal is diffusely scattered, what is referred to as volumetric scattering.46, 48 Surfaces that exhibit this property include glass, hair,46 and grass.48 Figure 6 illustrates that translucent materials can also reduce the strength of the return signal.7 Other surfaces are so absorbent that the return signal is of insufficient intensity to obtain a range measurement, while still others may be so highly reflective that the photodetector is saturated.46 The absence of a return signal, referred to as a nonreturn measurement, can be a valuable piece of information but is almost always discarded.

Given a reference material, the change in return signal intensity can be modeled as a function of range. A shift in the return signal intensity from the model value can then be used as a metric of the quality of a measurement. Measurement spatial uncertainty is also affected by return signal intensity; thus, both variables are important in assessing measurement quality, and neither is sufficient by itself. Moreover, signal intensity shifts can indicate the presence of mixed pixels and surface material transitions, either of which may introduce errors into the range measurement. The challenge is in determining the cause of the intensity shift, given that only the spatial position and the deviation in signal intensity from a model value are known. Deviations from model return intensity can arise from several different environmental conditions; therefore, return intensity, even when combined with spatial position and model spatial uncertainty, is not sufficient to completely represent the quality of a measurement. Table 2 summarizes the factors that affect return signal intensity.

Table 2. Environmental factors affecting return signal intensity.
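As a sketch of the range-referenced intensity idea described above, the code below compares an observed return intensity against the intensity predicted for a reference material at the measured range; the inverse-square falloff model and the reference constants are assumptions for illustration only, not a calibrated scanner model.

    def predicted_intensity(r, i_ref, r_ref):
        """Predicted return intensity at range r for a reference material,
        assuming a simple inverse-square falloff (an assumption)."""
        return i_ref * (r_ref / r) ** 2

    def intensity_deviation(i_obs, r, i_ref=1.0, r_ref=1.0):
        """Relative deviation of observed intensity from the reference model;
        0 means the observation matches the model exactly."""
        i_pred = predicted_intensity(r, i_ref, r_ref)
        return abs(i_obs - i_pred) / i_pred

    # A dark or translucent surface at 5 m returning far less light than predicted.
    print("deviation:", intensity_deviation(i_obs=0.01, r=5.0))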
4.1. Intensity as a Quality Attribute

Signal intensity is rarely used as a quality metric; it is more often used as a weighting factor for combining measurements. For example, Godin et al.49 used the compatibility of signal intensities between correspondence pairs of measurements prior to iterative closest point (ICP) registration. Given two intensity measurements $i_1$ and $i_2$, the compatibility is evaluated by comparing the difference between $i_1$ and $i_2$ against $\epsilon$, an estimate of the reliability of the intensity measurements. In this formulation, $i_1$ and $i_2$ are quality attributes associated with the two measurements; however, this metric only assessed the quality of the association between two measurements, not the quality of each measurement. Fiocco et al.50 defined a reflectivity quality metric for each measurement. It took the form of a binary check in which the metric was 1 when $\rho_{\min} \le \rho \le \rho_{\max}$ and 0 otherwise, where $\rho_{\min}$ and $\rho_{\max}$ defined the minimum and maximum acceptable reflectivity of the surface, and $\rho$ was the observed surface reflectivity. Sequeira et al.37, 38 simply applied a weighting factor to the detected signal intensity.

One drawback of Fiocco et al.’s method is that it employs a binary scale, which, while useful for the application for which it was designed, lacks the generalizability of a sliding scale. Sequeira’s approach of using a weighting factor avoids this problem but does not address the issue of the ideal reflectivity changing with an increase in range. As with Fiocco’s method, the weighted intensity approach used by Sequeira was sufficient for the application for which it was designed but is not applicable to medium-range scanning without some modifications to take into account the relationship between range and return signal intensity. Fiocco et al. avoid this problem by using reflectivity, which is independent of range.

5. Range-Based Metrics

It was noted in Sec. 3.3 that measurement spatial uncertainty generally increases with increased range and, in Sec. 4, that return signal intensity generally decreases with increased range. The range measurement itself can therefore be used to represent the quality of a measurement. For example, Sequeira et al.37, 38 and Fiocco et al.50 each used the range portion of the measurement as part of their reliability metrics. Figure 9 graphically demonstrates how the quality of a measurement decreases as the distance between the scanner and the surface that generated the measurement increases. In general, the farther a surface is from the scanner, the larger the area encompassed within the laser spot.

The size of the spot projected onto a surface is represented by the beam width at the point of intersection. The beam width depends on the distribution of irradiance, which is often assumed to follow a Gaussian distribution. Specifically, $I(\rho, z) = I_0(z)\exp\!\left[-2\rho^2/w(z)^2\right]$, where $I_0(z)$ is the irradiance of the beam along the central axis, $\rho$ is the radial distance perpendicular to the central axis, and $w(z)$ is the spot radius a distance $z$ from the beam waist.51, 52 Figure 10 shows the irradiance profile centered on the central axis and the spot size as a function of distance from the beam waist. The surface formed by $w(z)$ represents the distance from the central axis at which the beam irradiance falls to $1/e^2$ of its on-axis value. As a result, the volume bounded by $w(z)$ represents the region within which 86.5% of the beam irradiance is contained.51, 52, 53 The laser spot defined in this way represents the portion of the surface being scanned from which most of the laser irradiance is being reflected. As a result, the laser spot represents the smallest region that can be resolved by the laser range scanner.
The boundary of the beam can be approximated by the hyperbolic equation $w(z) = w_0\sqrt{1 + (2z/b)^2}$, where $w_0$ is the radius of the beam waist and $b$ is the depth of focus of the beam. The depth of focus, illustrated in Fig. 10, is defined by $b = 2\pi w_0^2/\lambda$, where $\lambda$ is the laser wavelength. Meanwhile, the beam waist for an aberration-free optical system using a circular lens can be approximated by the Rayleigh diffraction equation $w_0 \approx 1.22\,\lambda f/D$, where $D$ is the lens diameter and $f$ is the focal length of the lens.52 The focal length also represents the distance from the lens to the beam waist.

Range can act as a proxy for the resolution of a measurement under the assumption that the focal length remains fixed and the surface is farther from the scanner than the beam waist. Under these conditions, measurements closer to the scanner can be considered to be of higher quality than those farther from the scanner. Although range is generally not referred to as an indicator of the quality of a measurement, this relationship is implied when the more distant of a pair of measurements is dropped as part of the registration process. Fiocco et al.50 defined a distance quality metric based on the minimum and maximum range limits. In practice, a scanner is bounded by its minimum and maximum effective range, defined by a variety of factors, including the laser power, beam spread, and photodetector sensitivity. Sequeira et al.37, 38 simply applied a weighting factor to the range measurement to obtain a quality metric.

Only long-range scans (those for which $z \gg b$, referred to as far-field measurements51) are guaranteed to have a measurement resolution that decreases with range. Medium-range scanners may be used for surfaces that are at, or even less than, the distance to the beam waist. Surfaces that are closer than the beam waist have an inverse relationship between resolution and range, as shown in Fig. 10; in this case, measurement quality improves with distance up to the beam waist. As a result, Fiocco et al.’s and Sequeira et al.’s methods are only applicable to the situation for which they were designed: laser range scanners in which the surface is farther from the scanner than the beam waist. For medium-range scanning, the surface may be placed such that it coincides as much as possible with the beam waist. A more general-purpose resolution-based quality metric should be applicable to both long- and medium-range scanner data, as well as to data from scanners with multiple focal lengths. The use of laser spot size in assessing measurement quality will be addressed in Sec. 6.

6. Neighborhood-Based Metrics

Attributes such as surface orientation, or spatial or reflectivity discontinuities, cannot be determined from single measurements; they can only be inferred from groups of measurements located in close spatial proximity to each other. Spatially related measurements are referred to here as a neighborhood and are used to model a small portion of the surface being scanned to predict some aspect of that surface, such as its orientation. The class of neighborhood-based metrics encompasses all quality metrics defined by the neighborhood of a measurement. Neighborhood-based quality metrics attempt to infer some aspect of a measurement by its relationship to its immediate neighbors. For purposes of discussion, a neighborhood is defined as a point $p$ and the set of all points considered to be the immediate neighbors of $p$ by some commonly accepted criterion.
It is assumed here that this criterion is either the Euclidean or the rotational distance, although the discussion could apply to other distance metrics. Two neighborhood-based quality metrics are considered: those based on interpoint distance and those based on vertex orientation with respect to the line of sight. The former is a measure of the density of the measurements in a neighborhood, which, in turn, indicates how finely the surface has been sampled. The latter is used to estimate the orientation of the surface at the spatial location of the measurement and is the most commonly used quality metric after measurement uncertainty.

Surface complexity can also be evaluated using edge-detection techniques. Specifically, spatial (illustrated in Fig. 5) and intensity (illustrated in Figs. 7 and 8) discontinuities result in range measurement errors, so measurements corresponding to discontinuities are of lower quality than measurements arising from surfaces without discontinuities. Edge detection, applied to spatial data, intensity data, or both, can be used to detect the presence of discontinuities, which are one type of surface complexity. A complete review of edge-detection techniques is, however, beyond the scope of this paper. For surveys on edge-detection techniques, see Argyle,54 Davis,55 Peli and Malah,56 Ziou and Tabone,57 Trichili et al.,58 Xiao et al.,59 and Basu.60

6.1. Distance Metrics

Distance metrics are typically used to evaluate two attributes: the distance to neighboring points and the density of points in the neighborhood. The latter is referred to as sampling density, which is the number of measurements per unit area of the surface being modeled. Densely sampled surfaces have the greatest possibility of capturing important surface features that might be missed by sparser sampling. On the other hand, dense scanning techniques generate a large number of points, many of which may be redundant if the surface being scanned lacks significant surface features. With respect to quality, densely sampled surfaces, to within certain limits, have the greatest probability of generating high-quality models; thus, sampling density is a measure of the potential quality of the final model.

According to Shannon sampling theory, given a band-limited signal, the sampled signal will contain all the information in the band-limited signal only if the sampling frequency is more than twice the signal bandwidth.61 This is also known as the Shannon-Nyquist sampling theorem62 or simply the Nyquist sampling theorem.63 This means that the distance between samples must be less than half the smallest feature size resolvable by the scanner;64 that is, $\Delta_s < s_{\min}/2$, where $\Delta_s$ is the distance between samples and $s_{\min}$ is the smallest resolvable feature size. The signal bandwidth is referred to as the Nyquist frequency, and the Nyquist rate, equal to twice the Nyquist frequency, defines the frequency that must be exceeded by the sampling frequency. If the sampling frequency is less than or equal to the Nyquist rate, then aliasing, or aliasing distortion, occurs.63 On the other hand, measurement quality does not improve in proportion to the amount by which the sampling rate exceeds the Nyquist rate;65 thus, the sampling rate is often defined to be only slightly higher than the Nyquist rate. The Nyquist rate, therefore, represents a quality breakpoint.

Shannon sampling requires a band-limited signal, and diffraction in the optical system ensures this by imposing a limit on the size of features that can be resolved.
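The sketch below evaluates the Gaussian-beam relations used in Sec. 5 (beam waist from the Rayleigh diffraction approximation, depth of focus, and spot radius as a function of range) to show how the diffraction-limited spot size bounds the smallest resolvable feature and, through the Nyquist condition, the required sample spacing; the lens parameters are placeholder assumptions.

    import math

    def beam_waist(wavelength, focal_length, aperture):
        """Approximate beam-waist radius from the Rayleigh diffraction relation."""
        return 1.22 * wavelength * focal_length / aperture

    def depth_of_focus(waist, wavelength):
        """Depth of focus b = 2*pi*w0^2 / lambda."""
        return 2.0 * math.pi * waist ** 2 / wavelength

    def spot_radius(z, waist, dof):
        """Spot radius a distance z from the beam waist (hyperbolic profile)."""
        return waist * math.sqrt(1.0 + (2.0 * z / dof) ** 2)

    # Placeholder optics: 660 nm laser, 100 mm focal length, 20 mm aperture.
    w0 = beam_waist(660e-9, 0.1, 0.02)
    b = depth_of_focus(w0, 660e-9)
    for z in (0.0, b, 5 * b):
        w = spot_radius(z, w0, b)
        print(f"z = {z:.4f} m -> spot radius {w * 1e3:.4f} mm, "
              f"max sample spacing {w:.6f} m")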
The Rayleigh criterion represents the resolution limit of the scanning system even if measurement noise were negligible.3, 4 In the case of a perfectly focused, diffraction-limited optical system, laser physics still imposes a limit on the size of the feature that can be resolved, given by the Rayleigh criterion. If $s_{\min}$ represents the minimum distance between beam footprint peaks at which they can be separately resolved, then the Nyquist rate is given by $2/s_{\min}$ and the Nyquist frequency becomes $1/s_{\min}$. The smallest feature that can be resolved is given by the beam width. If the beam width is large with respect to the sample spacing $\Delta_s$, then fine details are blurred;3 however, if $\Delta_s$ is too large, then fine details are missed. It is convenient, in the absence of other information about the system, to choose a sample spacing slightly less than the smallest angular beamwidth within the volume of interest. The goal of scanning a surface is to achieve an intersample surface distance comparable to, or smaller than, the beam width, given that the laser scanner is, under ideal conditions, unable to resolve features smaller than the beam width. Sampling density and intersample distance, therefore, are useful in assessing model quality.

Klein and Sequeira66 and Klein and Zachmann67 compared the actual sample density to the expected sampling density. The expected sampling density at a point $p$ was computed from the position $v$ of the scanner in the world, the solid angle $\omega$ of the patch covered by a single pixel, and the normal of the surface at $p$; if $p$ is part of an unscanned surface, the expected density is zero. These quality metrics were then used to perform a cost-benefit analysis of potential viewpoints. Specifically, they calculated the resolution quality of a point $p$ on the surface as seen from viewpoint $v$ as the ratio of the observed sampling density of $p$ to the maximum sampling density of $p$; in this case, the benchmark is the maximum sampling density. The benefit of this approach is that it combined measurement resolution, surface orientation, and sampling density into a quality metric for each point on a surface.

Fiocco et al.50 used a less complicated method for defining the density of a set of measurements than that proposed by Klein and Sequeira66 and Klein and Zachmann.67 They defined the density quality metric as a function of $d$, the distance to the closest neighbor, and $d_{\max}$, the maximum acceptable distance. Meanwhile, Sequeira et al.37, 38 used the weighted average distance between neighboring points as a quality metric.

One drawback of the quality metrics employed by Refs. 37, 38, 50, 66, 67 is that they ignore measurement spatial uncertainty, which also affects the resolution of the system.3, 68 In particular, spatial uncertainty makes it difficult to know precisely the extent of the region covered by each laser spot. Another drawback of these metrics is that they do not make clear whether quality is being assessed relative to the desired resolution or the attainable resolution. The former is generally constant, while the latter depends on surface orientation, the presence of spatial or reflectivity discontinuities, and the size of the laser spot illuminating the surface. In some cases, the desired resolution may not even be attainable for certain combinations of range and surface orientation.
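A small sketch of the sampling-density idea in Sec. 6.1: it estimates the surface sampling density obtained from a given viewpoint (using the solid angle per sample, the range, and the incidence cosine) and compares it against a desired density. This geometric form is my own illustration under the assumptions stated in the comments, not the exact expression used by Klein and Sequeira.

    import numpy as np

    def observed_density(p, v, normal, omega):
        """Samples per unit surface area at point p seen from viewpoint v,
        assuming each sample subtends a solid angle omega (sr): one sample
        covers roughly omega * r^2 / cos(theta) of surface area."""
        d = p - v
        r = np.linalg.norm(d)
        cos_theta = abs(np.dot(normal, d)) / r     # incidence cosine
        if cos_theta == 0.0:
            return 0.0
        return cos_theta / (omega * r ** 2)

    def density_quality(p, v, normal, omega, desired_density):
        """Ratio of observed to desired density, clamped to [0, 1]."""
        return min(1.0, observed_density(p, v, normal, omega) / desired_density)

    p = np.array([0.0, 0.0, 5.0])
    v = np.array([0.0, 0.0, 0.0])
    n = np.array([0.0, 0.0, -1.0])                 # surface facing the scanner
    omega = np.radians(0.05) ** 2                  # assumed 0.05 deg angular step
    print("density quality:", density_quality(p, v, n, omega, desired_density=2500.0))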
6.2. Orientation Metrics

A typical approach to generating the orientation of a measurement is to obtain a mesh model of the surface and use the normals of each of the mesh elements to estimate the normal of the surface at the measurement.22, 42, 69, 70, 71 Orientation is often represented by the surface normal, which is generally found by taking the average of the normals of all Delaunay facets that have this measurement as a vertex.42, 70 The exception is Hoppe et al.,72 who preferred to use the normal of a plane fit to the neighborhood of the measurement. The benchmark for the grazing angle attribute is the angle that generates the most accurate range measurement; that is, when the surface normal is oriented along the line between the surface and the scanner. Assuming the maximum grazing angle is one in which the surface normal is perpendicular to the line between the surface and the scanner, the scale of the grazing angle attribute runs from 0 (best quality) to $\pi/2$ (worst quality) radians. This is often represented as the cosine of the grazing angle,30, 70 which has a range of 1 (best quality) to 0 (worst quality).

Often the deviation of the return signal intensity from the ideal Lambertian model is represented by the surface normal25, 42, 69, 70 or grazing angle.73 The reasoning is that the signal intensity decreases with increasing surface orientation; thus, surface orientation can be used as a proxy for signal intensity. However, return signal intensity is affected by all the factors summarized in Table 2. Therefore, this assumption is true only in the absence of other factors, such as surface spatial complexity and changes in surface reflectivity. Surface orientation also affects the uncertainty of range measurements,74 particularly for triangulation laser range scanners; thus, surface orientation as a metric can affect quality metrics for both spatial uncertainty and return signal intensity.

Fiocco et al.50 used the deviation of the line of sight to the scanner from the surface normal as a quality metric; this metric was a function of the surface orientation deviation, expressed in units of degrees. Turk and Levoy70 used the cosine of the grazing angle to weight measurements prior to ICP registration. Soucy and Laurendeau30 showed that the squared cosine of the grazing angle corresponds to the relative illuminance received by the photodetector. They used this metric to perform a weighted merge of measurements from different viewpoints such that $p = \sum_i w_i x_i / \sum_i w_i$, where $w_i$ is the weighting factor associated with measurement $x_i$. The cosine of the grazing angle can be found using $\cos\theta_i = |\vec{n}_i \cdot x_i| / \|x_i\|$, where $x_i$ is a measurement located $\|x_i\|$ units from the viewpoint, and $\vec{n}_i$ is the unit normal to the surface at $x_i$. In this case, the coordinate system is assumed centered on the scanner viewpoint. Curless22 employed a similar approach to merging measurements that co-occupied the same voxel. Soucy and Laurendeau30 demonstrated that the reflectivity of the surface was directly proportional to the square of the cosine of the grazing angle. Because measurement quality was expected to be directly proportional to the amount of light returned to the sensor, $\cos^2\theta$ would better represent measurement quality than $\cos\theta$; however, this was based on the assumption that the reflectivity change was primarily caused by high surface orientation. The relationship is less clear when the surface reflectivity is more complex.

Scott et al.73 suggested that basing quality solely on the grazing angle of a measurement ignores the objective effects of high grazing angle in favor of a more subjective metric.
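A minimal sketch of grazing-angle weighting as discussed in Sec. 6.2, computing the cosine of the grazing angle from an estimated surface normal in a viewpoint-centered frame and using its square as the merge weight; the point and normal values are illustrative assumptions.

    import numpy as np

    def grazing_cosine(x, normal):
        """Cosine of the angle between the line of sight (viewpoint at origin)
        and the surface normal at measurement x."""
        return abs(np.dot(normal, x)) / np.linalg.norm(x)

    def grazing_weight(x, normal):
        """cos^2(theta), proportional to the relative illuminance at the detector."""
        return grazing_cosine(x, normal) ** 2

    x = np.array([0.5, 0.0, 3.0])                 # measurement, viewpoint-centered
    normal = np.array([0.0, 0.0, -1.0])           # estimated unit surface normal
    print("cos(theta):", grazing_cosine(x, normal))
    print("merge weight:", grazing_weight(x, normal))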
Surface orientation, in particular, ignores factors that affect the shape and peak height of the intensity profile, such as surface reflectivity changes. Moreover, the surface normal is the average of the orientations along each Delaunay edge extending from a point. As a result, it is possible to have a wide range of vertex normals but a surface normal oriented along the line of sight. Finally, for systems in which the baseline is not insignificant with respect to range, the line of sight could be defined with respect to the photodetector, the laser, or the scanner origin, each yielding a different result. As a result, surface orientation is important but insufficient as a quality metric.

An alternative to grazing angle for representing the surface orientation of a range image obtained using a raster scan pattern is the facet edge length ratio. In this case, the ratio of the longest to shortest edge of a Delaunay facet is used to assess the quality of the facet and, by extension, its measurements. Sequeira used this approach to discard the facet if the ratio was too large.37 Consider the image on the left in Fig. 11, which represents a two-dimensional Delaunay triangulation of a range image; when seen in three dimensions, facets on a discontinuity are elongated with respect to their neighbors. The ratio between the longest and shortest edge should ideally be 1:1; that is, the triangles should be equilateral. As the surface orientation increases with respect to the line of sight from the scanner, the ratio between the longest and shortest edges increases. Specifically, given a facet with edges $e_1$, $e_2$, and $e_3$, the facet edge ratio can be found by $R_f = \min(e_1, e_2, e_3)/\max(e_1, e_2, e_3)$. The weighting factor decreases toward zero as the disparity between the longest and shortest edges increases.

The facet ratio represents a quality metric in which the neighborhood is limited to the three measurements bounding the Delaunay facet. High-quality measurements would be those in which the facet ratio was close to 1, whereas those in which $R_f$ was very small would be considered to be low-quality measurements. Low-quality measurements would have elongated facets indicating steep surface slopes. A drawback of this method is that it is specifically designed to assess the quality of facets and can only be applied to measurements as a side benefit. Moreover, it is specifically designed to work with regularly spaced raster patterns. Nonraster patterns can feature large edge ratios even if the surface is relatively flat, as illustrated in Fig. 12; the arrangements shown there contain facets with large facet ratios regardless of the range values associated with them. Although well suited to the purpose for which it was designed, the facet ratio is not easily adapted for use as a general-purpose quality metric representing surface orientation. Fiocco et al.’s method, as well as the more popular grazing angle metric described in Sec. 6.2, are better suited as general-purpose surface orientation quality metrics.

7. Total Quality Metric

Quality metrics are generally combined to generate an overall measure of quality, referred to here as a total quality metric. Scott et al.2 cited two common examples of how quality metrics could be combined: weighted summation and composite binary pass/fail. The weighted summation approach takes the form $Q = \sum_{i=1}^{n} w_i q_i$, where $q_i$ represents the $i$’th quality metric and $w_i$ is its associated weight. An example of this approach is the weighted average model used by Sequeira et al.37 to determine the total quality of each measurement in a range image. To ensure that $0 \le Q \le 1$, the weight values can be restricted such that $\sum_{i=1}^{n} w_i = 1$.
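A short sketch of the weighted-summation combination described above: individual quality metrics, each assumed to lie in [0, 1], are combined with weights that are normalized to sum to one. The metric names and weight values are illustrative assumptions.

    def total_quality(metrics, weights):
        """Weighted sum of quality metrics; weights are normalized to sum to 1
        so that the result stays in [0, 1] when each metric lies in [0, 1]."""
        total_w = sum(weights.values())
        return sum(weights[name] * metrics[name] for name in metrics) / total_w

    metrics = {"uncertainty": 0.9, "intensity": 0.6, "grazing": 0.8, "density": 1.0}
    weights = {"uncertainty": 0.4, "intensity": 0.2, "grazing": 0.2, "density": 0.2}
    print("total quality:", total_quality(metrics, weights))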
Meanwhile, the binary product approach has the form $Q = \prod_{i=1}^{n} b_i$, where each $b_i$ is defined by a threshold quality limit $t_i$ for the $i$’th quality metric. In this case, $b_i = 1$ when the quality metric equals or exceeds the threshold value, and $b_i = 0$ otherwise.

The choice of how quality metrics are combined depends on the application and on the relative weight placed on each of the quality metrics. The weighted summation approach allows the researcher to tailor the contribution of each of the quality metrics to the overall measurement quality without any one metric dominating the result. For example, Fiocco et al.50 experimentally derived the weights for each sensor used in the experiment. They also standardized the weighting factors such that each sensor technology could be represented by a single weighting factor that modified each of the metric weights. Sequeira et al.37, 38 also used the weighted-sum approach but did not indicate how the weights were derived. The binary product approach is effective if the goal is simply to exceed some preset quality level.

8. Unresolved Quality Issues

Several quality attributes are notably absent from contemporary, and even emerging, quality metrics. In particular, no quality metric has been developed to address the motion of the laser spot during the acquisition process. This is of particular interest in triangulation scanners, where multiple sample intervals may be integrated to combat speckle noise. No quality metric has been defined to quantify the effect of measurement resolution; even using range as a quality metric only addresses measurement resolution by proxy. In fact, neighborhood-based metrics do not consider the issue of measurement density or proximity that is finer than the measurement resolution of the system. No metric has addressed the problem of measurement repeatability, most likely because it requires multiple range images of the same surface, which substantially increases scanning time. Finally, surface complexity is only imperfectly evaluated using surface orientation.

Measurement quality metrics are rarely combined into a total quality metric. As a result, operations such as measurement merging, range image registration, and deciding whether or not to delete a measurement are often based on inadequate information. For example, although a maximum likelihood merge of two measurements is statistically valid, the covariance matrix only partially describes the quality of the measurement. In fact, a measurement with relatively large covariance may be of substantially lower quality than a measurement with relatively small covariance when other factors, such as distortion of the signal peak and surface orientation, are taken into account. A more comprehensive approach to applying measurement quality to the manipulation of measurements is required.

Finally, nonreturn measurements are generally treated as having no qualitative value and are thus often ignored during data collection. This means that information about regions of the environment that cannot be scanned is lost. Future research should examine what can be learned about the environment being scanned from the absence of a return signal.

9. Conclusions

Quality metrics have featured significantly in contemporary research; however, most quality metrics have been designed for specific applications or specific algorithms, and are often used independently. Measurement uncertainty has been used extensively to represent measurement quality, but many environmental factors affect measurement uncertainty, making it insufficient as an independent quality metric.
The relationship of range and resolution to measurement quality depends on the beam width. Additional work is required to better define the relationship between measurement quality and resolution for midfield measurements, where parallax must be taken into account. Sampling density has also been featured in various forms as a quality metric, although most approaches are highly application specific. Absent from the literature is a more detailed analysis of how sampling density is related to measurement quality and of how to quantify sampling density as a quality metric in a generalized fashion. Surface orientation has also been used extensively as a quality metric, although it, too, is insufficient as an independent quality metric. Reflectivity is affected by surface materials, orientation, and surface complexity; thus, this factor has been used to represent measurement quality. Given, however, that reflectivity is affected by multiple factors, it is likewise insufficient as an independent quality metric.

The current state of the art in quality metrics performs adequately in assessing the quality of measurements within the context of specific applications, but the metrics are often not readily generalizable. Few researchers combine quality metrics so that the strengths of one may offset the weaknesses of another. This paper was a first step in assessing the relationship among the various quality metrics currently in use. More work is needed to develop a more comprehensive approach to measurement quality assessment.

Acknowledgments

We thank the National Research Council of Canada for providing funding for this research through the Graduate Student Scholarship Supplement, as well as for providing facilities and equipment.

References
“Application of a perceptual speech quality metric for link adaptation in wireless systems,”
260
–264
(2004). Google Scholar
W. Scott, G. Roth, and J. Rivest,
“Performance-Oriented View Planning for Model Acquisition,”
212
–219
(2000). Google Scholar
D. D. Lichti and S. Jamtsho,
“Angular resolution of terrestrial laser scanners,”
Photogramm. Rec., 21 141
–160
(2006). Google Scholar
J. G. Walker,
“Optical imaging with resolution exceeding the Rayleigh criterion,”
Opt. Acta, 30
(9), 1197
–1202
(1983). Google Scholar
F. Blais and J.-A. Beraldin,
“Recent developments in 3D multi-modal laser imaging applied to cultural heritage,”
Mach. Vision Appl., 17
(6), 395
–409
(2006). https://doi.org/10.1007/s00138-006-0025-3 Google Scholar
M. Hebert and E. Krotkov,
“3-D measurements from imaging laser radars: How good are they?,”
359
–364
(1991). Google Scholar
F. Blais, J. Taylor, L. Cournoyer, M. Picard, L. Borgeat, L. Dicaire, M. Rioux, J.-A. Beraldin, G. Godin, C. Lahanier, and G. Aitken,
“High resolution imaging at
using a portable XYZ-RGB color laser scanner,”
(2005). Google Scholar
J. S. Ryan and A. I. Carswell,
“Laser beam broadening and depolarization in dense fogs,”
J. Opt. Soc. Am., 68 900
–908
(1978). Google Scholar
M. Adams,
“Lidar design, use, and calibration concepts for correct environmental detection,”
IEEE Trans. Rob. Autom., 16 753
–761
(2000). https://doi.org/10.1109/70.897786 Google Scholar
J.-A. Beraldin, S. El-Hakim, and L. Cournoyer,
“Practical range camera calibration,”
Proc. SPIE, 2067 21
–31
(1993). https://doi.org/10.1117/12.162133 Google Scholar
D. Green and F. Blais,
“A Multiple DSP-based 3D Laser Range Sensor and its Application to Real-time Motion Detection,”
(2002). Google Scholar
D. MacKinnon, F. Blais, and V. Aitken,
“Object location using edge-bounded planar surfaces from sparse range data,”
(2003). Google Scholar
M. Longbin, S. Ziaoquan, Z. Yiyu, and S. Z. Chang,
“Unbiased converted measurements for tracking,”
IEEE Trans. Aerosp. Electron. Syst., 34 1023
–1027
(1998). Google Scholar
P. Suchomski,
“Explicit expressions for debiased statistics of 3D converted measurements,”
IEEE Trans. Aerosp. Electron. Syst., 35 368
–370
(1999). Google Scholar
J.-A. Beraldin, C. Latouche, S. El-Hakim, and A. Filiatrault,
“Applications of photogrammetric and computer vision techniques in shake table testing,”
3458
(2004). Google Scholar
J.-A. Beraldin, M. Picard, S. El-Hakim, G. Godin, L. Borgeat, F. Blais, E. Paquet, M. Rioux, V. Valzano, and A. Bandiera,
“Virtual reconstruction of heritage sites: Opportunities and challenges created by 3D technologies,”
141
–156
(2005). Google Scholar
F. Blais,
“Review of
of Range Sensor Development,”
J. Electron. Imaging, 13 231
–243
(2004). https://doi.org/10.1117/1.1631921 Google Scholar
M. Adams,
“Coaxial range measurement—Current trends for mobile robotic applications,”
IEEE Sens. J., 2 2
–13
(2002). Google Scholar
M.-C. Amann, T. Bosch, M. Lescure, R. Myllyla, and M. Rioux,
“Laser ranging: a critical review of usual techniques for distance measurement,”
Opt. Eng., 40 10
–19
(2001). https://doi.org/10.1117/1.1330700 Google Scholar
F. Blais, J. A. Beraldin, and S. F. El-Hakim,
“Range error analysis of an integrated time-of-flight, triangulation, and photogrammetry 3D laser scanning system,”
Proc. SPIE, 4035 236
–247
(2000). Google Scholar
E. Garcia and H. Lamela,
“Low-cost three-dimensional vision system based on a low-power semiconductor laser rangefinder and a single scanning mirror,”
Opt. Eng., 40 61
–66
(2001). https://doi.org/10.1117/1.1331267 Google Scholar
R. Baribeau and M. Rioux,
“Influence of speckle on laser range finders,”
Appl. Opt., 30 2873
–2978
(1991). Google Scholar
J.-A. Beraldin, F. Blais, M. Rioux, L. Cournoyer, D. Laurin, and S. MacLean, "Eye-safe digital 3-D sensing for space applications," Opt. Eng. 39, 196–211 (2000). https://doi.org/10.1117/1.602352
J. W. Goodman, "Some fundamental properties of speckle," J. Opt. Soc. Am. 66, 1145–1150 (1976).
B. L. Curless, "New methods for surface reconstruction from range images," Stanford Univ. (1997).
R. Baribeau, M. Rioux, and G. Godin, "Color reflectance modeling using a polychromatic laser range sensor," IEEE Trans. Pattern Anal. Mach. Intell. 14, 263–269 (1991). https://doi.org/10.1109/34.121793
J. A. Hancock, "Laser intensity-based obstacle detection and tracking," Carnegie Mellon Univ. (1999).
F. Prieto, P. Boulanger, R. Lepage, and T. Redarce, "Automated inspection system using range data," pp. 2557–2562 (2002).
J. Lang and D. Pai, "Bayesian estimation of distance and surface normal with a time-of-flight laser rangefinder," pp. 109–117 (1999).
M. Soucy and D. Laurendeau, "A general surface approach to the integration of a set of range views," IEEE Trans. Pattern Anal. Mach. Intell. 17, 344–358 (1995). https://doi.org/10.1109/34.385982
A. Johnson, R. Hoffman, J. Osborn, and M. Hebert, "A system for semi-automatic modeling of complex environments," pp. 213–220 (1997).
J. Hancock, M. Hebert, and C. Thorpe, "Laser intensity-based obstacle detection," pp. 1541–1546 (1998).
J. Hancock, D. Langer, M. Hebert, R. Sullivan, D. Ingimarson, E. Hoffman, M. Mettenleiter, and C. Froehlich, "Active laser radar for high-performance measurements," pp. 1465–1470 (1998).
J.-A. Beraldin, F. Blais, M. Rioux, J. Domey, L. Gonzo, F. D. Nisi, F. Comper, D. Stoppa, M. Gottardi, and A. Simoni, "Optimized position sensors for flying-spot active triangulation systems," pp. 29–36 (2003).
G. Godin, J.-A. Beraldin, M. Rioux, M. Levoy, and L. Cournoyer, "An assessment of laser range measurement of marble surfaces," pp. 49–56 (2001).
S. El-Hakim and J.-A. Beraldin, "Configuration design for sensor integration," Proc. SPIE 2598, 274–285 (1995).
V. Sequeira, K. Ng, E. Wolfart, J. G. Goncalves, and D. Hogg, "Automated 3D reconstruction of interiors with multiple scan-views," Proc. SPIE 3641, 106–117 (1998). https://doi.org/10.1117/12.333775
V. Sequeira, K. Ng, E. Wolfart, J. G. M. Goncalves, and D. Hogg, "Automated reconstruction of 3D models from real environments," ISPRS J. Photogramm. Remote Sens. 54, 1–22 (1999).
V. Sequeira and J. Goncalves, "3D reality modelling: Photo-realistic 3D models of real world scenes," pp. 776–783 (2002).
J.-A. Beraldin, "Integration of laser scanning and close-range photogrammetry—The last decade and beyond," pp. 972–983 (2004).
F. DeNisi, F. Comper, L. Gonzo, M. Gottardi, D. Stoppa, A. Simoni, and J.-A. Beraldin, "A CMOS sensor optimized for laser spot-position detection," IEEE Sens. J. 5, 1296–1304 (2005). https://doi.org/10.1109/JSEN.2005.859217
M. Rutishauser, M. Stricker, and M. Trobina, "Merging range images of arbitrarily shaped objects," pp. 573–580 (1994).
Z. Zhang and O. Faugeras, "A 3D world model builder with a mobile robot," Int. J. Robot. Res. 11, 269–285 (1992).
D. Carmer and L. Peterson, "Laser radar in robotics," Proc. IEEE 84, 299–320 (1996). https://doi.org/10.1109/5.482232
S. El-Hakim and J.-A. Beraldin, "On the integration of range and intensity data to improve vision-based three-dimensional measurements," Proc. SPIE 2350, 306–321 (1994).
W. R. Scott, G. Roth, and J.-F. Rivest, "View planning for automated three-dimensional object reconstruction and inspection," ACM Comput. Surv. 35, 64–96 (2003).
J. Tuley, N. Vandapel, and M. Hebert, "Analysis and removal of artifacts in 3-D LADAR data," pp. 2203–2210 (2005).
S. Slob, H. Hack, and A. Turner, "An approach to automate discontinuity measurements of rock faces using laser scanning techniques," pp. 87–94 (2002).
G. Godin, M. Rioux, and R. Baribeau, "Three-dimensional registration using range and intensity information," Proc. SPIE 2350, 279–290 (1994). https://doi.org/10.1117/12.189139
M. Fiocco, G. Boström, J. Gonçalves, and V. Sequeira, "Multisensor fusion for volumetric reconstruction of large outdoor areas," pp. 47–54 (2005).
D. Williams, Optical Methods in Engineering Metrology, pp. 11–16, 1st ed., Chapman & Hall, London (1993).
B. Chu, Laser Light Scattering: Basic Principles and Practice, pp. 156–160, 2nd ed., Academic Press, New York (1991).
G. Jacobs, "Understanding spot size for laser scanning," Professional Surv. Mag. 26(10), 48–50 (2006).
E. Argyle, "Techniques for edge detection," Proc. IEEE 59, 285–287 (1971).
L. Davis, "A survey of edge detection techniques," Comput. Graph. Image Process. 4, 248–270 (1975).
T. Peli and D. Malah, "A study of edge detection algorithms," Comput. Graph. Image Process. 20, 1–21 (1982).
D. Ziou and S. Tabbone, "Edge detection techniques—An overview," Int. J. Pattern Recog. Image Anal. 8, 537–559 (1998).
H. Trichili, M.-S. Bouhlel, N. Derbel, and L. Kamoun, "A survey and evaluation of edge detection operators application to medical images," (2002).
Z. Xiao, M. Yu, C. Guo, and H. Tang, "Analysis and comparison on image feature detectors," pp. 651–656 (2002).
M. Basu, "Gaussian-based edge-detection methods—a survey," IEEE Trans. Syst. Man Cybern. 32(3), 252–260 (2002).
A. J. Jerri, "The Shannon sampling theorem—its various extensions and applications: A tutorial review," Proc. IEEE 65, 1565–1596 (1977).
C.-H. Lee, "Image surface approximation with irregular samples," IEEE Trans. Pattern Anal. Mach. Intell. 11, 206–212 (1989).
A. V. Oppenheim and R. W. Schafer, Discrete-Time Signal Processing, pp. 142–147, 2nd ed., Prentice-Hall Signal Processing Series, Prentice-Hall, Englewood Cliffs, NJ (1999).
G. Guidi, B. Frischer, M. Russo, A. Spinetti, L. Carosso, and L. L. Micoli, "Three-dimensional acquisition of large and detailed cultural heritage objects," Mach. Vision Appl. 17, 349–360 (2006).
R. D. Fiete and T. A. Tantalo, "Image quality of increased along-scan sampling for remote sensing systems," Opt. Eng. 38, 815–820 (1999). https://doi.org/10.1117/1.602053
K. Klein and V. Sequeira, "The view-cube: an efficient method of view planning for 3D modelling from range data," pp. 186–191 (2000).
J. Klein and G. Zachmann, "Proximity graphs for defining surfaces over point clouds," (2004).
A. J. den Dekker and A. van den Bos, "Resolution: A survey," J. Opt. Soc. Am. A 14, 547–557 (1997). https://doi.org/10.1364/JOSAA.14.000547
M. Soucy, A. Croteau, and D. Laurendeau, "A multi-resolution surface model for compact representation of range images," pp. 1701–1706 (1992).
G. Turk and M. Levoy, "Zippered polygon meshes from range images," SIGGRAPH '94, pp. 311–318, ACM (1994).
N. A. Massios and R. B. Fisher, "A best next view selection algorithm incorporating a quality criterion," pp. 780–789 (1998).
H. Hoppe, T. DeRose, T. Duchamp, J. McDonald, and W. Stuetzle, "Surface reconstruction from unorganized points," pp. 71–78 (1992).
W. Scott, G. Roth, and J.-F. Rivest, "View planning for multi-stage object reconstruction," pp. 64–71 (2001).
A. Johnson and S. B. Kang, "Registration and integration of textured 3-D data," pp. 234–241 (1997).
Biography
David MacKinnon holds a BSc (1990) in mathematics from the University of Prince Edward Island (PEI), a BSc (2001) in electrical and computer engineering from the University of New Brunswick, and both an MASc (2003) and a PhD (2008) in electrical engineering from Carleton University. He is currently a research associate at the National Research Council Canada's Institute for Information Technology, working in the area of measurement standards in 3D metrology. Between 1991 and 1998, he worked as a statistician, first with the PEI Food Technology Centre and then with the UPEI Clinical Research Centre. He is an Engineer-in-Training with the Association of Professional Engineers and Geoscientists of New Brunswick.

Victor Aitken holds a BSc (1987) in electrical engineering and mathematics from the University of British Columbia, and MEng (1991) and PhD (1995) degrees in electrical engineering from Carleton University, Ottawa. He is currently an associate professor and chair of the Department of Systems and Computer Engineering at Carleton University, Ottawa, and is a member of the Professional Engineers of Ontario. His research interests include control systems, state estimation, data and information fusion, redundancy, sliding mode systems, nonlinear systems, vision, and mapping and localization for navigation and guidance of unmanned vehicle systems, with applications in underground mining, landmine detection, and exploration.

François Blais is a principal research officer and group leader of visual information technology at the National Research Council Canada's Institute for Information Technology. He received his BSc and MSc in electrical engineering from Laval University, Quebec City. Since 1984, his research has resulted in a number of innovative 3D sensing technologies licensed to various industries and applications, including space and the in-orbit 3D laser inspection of NASA's Space Shuttles. He has led numerous R&D initiatives in 3D sensing and in the scanning and modeling of important archeological sites and works of art, including the masterpiece Mona Lisa by Leonardo da Vinci. He has received several awards of excellence for his work and is active on the international scene through scientific committees, more than 150 publications and patents, invited presentations, and tutorials.