In recent years, owing to their lower cost, safer operation, and high-resolution imagery, unmanned aerial vehicles (UAVs) have demonstrated great potential for photogrammetric measurement in numerous application fields. Nevertheless, UAV images are often affected by large rotations, big viewpoint changes, and small overlaps, in which case traditional procedures cannot orient the images or generate reliable Digital Surface Models (DSMs). This paper introduces the whole DSM generation procedure, which comprehensively exploits the advantages of both computer vision and multi-image matching algorithms in extracting points and generating a dense DSM. Experiments show that this procedure can quickly extract points with high positional accuracy from the high-resolution images acquired by UAVs.
KEYWORDS: 3D modeling, Head, Data modeling, Clouds, LIDAR, Feature extraction, Process modeling, Solid modeling, Statistical modeling, Visual process modeling
Automatically extracting three-dimensional models of pylons from aerial Light Detection and Ranging (LiDAR) point clouds is one of the key techniques for the digitization and visualization of smart grid facilities. This paper presents a model-driven three-dimensional pylon modeling method using airborne LiDAR data. On the basis of an in-depth study of the actual structure of pylons and the characteristics of the point cloud data, a conceptual model of the pylon is constructed in which the pylon is divided into three parts: the pylon foot, the pylon body, and the pylon head. Parameters of the model, such as position and orientation, are defined. In this approach, a complicated pylon is first divided into three relatively simple parts. The different parts are then reconstructed with different strategies. Finally, the model parts are assembled into a complete pylon model using the position and direction information. Results of experiments on point cloud data from the Southern Power Grid show that the precision of the extracted pylon orientation and position reaches centimeter level, the accuracy of pylon head classification is higher than 95%, and the pylon model fits the pylon points well. This suggests that the proposed approach can effectively achieve semi-automatic three-dimensional modeling of pylons.
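The divide-then-assemble idea can be illustrated with a minimal sketch: partition a pylon point cloud by height into foot, body, and head, take the foot centroid as the position, and take the principal horizontal axis of the head points as the orientation. The height fractions and the PCA-based orientation estimate are illustrative assumptions, not the paper's actual algorithm.

```python
import numpy as np

def split_pylon(points, foot_frac=0.1, head_frac=0.25):
    """Partition an (N, 3) pylon point cloud into foot, body and head
    by relative height. The fractions are illustrative assumptions."""
    z = points[:, 2]
    z_min, z_max = z.min(), z.max()
    h = z_max - z_min
    foot = points[z < z_min + foot_frac * h]
    head = points[z > z_max - head_frac * h]
    body = points[(z >= z_min + foot_frac * h) & (z <= z_max - head_frac * h)]
    return foot, body, head

def pylon_pose(foot, head):
    """Estimate position (foot centroid) and horizontal orientation
    (principal axis of the head's XY spread, e.g. the cross-arm
    direction), returned in degrees."""
    position = foot.mean(axis=0)
    xy = head[:, :2] - head[:, :2].mean(axis=0)
    cov = xy.T @ xy / len(xy)                 # 2x2 covariance of XY spread
    eigvals, eigvecs = np.linalg.eigh(cov)
    direction = eigvecs[:, np.argmax(eigvals)]
    orientation = np.degrees(np.arctan2(direction[1], direction[0]))
    return position, orientation
```

The three parts returned by `split_pylon` could then be fed to part-specific reconstruction strategies and re-assembled using the recovered pose, mirroring the paper's pipeline at a toy scale.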
Traditional image processing techniques have proven inadequate for urban land-cover mapping using very high resolution (VHR) remotely sensed imagery. Abundant features such as texture, shape, and structural information can be extracted from high-resolution images, which makes it possible to distinguish land covers more effectively. However, the multisource characteristics of VHR images place significant demands on the classification method in terms of both efficiency and effectiveness. The most commonly used method is vector stacking fusion, in which a single classifier is trained over the whole feature space; the statistical differences and separability complementarities among different features are rarely considered. Hence, appropriate feature fusion and classification of multisource features have become the key issues in the field of urban land-cover mapping. A novel decision fusion method based on a Bayesian network is proposed to handle the multisource features of VHR images, which provide redundant or complementary results. Subclassifiers are constructed separately on multiple feature sets and then embedded into a naive Bayesian network classifier (NBC). The final results are obtained by fusing all the subclassifiers within the NBC framework. Experiments on aerial and QuickBird images demonstrate that the performance of the proposed method is greatly improved compared with vector stacking, and significantly improved compared with multiple-classifier systems and a multiple-kernel-learning support vector machine. Moreover, the proposed method has advantages for feature fusion of VHR images in urban land-cover mapping.
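The core of naive Bayesian decision fusion can be sketched in a few lines: each subclassifier outputs per-class posteriors on its own feature set, and under the naive assumption that feature sets are conditionally independent given the class, the fused posterior is proportional to the product of subclassifier posteriors divided by the class prior raised to (k - 1). This is a generic illustration of the fusion rule, not the paper's full network.

```python
import numpy as np

def nbc_fuse(posteriors, priors):
    """Naive Bayes decision fusion for one sample.

    posteriors : (k, n_classes) array, one row per subclassifier.
    priors     : (n_classes,) class priors.

    Assuming feature sets are conditionally independent given the
    class: P(c | x1..xk) ∝ P(c)^(1-k) * prod_i P(c | xi).
    Computed in log space for numerical stability."""
    k = posteriors.shape[0]
    log_score = np.log(posteriors).sum(axis=0) + (1 - k) * np.log(priors)
    score = np.exp(log_score - log_score.max())
    return score / score.sum()
```

With uniform priors, an uninformative subclassifier (uniform posterior) leaves the fused result equal to the other subclassifier's output, which is the behavior one wants from a redundant/complementary fusion rule.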
In this paper, a direct 3D visualization environment with the DEM, DSM, and DOM derived from LiDAR point cloud data is presented to replace the stereo observation of photogrammetric image pairs for vector map extraction. A unified DEM, DSM, and DOM data model is designed to manage the huge data volume and then to construct the 3D visualization environment. With the 3D visualization of LiDAR data, the landscape can be recovered, and the vector map can thus be digitized directly. Contour lines can also be overlaid in the 3D visualization environment for further quality checking and control. The presented vector map extraction approach reduces the investment in professional equipment and software.
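The unified data model can be pictured as three co-registered raster layers sharing one georeference, so a pick in the 3D view resolves terrain elevation, surface elevation, and orthophoto colour with a single index. The class below is a minimal sketch of that idea; the names and the omission of tiling/paging (which the paper needs for huge volumes) are assumptions for illustration.

```python
import numpy as np

class UnifiedGridModel:
    """Co-registered DEM, DSM and DOM rasters on one georeferenced grid.

    origin : (x0, y0) ground coordinates of the grid corner.
    cell   : ground sample distance (same for all three layers).
    Tile-based paging for large data volumes is deliberately omitted."""

    def __init__(self, origin, cell, dem, dsm, dom):
        self.origin = np.asarray(origin, dtype=float)
        self.cell = float(cell)
        self.dem, self.dsm, self.dom = dem, dsm, dom

    def sample(self, x, y):
        """Return (terrain z, surface z, orthophoto value) at ground (x, y)."""
        col = int((x - self.origin[0]) / self.cell)
        row = int((y - self.origin[1]) / self.cell)
        return self.dem[row, col], self.dsm[row, col], self.dom[row, col]
```

A digitizing tool built on such a model would call `sample` under the cursor to place 3D vector vertices directly, which is the operation the paper substitutes for stereo observation.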
In this paper, a residential-area texture descriptor based on 3 × 3 region grey deviations is designed, and Gaussian blur is applied so that the residential area in the texture-feature image has a consistent grey value and limited contrast relative to the background, allowing a self-adaptive threshold to be obtained for image segmentation. A skeleton-processing step is also proposed to eliminate roads from the residential area. Experimental results on the semi-automatic extraction of residential areas from remote sensing imagery with 3 m ground resolution show that this technique is simple and effective for semi-automatic residential-area extraction and can meet the precision requirements of mapping and surveying with satellite images.
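The texture-then-blur-then-threshold chain can be sketched with plain NumPy: compute the grey deviation in each 3 × 3 neighbourhood, smooth the resulting texture image with a Gaussian blur, and pick a self-adaptive threshold. Otsu's method is used here as a stand-in for the paper's threshold selection, and the blur sigma is an assumed value.

```python
import numpy as np

def grey_deviation(img):
    """Texture measure: standard deviation of grey values over each
    3 x 3 neighbourhood (interior pixels only)."""
    win = np.stack([img[i:img.shape[0] - 2 + i, j:img.shape[1] - 2 + j]
                    for i in range(3) for j in range(3)])
    return win.std(axis=0)

def gaussian_blur(img, sigma=2.0):
    """Separable Gaussian blur with a truncated kernel (plain NumPy)."""
    r = int(3 * sigma)
    x = np.arange(-r, r + 1)
    k = np.exp(-x**2 / (2 * sigma**2))
    k /= k.sum()
    out = np.apply_along_axis(lambda m: np.convolve(m, k, mode='same'), 0, img)
    return np.apply_along_axis(lambda m: np.convolve(m, k, mode='same'), 1, out)

def otsu_threshold(img, bins=256):
    """Self-adaptive threshold by maximizing between-class variance
    (Otsu's method, used here in place of the paper's rule)."""
    hist, edges = np.histogram(img, bins=bins)
    p = hist / hist.sum()
    centers = (edges[:-1] + edges[1:]) / 2
    w0 = np.cumsum(p)
    m = np.cumsum(p * centers)
    mt = m[-1]
    with np.errstate(divide='ignore', invalid='ignore'):
        var_between = (mt * w0 - m)**2 / (w0 * (1 - w0))
    return centers[np.nanargmax(var_between)]
```

Textured (residential) regions produce high grey deviation while homogeneous background produces near-zero deviation, so after blurring, a single global threshold separates the two.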
This paper discusses the stereo photogrammetric analytic principle of binocular image sequences and derives the formulas of the motion parameter estimation model. An aberration correction model and a method for calibrating the 3D spatial relationship between sensors are proposed. On this foundation, the common principles, calculation models, and implementation procedures of binocular sequence imaging aided by GPS/INS navigation are summarized. A method for positioning and orientation by GPS/INS assisted by motion analysis is proposed. To handle the rapid drift that occurs when the GPS signal is lost, this method uses the constraints offered by the relative position and attitude from motion analysis to improve positioning and navigation precision and to restrain the drift. Experimental results for vehicle navigation under GPS blockage show the high navigation precision of the binocular-sequence-aided GPS/INS navigation technique.
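The role of the motion-analysis constraint during a GPS outage can be shown with a toy example: while fixes are available the track follows GPS, and during an outage the relative displacements estimated from the binocular sequence carry the position forward instead of letting it drift unconstrained. This is a deliberately simplified substitute for a full GPS/INS/vision filter, with invented variable names.

```python
import numpy as np

def constrained_track(gps_fixes, rel_displacements):
    """Bridge GPS outages with relative motion from stereo analysis.

    gps_fixes         : list of (x, y) fixes, or None during an outage.
    rel_displacements : rel_displacements[i] moves the platform from
                        epoch i to epoch i + 1 (e.g. visual odometry).

    Returns the (n, 2) constrained trajectory. Illustrative only; a
    real system would fuse these measurements in a Kalman filter."""
    track = [np.asarray(gps_fixes[0], dtype=float)]
    for i in range(1, len(gps_fixes)):
        if gps_fixes[i] is not None:
            track.append(np.asarray(gps_fixes[i], dtype=float))  # GPS available
        else:
            track.append(track[-1] + rel_displacements[i - 1])   # dead-reckon
    return np.array(track)
```

Even this crude scheme keeps the position error bounded by the visual-odometry error during the outage, which is the qualitative effect the paper's constraint provides.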
Bit compression is applied to increase the compression ratio and remove correlation, based on a 4-neighboring-pixel decomposition, and an integer orthogonal wavelet transformation is performed in combination with the contour features of multispectral images. An image restoration method based on the theory of the modulation transfer function (MTF) is given to improve image quality. Tests on SPOT remote sensing images show that the compression ratio is over 8, the average fidelity reaches 0.99, and the peak signal-to-noise ratio (PSNR) is over 42 dB.
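The two quality figures quoted above are standard quantities and straightforward to compute; a minimal sketch for 8-bit imagery (the peak value of 255 is the usual assumption, not stated in the abstract):

```python
import numpy as np

def psnr(original, restored, peak=255.0):
    """Peak signal-to-noise ratio in dB; higher means less distortion."""
    diff = original.astype(np.float64) - restored.astype(np.float64)
    mse = np.mean(diff**2)
    return float('inf') if mse == 0 else 10 * np.log10(peak**2 / mse)

def compression_ratio(raw_bytes, compressed_bytes):
    """Ratio of raw to compressed size; the SPOT tests report > 8."""
    return raw_bytes / compressed_bytes
```

A PSNR above 42 dB corresponds to a mean squared error below about 4.1 grey levels squared for 8-bit data, i.e. near-lossless reconstruction.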