Mohsen Abedi; Mohammad SaadatSeresht; Reza Shahhoseini
Abstract
Extended Abstract
Introduction
Nowadays, updating information collected from urban areas is of great importance, since it provides the basis for many fields of study such as land cover change and environmental studies. Remote sensing provides an opportunity to obtain information from urban areas at different levels of accuracy and is widely used in various change detection applications. Detecting changes in buildings, as one of the most important features of urban areas, is of particular importance. The large volume of remote sensing and photogrammetry data generated by an ever-increasing number of sources can normally be processed only with powerful and expensive systems to which laymen do not have access. The present study applies deep learning methods and performs the computationally heavy data processing on free cloud platforms to make this possible for the public.
Materials & Methods
Two case studies were selected for the present study. The first includes DSM and orthophoto images captured by drones over Mashhad in 2011 and 2016. The DSM and orthophoto images in the second case study were collected by drones over Aqda in Yazd province in 2015 and 2018. In accordance with the type of data used and the high computational volume required for processing, the present study applied a fuzzy clustering method to detect buildings with high computational speed and a deep learning method to detect their changes. An object-based method and fuzzy logic theory were used in the first step to classify features and detect buildings. In the second step, a deep learning method and a DSM differencing method were used to detect changes in buildings and to evaluate the results obtained from the deep learning method. In the first step, buildings were detected using descriptors extracted from terrestrial and non-terrestrial features, and the related decisions were made using fuzzy logic. In the second step, the DSM differencing method applied the masks extracted from the buildings in both epochs to the related DSMs to find their difference, and detected changes using an elevation threshold. In the deep learning method, a convolutional neural network model was trained to detect changes in buildings between the two epochs. Using the DSMs of the buildings in both epochs and a part of their interface, the network input layers were generated for training. The changes detected in the buildings by the differencing method were introduced as the output layer. After training, and with the entire interface in both epochs introduced as the input layer, the trained neural network detected changes in the buildings. The same process was performed once more using the difference between the two DSMs; in other words, a single input layer was used in the network, and the rest of the process was the same as before.
Finally, the changes detected by the neural network were compared with the changes detected by the DSM differencing method.
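The DSM differencing step described above can be sketched as follows. This is a minimal illustration, not the authors' implementation: the 2.0 m threshold, the array layout, and the toy scene are assumptions.

```python
import numpy as np

def dsm_change_mask(dsm_t1, dsm_t2, building_mask_t1, building_mask_t2,
                    height_threshold=2.0):
    """Detect building changes by differencing two co-registered DSMs.

    All inputs are 2-D arrays of the same shape. The threshold (in metres)
    is a hypothetical value, not taken from the paper.
    """
    # Restrict the comparison to pixels classified as building in either epoch
    combined_mask = building_mask_t1 | building_mask_t2
    diff = np.abs(dsm_t2 - dsm_t1)
    # A pixel is flagged as changed if its height difference exceeds the threshold
    return combined_mask & (diff > height_threshold)

# Toy example: a 4x4 scene in which one small building is demolished
dsm_t1 = np.zeros((4, 4))
dsm_t1[1:3, 1:3] = 6.0          # a 6 m building in epoch 1
dsm_t2 = np.zeros((4, 4))       # flattened in epoch 2
mask_t1 = dsm_t1 > 0
mask_t2 = dsm_t2 > 0
changes = dsm_change_mask(dsm_t1, dsm_t2, mask_t1, mask_t2)
print(changes.sum())            # → 4 changed pixels
```

The changed-pixel mask produced here plays the role of the ground-truth output layer used to train the network.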
Results & Discussion
In the first step, buildings were detected and the images were classified using fuzzy logic. The overall accuracy of the first-epoch classification in Mashhad was 94.6%, indicating the higher accuracy of object-based methods compared with pixel-based methods. The overall accuracy of the first epoch in Aqda was 95.5%. The neural network method detected changes in buildings with an overall accuracy of 90%. According to the ground truth used in network training (both using the DSMs as the input layer and using the difference between the epochs as the input layer), the results indicated that the deep learning method is highly accurate in one-dimensional convolution mode. Moreover, the second mode used the difference between the DSMs of the two epochs; thus, many areas without a height change were removed in both epochs, and the network was trained more appropriately and accurately.
Conclusion
The necessity of extracting features, especially urban features such as buildings, and identifying their changes over time has been investigated in the present study. Due to the high computational volume of modern remote sensing and photogrammetry data and the highly expensive systems required for their processing, a new method was presented to solve this problem. Considering the type of data used and the complexity of the features, object-based methods were selected instead of pixel-based methods to identify features and buildings. A deep learning method was used to detect changes in buildings, and it was compared with the DSM differencing method. A one-dimensional convolutional neural network was used in the deep learning method, and two different modes were used to train the network and predict changes. In the first mode, the DSMs extracted from the buildings in each epoch were used as the input layers, while in the second mode, the difference between the DSMs was introduced to the network as a single input layer; in both modes, the network was trained using ground truth for changed and unchanged areas obtained from the DSM differencing method. Following training, changes were predicted using the trained network. Much better results were obtained in the second mode, in which the difference between the DSMs was used.
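The two input modes compared in this study can be illustrated with a small sketch of how the network input tensors would be assembled. The shapes and channel ordering are assumptions for illustration, not the authors' exact configuration.

```python
import numpy as np

def make_network_inputs(dsm_t1, dsm_t2, mode="stacked"):
    """Build the input tensor for the change-detection network.

    mode="stacked": both epoch DSMs as two input channels (first mode).
    mode="difference": a single channel holding DSM2 - DSM1 (second mode).
    """
    if mode == "stacked":
        return np.stack([dsm_t1, dsm_t2], axis=-1)      # shape (H, W, 2)
    elif mode == "difference":
        return (dsm_t2 - dsm_t1)[..., np.newaxis]       # shape (H, W, 1)
    raise ValueError(f"unknown mode: {mode}")

dsm_a = np.zeros((8, 8))
dsm_b = np.ones((8, 8))
print(make_network_inputs(dsm_a, dsm_b, "stacked").shape)     # (8, 8, 2)
print(make_network_inputs(dsm_a, dsm_b, "difference").shape)  # (8, 8, 1)
```

The second mode halves the input dimensionality and, as the study reports, pre-removes unchanged areas, which is consistent with its better training behavior.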
Abolfazl Sharifi; Mohammad SaadatSeresht
Abstract
Extended Abstract
Introduction
Today, with the improvement of UAV technology as a spatial data collection platform, using the UAV photogrammetric method for mapping purposes has become more popular. The advantages of this method include cost-effectiveness, speeding up the project process, high resolution of spatial data, and production of various spatial products such as orthophoto mosaics, digital surface and ground models, 3D virtual models, and 3D maps. From a quality point of view, in addition to the network design in UAV photogrammetry projects, the camera and its accurate calibration are essential. Metric cameras have strong geometry, and their calibration parameters are known and stable with the smallest possible values. Despite the high accuracy of metric camera outputs, it is practically impossible to use them in ultra-light public drones due to their weight, size, and cost. Therefore, non-metric and unstable digital cameras are now conventional in UAV photogrammetric systems. However, many efforts are being made to reduce this weakness by improving the geometric quality of lightweight and inexpensive non-metric cameras. Despite these efforts, non-metric cameras do not yet give acceptable products without practical considerations such as reducing flight altitude, increasing image sidelap and overlap, and using a high density of ground control points, all of which significantly increase cost and time. The main problem with these non-metric cameras is the weak geometry of their components, which causes high instability in the camera calibration parameters. This highlights the importance of proper geometric calibration of these cameras.
Materials & Methods
So far, several distortion models have been used to calibrate metric cameras, such as the Brown model with a maximum of 12 parameters, including principal distance, principal point coordinates, lens radial and decentering distortions, and affinity. These parameters are estimated simultaneously in a bundle adjustment with self-calibration. Therefore, this model considers fixed physical parameters for the geometric modeling of the camera, shared by all images acquired in a photogrammetric block. If non-metric camera geometry is not modeled by a dynamic model with local spatial and temporal distortion parameters, some local systematic errors remain in the image observations. These systematic errors bias the estimation of the unknown parameters in the least squares adjustment. Although self-calibration significantly improves the results of non-metric cameras in UAV photogrammetry, some errors remain in the 3D reconstruction due to the low strength of the observation equations, which stems from the dynamic nature of the camera distortion. The dynamic image distortions lead to parallax in stereoscopic vision and horizontal/vertical steps at the boundaries of connected 3D models. This paper proposes a post-processing method to reduce dynamic image distortions after conventional self-calibration of a non-metric camera with the Brown model. The proposed method is based on local modeling of the image residuals using a finite element method. The data used in this study are photogrammetric drone images taken by ILCE-7RM2T, FC6310, and FC300S cameras. The proposed algorithm was implemented in the MATLAB programming environment, and Agisoft Metashape software was used for the initial processing.
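The Brown radial and decentering distortion terms mentioned above can be sketched as follows. The coefficient names (k1, k2, k3, p1, p2) follow common photogrammetric convention; the values in the example are placeholders, and in practice the coefficients come from bundle adjustment with self-calibration.

```python
import numpy as np

def brown_distortion(x, y, k1=0.0, k2=0.0, k3=0.0, p1=0.0, p2=0.0):
    """Apply Brown radial + decentering distortion to image coordinates
    (x, y), measured from the principal point in normalized units.
    """
    r2 = x * x + y * y
    # Radial term: polynomial in r^2 with coefficients k1..k3
    radial = 1.0 + k1 * r2 + k2 * r2**2 + k3 * r2**3
    # Decentering (tangential) terms with coefficients p1, p2
    dx = x * radial + p1 * (r2 + 2 * x * x) + 2 * p2 * x * y
    dy = y * radial + p2 * (r2 + 2 * y * y) + 2 * p1 * x * y
    return dx, dy

# With all coefficients zero, the mapping is the identity
print(brown_distortion(0.1, 0.2))  # (0.1, 0.2)
```

A model of this kind is static: one set of coefficients for the whole block, which is exactly why local, image-to-image (dynamic) distortions of an unstable camera cannot be absorbed by it.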
Results & Discussion
As mentioned, the proposed algorithm is a post-processing step that reduces the image residuals and increases the geometric compatibility of 3D stereovision models. One of the critical indicators in the photogrammetric mapping production line is the quality of stereoscopic vision and the vertical steps between connected 3D models, because photogrammetric map production requires stereo vision, and the size of the model steps serves as a criterion for evaluating the level of image geometric distortion. It can be concluded that this idea is very effective for non-metric cameras with high geometric instability. The results of our experiments on UAV photogrammetry data with low camera geometric stability indicate a 60% reduction in the vertical steps of the models in stereoscopic vision and a 70% reduction in image residuals. This leads to higher geometric quality of the digital elevation model, 3D model, orthophoto, and map in the 3D stereoscopic vision process. On the other hand, using this algorithm for non-metric cameras with higher geometric stability has a smaller effect on the results; in our experiments, it was shown that the vertical steps between 3D models can be reduced by 15% to 20%. However, consecutive stereo models with abrupt steps still occur with this type of camera, and applying the method further reduces the geometric errors in stereoscopic vision if the computational cost is acceptable.
Conclusion
The results of our experiments on UAV photogrammetry data with low camera geometric stability indicate a 70% reduction in image residuals and a 60% reduction in the vertical steps of the models in stereoscopic vision. In this paper, the behavior of the image residuals, the rate of model step reduction, and the processing time were investigated for different dimensions of the distortion grid, and grid dimensions of 150 to 200 pixels are recommended for applying the proposed method. Suggestions for further research fall into three areas. First, various factors, such as the weight of the observations and the weight of the constraint equations, can affect the estimation of the distortion grid; these weights can be estimated with the variance component estimation (VCE) method. Another point to consider in completing the proposed solution is to apply the temporal dependence between the distortion grids of consecutive images. Also, although the proposed method uses the finite element idea as post-processing, it would be more accurate to estimate this distortion grid simultaneously with the bundle adjustment.
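The finite-element idea of a per-image correction grid can be sketched as a bilinear interpolation of residual corrections over grid nodes. This is a sketch of the general technique, not the authors' exact formulation; the bilinear scheme is an assumption, and the 200-pixel node spacing matches the 150 to 200 pixel range recommended above.

```python
import numpy as np

def grid_correction(u, v, grid_du, grid_dv, cell=200.0):
    """Bilinearly interpolate a residual-correction grid at image point (u, v).

    grid_du / grid_dv hold the correction (in pixels) at each grid node;
    cell is the node spacing in pixels. Valid for interior points only.
    """
    gx, gy = u / cell, v / cell
    i, j = int(gx), int(gy)          # lower-left node of the containing cell
    fx, fy = gx - i, gy - j          # fractional position inside the cell

    def bilerp(g):
        # Weighted average of the four surrounding node values
        return ((1 - fx) * (1 - fy) * g[j, i] + fx * (1 - fy) * g[j, i + 1]
                + (1 - fx) * fy * g[j + 1, i] + fx * fy * g[j + 1, i + 1])

    return bilerp(grid_du), bilerp(grid_dv)

# A constant grid must return the constant everywhere
grid = np.full((3, 3), 0.5)
du, dv = grid_correction(100.0, 100.0, grid, grid)
print(du, dv)  # 0.5 0.5
```

In a post-processing setting like the one proposed here, the node values would be fitted to the image residuals left over after Brown-model self-calibration.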
Ali Erfanzadeh; Mohammad Saadatseresht
Abstract
Extended Abstract
Introduction
Nowadays, UAV photogrammetry has become one of the most effective methods of collecting spatial data among terrestrial and aerial mapping technologies in terms of time, cost, quality, and variety of outputs. Because the quality of UAV photogrammetry products depends on setting the network design parameters according to the existing conditions and limitations, awareness of the behavior and impact of the network design parameters on the quality of 3D reconstruction is a very important issue for achieving optimal output quality. However, due to the time and high cost of conducting such a study with huge volumes of real data, no comprehensive research has yet been conducted to measure the behavior of the effective parameters in network design and 3D reconstruction. Various parameters, including camera field of view, positioning error and imaging tilt in flight navigation, flight altitude and designed ground pixel dimensions, amount of image sidelap and overlap, image observation noise due to image quality, and aerial triangulation error, are known as the most important parameters of UAV photogrammetric network design in the process of preparing a map from aerial images. In this paper, a simulation method is used to investigate the effect and behavior of the above parameters on the quality of three-dimensional reconstruction.
Materials & Methods
In the proposed method, implemented in the MATLAB software environment, imaging is simulated from a point with known 3D coordinates using the collinearity equations and the values set for the network design parameters and their standard deviations, chosen according to reality and expert experience.
Then, by applying random and systematic errors to the image observations and aerial triangulation parameters, the collinearity equations of the photographic observations of the desired point are formed, three-dimensional reconstruction is performed using least squares adjustment of the nonlinear equations, and its quality is evaluated by the Monte Carlo method. To achieve results with high reliability, the quality of three-dimensional reconstruction is evaluated in five modes, namely ideal, excellent, good, average, and bad, according to expert opinion in setting the values of each parameter.
Results & Discussion
The results of this study show that the most effective parameters in the quality of three-dimensional reconstruction under ideal conditions are camera instability, error of the exterior orientation parameters, and image quality, respectively, which gradually give way to flight altitude, imaging coverage, and camera field of view under bad conditions. The results for flight navigation error show that increased imaging platform instability has no significant effect on the average accuracy of 3D reconstruction; however, the variation of accuracy across different locations increases by up to 20% due to the heterogeneity of coverage and the visibility of different parts of the ground in the imaging network. The results also show that with increasing geometric instability of the non-metric camera, the accuracy of 3D reconstruction decreases linearly; in this regard, the worse the imaging conditions and camera quality, the slower the rate of decrease. It has also been shown that with increasing image observation error, which depends on image quality, the accuracy of 3D reconstruction decreases linearly. The study of the aerial triangulation parameters shows that the three-dimensional reconstruction error increases linearly with increasing tie-point matching error.
In addition, as the focal length increases in the fixed flight altitude mode, the horizontal accuracy increases in proportion to the inverse magnification, and as the focal length decreases, the altitude accuracy decreases linearly. In the fixed ground sampling distance (GSD) mode, the horizontal error of 3D reconstruction slowly decreases by up to 20%, while the height error increases, with increasing altitude and decreasing geometric strength of the network, by a factor of half the magnification. The results also show that, unlike in traditional photogrammetry, here the horizontal and altitude errors of the 3D reconstruction increase linearly with increasing flight altitude. The study of the sidelap and overlap parameters shows that image sidelap and overlap can change the planimetric error by up to 10 times and the height error of the complete three-dimensional reconstruction by up to 5 times.
Conclusion
This study, while introducing the effective parameters in three-dimensional reconstruction by the UAV photogrammetric method, has investigated the behavior and effect of these parameters on the quality of three-dimensional reconstruction in a simulation environment; that is, how the quality of the reconstruction changes with small changes to each parameter, from half to twice its standard value. The closer this simulation is to reality, the more practical the results will be; naturally, this complicates the simulation and increases the computational volume. Although this simulation is not entirely consistent with real conditions, it can provide a kind of behavioral measurement of the parameters that serves as complementary research to routine trial-and-error investigations.
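The simulate-perturb-reconstruct loop described in the Materials & Methods can be sketched in a heavily simplified form. The nadir-camera geometry (identity rotation), the known-ground-height inversion instead of a full nonlinear least squares intersection, and all parameter values here are illustrative assumptions; the paper's implementation is in MATLAB with the full collinearity adjustment.

```python
import numpy as np

def project(point, cam, f=0.05):
    """Collinearity equations for a nadir-looking camera (rotation = identity),
    focal length f in metres. Returns image coordinates in metres."""
    d = point - cam
    return np.array([-f * d[0] / d[2], -f * d[1] / d[2]])

def monte_carlo_xy_sigma(point, cams, f=0.05, pix_sigma=2e-6, n=2000, seed=0):
    """Perturb image observations with Gaussian noise and re-intersect,
    assuming the ground height of the point is known, then report the
    planimetric spread of the reconstructed positions (Monte Carlo)."""
    rng = np.random.default_rng(seed)
    estimates = []
    for _ in range(n):
        xy_per_image = []
        for cam in cams:
            obs = project(point, cam, f) + rng.normal(0.0, pix_sigma, 2)
            scale = (point[2] - cam[2]) / f       # ground units per image unit
            xy_per_image.append(cam[:2] - obs * scale)  # invert collinearity
        estimates.append(np.mean(xy_per_image, axis=0))
    return np.std(estimates, axis=0)

# Two cameras at 100 m flight altitude observing a ground point
cams = [np.array([0.0, 0.0, 100.0]), np.array([30.0, 0.0, 100.0])]
sigma = monte_carlo_xy_sigma(np.array([10.0, 5.0, 0.0]), cams)
print(sigma)  # roughly pix_sigma * (altitude / f) / sqrt(#images) per axis
```

Varying one design parameter at a time (altitude, focal length, observation noise, number of overlapping images) and re-running this loop reproduces, in miniature, the behavioral measurement the study performs.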