Article Type: Research Paper

Authors

1 M.Sc. student in Photogrammetry, School of Surveying and Geospatial Engineering, College of Engineering, University of Tehran

2 Associate Professor, School of Surveying and Geospatial Engineering, University of Tehran

Abstract

Camera calibration is an essential component of any photogrammetric project. To date, several distortion models have been used for metric camera calibration, such as the Brown model, which estimates up to 12 parameters (principal distance, principal point coordinates, and physical image distortions such as radial and decentering distortions) in a bundle adjustment with self-calibration. This approach has also been applied to unstable, non-metric cameras in UAV photogrammetry; although it significantly improves the three-dimensional object coordinates, distortions caused by the geometric instability of the camera still remain in the image coordinates. These residual distortions lead to parallax and to vertical steps between 3D models in stereoscopic viewing. This paper proposes a post-processing method for reducing the residual image distortions that remain after self-calibration of a non-metric camera in UAV photogrammetry projects. The proposed method models the image residuals with a finite element approach. The data used in this research are UAV photogrammetric images acquired by ILCE-7RM2, FC6310, and FC300S cameras. The proposed algorithm was implemented in the MATLAB programming environment, and Agisoft Metashape software was used for the initial processing. The results of tests performed on several UAV photogrammetry datasets with different specifications and scales show a reduction of the image residuals by up to 70% after the distortions are modeled and corrected on the images. Moreover, stereoscopic viewing of the corrected images shows a 60% reduction in the steps between stereo models, which leads to a higher geometric quality of digital elevation models, orthophotos, and maps produced by 3D stereoscopic plotting.

Keywords

Article Title [English]

Modeling of image residuals from aerial triangulation of a UAV photogrammetric network and its evaluation

Authors [English]

  • Abolfazl Sharifi 1
  • Mohammad SaadatSeresht 2

1 M.Sc. student in School of Surveying and Geospatial Engineering, College of Engineering, University of Tehran

2 Associate Professor in School of Surveying and Geospatial Engineering, College of Engineering, University of Tehran

Abstract [English]

 
Extended Abstract
Introduction
Today, with the improvement of UAV technology as a spatial data collection platform, the use of UAV photogrammetry for mapping purposes has become increasingly popular. The advantages of this method include cost-effectiveness, faster project delivery, high spatial resolution, and the production of various spatial products such as orthophoto mosaics, digital surface and terrain models, 3D virtual models, and 3D maps. From a quality point of view, in addition to the network design of a UAV photogrammetry project, the camera and its accurate calibration are essential. Metric cameras have a strong geometry, and their calibration parameters are known and stable with the smallest possible values. Despite the high-accuracy outputs of metric cameras, it is practically impossible to use them on ultra-light consumer drones because of their weight, size, and cost. Therefore, non-metric and unstable digital cameras are now conventional in UAV photogrammetric systems. Many efforts are being made to reduce this weakness by improving the geometric quality of lightweight and inexpensive non-metric cameras. Despite these efforts, non-metric cameras still do not yield acceptable products without practical measures such as reducing the flight altitude, increasing image side lap and overlap, and using a high density of ground control points, all of which significantly increase cost and time. The main problem with these non-metric cameras is the weak geometry of their components, which causes high instability in the camera calibration parameters. This highlights the importance of proper geometric calibration of these cameras.
 
Materials & Methods
So far, several distortion models have been used to calibrate metric cameras, such as the Brown model with a maximum of 12 parameters, including the principal distance, principal point coordinates, lens radial and decentering distortions, and affinity terms. These parameters are estimated simultaneously in a bundle adjustment with self-calibration. The model therefore assumes a single, fixed set of physical parameters for the geometric modeling of the camera, applied to all images acquired in a photogrammetric block. If the geometry of a non-metric camera is not described by a dynamic model with local spatial and temporal distortion parameters, some local systematic errors remain in the image observations. These systematic errors bias the estimation of the unknown parameters in the least-squares adjustment. Although self-calibration significantly improves the results of non-metric cameras in UAV photogrammetry, some errors remain in the 3D reconstruction because of the low strength of the observation equations, which stems from the dynamic nature of the camera distortion model. The dynamic image distortions lead to parallax in stereoscopic vision and to horizontal/vertical steps at the boundaries of connected 3D models. This paper proposes a post-processing method to reduce dynamic image distortions after conventional self-calibration of a non-metric camera with the Brown model. The proposed method is based on local modeling of the image residuals using a finite element method. The data used in this study are photogrammetric drone images taken by ILCE-7RM2, FC6310, and FC300S cameras. The proposed algorithm has been implemented in the MATLAB programming environment, and Agisoft Metashape software has been used for the initial processing.
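The finite-element residual model described above can be pictured as a regular grid of correction vectors laid over the image plane, with the correction at any pixel obtained by bilinear interpolation from the four surrounding grid nodes. The following is a minimal, illustrative sketch of that interpolation step only; the grid spacing, the node values, and all names are assumptions for illustration and are not taken from the authors' MATLAB implementation.

```python
import numpy as np

def correct_image_point(xy, grid_dx, grid_dy, cell_size):
    """Apply a grid-based (bilinear finite-element) residual correction to one image point.

    xy        : (x, y) pixel coordinates of the observed image point
    grid_dx   : 2D array of x-corrections at the grid nodes (rows x cols)
    grid_dy   : 2D array of y-corrections at the grid nodes (same shape)
    cell_size : grid spacing in pixels
    """
    x, y = xy
    # Index of the lower-left node of the cell containing the point.
    i = int(np.clip(y // cell_size, 0, grid_dx.shape[0] - 2))
    j = int(np.clip(x // cell_size, 0, grid_dx.shape[1] - 2))
    # Local coordinates inside the cell, in [0, 1].
    u = x / cell_size - j
    v = y / cell_size - i
    # Bilinear shape functions of the four surrounding nodes.
    weights = [(1 - u) * (1 - v), u * (1 - v), (1 - u) * v, u * v]
    nodes = [(i, j), (i, j + 1), (i + 1, j), (i + 1, j + 1)]
    dx = sum(w * grid_dx[r, c] for w, (r, c) in zip(weights, nodes))
    dy = sum(w * grid_dy[r, c] for w, (r, c) in zip(weights, nodes))
    return x - dx, y - dy  # corrected image coordinates

# Hypothetical example: a 5 x 7 node grid with 200 px spacing over a 1200 x 800 px image.
rng = np.random.default_rng(0)
grid_dx = rng.normal(0.0, 0.3, size=(5, 7))  # node corrections in pixels
grid_dy = rng.normal(0.0, 0.3, size=(5, 7))
print(correct_image_point((512.4, 377.9), grid_dx, grid_dy, 200.0))
```

In this reading, estimating the node corrections from the image residuals of the aerial triangulation is the modeling step, and the interpolation above is the correction step applied to each observation.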
 
Results & Discussion
As mentioned, the proposed algorithm is a post-processing step that reduces the image residuals and increases the geometric compatibility of the 3D stereo models. One of the critical indicators in the photogrammetric mapping production line is the quality of stereoscopic vision and the magnitude of the vertical steps between connected 3D models, because photogrammetric map production requires stereo viewing and the size of the model steps serves as a criterion for the level of geometric image distortion. It can be concluded that the proposed approach is very effective for non-metric cameras with high geometric instability. The results of our experiments on UAV photogrammetry data with low camera geometric stability indicate a 60% reduction in the vertical steps of the models in stereoscopic vision and a 70% reduction in the image residuals. This leads to a higher geometric quality of digital elevation models, 3D models, orthophotos, and maps produced by 3D stereoscopic plotting. On the other hand, applying the algorithm to non-metric cameras with higher geometric stability has a smaller effect on the results; in our experiments, the vertical steps between 3D models were reduced by 15% to 20%. Nevertheless, consecutive stereo models with abrupt steps still occur with this type of camera, so the method still improves the geometric errors in stereoscopic vision if the additional computational cost is acceptable.
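As a rough illustration of the evaluation criterion described above, the vertical step between two adjacent stereo models can be quantified by comparing the elevations that the two models assign to the same check points in their overlap area. The sketch below is a simplified, hypothetical example (the check-point values, names, and the median-based statistic are assumptions, not the authors' exact procedure).

```python
import numpy as np

def vertical_step(z_model_a, z_model_b):
    """Estimate the vertical step between two overlapping stereo models.

    z_model_a, z_model_b : elevations (metres) of the same check points in the
                           overlap area, measured on each model.
    Returns the systematic elevation offset and a robust spread (MAD).
    """
    dz = np.asarray(z_model_b) - np.asarray(z_model_a)
    step = np.median(dz)                   # systematic offset between the models
    spread = np.median(np.abs(dz - step))  # robustness check on that offset
    return step, spread

# Hypothetical check-point elevations before and after the proposed correction.
z_a = np.array([102.41, 98.77, 105.12, 100.03])
z_b_before = z_a + 0.25 + np.random.default_rng(1).normal(0, 0.02, 4)
z_b_after = z_a + 0.10 + np.random.default_rng(2).normal(0, 0.02, 4)
print("step before:", vertical_step(z_a, z_b_before)[0])
print("step after :", vertical_step(z_a, z_b_after)[0])
```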
 
Conclusion
The results of our experiments on UAV photogrammetry data with low camera geometric stability indicate a 70% reduction in image residuals and a 60% reduction in the vertical steps of the models in stereoscopic vision. In this paper, the behavior of the image residuals, the rate of model step reduction, and the processing time were investigated for different distortion grid dimensions, and grid dimensions of 150 to 200 pixels are recommended for applying the proposed method. Suggestions for further research fall into three areas. First, factors such as the weights of the observations and of the constraint equations affect the estimation of the distortion grid, and these weights could be estimated with variance component estimation (VCE). Second, the proposed solution could be extended by modeling the temporal dependence between the distortion grids of consecutive images. Third, although the proposed method applies the finite element idea as a post-processing step, it would be more rigorous to estimate this distortion grid simultaneously with the bundle adjustment.

Keywords [English]

  • Camera calibration
  • Non-metric cameras
  • Camera instability
  • Dynamic model of camera distortions
  • Vertical steps of the models
1- Agisoft LLC. (2020). Agisoft Metashape user manual: Standard edition.
2- Babapour, H., Mokhtarzade, M., & Valadan Zoej, M. J. (2017). A novel post-calibration method for digital cameras using image linear features. International Journal of Remote Sensing, 38(8-10), 2698-2716.  
3- Chio, S.-H. (2016). VBS RTK GPS-assisted self-calibration bundle adjustment for aerial triangulation of fixed-wing UAS images for updating topographic maps. Boletim de Ciências Geodésicas, 22, 665-684.   
4- Dang, T., Hoffmann, C., & Stiller, C. (2009). Continuous stereo self-calibration by camera parameter tracking. IEEE Transactions on image processing, 18(7), 1536-1550.        
5- Brown, D. C. (1971). Close-range camera calibration. Photogrammetric Engineering, 37(8), 855-866.
6- Dumitru, P. D., Plopeanu, M., & Badea, D. (2013). Comparative study regarding the methods of interpolation. Recent advances in geodesy and Geomatics engineering, 1, 45-52.        
7- Faugeras, O. D., Luong, Q.-T., & Maybank, S. J. (1992). Camera self-calibration: Theory and experiments. Paper presented at the European conference on computer vision.    
8- Fraser, C., & Brown, D. (1986). Industrial photogrammetry: New developments and recent applications. The Photogrammetric Record, 12(68), 197-217.  
9- Fraser, C. S., & Al-Ajlouni, S. (2006). Zoom-dependent camera calibration in digital close-range photogrammetry. Photogrammetric Engineering & Remote Sensing, 72(9), 1017-1026.     
10- Gruen, A., & Beyer, H. A. (2001). System calibration through self-calibration. In Calibration and orientation of cameras in computer vision (pp. 163-193): Springer.      
11- Habed, A., & Boufama, B. (2008). Camera self-calibration from bivariate polynomials derived from Kruppa’s equations. Pattern Recognition, 41(8), 2484-2492.       
12- Hartley, R. I. (1993). Euclidean reconstruction from uncalibrated views. Paper presented at the Joint European-US workshop on applications of invariance in computer vision.             
13- Hastedt, H., Luhmann, T., & Tecklenburg, W. (2002). Image-variant interior orientation and sensor modelling of high-quality digital cameras. International Archives of Photogrammetry, Remote Sensing and Spatial Information Sciences, 34(5), 27-32.
14- Heyden, A., & Astrom, K. (1997). Euclidean reconstruction from image sequences with varying and unknown focal length and principal point. Paper presented at the Proceedings of IEEE Computer Society Conference on Computer Vision and Pattern Recognition.        
15- Huang, W., Jiang, S., & Jiang, W. (2021). Camera Self-Calibration with GNSS Constrained Bundle Adjustment for Weakly Structured Long Corridor UAV Images. Remote Sensing, 13(21), 4222.
16- Kruppa, E. (1913). Zur Ermittlung eines Objektes aus zwei Perspektiven mit innerer Orientierung. Sitzungsberichte der Akademie der Wissenschaften, Wien, Math.-Naturw. Kl., Abt. IIa, 122, 1939-1948.
17- Lei, C., Wu, F., Hu, Z., & Tsui, H.-T. (2002). A new approach to solving kruppa equations for camera self-calibration. Paper presented at the Object recognition supported by user interaction for service robots.          
18- Lourakis, M. I., & Deriche, R. (1999). Camera self-calibration using the singular value decomposition of the fundamental matrix: From point correspondences to 3D measurements. Research Report, INRIA.
19- Maybank, S. J., & Faugeras, O. D. (1992). A theory of self-calibration of a moving camera. International journal of computer vision, 8(2), 123-151.         
20- Pollefeys, M., Koch, R., & Van Gool, L. (1999). Self-calibration and metric reconstruction inspite of varying and unknown intrinsic camera parameters. International journal of computer vision, 32(1), 7-25.          
21- Pollefeys, M., & Van Gool, L. (1999). Stratified self-calibration with the modulus constraint. IEEE Transactions on Pattern Analysis and Machine Intelligence, 21(8), 707-724.   
22- Reddy, J. N. (2019). Introduction to the finite element method: McGraw-Hill Education.          
23- Xu, G., Terai, J.-i., & Shum, H.-Y. (2000). A linear algorithm for camera self-calibration, motion and structure recovery for multi-planar scenes from two perspective images. Paper presented at the Proceedings IEEE Conference on Computer Vision and Pattern Recognition. CVPR 2000 (Cat. No. PR00662).            
24- Zhang, Z. (2000). A flexible new technique for camera calibration. IEEE Transactions on Pattern Analysis and Machine Intelligence, 22(11), 1330-1334.     
25- Zienkiewicz, O. C., Taylor, R. L., Nithiarasu, P., & Zhu, J. (1977). The finite element method (Vol. 3): McGraw-Hill, London.