Improving precision of augmented reality using targets - Case study: Visualization of underground infrastructure

Document Type: Research Paper


1 Ph.D. Student, Faculty of Geodesy and Geomatics Engineering, K.N. Toosi University of Technology, Tehran, Iran

2 Assistant Professor, Faculty of Geodesy and Geomatics Engineering, K.N. Toosi University of Technology, Tehran, Iran



Extended Abstract
Underground infrastructure such as electricity, gas, telecommunications, water, and sewage networks is managed by different organizations. Since most projects in these organizations require drilling, and imprecise excavation endangers the infrastructure and results in extensive financial and physical losses, drilling projects require accurate information about the status of the infrastructure. However, obtaining the accurate position of facilities such as pipes and cables is difficult because they are concealed underground. Nowadays, ubiquitous computing and new developments in Geospatial Information Systems (GIS) can offer an appropriate solution to such problems. This new generation of GIS is called the Ubiquitous Geospatial Information System (UBGIS). New technologies such as Augmented Reality (AR) can visualize this infrastructure on platforms like smartphones or tablets. Such technologies display the spatial and descriptive attributes of these utilities more interactively, and can thus be applied as a modern solution to this problem. One of the major features of AR is identifying and locating real-world objects with respect to the user's head or a camera. For accurate augmented reality, the position and orientation (pose) of the camera should be estimated with high accuracy; therefore, the exterior orientation parameters of the camera are required for AR and tracking. Different methods are used to calculate these parameters. One of the most common methods applies sensors embedded in smartphones or tablets, such as the Global Positioning System (GPS) receiver and the Inertial Measurement Unit (IMU), which includes accelerometers, gyroscopes, and magnetic sensors (compasses). Although simple and fast, this method is not suitable for accuracy-critical applications, because the sensors of mobile phones or tablets cannot provide such high accuracy.
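Raw accelerometer and magnetometer streams of the kind described above are noisy, and a common first step before using them for pose estimation is a simple first-order low-pass filter (the Methods section applies such a filter to the sensor data). A minimal sketch in Python; the class name and smoothing factor are illustrative assumptions, not taken from the article:

```python
class LowPassFilter:
    """First-order IIR low-pass filter (exponential smoothing):
    y[n] = y[n-1] + alpha * (x[n] - y[n-1]).
    Smaller alpha -> stronger smoothing but more lag."""

    def __init__(self, alpha=0.2):  # alpha value chosen for illustration only
        if not 0.0 < alpha <= 1.0:
            raise ValueError("alpha must be in (0, 1]")
        self.alpha = alpha
        self.state = None

    def update(self, sample):
        """Filter one multi-axis sensor reading, e.g. (ax, ay, az)."""
        if self.state is None:
            self.state = list(sample)  # initialize with the first reading
        else:
            self.state = [s + self.alpha * (x - s)
                          for s, x in zip(self.state, sample)]
        return tuple(self.state)
```

Feeding a noisy stream through the filter yields values much closer to the underlying signal than the raw samples, at the cost of a small response lag.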
The vision-based (sometimes called image-based) method is another way of estimating exterior orientation parameters. In this method, still or video images are used to determine the position and orientation of the camera. The method is more complex and slower, but more accurate, than the sensor-based one.
Materials and Methods
Regarding the previously mentioned issues, the present article aims to visualize underground infrastructure using both sensor-based and vision-based approaches to augmented reality. Since the sensors embedded in a mobile phone or tablet do not provide the required accuracy (a few centimeters, considering the diameter of pipes and the width of streets and pavements), a novel vision-based approach is proposed. In this method, image-based techniques and a special kind of target, known as a coded target, are used to estimate the camera's position and orientation through the space resection method. In photogrammetry, space resection determines the spatial position and orientation of an image based on the positions of ground control points appearing in the image. Since space resection is a nonlinear problem, existing methods linearize the collinearity condition and use an iterative least squares process to determine the final solution. The process also requires initial approximate values of the unknown parameters, some of which must be estimated using another least squares solution. To obtain suitable initial values for the space resection procedure, data received from GPS, accelerometers, and magnetic sensors are used, and a low-pass filter is applied to reduce noise and increase precision. Then, using the improved camera pose parameters, the resulting virtual model is overlaid at its correct real-world planimetric location. The planimetric coordinates are shown graphically on the ground, and the Z coordinate (depth) is presented as a descriptive attribute.
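The iterative space resection described above can be sketched as follows. This is a minimal illustration, not the article's implementation: it uses a numerical Jacobian in place of analytically linearized collinearity equations, assumes an ideal camera (principal point at the origin, no distortion), and all function names and values are invented for the example.

```python
import numpy as np

def rotation_matrix(omega, phi, kappa):
    """R = Rz(kappa) @ Ry(phi) @ Rx(omega): ground-to-camera rotation."""
    co, so = np.cos(omega), np.sin(omega)
    cp, sp = np.cos(phi), np.sin(phi)
    ck, sk = np.cos(kappa), np.sin(kappa)
    Rx = np.array([[1, 0, 0], [0, co, so], [0, -so, co]])
    Ry = np.array([[cp, 0, -sp], [0, 1, 0], [sp, 0, cp]])
    Rz = np.array([[ck, sk, 0], [-sk, ck, 0], [0, 0, 1]])
    return Rz @ Ry @ Rx

def project(params, gcp_xyz, f):
    """Collinearity equations: map ground control points to image coordinates
    for pose params (Xc, Yc, Zc, omega, phi, kappa)."""
    Xc, Yc, Zc, omega, phi, kappa = params
    R = rotation_matrix(omega, phi, kappa)
    d = (gcp_xyz - np.array([Xc, Yc, Zc])) @ R.T   # rows: R @ (P - C)
    return np.column_stack([-f * d[:, 0] / d[:, 2],
                            -f * d[:, 1] / d[:, 2]])

def space_resection(img_xy, gcp_xyz, f, initial, iters=20):
    """Gauss-Newton least squares for the six exterior orientation parameters,
    starting from sensor-derived approximate values."""
    p = np.asarray(initial, dtype=float)
    for _ in range(iters):
        residual = (img_xy - project(p, gcp_xyz, f)).ravel()
        J = np.empty((residual.size, 6))
        for j in range(6):          # central-difference numerical Jacobian
            step = np.zeros(6)
            step[j] = 1e-6
            J[:, j] = (project(p + step, gcp_xyz, f)
                       - project(p - step, gcp_xyz, f)).ravel() / 2e-6
        delta = np.linalg.lstsq(J, residual, rcond=None)[0]
        p = p + delta
        if np.linalg.norm(delta) < 1e-10:
            break
    return p
```

With at least three well-distributed control points (six equations for six unknowns; more points over-determine the least squares solution) and sensor-derived initial values reasonably close to the truth, the iteration converges in a few steps.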
Results and Discussion
Both proposed methods were implemented and tested on the Android operating system. Camera pose parameters were estimated, and the virtual model was overlaid at its correct real-world planimetric location and shown on the camera view. Then, the results were compared and evaluated using the well-known photogrammetric software Agisoft, which performs modeling and precise measurement based on photogrammetry and machine vision. For the sensor-based method, the mean accuracy of the position parameters equals 4.2908±3.951 meters and the mean accuracy of the orientation parameters equals 6.1796±1.478 degrees, while in the vision-based method these decrease to 0.1227±0.325 meters and 2.2017±0.536 degrees, respectively. Thus, the results indicate that the proposed method improves the accuracy and efficiency of AR technologies.
Conclusion
Augmented reality is a technology that can be used to visualize underground facilities. Although processing in sensor-based methods is sufficiently fast and simple, these methods lack the precision required for this purpose. Even though noise elimination and sensor integration using a Kalman filter improve accuracy to some degree, they still do not reach the required level. The present article sought to improve the accuracy of augmented reality for underground infrastructure using targets. The results indicated that machine vision and vision-based methods improve the accuracy. In drilling, the third dimension (the accuracy of height measurements) is as crucial as the other parameters; thus, it is suggested that future research treat depth not as a descriptive attribute but as a third coordinate, in order to reach full three-dimensional visualization.

