%0 Journal Article
%T Improving precision of augmented reality using targets - Case study: Visualization of underground infrastructure
%J Scientific-Research Quarterly of Geographical Data (SEPEHR)
%I National Geographical Organization
%Z 2588-3860
%A Karimi, Mina
%A Sadeghi Niaraki, Abolghasem
%A Hosseininaveh Ahmadabdian, Ali
%D 2020
%\ 02/20/2020
%V 28
%N 112
%P 75-91
%! Improving precision of augmented reality using targets - Case study: Visualization of underground infrastructure
%K Ubiquitous GIS
%K Augmented reality
%K Underground infrastructure
%K Target
%K Precision
%K Pose estimation
%R 10.22131/sepehr.2020.38608
%X Extended Abstract

Introduction
Underground infrastructure such as electricity, gas, telecommunications, water, and sewage networks is managed by different organizations. Since most projects in these organizations require drilling, and imprecise excavation endangers infrastructure and results in extensive financial and physical losses, drilling projects require accurate information about the status of the infrastructure. However, obtaining the accurate position of facilities such as pipes and cables is difficult because they are concealed underground. Nowadays, ubiquitous computing and new developments in Geospatial Information Systems (GIS) offer an appropriate solution to such problems. This new generation of GIS is called the Ubiquitous Geospatial Information System (UBGIS). New technologies such as Augmented Reality (AR) can visualize this infrastructure on platforms like smartphones or tablets. Such technologies show the spatial and descriptive attributes of these utilities more interactively, and can thus be applied as a modern solution to this problem. One of the major features of AR is identifying and locating real-world objects with respect to the viewer's head or a camera. For accurate augmented reality, the position and orientation (pose) of the camera must be estimated with high accuracy; therefore, the exterior orientation parameters of the camera are required for AR and tracking. Different methods are used to calculate these exterior orientation parameters. One of the most common applies sensors embedded in smartphones or tablets, such as the Global Positioning System (GPS) and the Inertial Measurement Unit (IMU), which comprise accelerometers, gyroscopes, magnetic sensors, and compasses. Although simple and fast, this method is unsuitable for applications demanding high accuracy, because the sensors in mobile phones and tablets cannot provide it. The vision-based (sometimes called image-based) method is another way of estimating the exterior orientation parameters. In this method, fixed or dynamic images are used to determine the position and orientation of the camera. This method is more complex and slower, but more accurate than the first.

Materials and Methods
Given the issues mentioned above, the present article aims to visualize underground infrastructure using both sensor-based and vision-based approaches to Augmented Reality. Since the sensors embedded in a mobile phone or tablet do not provide the required accuracy (a few centimeters, considering the diameter of pipes and the width of streets and pavements), a novel vision-based approach is proposed. In this method, image-based techniques and special kinds of targets, known as coded targets, are used together with the space resection method to estimate the camera's position and orientation.
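For context on the space resection formulation discussed next, the standard collinearity equations (textbook form; not reproduced from the article) relate an image point to its ground point through the camera pose:

\begin{aligned}
x &= x_0 - f\,\frac{m_{11}(X - X_L) + m_{12}(Y - Y_L) + m_{13}(Z - Z_L)}{m_{31}(X - X_L) + m_{32}(Y - Y_L) + m_{33}(Z - Z_L)},\\[4pt]
y &= y_0 - f\,\frac{m_{21}(X - X_L) + m_{22}(Y - Y_L) + m_{23}(Z - Z_L)}{m_{31}(X - X_L) + m_{32}(Y - Y_L) + m_{33}(Z - Z_L)},
\end{aligned}

where (x, y) are the image coordinates of a ground control point (X, Y, Z), (x_0, y_0, f) are the interior orientation parameters, (X_L, Y_L, Z_L) is the camera position, and m_ij are elements of the rotation matrix formed from the orientation angles (omega, phi, kappa). The six exterior orientation parameters (X_L, Y_L, Z_L, omega, phi, kappa) are the unknowns that space resection solves for.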
In photogrammetry, space resection involves determining the spatial position and orientation of an image based on the image coordinates of ground control points appearing in the image. Since space resection is a nonlinear problem, existing methods linearize the collinearity condition and use an iterative least squares process to determine the final solution. The process also requires initial approximate values of the unknown parameters, some of which must themselves be estimated with another least squares solution. To obtain suitable initial values for the space resection procedure, data received from GPS, accelerometers, and magnetic sensors are used, and a low-pass filter is applied to reduce noise and increase precision. Then, using the improved camera pose parameters, the virtual model is overlaid at its correct real-world planimetric location. The planimetric coordinates are shown graphically on the ground, and the Z coordinate (depth) is presented as a descriptive attribute.

Results and Discussion
Both proposed methods were implemented and tested on the Android operating system. Camera pose parameters were estimated, and the virtual model was overlaid at its correct real-world planimetric location and displayed in the camera view. The results were then compared and evaluated against the well-known photogrammetry software Agisoft, which performs modelling and precise measurement based on photogrammetry and machine vision. For the sensor-based method, the mean accuracy of the position parameters is 4.2908±3.951 meters and the mean accuracy of the orientation parameters is 6.1796±1.478 degrees, while in the vision-based method these decrease to 0.1227±0.325 meters and 2.2017±0.536 degrees, respectively. The results thus indicate that the proposed method improves the accuracy and efficiency of AR technology.

Conclusion
Augmented Reality is a technology that can be used to visualize underground facilities. Although processing in sensor-based methods is fast and simple, they lack the precision required for this purpose. Even though noise elimination and sensor integration using a Kalman filter improve accuracy to some degree, the result still does not reach the required level. The present article sought to improve the accuracy of augmented reality for underground infrastructure using targets, and the results indicate that machine vision and vision-based methods do improve accuracy. In drilling, the third dimension (the accuracy of height measurements) is as crucial as the other parameters; it is therefore suggested that future research treat depth not as a descriptive attribute but as a third coordinate, in order to achieve three-dimensional visualization.
%U https://www.sepehr.org/article_38608_57f37f01ee419a16d9b5011ec679131f.pdf
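A minimal sketch of the pipeline the abstract describes, low-pass filtering sensor readings for initial values and then refining the camera pose from coded-target observations by iterative least squares, using OpenCV's solvePnP as a stand-in for the authors' space resection solver. All coordinates, calibration values, and names below are hypothetical, not taken from the article:

import numpy as np
import cv2


def low_pass(prev, new, alpha=0.1):
    # Simple exponential low-pass filter, analogous to the smoothing the
    # abstract applies to accelerometer/magnetic sensor data before use.
    return prev + alpha * (new - prev)


# 3D ground coordinates of coded-target centers (hypothetical values, meters).
object_points = np.array([
    [0.0, 0.0, 0.0],
    [1.2, 0.0, 0.0],
    [1.2, 0.9, 0.0],
    [0.0, 0.9, 0.0],
    [0.6, 0.45, 0.3],
])

# Matching 2D image coordinates of the detected targets (hypothetical pixels).
image_points = np.array([
    [310.0, 242.0],
    [418.0, 239.0],
    [421.0, 334.0],
    [308.0, 337.0],
    [365.0, 286.0],
])

# Intrinsic calibration from a prior camera calibration (placeholder values).
K = np.array([
    [800.0,   0.0, 320.0],
    [  0.0, 800.0, 240.0],
    [  0.0,   0.0,   1.0],
])
dist = np.zeros(5)  # assume negligible lens distortion

# Sensor-derived initial pose (GPS position, filtered orientation) supplies
# the approximate starting values the iterative solver requires.
rvec0 = np.zeros((3, 1))
tvec0 = np.array([[0.6], [0.4], [3.0]])

# SOLVEPNP_ITERATIVE linearizes the projection (collinearity) equations and
# refines the pose by least squares, the same principle as space resection.
ok, rvec, tvec = cv2.solvePnP(object_points, image_points, K, dist,
                              rvec0, tvec0, useExtrinsicGuess=True,
                              flags=cv2.SOLVEPNP_ITERATIVE)
if ok:
    R, _ = cv2.Rodrigues(rvec)      # rotation vector -> rotation matrix
    camera_center = -R.T @ tvec     # camera position in ground coordinates
    print("estimated camera position:", camera_center.ravel())

In the article itself the initial values come from GPS and low-pass-filtered accelerometer and magnetic sensor readings, and the refinement is a photogrammetric space resection rather than OpenCV's PnP; the sketch only mirrors the structure of that pipeline.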