Document Type: Research Paper

Authors

1 M.Sc. Graduate, Department of Soil Science, Faculty of Agriculture, University of Zanjan, Zanjan, Iran

2 Assistant Professor, Department of Soil Science, Faculty of Agriculture, University of Zanjan, Zanjan, Iran

3 Associate Professor, Department of Soil Science, Faculty of Agriculture, University of Zanjan, Zanjan, Iran

Abstract

Extended Abstract
Introduction
In recent decades, thematic maps and models have usually been assessed using the Kappa index of agreement. The index gives the relative observed agreement among raters (identical to accuracy), but offers no information that would make practical decisions about a model's validity easier. In other words, the Kappa index neither describes the quality of a classification nor suggests how the accuracy of the predicted map could be increased. Moreover, the index does not explain the causes of disagreement. Thus, reporting indices of agreement without any interpretation is not satisfactory. Today, new complementary methods are required to show the quantitative and spatial agreement and disagreement between two maps, and to show how a modeled map can be produced with better accuracy. The present study introduces and explains the concepts of agreement and disagreement components with an example. Finally, these components are introduced as a useful method for the validation of digital maps.
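For reference, the chance-corrected agreement that Kappa reports can be computed directly from a confusion matrix. The following is a minimal Python sketch with a hypothetical 2-class matrix (not taken from the paper); note that the single resulting number says nothing about whether the disagreement stems from quantity or from location:

```python
import numpy as np

def kappa(cm):
    """Cohen's kappa from a square confusion matrix (rows = reference, cols = predicted)."""
    cm = np.asarray(cm, dtype=float)
    n = cm.sum()
    po = np.trace(cm) / n  # observed agreement (overall accuracy)
    pe = (cm.sum(axis=0) * cm.sum(axis=1)).sum() / n**2  # expected chance agreement
    return (po - pe) / (1 - pe)

# hypothetical confusion matrix for two categories (e.g. white vs gray cells)
cm = [[4, 1],
      [2, 2]]
print(round(kappa(cm), 3))  # 0.308
```

Here the observed agreement is 6/9 and the chance agreement 42/81, so Kappa is about 0.31 even though the overall accuracy is 67%; the index alone gives no guidance on how to improve the map.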
Materials and Methods
An area of 410 hectares belonging to the University of Zanjan was used to present the findings of this study. The area is located 5 km from the beginning of the Zanjan-Miyaneh road, at 48.4° E longitude and 36.68° N latitude.
A digital soil map, in which the probability distribution of the different soil classes was obtained using a multinomial logistic regression algorithm, and a reference soil map produced with conventional methods were used to explain the concepts and to investigate the spatial and quantitative agreement and disagreement indices. Validation and calculation of the quantitative and spatial agreement and disagreement were performed using the IDRISI software (Selva version). To simplify the process, two maps with a 3 × 3 grid structure are introduced as a reference map and a predicted map. The reference map is used for the spatial and quantitative evaluation and validation of the predicted map cells. Each map contains 9 cells, and each grid cell belongs to either the white or the gray category.
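The comparison of the two 3 × 3 maps can be sketched as a cross-tabulation of their cells. The grids below are hypothetical stand-ins (the paper's actual cell values are not reproduced here), assuming 0 codes the white category and 1 the gray category:

```python
import numpy as np

# Hypothetical 3x3 reference and predicted maps; 0 = white, 1 = gray
# (illustrative values only, not the paper's actual grids)
reference = np.array([[0, 0, 1],
                      [0, 1, 1],
                      [0, 0, 1]])
predicted = np.array([[0, 1, 1],
                      [0, 0, 1],
                      [1, 0, 1]])

# Cross-tabulation (confusion) matrix: rows = reference, cols = predicted
k = 2  # number of categories
cm = np.zeros((k, k), dtype=int)
for r, p in zip(reference.ravel(), predicted.ravel()):
    cm[r, p] += 1

overall_agreement = np.trace(cm) / cm.sum()  # fraction of cells where the maps match
print(cm)
print(overall_agreement)  # 6 of 9 cells agree
```

The diagonal of the cross-tabulation counts cells on which the two maps agree; the off-diagonal entries are the raw material for the quantitative and spatial disagreement indices discussed below.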
Results and Discussion
In validating two maps, most researchers seek answers to two important questions: (1) How much agreement is there in the quantity of cells assigned to each mapping class? (2) How much agreement is there between the map used in modeling and the reference map regarding the spatial position of the cells of each class?
The present study expresses the agreement between the two soil maps using the M(m) index, which equals 60.69%. With a medium level of quantitative and spatial information about the different classes of the digital soil map (DSM), the H(m) index equals 46.4%. The results indicate that if the produced map is modified or rearranged (provided that the amount of quantitative information remains unchanged while the amount of spatial information increases), the agreement between the maps increases dramatically and reaches 87.17%. The quantitative and spatial agreement and disagreement between the digital and the traditional soil map equal 61% (M(m) = 61%) and 39%, respectively. The DSM accuracy can be increased to 87% (P(m) = 87%) relative to the traditional soil map through spatial rearrangement of the cells (without changing the quantitative information).
Conclusion
Evaluating the accuracy and validity of digital maps is an important and sensitive step in research projects; therefore, introducing more accurate indices is very important. According to the results of the present study, displaying quantitative and spatial agreement and disagreement in the form of a matrix, according to the different levels of quantitative and spatial information, can be a new strategy for verifying modeling methods. The method presented here not only identifies and interprets the sources of (quantitative and spatial) error, but also provides information on possible ways of reducing these errors; reporting the amount of error without any scientific interpretation is of little use for predicted maps. Unfortunately, researchers do not concur on how to report agreement and disagreement. However, it seems that, when it comes to explaining errors and finding ways to reduce them, the disagreement components and their related parameters are more useful than the agreement component and its indices. Therefore, it is recommended to interpret the disagreement components before the agreement components. The advantage of this method is that complex analyses can be reported in a simple form. Finally, this assessment and validation method is expected to serve as an appropriate alternative in future studies.
 

References

1- Aghayari, H. (1393 [2014]). Preparation of a soil map and evaluation of the accuracy of the delineation of its units within the University of Zanjan area (Supervisor: Delavar, M. A.). M.Sc. thesis, Department of Soil Science, University of Zanjan. [In Persian]
2- Sadeghbeigi, A. (1393 [2014]). Application of digital terrain model analyses for producing a digital soil map (Supervisor: Moravej, K.). M.Sc. thesis, Department of Soil Science, University of Zanjan. [In Persian]
3- Allouche, O., Tsoar, A., & Kadmon, R. (2006). Assessing the accuracy of species distribution models: prevalence, kappa and the true skill statistic (TSS). Journal of Applied Ecology, 43(6), 1223-1232.
4- Cabeza, M., Araújo, M. B., Wilson, R. J., Thomas, C. D., Cowley, M. J., & Moilanen, A. (2004). Combining probabilities of occurrence with spatial reserve design. Journal of Applied Ecology, 41(2), 252-262.
5- Collingham, Y.C., Wadsworth, R.A., Huntley, B., & Hulme, P.E. (2000). Predicting the spatial distribution of non-indigenous riparian weeds: issues of spatial scale and extent. Journal of Applied Ecology, 37 (Supplement), 13–27.
6- El Emam, K. (1999). Benchmarking Kappa: Inter rater agreement in software process assessments. Empirical Software Engineering, 4(2), 113-133.
7- Eugenio, B. D., & Glass, M. (2004). The kappa statistic: A second look. Computational Linguistics, 30(1), 95-101.
8- Foody, G. M. (1992). On the compensation for chance agreement in image classification accuracy assessment. Photogrammetric Engineering and Remote Sensing, 58(10), 1459-1460.
9- Foody, G. M. (2002). Status of land cover classification accuracy assessment. Remote Sensing of Environment, 80(1), 185-201.
10- Foody, G. M. (2004). Thematic map comparison. Photogrammetric Engineering & Remote Sensing, 70(5), 627-633.
11- Foody, G. M. (2008). Harshness in image classification accuracy assessment. International Journal of Remote Sensing, 29(11), 3137-3158.
12- Foody, G. M. (2020). Explaining the unsuitability of the kappa coefficient in the assessment and comparison of the accuracy of thematic maps obtained by image classification. Remote Sensing of Environment, 239, 167-179.
13- Jaberg, C., & Guisan, A. (2001). Modelling the distribution of bats in relation to landscape structure in a temperate mountain environment. Journal of Applied Ecology, 38(6), 1169-1181.
14- Jung, H. W. (2003). Evaluating inter rater agreement in SPICE-based assessments. Computer Standards & Interfaces, 25(5), 477-499.
15- Kantakumar, L. N., Kumar, S., & Schneider, K. (2019). SUSM: a scenario-based urban growth simulation model using remote sensing data. European Journal of Remote Sensing, 52(sup2), 26-41.
16- Landis, J. R., & Koch, G. G. (1977). The measurement of observer agreement for categorical data. Biometrics, 33, 159-174.
17- Mishra, P. K., Rai, A., & Rai, S. C. (2019). Land use and land cover change detection using geospatial techniques in the Sikkim Himalaya, India. The Egyptian Journal of Remote Sensing and Space Science. 22 (3), 227-238.
18- Pontius Jr, R. G. (2000). Quantification error versus location error in comparison of categorical maps. Photogrammetric Engineering and Remote Sensing, 66(8), 1011-1016.
19- Pontius Jr, R. G. (2002). Statistical methods to partition effects of quantity and location during comparison of categorical maps at multiple resolutions. Photogrammetric Engineering and Remote Sensing, 68(10), 1041-1050.
20- Pontius Jr, R. G., & Suedmeyer, B. (2004). Components of agreement between categorical maps at multiple resolutions. Remote sensing and GIS accuracy assessment, 233-251.
21- Pontius Jr, R. G., & Chen, H. (2006). GEOMOD modeling. Clark University.
22- Pontius Jr, R. G., & Millones, M. (2011). Death to Kappa: birth of quantity disagreement and allocation disagreement for accuracy assessment. International Journal of Remote Sensing, 32(15), 4407-4429.
23- Ruelland, D., Dezetter, A., Puech, C., & Ardoin-Bardin, S. (2008). Long-term monitoring of land cover changes based on Landsat imagery to improve hydrological modelling in West Africa. International Journal of Remote Sensing, 29(12), 3533-3551.
24- Schneider, L. C., & Pontius Jr, R. G. (2001). Modeling land-use change in the Ipswich watershed, Massachusetts, USA. Agriculture, Ecosystems & Environment, 85(1-3), 83-94.
25- Turk, G. (2002). Map evaluation and "chance correction". Photogrammetric Engineering and Remote Sensing, 68(2), 123-133.
26- Warrens, M. J. (2015). Five ways to look at Cohen’s kappa. J Psychol Psychother, 5(4), 2-4.
27- Wundram, D., & Löffler, J. (2008). High-resolution spatial analysis of mountain landscapes using a low-altitude remote sensing approach. International Journal of Remote Sensing, 29(4), 961-974.
28- Yilmaz, A. E., & Aktas, S. (2018). Ridit and exponential type scores for estimating the kappa statistic. Kuwait Journal of Science, 45(1), 89-99.