Fariba Moghani Rahimi; Ahmad Mazidi; Hamid Reza Ghafarian Malamiri
Abstract
Extended Abstract

Introduction
Studying land cover changes has a very long history, coinciding with the beginning of human life. Following the formation of societies, primitive humans began converting wasteland into land suitable for agriculture and animal husbandry. More than half of the world's population now lives in cities, urbanization is increasing rapidly, and this trend will continue toward its peak. Due to their extensive coverage, reproducibility, easy access, high accuracy, and savings in time and expense, remote sensing data are generally the preferred means of studying land cover, vegetation, and their changes, and many researchers have examined land cover change in cities around the world. The history of land cover studies dates back to the early nineteenth century and the work of von Thünen (1826), who determined the economic benefits of different land covers based on their distance from the central city and found an optimal distribution of production and land cover in the form of a series of concentric circles. Land cover change driven by human activity is an important topic in regional and development planning. Since land cover changes and urban development in the study area had not been previously studied, Landsat time series imagery and a combination of Landsat 7 and 8 panchromatic and multispectral bands were used to identify and detect changes in land cover and urban development in the urban areas of Abarkooh from 2000 to 2020.

Materials & Methods
Satellite remote sensing data were used in the present study: Landsat 7 and 8 multi-temporal images collected in 2000, 2010, and 2020. Three images were retrieved from the US Geological Survey website. Raw remote sensing images always contain errors both in geometry and in the measured pixel values.
The former are called geometric errors and the latter radiometric errors. Atmospheric corrections were performed for all images, and the striping in the 2010 imagery was also corrected. For image enhancement and extraction of additional information, false color composites were used: bands 5-4-3 (near infrared, red, and green) for Landsat 8 and bands 4-3-2 (near infrared, red, and green) for Landsat 7. In these composites, vegetation appears red. Compared to other methods, Gram-Schmidt pan sharpening produced higher spatial resolution images of the study area and was therefore used to combine the selected images. Among the various supervised classification methods, maximum likelihood is considered the most efficient; it assumes a normal distribution for all training areas.

Results & Discussion
The accuracy of the classification must be calculated after classification. To do so, the kappa coefficient and overall accuracy of each class were calculated in ENVI 5.3, and the results are shown in the error matrix. Overall accuracy is the average classification accuracy, while the kappa coefficient measures the accuracy of the classification relative to a completely random classification. Based on the available data, the spatial resolution of the images, and the information accessible to the researchers, five classes of training data (urban constructed space, roads, barren lands, arable lands, and gardens) were selected for each image. Results obtained from the maximum likelihood classification in ENVI 5.3 were converted to vector format and used as a shapefile in a GIS environment. After compiling the land database, land cover maps and their changes were extracted for the three periods and the area of each land cover class was determined. In each land cover map, the five classes are shown in different colors.
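The overall accuracy and kappa coefficient described above both derive from the error (confusion) matrix. A minimal numpy sketch (with a hypothetical two-class matrix, not the paper's data):

```python
import numpy as np

def accuracy_metrics(error_matrix):
    """Compute overall accuracy and the kappa coefficient
    from a classification error (confusion) matrix."""
    m = np.asarray(error_matrix, dtype=float)
    n = m.sum()                      # total number of checked pixels
    observed = np.trace(m) / n       # overall accuracy (diagonal agreement)
    # chance agreement expected from the row/column marginals
    expected = (m.sum(axis=0) * m.sum(axis=1)).sum() / n**2
    kappa = (observed - expected) / (1.0 - expected)
    return observed, kappa

# toy 2-class error matrix: rows = reference, columns = classified
oa, k = accuracy_metrics([[45, 5],
                          [10, 40]])
print(round(oa, 2), round(k, 2))  # 0.85 0.7
```

The kappa value is always lower than the overall accuracy because it discounts the agreement expected by chance alone.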
To verify the classification, its accuracy was evaluated.

Conclusion
The kappa coefficients for 2000 and 2020 were 86% with an overall accuracy of 89%, while for 2010 the kappa coefficient was 90% and the overall accuracy 92%. The error rate is therefore small and acceptable. Finally, the post-classification comparison method was used to investigate the nature of the changes. A total of 13 square kilometers of land cover was investigated. To identify the exact type of land cover changes, the classified images from these years were compared. The total area of residential land use showed an increasing trend: from 4.25 square kilometers in 2000 (32.69 percent of the total study area) to 5.58 square kilometers (42.92 percent) in 2020. The overall area of arable land use changed little between 2000 and 2010; however, a declining trend was observed by 2020, with part of this land use converted into residential and barren lands. The results of satellite image processing and classification indicate that supervised classification with the maximum likelihood algorithm was close to ground reality and had acceptable accuracy. In general, the results show that significant areas of vegetation and agricultural land have been converted into urban areas; planning for urban growth in this region should therefore favor preserving gardens and agricultural lands.
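Post-classification comparison amounts to cross-tabulating two classified maps pixel by pixel; each off-diagonal cell of the resulting matrix records one type of conversion (e.g. arable to residential). A small sketch with hypothetical toy maps, not the study's data:

```python
import numpy as np

def change_matrix(before, after, n_classes):
    """Cross-tabulate two classified maps: cell (i, j) counts pixels
    that were class i on the first date and class j on the second."""
    b = np.asarray(before).ravel()
    a = np.asarray(after).ravel()
    cm = np.zeros((n_classes, n_classes), dtype=int)
    np.add.at(cm, (b, a), 1)   # unbuffered increment per (before, after) pair
    return cm

# toy 3x3 maps; 0 = built-up, 1 = arable, 2 = barren
m2000 = np.array([[1, 1, 0], [1, 2, 0], [2, 2, 0]])
m2020 = np.array([[0, 1, 0], [0, 2, 0], [2, 0, 0]])
cm = change_matrix(m2000, m2020, 3)
print(cm[1, 0])  # pixels converted from arable to built-up: 2
```

The diagonal of the matrix gives the unchanged area per class, so multiplying cell counts by the pixel area yields the change statistics in square kilometers reported above.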
Seyed Mehdi Yavari; Zahra Azizi
Abstract
Extended Abstract

Introduction
Lack of uniform light radiation on objects reduces the contrast of images and makes it difficult to extract image features. This problem destroys information about the behavior, shape, size, pattern, texture, and tone of features, and compresses the image histogram into one or more narrow ranges. UAV images have been widely used in recent years due to their extensive coverage, high operating speed, usefulness in hard-to-reach areas, and up-to-date equipment. If drone images are correctly acquired and pre-processed, they provide good accuracy for a variety of applications. Preprocessing is important because the acquisition conditions usually cannot be changed, so the acquired images are contaminated with distortions or errors that must be removed, or their effect reduced to a minimum, before any further processing. Improving the exposure of the image, which widens the histogram, can highlight features with similar gray-scale values, and this is useful for identification.

Materials & Methods
In this study, two aerial images containing a variety of vegetation, soil, and man-made features were acquired with a Storm 2 hexacopter drone in Simorgh city (Kiakla) in Mazandaran province, at longitude 52° 54' 1'' and latitude 36° 35' 49''. First, the SMQT algorithm is applied to the input images: the bit depth of the input image is calculated to determine the number of transfer levels, and the rgb2gray command creates a grayscale version of the original image. The overall mean of the image is calculated and the DN of each pixel is compared to this mean: if the DN is greater than the mean, the value 1 is assigned to the pixel in a new image, otherwise 0. The mean calculation and splitting of the pixel groups continues according to the number of bits, and each splitting step is called a transfer.
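The successive mean splitting described above can be sketched in numpy. This is a simplified illustration of the SMQT idea (compare each pixel with the mean of its current group, append the comparison bit, and split), not the authors' implementation:

```python
import numpy as np

def smqt(image, levels):
    """Simplified SMQT sketch: at each level every pixel is compared
    with the mean of its current group; the 0/1 comparison bit is
    appended to the pixel's code and the group is split in two.
    The result has 2**levels output levels."""
    v = np.asarray(image, dtype=float)
    codes = np.zeros(v.shape, dtype=int)
    groups = [np.ones(v.shape, dtype=bool)]   # start with one group: all pixels
    for _ in range(levels):
        next_groups = []
        for g in groups:
            if not g.any():
                continue
            hi = g & (v > v[g].mean())        # DN above the group mean -> bit 1
            lo = g & ~hi
            codes[hi] = codes[hi] * 2 + 1
            codes[lo] = codes[lo] * 2
            next_groups += [hi, lo]
        groups = next_groups
    return codes                              # values in 0 .. 2**levels - 1

img = np.array([[10, 200], [60, 120]])
print(smqt(img, 2))  # [[0 3] [1 2]]
```

Scaling the codes back into the spectral range of the image yields the transformed image described in the next step.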
Then, by converting the data from these divisions into values in the spectral range of the image, a new image is created. This image has higher radiometric resolution than the original input but lower spectral resolution; for this reason, it is fused with the original. Global gamma correction is then applied to the fused image. Finding the optimal gamma, especially a local gamma, is time consuming and complex to program and compute. Therefore, to increase computing speed, a fixed gamma of 0.7 was applied to the whole image, the first-step processes were applied again, and finally the SSIM index was checked to assess the enhancement.

Results & Discussion
The SSIM values for input images 1 and 2 are 0.8372 and 0.8401, compared to 0.4352 and 0.4161 before processing. Examining the histograms of the images before and after processing, in all three bands (R, G, and B), shows that the image histogram is stretched across the range 0 to 255, with a decrease in the number of peaks and valleys in the histograms of the processed images. The density function for the input and processed images shows that the more homogeneous the features in the image, the greater the slope of the function graph. The value of the density function increased after processing, due to the stretching of the image histogram. SSIM is used to validate the results in this study: although the images are visually improved significantly, visual inspection alone is not enough for verification. The goal of quantitative quality assessment is to design computational methods that can accurately and automatically express image quality, treating all image pixels in the same way. The SSIM range is between 0 and +1; the closer the measured value is to one, the better the image quality. SMQT also has low computational complexity and requires little configuration.
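The gamma correction and SSIM check above can be sketched as follows. The `global_ssim` here is a single-window simplification of the usual locally windowed SSIM, shown only to illustrate the formula; the toy image and values are not the paper's data:

```python
import numpy as np

def gamma_correct(image, gamma):
    """Apply a global gamma correction to an 8-bit image."""
    x = np.asarray(image, dtype=float) / 255.0
    return (x ** gamma) * 255.0           # gamma < 1 brightens dark pixels

def global_ssim(a, b, L=255.0):
    """Single-window (global) SSIM between two grayscale images --
    a simplified stand-in for the windowed SSIM used in practice."""
    c1, c2 = (0.01 * L) ** 2, (0.03 * L) ** 2   # standard stabilizers
    a, b = np.asarray(a, float), np.asarray(b, float)
    mu_a, mu_b = a.mean(), b.mean()
    va, vb = a.var(), b.var()
    cov = ((a - mu_a) * (b - mu_b)).mean()
    return ((2 * mu_a * mu_b + c1) * (2 * cov + c2)) / \
           ((mu_a**2 + mu_b**2 + c1) * (va + vb + c2))

img = np.array([[30, 60], [90, 120]], dtype=float)
out = gamma_correct(img, 0.7)
print(out.mean() > img.mean())  # True: gamma 0.7 brightens the image
```

An SSIM of exactly 1 is obtained only when the two images are identical, which is why values of 0.83-0.84 after processing indicate structure well preserved relative to the reference.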
If the image of a light object is formed against a completely dark background (such as night shooting), the algorithm does not work on the background pixels. Examining image samples of a building complex taken at night, it was found that the black pixels changed color to purple after fusion. To optimize the algorithm, it is suggested to increase its efficiency by examining the spectral behavior of different features in different color spaces and integrating their effective components into the image, by feature highlighting, or by using vegetation or soil indices. A fuzzy method can also be used for semi-shaded areas. These improvements should increase efficiency without adding computational complexity.
Kosar Kabiri; Sayyed Bagher Fatemi
Abstract
Extended Abstract

Introduction
Different image fusion methods primarily seek to improve the spectral and spatial content of the final result. However, the fused image often suffers from spectral distortions, and some fusion methods are too slow. Image fusion using the IHS transformation is known as a fast fusion method; unfortunately, images fused with IHS also suffer from spectral distortions, and several variants of the method have therefore been developed. Defining the weight of each band for generating the intensity component is one of the main problems discussed in the literature. Spectral response curves are one of the major sources for defining the relative weight of each spectral band, and scientific reports indicate that they can improve the quality of the final fused image. The weight of each band is often calculated from the overlapping area of the spectral response curves of the panchromatic and multispectral bands, but information such as the non-overlapping areas of the curves may also play a role in the calculation. The present comparative study investigates the potential of using this information.

Materials & Methods
A multispectral Geoeye-1 satellite image with 2-meter spatial resolution and four spectral bands, together with the corresponding panchromatic band with a spatial resolution of 0.5 meter, was used to test the idea. Seven variants of the FastIHS fusion method were developed based on different approaches to estimating the intensity component from the information in the spectral response curves. The test methods were compared with the original FastIHS fusion method; the only difference between the methods was the way they calculate the weight of each band.
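All variants share the FastIHS core: the intensity component is a weighted sum of the multispectral bands, and the detail (Pan − I) is injected into every band. A minimal numpy sketch of that common core, with hypothetical constant-valued bands (the weight vector is where the seven tested schemes differ):

```python
import numpy as np

def fast_ihs_fuse(ms, pan, weights):
    """Generalized FastIHS fusion sketch.
    ms      : multispectral cube of shape (bands, rows, cols),
              resampled to the panchromatic grid
    pan     : panchromatic band of shape (rows, cols)
    weights : per-band weights for the intensity component
              (the quantity the seven tested variants compute differently)"""
    w = np.asarray(weights, dtype=float)
    w = w / w.sum()                        # normalize weights to sum to 1
    intensity = np.tensordot(w, ms, axes=1)
    return ms + (pan - intensity)          # broadcast the detail into each band

# toy 4-band cube with constant band values 10, 20, 30, 40 and pan = 28
ms = np.ones((4, 2, 2)) * np.array([10., 20., 30., 40.])[:, None, None]
pan = np.full((2, 2), 28.0)
fused = fast_ihs_fuse(ms, pan, [1, 1, 1, 1])
print(fused[:, 0, 0])  # each band shifted by pan - I = 28 - 25 = 3
```

Because only one weighted sum and one addition per band are needed, the method keeps the speed that gives FastIHS its name regardless of which weighting scheme is used.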
The seven tested weighting schemes were:
1) the ratio of the overlapping area of the panchromatic and multispectral response curves to the area of the multispectral response curves;
2) the ratio of the area of the multispectral bands' response curves to the area of the panchromatic band's response curve;
3) the inverse of the distance between the central wavelengths of the panchromatic and multispectral response curves;
4) the ratio of the overlapping area of the panchromatic and multispectral response curves to the area of the panchromatic response curve;
5) the ratio of the non-overlapping area of the panchromatic and multispectral response curves to the area of the multispectral response curves;
6) the ratio of the overlapping area of the panchromatic and multispectral response curves to the area of the panchromatic response curve minus the area of the multispectral response curves;
7) the ratio of the non-overlapping area of the panchromatic and multispectral response curves to the area of the multispectral response curves, multiplied by the ratio of the area of the multispectral response curve to the area of the overlapping regions of the panchromatic and multispectral response curves.

Results & Discussion
Four criteria were used to evaluate the fused images: ERGAS, RMSE, the correlation coefficient, and edge correlation with the panchromatic band. To calculate the edge correlation coefficient, a Sobel filter was applied to the panchromatic and fused bands; the correlation coefficient between each filtered spectral band and the filtered panchromatic band was then calculated. All eight methods were ranked under the four evaluation criteria. Because of inconsistencies among the rankings, the four criteria were merged and a new ranking was obtained from the combined results. In this final ranking, the fifth method placed first and the second method placed eighth.
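The edge correlation criterion above can be sketched as follows: filter both images with a Sobel operator and correlate the gradient magnitudes. This is a pure-numpy illustration on a hypothetical random scene, not the paper's data:

```python
import numpy as np

def sobel_mag(img):
    """Sobel gradient magnitude, computed on the valid interior region."""
    kx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], float)
    ky = kx.T
    h, w = img.shape
    gx = np.zeros((h - 2, w - 2))
    gy = np.zeros((h - 2, w - 2))
    for i in range(3):                       # accumulate the 3x3 neighborhood
        for j in range(3):
            patch = img[i:i + h - 2, j:j + w - 2]
            gx += kx[i, j] * patch
            gy += ky[i, j] * patch
    return np.hypot(gx, gy)

def edge_correlation(band, pan):
    """Correlation coefficient between the Sobel-filtered fused band
    and the Sobel-filtered panchromatic band."""
    e1, e2 = sobel_mag(band).ravel(), sobel_mag(pan).ravel()
    return np.corrcoef(e1, e2)[0, 1]

rng = np.random.default_rng(0)
pan = rng.random((6, 6))
band = pan * 0.5 + 1.0      # same spatial detail, different radiometry
print(edge_correlation(band, pan))  # ~1.0: edges fully preserved
```

A value near 1 indicates that the spatial detail of the panchromatic band was transferred into the fused band, which is exactly what the detail-injection step of FastIHS is meant to achieve.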
Therefore, the sorted list of methods in the final ranking is: IHS5, IHS3, IHS6, IHS1, IHS4, IHS7, FastIHS, and IHS2. As the ranking shows, almost all of the tested methods are more accurate than the base method (FastIHS).

Conclusion
The results indicate that using the information obtained from spectral response curves can improve the results of FastIHS image fusion. This information can improve fusion speed and reduce spectral distortions in the final fused image, although the total number of detected edges decreases while the spectral features of the data are preserved. Spectral response curves are directly tied to the physics of imaging, so using their information can produce natural-looking fused images with better visualization and enhanced spatial content.