Remote Sensing (RS)
Somayeh Aslani Katouli; Reza Shah-Hosseini; Hamid Bagheri
Extended Abstract
Introduction
A flood is a widespread and dramatic natural disaster that affects lives, infrastructure, economies, and local ecosystems around the world. In this paper, a method for flood detection in urban (and suburban) environments is introduced that applies a convolutional neural network to SAR intensity and coherence. Time series of SAR intensity are used to map unobstructed flooding (e.g., flooded bare soil and short vegetation), while coherence is used to distinguish incoherent flood-obstructed areas (e.g., flooded vegetation) from coherent flood-obstructed areas (e.g., mostly built-up flooded areas).
The method is flexible with respect to the length of the data time series (at least one pair of pre-event and in-event intensities and one pair of pre-event and in-event coherences are required). The growing number of SAR missions in orbit with fixed acquisition scenarios and short revisit times increases the chance of observing a flood event while a suitable pre-event scene from the same sensor is also available. This makes the method well suited for operational emergency response.
Materials & Methods
A CNN is a multilayer network designed to recognize two-dimensional patterns in images and consists of an input layer, convolution layers, subsampling (pooling) layers, and an output layer. The CNN algorithm has two main processes: convolution and subsampling.
In the convolution process, a trainable filter Fx is convolved with the input (the raw image in the first layer, the feature map of the previous layer afterwards), and a bias bx is added to produce the feature map of convolution layer Cx. In the subsampling process, each neighbourhood of n pixels is aggregated into a single pixel, multiplied by a scalar weight Wx+1, and a bias bx+1 is added, producing a feature map that is n times smaller.
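The two processes above can be illustrated with a minimal NumPy sketch. This is an illustration of the generic convolution-plus-subsampling mechanics only, not the network used in the paper; the filter values, window sizes, and array shapes are arbitrary.

```python
import numpy as np

def convolve2d(image, kernel, bias):
    """Valid 2-D convolution of a single-channel image with a trainable filter Fx, plus bias bx."""
    kh, kw = kernel.shape
    h, w = image.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel) + bias
    return out

def subsample(feature_map, n, weight, bias):
    """Pool each n-by-n neighbourhood into one pixel, scale by Wx+1, and add bx+1."""
    h, w = feature_map.shape
    h, w = h - h % n, w - w % n                     # trim to a multiple of n
    pooled = feature_map[:h, :w].reshape(h // n, n, w // n, n).mean(axis=(1, 3))
    return weight * pooled + bias

rng = np.random.default_rng(0)
img = rng.standard_normal((8, 8))
fmap = convolve2d(img, rng.standard_normal((3, 3)), bias=0.1)   # 6x6 feature map
pooled = subsample(fmap, n=2, weight=0.5, bias=0.0)             # 3x3 map, n times smaller
print(fmap.shape, pooled.shape)
```

Each convolution shrinks the map by the kernel size minus one, and each subsampling step divides both dimensions by n, as described above.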
Three Sentinel-1A images in VV polarization, Interferometric Wide (IW) swath mode, and Single Look Complex (SLC) format were used in this study. The intensity images were pre-processed with radiometric calibration, speckle was reduced with a speckle filter (5x5-pixel window), and values were converted from linear units to decibels. Coherence images were computed from pairs of consecutive acquisitions with a 7x28 (range x azimuth) estimation window. Owing to the lack of other data, the validation set consists of two separate parts: ground data collected in the urban area of Gonbad Kavous to identify homes damaged by the flood, and reference data extracted by thresholding the gamma images for output validation.
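The intensity pre-processing chain can be sketched as follows. This is a simplified stand-in, not the paper's processing pipeline: a plain 5x5 mean (boxcar) filter is used in place of the speckle filter, and the image values are illustrative.

```python
import numpy as np

def to_db(intensity, eps=1e-10):
    """Convert linear backscatter intensity to decibels."""
    return 10.0 * np.log10(np.maximum(intensity, eps))

def boxcar_filter(img, size=5):
    """Simple size-by-size mean filter as a stand-in for the speckle filter."""
    pad = size // 2
    padded = np.pad(img, pad, mode="edge")
    out = np.empty_like(img, dtype=float)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            out[i, j] = padded[i:i + size, j:j + size].mean()
    return out

img = np.full((6, 6), 0.1)          # hypothetical calibrated intensity
db = to_db(boxcar_filter(img))
print(db[0, 0])                     # 10*log10(0.1) = -10 dB
```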
Results & Discussion
In this section, the results of the study are analyzed qualitatively and quantitatively. Because multi-temporal SAR data displayed as RGB composites are widely used in the qualitative interpretation of land cover and surface dynamics, RGB composites of intensity and coherence are used to provide evidence of flood extent. The results of combining intensity and coherence, of intensity alone, and of coherence alone are analyzed quantitatively. Overall accuracy (OA), the kappa coefficient, the false-positive rate (FPR), precision (i.e., the fraction of predicted positive patterns that are correct), recall (i.e., the fraction of actual positive patterns that are correctly classified), and the F1 score (i.e., the harmonic mean of precision and recall) are computed against the flood reference and ground data and reported.
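The listed metrics all follow from a binary flood / non-flood confusion matrix. A small sketch, using hypothetical pixel counts rather than the paper's results:

```python
def flood_metrics(tp, fp, fn, tn):
    """Accuracy metrics for a binary flood / non-flood confusion matrix."""
    total = tp + fp + fn + tn
    oa = (tp + tn) / total                                # overall accuracy
    # chance agreement for Cohen's kappa
    pe = ((tp + fp) * (tp + fn) + (fn + tn) * (fp + tn)) / total ** 2
    kappa = (oa - pe) / (1 - pe)
    fpr = fp / (fp + tn)                                  # false-positive rate
    precision = tp / (tp + fp)                            # correct among predicted flood
    recall = tp / (tp + fn)                               # correct among actual flood
    f1 = 2 * precision * recall / (precision + recall)    # harmonic mean
    return {"oa": oa, "kappa": kappa, "fpr": fpr,
            "precision": precision, "recall": recall, "f1": f1}

# Hypothetical counts for illustration only
m = flood_metrics(tp=80, fp=10, fn=20, tn=90)
print(m["oa"], m["kappa"], m["fpr"])   # 0.85 0.7 0.1
```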
Conclusion
In this paper, a method for mapping floods in urban environments based on SAR intensity and interferometric coherence was introduced. The combination of intensity and coherence extracts flood information across different land-cover types. The method was tested on the Gonbad Kavous flood event observed by SAR sensors, and the resulting flood maps were validated against the flood reference derived from thresholding and the ground survey, showing satisfactory results in this case study. The findings of this experiment show that the joint use of SAR intensity and coherence provides more reliable information than intensity or coherence alone in urban areas with diverse landscapes. In particular, flood detection in weakly coherent or incoherent areas (e.g., bare soil and vegetated areas) relies heavily on multi-temporal intensity, while multi-temporal coherence provides more comprehensive flood information in areas that remain coherent (e.g., mostly built-up areas). However, some flood-specific situations, such as flooded parking lots and flooded dense building blocks, remain challenging for both intensity and coherence. Moreover, since the proposed method is sensor- and scene-independent, the very frequent and regular observations of SAR missions such as Sentinel-1 and the RADARSAT Constellation Mission (RCM) provide opportunities for mapping floods on a global scale, especially in low-income countries.
Mohsen Abedi; Mohammad SaadatSeresht; Reza Shahhoseini
Extended Abstract
Introduction
Nowadays, updating information collected from urban areas is of great importance, since it provides the basis for many fields of study such as land-cover change and environmental studies. Remote sensing provides an opportunity to obtain information from urban areas at different levels of accuracy and is widely used in various change-detection applications. Detecting changes in buildings, as one of the most important features in urban areas, is of particular importance. The large volume of remote sensing and photogrammetry data generated by an ever-increasing number of sources can only be processed on powerful and expensive systems to which laypeople do not have access. The present study applies deep learning methods and performs the computationally heavy processing on free cloud platforms to make this possible for the public.
Materials & Methods
Two case studies were selected in the present study. The first includes DSM and orthophoto images captured by drones over Mashhad in 2011 and 2016. In the second case study, DSM and orthophoto images were collected by drones over Aqda in Yazd province in 2015 and 2018. In accordance with the type of data used and the high computational load of processing, the present study applied a fuzzy clustering method to detect buildings at high computational speed and a deep learning method to detect their changes. In the first step, an object-based method and fuzzy logic were used to classify features and detect buildings: buildings were detected using descriptors extracted from terrestrial and non-terrestrial features, and the related decisions were made using fuzzy logic. In the second step, a deep learning method and a DSM differencing method were used to detect changes in buildings and to evaluate the results of the deep learning method. The DSM differencing method applies the building masks extracted in both epochs to the corresponding DSMs, computes their difference, and detects changes using an elevation threshold. In the deep learning method, a convolutional neural network was trained to detect changes in buildings between the two epochs. The network's input layers were generated for training from the DSMs of the buildings in both epochs and part of their interface; the changes detected by the differencing method were introduced as the output layer. After training, the entire interface in both epochs was introduced as input, and the trained network detected changes in the buildings. The same process was then repeated using the difference between the two DSMs; in other words, a single input layer was used, and the rest of the process was unchanged.
Finally, the changes detected by the neural network were compared with those detected by the DSM differencing method.
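The DSM differencing step used as ground truth above can be sketched in a few lines. This is a toy illustration with made-up heights and an assumed 2 m threshold, not the study's actual parameters:

```python
import numpy as np

def dsm_change_mask(dsm_t1, dsm_t2, building_mask, dz=2.0):
    """Flag building pixels whose height changed by more than dz metres between epochs."""
    diff = np.abs(dsm_t2 - dsm_t1)
    return (diff > dz) & building_mask

# Hypothetical 2x2 DSM patches (metres) and a building footprint mask
dsm_epoch1 = np.array([[10., 10.],
                       [ 3.,  3.]])
dsm_epoch2 = np.array([[10.,  4.],
                       [ 3.,  8.]])
mask = np.array([[True, True],
                 [False, True]])          # union of building footprints in both epochs
changes = dsm_change_mask(dsm_epoch1, dsm_epoch2, mask)
print(changes)
```

The resulting boolean mask marks one demolished and one newly built pixel; such masks serve as the changed / unchanged labels for training the network.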
Results & Discussion
In the first step, buildings were detected and images were classified using fuzzy logic. The overall accuracy of the first-epoch classification in Mashhad was 94.6%, indicating the higher accuracy of object-based methods compared to pixel-based methods. The overall accuracy of the first epoch in Aqda was 95.5%. The neural network detected changes in buildings with an overall accuracy of 90%. Given the ground truth used in network training (both with the two DSMs as input layers and with the difference between the epochs as a single input layer), the results indicate that the deep learning method is highly accurate in one-dimensional convolution mode. Moreover, the second mode used the difference between the DSMs of the two epochs, so many areas with no height change were removed from both epochs and the network was trained more appropriately and accurately.
Conclusion
The necessity of extracting features, especially urban features such as buildings, and identifying their changes over time was investigated in the present study. Due to the high computational load of modern remote sensing and photogrammetry data and the highly expensive systems required for their processing, a new method was presented to address this problem. Considering the type of data used and the complexity of the features, object-based methods were selected instead of pixel-based methods to identify features and buildings. A deep learning method was used to detect changes in buildings and was compared with the DSM differencing method. A one-dimensional convolutional neural network was used, with two different modes for training and predicting changes. In the first mode, the DSMs extracted from the buildings in each epoch were used as the input layers, while in the second, the difference between the DSMs was introduced as a single input layer and the network was trained against ground truth from changed and unchanged areas obtained from the DSM differencing method. After training, changes were predicted with the trained network. Much better results were obtained in the second mode, in which the difference between the DSMs was used.
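The two input modes can be contrasted with a toy one-dimensional convolution over a building height profile. This is not the trained network from the study; the profiles, kernels, and the single hand-set filter are purely illustrative of how the two input arrangements differ.

```python
import numpy as np

def conv1d(signal, kernel, bias=0.0):
    """Valid 1-D convolution of a height profile with a small kernel."""
    k = len(kernel)
    return np.array([np.dot(signal[i:i + k], kernel) + bias
                     for i in range(len(signal) - k + 1)])

dsm_a = np.array([5., 5., 9., 9., 5.])   # building profile, first epoch (metres)
dsm_b = np.array([5., 5., 5., 5., 5.])   # same profile, second epoch (building removed)

# Mode 1: both epochs as separate input channels, one kernel each, responses summed
k_a, k_b = np.array([1., -1.]), np.array([-1., 1.])
mode1 = conv1d(dsm_a, k_a) + conv1d(dsm_b, k_b)

# Mode 2: a single input channel, the DSM difference
mode2 = conv1d(dsm_b - dsm_a, np.array([1., -1.]))
print(mode1, mode2)
```

In both modes the nonzero responses mark the edges of the demolished building; mode 2 feeds the network a signal in which unchanged areas are already zero, which matches the finding above that training on the DSM difference was more effective.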