Kosar Kabiri; Sayyed Bagher Fatemi
Abstract
Extended Abstract

Introduction
Different image fusion methods primarily seek to improve the spectral and spatial content of the final result. However, the fused image often suffers from spectral distortions, and some fusion methods are too slow. Image fusion using the IHS transformation is known as a fast fusion method. Unfortunately, the image fused with IHS also suffers from spectral distortions, and several variants of the method have therefore been developed. Defining the weight of each band for the generation of the intensity component is one of the main problems discussed in the literature. Spectral response curves are one of the major sources for defining the relative weight of each spectral band, and published reports indicate that they can improve the quality of the final fused image. The weight of each individual band is often calculated from the overlapping area of the spectral response curves of the panchromatic and multispectral bands, but other information, such as the non-overlapping areas of the curves, is also considered to play a role in the calculation of the weights. The present comparative study investigates the potential of using this information.

Materials & Methods
A multispectral GeoEye-1 satellite image with 2 m spatial resolution and four spectral bands, together with the corresponding panchromatic band at 0.5 m spatial resolution, was used to test the idea. Seven variants of the FastIHS fusion method were developed based on different approaches to estimating the intensity component from the information in the spectral response curves. The test methods were compared with the original FastIHS image fusion method; the only difference between these methods was in the way they calculate the weight of each band.
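As a rough illustration of the FastIHS idea described above (a sketch, not the authors' exact implementation), the intensity component can be formed as a weighted sum of the co-registered multispectral bands, and the panchromatic detail injected into every band. The function and variable names here are hypothetical:

```python
import numpy as np

def fast_ihs_fuse(ms_bands, pan, weights):
    """FastIHS-style fusion sketch: add (Pan - I) to each multispectral
    band, where I is a weighted sum of the bands.
    ms_bands: (B, H, W) array already resampled to the pan grid;
    pan:      (H, W) panchromatic band;
    weights:  length-B band weights (normalized to sum to 1 here)."""
    weights = np.asarray(weights, dtype=float)
    weights = weights / weights.sum()                     # normalize weights
    intensity = np.tensordot(weights, ms_bands, axes=1)   # (H, W) intensity
    detail = pan - intensity                              # injected spatial detail
    return ms_bands + detail[None, :, :]                  # broadcast over bands
```

The speed of this scheme comes from avoiding an explicit forward/inverse IHS transform: the same detail image is simply added to every band.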
The seven tested methods defined the weight of each multispectral band as:
1) the ratio of the overlapping area of the panchromatic and multispectral response curves to the area of the multispectral response curve;
2) the ratio of the area of the multispectral response curve to the area of the panchromatic response curve;
3) the inverse of the distance between the central wavelengths of the panchromatic and multispectral response curves;
4) the ratio of the overlapping area of the panchromatic and multispectral response curves to the area of the panchromatic response curve;
5) the ratio of the non-overlapping area of the panchromatic and multispectral response curves to the area of the multispectral response curve;
6) the ratio of the overlapping area of the panchromatic and multispectral response curves to the area of the panchromatic response curve minus the area of the multispectral response curve;
7) the ratio of the non-overlapping area of the panchromatic and multispectral response curves to the area of the multispectral response curve, multiplied by the ratio of the area of the multispectral response curve to the overlapping area of the two curves.

Results & Discussion
Four criteria were used to evaluate the fused images: ERGAS, RMSE, the correlation coefficient, and the edge correlation with the panchromatic band. To calculate the edge correlation coefficient, a Sobel filter was applied to the panchromatic and fused bands, and the correlation coefficient between each filtered spectral band and the filtered panchromatic band was computed. All eight methods were ranked according to the four evaluation criteria. Because the individual rankings were inconsistent, the four criteria were merged and a final ranking was derived from the combined results. In this final ranking, the fifth method places first and the second method places eighth.
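To make the area-based weighting concrete, here is a minimal sketch of the fifth (top-ranked) scheme, assuming the pan and multispectral response curves are sampled on a common wavelength grid and interpreting the "non-overlapping area" as the symmetric difference of the two curves; both the sampling setup and that interpretation are assumptions for illustration:

```python
import numpy as np

def _area(y, x):
    """Trapezoidal area under a sampled curve y(x)."""
    return float(np.sum((y[1:] + y[:-1]) * np.diff(x)) / 2.0)

def method5_weight(wl, pan_resp, ms_resp):
    """Sketch of weighting scheme 5: ratio of the non-overlapping area
    of the pan and MS response curves to the area of the MS curve.
    wl: common wavelength grid; pan_resp, ms_resp: sampled curves."""
    overlap = _area(np.minimum(pan_resp, ms_resp), wl)   # shared area
    pan_area = _area(pan_resp, wl)
    ms_area = _area(ms_resp, wl)
    non_overlap = pan_area + ms_area - 2.0 * overlap     # symmetric difference
    return non_overlap / ms_area
```

The other six schemes would reuse the same three quantities (overlap, pan area, MS area) in different ratios.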
The sorted list of the methods based on the final ranking is therefore: IHS5, IHS3, IHS6, IHS1, IHS4, IHS7, FastIHS, and IHS2. As the ranking shows, almost all of the tested methods are more accurate than the base method (FastIHS).

Conclusion
The results indicate that using the information obtained from spectral response curves can improve the results of FastIHS image fusion: it can improve the fusion speed and reduce the spectral distortions of the final fused image. However, although the spectral features of the data are preserved, the total number of detected edges decreases. Spectral response curves are directly tied to the physics of the imaging process, so using their information can produce natural-looking fused images with better visualization and enhanced spatial content.
Faeze Eslamizade; Heidar Rastiveis
Abstract
Extended Abstract

Introduction
Given population growth and increasing urbanization, natural disasters such as earthquakes can cause heavy losses and damage and interrupt the development of cities and countries. Among these disasters, the earthquake is of great importance due to its unpredictability and, for countries located on the earthquake belt, its high frequency relative to other events. According to last year's estimates, Iran has been among the six countries with the highest earthquake mortality rates. Finding a way to minimize these losses is therefore critical. After an earthquake, crisis managers need rapid information from the affected area to minimize fatalities and financial losses. The destruction map is one such information product: it shows the destroyed buildings or roads together with their degree of destruction, so that damaged buildings and roads can be located quickly.

Materials & Methods
Many data sources are used to prepare destruction maps, such as aerial/satellite images and LiDAR data. This information can be used to identify destroyed buildings either automatically or by visual interpretation. Visual interpretation of the degree of destruction requires an operator; although it is highly accurate, it is rarely used because it is time-consuming and needs specialists to interpret the data. Researchers have therefore focused on automated processing techniques for producing destruction maps. Various automatic change detection techniques evaluate earthquake damage by comparing satellite or aerial images from before and after the event. LiDAR data is one of the most important sources of information for determining destroyed buildings with high accuracy and speed, since it allows a 3-D representation of the destroyed region.
This information is a great help in preparing destruction maps automatically. The recent expansion of LiDAR technology is due to the high spatial resolving power of these data; as a result, many researchers have focused on producing destruction maps automatically from LiDAR data. In addition, textural information derived from the LiDAR data, such as homogeneity in the destroyed regions, can be effective in distinguishing destroyed from undestroyed buildings. In this paper, a new algorithm is proposed to produce a post-earthquake destruction map by integrating post-event high-resolution satellite images with post-event LiDAR data. In the proposed method, after the necessary preprocessing of the post-event satellite image and LiDAR data, different textural descriptors are extracted from both. In the next step, using the building layer extracted from the map, the building areas are extracted from the satellite image and the LiDAR data, along with their descriptors. The textural descriptors extracted from the satellite image and the LiDAR data are then combined. After that, the points inside each building area are categorized into two classes, "debris" and "intact", using a support vector machine. Finally, based on the debris-class area of each building and a selected threshold, destroyed and undestroyed buildings are identified. This algorithm is executed on each building to produce the final destruction map.

Results & Discussion
To evaluate the proposed method, a data set was selected from the city of Port-au-Prince, the capital of Haiti, after the 2010 earthquake. According to USGS reports, 97,294 buildings were damaged and 188,383 were destroyed in Port-au-Prince and most of the southern parts of Haiti. Furthermore, reports show that 222,570 people were killed, 300,000 were injured, and 1.3 million were displaced.
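The final decision step described above can be sketched as follows, assuming the per-point textural descriptors have already been extracted; the feature layout, the RBF kernel, and the 0.5 debris-fraction threshold are illustrative assumptions, not the authors' reported settings:

```python
import numpy as np
from sklearn.svm import SVC

def classify_building(train_feats, train_labels, building_feats,
                      debris_threshold=0.5):
    """Train an SVM on labelled 'debris' (1) vs 'intact' (0) samples,
    classify the points of one building, and flag the building as
    destroyed when the fraction of points predicted as debris exceeds
    the threshold. Returns (decision, debris_fraction)."""
    clf = SVC(kernel="rbf", gamma="scale")
    clf.fit(train_feats, train_labels)
    pred = clf.predict(building_feats)
    debris_fraction = float(pred.mean())      # share of debris points
    decision = "destroyed" if debris_fraction > debris_threshold else "intact"
    return decision, debris_fraction
```

In practice this loop would run once per building footprint from the vector map, with the threshold tuned on validation data.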
The sample data set includes post-event WorldView-2 satellite images as well as post-event LiDAR data. The WorldView-2 image was acquired on January 16, 2010, and the LiDAR data, obtained from a topography data website, were collected between January 21 and January 27, 2010. The vector map of the selected test area was generated in the ArcGIS environment. In the evaluation of the proposed method on these data, an overall accuracy of 97% and a Kappa coefficient of 92% were obtained, which demonstrates the reliability of the technique.

Conclusion
In this paper, a new method for generating damage maps based on the integration of high-resolution satellite images and LiDAR data was proposed. The results show the ability of this method to generate destruction maps from high-resolution satellite images and LiDAR data, and they are satisfactory in comparison with similar studies. The selection of appropriate descriptors, correct training data, the elimination of non-building areas from the sample data, and the integration of satellite images and LiDAR data can be considered the reasons behind these results.
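The two reported accuracy measures are standard confusion-matrix statistics; a minimal sketch of how they are computed (the matrix values below are synthetic, not the study's actual counts):

```python
import numpy as np

def accuracy_and_kappa(cm):
    """Overall accuracy and Cohen's Kappa from a confusion matrix
    (rows = reference classes, columns = predicted classes)."""
    cm = np.asarray(cm, dtype=float)
    total = cm.sum()
    po = np.trace(cm) / total                                 # observed agreement
    pe = (cm.sum(axis=0) * cm.sum(axis=1)).sum() / total**2   # chance agreement
    kappa = (po - pe) / (1.0 - pe)
    return po, kappa
```

Kappa discounts the agreement expected by chance, which is why it is usually lower than the overall accuracy.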