Mir Najaf Mousavi; Maral Rahimi
Abstract[1]
Sustainable development in a country can be realized only by taking into account that country's ecological potential, human resources, technology and financial resources; development so conducted will be sustainable only in that environment. The purpose of this study was therefore to explain the role of demographic components in realizing the dimensions of sustainable development in the border regions of West Azarbaijan Province. The statistical population of the research consists of the nine border cities of West Azarbaijan Province in the year 1390 (2011). The research was conducted using a descriptive-analytical method and quantitative models, with data collected through a library (documentary) method. First, the demographic changes of the cities of West Azarbaijan Province during the years 1355-1390 (1976-2011) were studied. Then, the cities of the province were ranked in terms of different indexes of sustainable development in 2011 using the VIKOR model (in the Excel software environment), and they were grouped by the hierarchical cluster analysis method using SPSS 16. The calculations indicate that the ranking of the cities differs across the different dimensions; overall, Urmia and Showt emerged as the most developed and the most deprived cities, respectively. The coefficient of variation (CV) was used to investigate the extent of inequality among the cities. This model showed that, among the various indexes, the highest level of inequality occurred in the economic indicators and the lowest in the environmental indicators. Based on the results of structural equation modeling using LISREL, demographic components have had the greatest positive impact on the economic dimension of sustainable development.
Testing the hypotheses revealed that all of the demographic components under study have been effective in realizing the dimensions of sustainable development.
[1] - Due to the poor quality of the extended English abstract provided by the corresponding author, the journal was compelled to re-translate the Persian abstract and publish it in place of the extended English abstract.
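The coefficient of variation used above to compare inequality across index groups can be sketched as follows; the city index scores below are hypothetical illustrations, not the study's data.

```python
# Coefficient of variation (CV) across cities for each development index
# group, as used in the abstract to compare levels of inequality.
# City values are hypothetical, not taken from the study.

import statistics

def coefficient_of_variation(values):
    """CV = standard deviation / mean (a dimensionless inequality measure)."""
    return statistics.pstdev(values) / statistics.mean(values)

# Hypothetical index scores for 9 border cities
economic = [0.9, 0.2, 0.15, 0.3, 0.1, 0.25, 0.2, 0.1, 0.05]
environmental = [0.5, 0.45, 0.55, 0.5, 0.4, 0.6, 0.5, 0.45, 0.55]

cv_econ = coefficient_of_variation(economic)
cv_env = coefficient_of_variation(environmental)

# A larger CV signals greater inter-city inequality for that index group
assert cv_econ > cv_env
```

Because the CV is normalized by the mean, it allows inequality to be compared across index groups measured on different scales.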
Mahdi Modiri; Reza Aghataher; Mohammad Fallah Zazuli; Mohsen Jafari
Volume 22, Issue 86, June 2013, Pages 5-16
Abstract
Effective planning and decision-making require access to accurate and updated information. Having updated spatial information and applying it properly is one of the most important topics in command. A C4I system is composed of several smaller systems that can help military commanders assess information about the enemy and make better decisions. A Geospatial Information System (GIS) can assist commanders in reaching more rational decisions: by modeling the Earth and the effects of terrain, GIS provides a good view of the area of operations for military commanders. This article reviews the role and application of Geospatial Information Systems in the development of command and control. The use of new technologies such as mobile GIS (Mobile GIS) and web-based GIS (WEB GIS), together with locating the best places for different functions, are among the GIS capabilities relevant to command and control (C4I). Thus, by using GIS capabilities to model the area of operations, the highest rates of optimal and valid decisions can be reached in command and control.
Ali Mohammadpour; Kheder Faraji Rad
Volume 22, Issue 85, May 2013, Pages 5-26
Abstract
Tehran, as the political and economic capital of Iran, has an unsafe structure, and the safety issue has not been given due consideration until now. Tehran faces several natural disasters, including earthquakes, and identifying and addressing a specific framework of urban spatial divisions is essential for efficient management during a crisis in Tehran. This article therefore aims to answer the fundamental question of what the appropriate zoning of the city of Tehran is, with emphasis on crisis management.
A combined GIS and AHP method, together with qualitative analysis, was used to answer the above question. The results demonstrated that many parts of Tehran (16,446.75 hectares) fall within areas of high vulnerability, and a further 10,004.25 hectares fall within areas of medium vulnerability. In addition, the article concludes that the administrative divisions of Tehran do not correspond with the activities and tasks determined for the professional committees of the Organization of Prevention and Crisis Management of Tehran; it is therefore necessary to redefine the administrative divisions of Tehran in relation to crisis management.
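The AHP side of the combined GIS-AHP method can be sketched as deriving criterion weights from a pairwise comparison matrix; the 3×3 matrix below and its criteria are hypothetical, not those of the study.

```python
# Sketch of deriving AHP criterion weights from a pairwise comparison
# matrix, as in GIS-AHP vulnerability zoning. The matrix and the
# criteria it compares (e.g. distance to fault, building density,
# access to open space) are hypothetical.

def ahp_weights(matrix):
    """Approximate the principal eigenvector by normalized column averaging."""
    n = len(matrix)
    col_sums = [sum(row[j] for row in matrix) for j in range(n)]
    normalized = [[matrix[i][j] / col_sums[j] for j in range(n)]
                  for i in range(n)]
    # Average each row of the column-normalized matrix
    return [sum(normalized[i]) / n for i in range(n)]

# Saaty-style pairwise judgments: criterion 1 moderately dominates 2, etc.
pairwise = [
    [1.0, 3.0, 5.0],
    [1 / 3, 1.0, 3.0],
    [1 / 5, 1 / 3, 1.0],
]

weights = ahp_weights(pairwise)
assert abs(sum(weights) - 1.0) < 1e-9      # weights are a convex combination
assert weights[0] > weights[1] > weights[2]
```

In a GIS workflow, these weights would then multiply the rasterized criterion layers before overlaying them into a vulnerability map.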
Leila Servati; Mohammad Reza Valavi; Maryam Hourali
Abstract
In today's world, the importance of data, information and knowledge should not be ignored. Information superiority leads to superiority in decision-making and to more effective actions. Within the military domain, various information systems are utilized to support commanders and troops in maintaining awareness and control of the situation. Location information is one of the important aspects of military information systems for providing situation awareness. Given the extensive applications of location-based military information systems, it is important to improve the intelligence, knowledge base and integration of such systems. One of the current challenges in military information systems is that the applied systems operate as separate islands with no common protocol or language. In addition, computerized systems do not take the meaning of transferred terms and concepts into consideration; in other words, computerized military systems are not capable of establishing a meaningful relation at a conceptual level with each other or with humans. Moreover, databases with different formats and data structures do not use similar methods for modelling and storing information and therefore work as independent information silos. Furthermore, people's knowledge is not coded and is rarely modelled for future re-use. In this regard, ontology is a modelling method that seems more effective than others. Ontologies provide an explicit, formal description of a common conceptualization: they represent whatever exists in an information domain as concepts, relations, properties, rules and actual instances, employing a standardized, computer-processable formal language to represent those concepts.
Common conceptualization in an ontology means that the produced concepts should be accepted by the knowledge community of the domain of interest, so that the ontology becomes capable of providing a common language among people within the same domain, such as military geography. Due to the importance of the location element, a specific geographical ontology needs to be developed for such applications. A review of existing ontologies reveals problems and challenges in employing them for military applications. For instance, some existing ontologies are only a hierarchy of concepts and cannot be called ontologies that fulfill military requirements; some are limited to specific geographical domains; and others lack the complete coverage of geographical concepts required in military systems. In general, there is a need for geographical knowledge engineering and the development of a localized geographical ontology. Such an ontology should contain geographical concepts, properties and relations with a military approach, for use in location-based military systems. In fact, developing a geographical ontology provides a formal common language and standardization in the domain of military geography; it also helps to model geographical knowledge and to establish a conceptual infrastructure for location-based systems. Consequently, various geographical ontology-based systems can be developed with vertical and horizontal integration. Such systems eliminate the problem of isolated data and information stored in separate islands, prevent the loss of data sources and support the re-use of knowledge sources. Moreover, they can interact with each other and with humans at a higher, more meaningful conceptual level. To develop such an ontology-based system, a semi-automatic method is utilized for knowledge engineering and ontology development.
The applied method results from analyzing different methods in order to remove deficiencies in the early stages of ontology development; thus, a set of the best applicable methods for geographical knowledge engineering is utilized. The main clusters of the developed geographical ontology are geographical entities, geographical processes and geographical properties. Other clusters of concepts include causing factors, military concepts, time concepts, situation concepts and concepts related to general status. The developed ontology includes 4161 geographical concepts, 319 concept properties, 426 relations among concepts and 5527 actual instances of geographical concepts modelled using the proposed ontology.
In this research, the geographical ontology was developed using the open-source WebProtégé software. The web-based version of the software enables easy access to the development environment from different locations and supports teamwork, so that different people or groups of experts can contribute to the ontology and share thoughts and tasks. It is also possible to track changes and to monitor and supervise the development process, and the ontology can be expressed in the OWL standard language for use in other systems. In general, the developed geographical ontology is capable of being applied in military location-based systems, and it can also serve as support for other military and security ontologies. Finally, the quality, credibility and coverage of the developed ontology were examined and verified using a comprehensive mix of statistical methods, automatic tools and the opinions of military geography experts.
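The concept/relation/instance modelling the abstract describes can be imitated in a minimal sketch; the real ontology was built in WebProtégé and exported to OWL, which this hypothetical plain-Python structure only approximates.

```python
# Hypothetical sketch of taxonomic (is-a) and non-taxonomic relations in
# a geographical ontology. The real artifact is an OWL ontology; this
# class only illustrates the idea of concepts, a hierarchy, and named
# relations between concepts.

class Concept:
    def __init__(self, name, parent=None):
        self.name = name
        self.parent = parent        # taxonomic (is-a) link
        self.relations = {}         # named relations to other concepts

    def is_a(self, other):
        """True if `other` is this concept or an ancestor of it."""
        node = self
        while node is not None:
            if node is other:
                return True
            node = node.parent
        return False

geo_entity = Concept("GeographicalEntity")
water_body = Concept("WaterBody", parent=geo_entity)
river = Concept("River", parent=water_body)
bridge = Concept("Bridge", parent=geo_entity)
bridge.relations["crosses"] = river   # non-taxonomic relation

assert river.is_a(geo_entity)
assert bridge.relations["crosses"].name == "River"
```

A shared hierarchy like this is what lets two systems agree that a "River" query should also match anything asserted as a more specific kind of river.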
Javad Sadidi; Saiedeh Sahebi Vayghan; Hani Rezaiyan
Abstract
Extended Abstract
1. Introduction
During recent years, advances in data collection and management technology have led to the creation of very large databases. In contrast to data such as numbers and strings, raster data are complicated and have special characteristics, so they are classified as "big data". Due to the nature of spatial analysis queries, the need arises to aggregate or summarize a large portion of the data to be analyzed. The main issue in the database era is efficient query processing, so that users do not spend a long time retrieving their requests. Traditional query processing returns exact answers, but the answers take more time than is acceptable in real-time systems. Notably, the query running time is sometimes much more important than accuracy, especially in real-time services. Approximate Query Processing (AQP) is an alternative method for query processing in time-consuming environments that enables the system to provide fast, approximate answers. One of the most significant applications of AQP is query optimization: AQP can play a valuable role in increasing the speed of spatial queries over large and complicated data, and it is an efficient method for recognizing the needed data and thereby minimizing the cost of aggregation queries. Approximation methods have been used in decision support systems since the 1980s, and AQP has been applied to several database problems over the past decade. The current techniques on various research frontiers are only useful for relational database systems (Azevedo et al., 2007). The main idea behind in-database processing is eliminating the transmission of big data sets to external programs: since all analyses are implemented inside the database, it offers fast implementation, scalability and security.
Hence, in-database processing improves computer network productivity and contributes to well-suited designs for fast-response queries.
2. Methodology
The current research aims at comparing the traditional and an optimized Sum aggregation operation to decrease the running time of spatial queries in a PostgreSQL database. To undertake the research, 60 precipitation rasters were used. The study area is located in Lorestan Province, and precipitation gauging stations were used as the primary data. The rasters were created from monthly precipitation data for the period 2010-2014 using the Kriging interpolation method, loaded into the PostgreSQL database with the raster2pgsql loader, and their pixels stored in the related tables. In the optimized aggregation method, the rasters are first clustered by a purpose-written similarity function; the functions were written in the PL/pgSQL language in PostGIS. The execution steps of the Sum function are: creating the similarity function, performing the function, running the optimized query and, consequently, obtaining the approximate answer. One raster is then selected from each cluster and multiplied by the number of rasters belonging to that cluster, and the resulting raster enters the Sum function as the representative of the cluster. In each cluster, the number of arithmetic operations performed is reduced by (number of rasters in the cluster - 1) × rows × columns of the given raster. This method significantly reduces the number of arithmetic operations and enables fast approximate answers. Finally, for accuracy assessment, the error of the method was estimated by calculating the mean relative error, the DI (difference indicator) error and the relative error for each raster, and the achieved results were analyzed.
The user may then decide whether the resulting accuracy is acceptable for a particular project or whether an exact query has to be executed.
3. Results and discussion
To compare the traditional and optimized Sum functions, five scenarios were implemented. The results show that the optimized Sum function is 27.2 times faster than the traditional function, while the average difference in pixel values between the two is 0.028. The query running times for the optimized and traditional Sum are 7.754 and 211 seconds respectively, which implies the efficiency of the optimized method. Notably, the accuracy of the optimized method depends on the nature and the homogeneity or heterogeneity of the rasters used. This valuable decrease in in-database spatial query running time may be exploited to offer real-time web-based services, such as meteorology and traffic services, which need real-time analysis and fast responses.
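The clustered approximation described in the methodology can be sketched outside the database as follows; the real implementation is in PL/pgSQL over PostGIS raster tables, and the tiny 2×2 grids below are hypothetical stand-ins for the precipitation rasters.

```python
# Sketch of the approximate Sum aggregation: rasters are grouped into
# clusters of near-identical grids, one representative per cluster is
# scaled by the cluster size, and only the representatives are summed.
# Grids and the tolerance value are illustrative assumptions.

def similar(a, b, tol=1.0):
    """Cluster criterion: every pair of corresponding pixels differs by <= tol."""
    return all(abs(x - y) <= tol
               for ra, rb in zip(a, b) for x, y in zip(ra, rb))

def approximate_sum(rasters, tol=1.0):
    clusters = []                      # list of (representative, count)
    for r in rasters:
        for i, (rep, n) in enumerate(clusters):
            if similar(r, rep, tol):
                clusters[i] = (rep, n + 1)
                break
        else:
            clusters.append((r, 1))
    rows, cols = len(rasters[0]), len(rasters[0][0])
    total = [[0.0] * cols for _ in range(rows)]
    # Sum representative * cluster size, pixel by pixel
    for rep, n in clusters:
        for i in range(rows):
            for j in range(cols):
                total[i][j] += rep[i][j] * n
    return total

rasters = [[[10, 12], [11, 13]],
           [[10.5, 12.4], [11.2, 12.9]],   # clusters with the first raster
           [[30, 31], [29, 32]]]
approx = approximate_sum(rasters)
exact = [[sum(r[i][j] for r in rasters) for j in range(2)] for i in range(2)]
# The approximate answer is close to the exact sum at a fraction of the cost
assert all(abs(approx[i][j] - exact[i][j]) < 2
           for i in range(2) for j in range(2))
```

The savings formula in the text follows directly: each cluster of n rasters costs one scaled raster instead of n per-pixel additions.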
Morteza Miri; Ghasem Azizi; Hossein Mohammadi; Mahdi Pourhashemi
Abstract
Extended Abstract
Introduction
Limited access to atmospheric and terrestrial data such as rainfall, temperature, humidity and soil temperature is the most important problem in many climatological and hydrological studies in many parts of the world, particularly in developing countries and in rural and mountainous areas. One solution to this obstacle is to use available gridded datasets, which have proved their representativeness for many different parts of the world. Although satellite data and gridded datasets are a reasonable alternative source for areas lacking station data, local effects vary from region to region and can affect satellite and model performance; a dataset must therefore be evaluated in a region before it is used there as a decision-making tool.
Materials and methods
The present study introduces the Global Land Data Assimilation System (GLDAS) and evaluates this model dataset against data measured by synoptic stations. GLDAS has been developed jointly by scientists at the National Aeronautics and Space Administration (NASA) Goddard Space Flight Center (GSFC) and the National Oceanic and Atmospheric Administration (NOAA) National Centers for Environmental Prediction (NCEP) in order to produce fields of land surface states and fluxes. The goal of a land data assimilation system is to ingest satellite and ground-based observational data products using advanced land surface modelling and data assimilation techniques. The uniqueness of GLDAS is that it is a global, high-resolution, offline terrestrial modelling system incorporating ground and satellite observations. The temporal resolution of the GLDAS products is 3-hourly and monthly, with 0.25- and 1-degree spatial resolutions, and its output is the result of four land surface models: the Community Land Model (CLM), NOAH, Mosaic, and the Variable Infiltration Capacity (VIC) model. The products are in Gridded Binary (GRIB) format and can be accessed through a number of interfaces.
The representativeness and performance of GLDAS in estimating temperature at 66 Iranian synoptic stations distributed across the country are examined herein. To evaluate the performance of the dataset against the observed temperature records at the considered locations, we used R-squared, the Nash-Sutcliffe model efficiency coefficient (EF), RMSE, bias, the slope (B) of the regression, and the standardized RMSE. The performance of the dataset was also represented graphically through scatter plots of the regression established between GLDAS and the observations at the selected stations.
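Several of the evaluation statistics named above have simple closed forms; the sketch below computes them for a hypothetical pair of observed/GLDAS-estimated temperature series (the values are illustrative only, not the study's data).

```python
# RMSE, bias, and the Nash-Sutcliffe efficiency (EF) for paired
# observed/estimated series. The series are hypothetical.

import math

def rmse(obs, est):
    return math.sqrt(sum((o - e) ** 2 for o, e in zip(obs, est)) / len(obs))

def bias(obs, est):
    """Mean of (estimate - observation); positive means over-estimation."""
    return sum(e - o for o, e in zip(obs, est)) / len(obs)

def nash_sutcliffe(obs, est):
    """EF = 1 - SSE/SST; 1 is a perfect fit, <= 0 is no better than the mean."""
    mean_obs = sum(obs) / len(obs)
    sse = sum((o - e) ** 2 for o, e in zip(obs, est))
    sst = sum((o - mean_obs) ** 2 for o in obs)
    return 1 - sse / sst

observed = [12.0, 15.5, 20.1, 24.8, 28.3, 25.0, 18.2, 13.4]
estimated = [12.6, 16.0, 20.5, 25.5, 29.1, 25.8, 18.9, 14.1]

assert nash_sutcliffe(observed, estimated) > 0.9   # strong agreement
assert bias(observed, estimated) > 0               # systematic over-estimation
```

A positive bias alongside a high EF is exactly the Caspian-coast pattern the abstract describes: a systematic offset that a simple statistical correction can remove.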
Results and discussion
The results of the statistical indicators were plotted over a map of Iran to ease the display of their spatial tendency and to explain the possible geographical role in controlling their spatial variation. According to the evaluation, the GLDAS data perform well at all of the studied stations, with strong correlation coefficients, although the model over-estimates temperature in the coastal areas of the Caspian Sea. The special physiographic and climatic characteristics of this region are one of the main reasons for the over-estimation, very likely because the complex topography of the region is not properly taken into account in the model parameterization, or because the effect of the sea on the atmosphere at stations near the coast cannot be removed. However, since the cloud of estimated data for this region is distributed along the regression line, the observed over-estimation could be resolved by establishing a statistical relationship between the observed and modeled datasets; such a mismatch therefore need not be considered a drawback of the modeled dataset. Considering that this model output is produced through a combination of modeled, observed and remotely sensed data, it can confidently be used for the mountainous areas and deserts of Iran that suffer from a lack of weather stations or from substantial missing values. This dataset may be considered a superior dataset for many climatological and hydrological subjects in Iran and should thus be seen as a promising tool for extending hydrological and climatological research in the country.
Conclusion
Statistical comparisons indicate that the GLDAS data perform well at all of the studied stations, with strong accuracy. Given the global coverage of the model dataset, the large number of climate-hydrological variables it provides, and the results of this research indicating the good accuracy of the GLDAS model in Iran, it is suggested that all variables of the model be evaluated.
Meysam Argany; Farid Karimipour; Fatemeh Mafi
Abstract
Extended Abstract
Introduction
Wireless Sensor Networks (WSNs) are widely used for monitoring and observing dynamic phenomena. A sensor in a WSN covers only a limited region, depending on its sensing and communication ranges as well as the environment configuration. For the efficient deployment of sensors in a WSN, coverage estimation is a critical issue. Probabilistic methods are among the most accurate models proposed for sensor coverage estimation; however, most of them are based on a raster representation of the environment, which limits their quality. In this paper, we propose a probabilistic method for estimating the coverage of a sensor network based on raster models and a 3D vector representation of the environment. The performance of global optimization approaches is then evaluated, with the 3D vector model used as the appropriate environment model.
Materials and Methods
Recent advances in electromechanical and communication technologies have resulted in the development of more efficient, low-cost and multi-function sensors. These tiny and ingenious devices are usually deployed in a wireless network to monitor and collect physical and environmental information such as motion, temperature, humidity, pollutants and traffic flow. The information is then communicated to a processing center, where it is integrated and analyzed for different applications. Deploying sensor networks allows inaccessible areas to be covered while minimizing sensing costs compared to using separate sensors to cover the same area completely. Sensors may be spread with various densities depending on the area of application and the detail and quality of the information required. Despite advances in sensor network technology, the efficiency of a sensor network for collecting and communicating information may be constrained by the limitations of the sensors deployed at the network nodes.
These restrictions may include sensing range, battery power, connection ability, memory and limited computation capabilities. They have been addressed in recent years by many researchers from various disciplines in order to design and deploy more efficient sensor networks. Efficient sensor deployment is one of the most important issues in the sensor network field, as it affects both the coverage and the communication between sensors in the network. Nodes use their sensing modules to detect events occurring in the region of interest. Each sensor is assumed to have a sensing range, which may be constrained by the phenomenon being sensed and by the environmental conditions. Hence, obstacles and environmental conditions affect network coverage and may result in holes in the sensing area. Communication between nodes is also important: information collected from the region should be transferred to a processing center, directly or via adjacent sensors. In the latter case, each sensor needs to be aware of the positions of the other sensors in its proximity. In recent years, WSNs have been studied in several applications, from monitoring and control in smart cities and intelligent transportation to land-use planning and environmental monitoring. Sensor deployment for achieving maximum coverage is one of the important issues in WSNs; hence, most studies employ optimization algorithms to achieve maximum coverage.
Discussion and Results
In a general classification, optimization algorithms for sensor deployment aimed at increasing coverage are divided into local and global algorithms. The characteristic feature of global algorithms is their randomness, based on an evolutionary process. In all of these algorithms, calculating the sensor network coverage is essential as an objective function: coverage improvement is carried out according to the coverage calculation method.
In previous research, a simple model was considered as the environmental model for the network sensors. In this research, raster and vector modeling in two- and three-dimensional spaces and global optimization algorithms for optimizing the sensor layouts were compared and evaluated; two-dimensional and three-dimensional vector models were used as precise environmental models. Most models in previous studies considered coverage to be binary (i.e., a point is either covered by a sensor or not). For more realistic modeling, this study treats coverage as probabilistic, meaning that the amount of coverage, obtained from parameters such as distance and angle to the sensor, is expressed as a percentage between zero and one hundred. In fact, not all points are sensed in the same way; sensing varies according to these parameters. Since the purpose of this study is to compare the performance and ability of global optimization algorithms, the study area is assumed to have equal conditions throughout. Several optimization methods, namely genetic algorithms, L-BFGS, VFCPSO and CMA-ES, were implemented to optimize the locations of the sensors, and various sensing types, including the omnidirectional binary, directional and probabilistic sensing models, were tested with these algorithms in different raster and vector study areas.
Conclusion
This paper focused on comparing the performance of four global optimization algorithms for optimizing the deployment of sensors in an environment with more spatial detail than previous approaches. The innovation of this study was to use 3D raster and vector data and to implement the global optimization methods with a probabilistic sensing model to optimize sensor network placement. Finally, promising results were presented and discussed, and future methods were introduced.
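A non-binary sensing model of the kind described above can be sketched as a distance-dependent probability; the two radii and the decay exponent below are illustrative assumptions, not the study's parameters.

```python
# Hypothetical probabilistic sensing model: coverage falls from 1 to 0
# between an inner radius of certain detection and an outer radius of
# no detection. Radii and decay exponent are illustrative assumptions.

import math

def coverage_probability(sensor, point, r_certain=5.0, r_max=15.0, alpha=2.0):
    """Return detection probability in [0, 1] as a function of distance."""
    d = math.dist(sensor, point)
    if d <= r_certain:
        return 1.0
    if d >= r_max:
        return 0.0
    # Smooth decay between the two radii
    return ((r_max - d) / (r_max - r_certain)) ** alpha

sensor = (0.0, 0.0)
assert coverage_probability(sensor, (3.0, 0.0)) == 1.0     # inside inner radius
assert coverage_probability(sensor, (20.0, 0.0)) == 0.0    # beyond outer radius
assert 0.0 < coverage_probability(sensor, (10.0, 0.0)) < 1.0
```

An optimizer like the ones compared in the paper would sum such probabilities over grid or vector sample points to obtain the coverage objective it maximizes.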
Yasin Kazemi; Saeid Hamzeh; Seyed Kazem Alavipanah; Bahram Bahrambeygi
Abstract
Extended Abstract
Introduction
Faults are fractures in the earth's crust that have the ability to move. They are among the most important geological structures and, since they provide paths for heat to rise from the lower parts of the earth's crust to the surface, can be considered one of the essential causes of geothermal energy potential. Geothermal energy is a major source of renewable energy that is compatible with the environment; if properly utilized and based on environmental parameters, it can play an important role in the country's energy balance and in the goals of sustainable development. Among the many methods that can be used to identify geothermal potential, remote sensing is a new and cost-effective technology, and among the various remote sensing methods for the exploration of geothermal resources, thermal remote sensing has unique advantages. Thermal infrared remote sensing is an effective method for identifying anomalies in the earth's surface temperature; combined with geological analysis and an understanding of the geothermal mechanism, it can be an appropriate approach for the exploration of geothermal areas.
Materials and Methods
The data used in this study included Landsat-8 images, the geological map of the region and a layer of active faults. The images were acquired in February 2015; this time of year was selected to reduce the impact of solar radiation on the earth's surface temperature and hence its masking of the heat caused by the faults. The study area of this research is Shahdad County in Kerman; the Shahdad and Nayband faults lie in this region. The Single Channel method was used to retrieve the surface temperature, and the software used included ENVI 5.3, ERDAS Imagine 2014 and ArcGIS 10.3.
After calculating the earth's surface temperature from Landsat 8, the thermal behavior of the faults was analyzed.
Results and Discussion
In this part of the study, two transversal profiles with an approximate length of 12 km were taken across each of the faults on the surface temperature map of the region. Examining the temperature profiles showed that the temperature along each profile increases on approaching the location of the fault's surface outcrop. The heat accumulation along the Nayband fault corresponds to proximity to the fault's central zone, but this correspondence is weaker for the Shahdad fault. Also, by creating a 6-kilometer buffer around the faults, it was observed that the average temperature of the pixels within this buffer is about two degrees higher than the average temperature of the pixels of the entire region.
Conclusion
Investigating the use of Landsat-8 thermal data to determine the position of faults from the thermal anomalies created around their central zones showed that LST calculation from these data is an appropriate method for extracting linear anomalies and tracking possible fault zones. The temperature processing over the areas surrounding the Shahdad fault and the southern part of the Nayband fault, and the presence of thermal aggregations associated with these faults, mark them as ground-index areas. These thermal aggregations in the transections created across the faults indicate that LST clearly increases on approaching the surface location of the central zones of the faults.
Linear thermal accumulations around faults have both superficial and deep causes. Sometimes the basement faults served as the exit openings of the lavas that formed the surface lithology of an area; toward the central zone of such faults, as the eruption openings of the volcanic rocks, these rocks are younger, less weathered, and have a higher capacity for absorbing sunlight. On the other hand, owing to the depth of the faults and their access at depth to the hot material of the asthenosphere beneath the earth's crust, the geothermal gradient in the central zones of these faults is higher than in the surrounding areas. Considering that no volcanic rocks are introduced in the geological map of the study area, it can be concluded that the linear thermal anomaly around the existing faults is mainly associated with deep heat sources and is less likely to be associated with surface heat absorption. Given the evident increase in temperature on the isothermal diagrams close to the central zones of the faults, two areas with the steepest temperature increase along the central zones were identified and introduced as possible geothermal potentials for more precise studies and future surveys. These two areas are located 45 kilometers southeast and about 15 kilometers northwest of the town of Shahdad.
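The buffer comparison in the results (mean LST near the fault versus the whole region) can be sketched on a synthetic grid; the temperature values and the straight fault trace at one column are hypothetical, not the study's data.

```python
# Sketch of the fault-buffer LST comparison: mean temperature of pixels
# within a given distance of a (straight, hypothetical) fault trace
# versus the mean of the whole grid. The grid is synthetic, warmer near
# the fault line at column index 2.

def mean_in_buffer(grid, fault_col, radius):
    vals = [grid[i][j] for i in range(len(grid))
            for j in range(len(grid[0])) if abs(j - fault_col) <= radius]
    return sum(vals) / len(vals)

def mean_all(grid):
    vals = [v for row in grid for v in row]
    return sum(vals) / len(vals)

# Synthetic 4x5 LST grid (deg C); column 2 holds the fault trace
lst = [[30, 31, 35, 31, 30],
       [30, 32, 36, 32, 30],
       [29, 31, 35, 31, 29],
       [30, 31, 34, 31, 30]]

buffer_mean = mean_in_buffer(lst, fault_col=2, radius=1)
region_mean = mean_all(lst)
# Pixels near the fault are warmer than the regional average
assert buffer_mean > region_mean
```

In the study the same comparison is done on real rasters with a 6 km GIS buffer; the roughly two-degree excess inside the buffer is what marks the linear thermal anomaly.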
Geodesy
Lida Koshki; Behzad Voosoghi; Seyyed Reza Ghaffari-Razin
Abstract
Extended Abstract

Introduction
Earthquakes cause huge human and financial losses every year around the world, especially in a seismically active country like Iran. Earthquake prediction has become one of the great challenges for scientists in recent decades. One of the new approaches is the evaluation of anomalies in ionospheric parameters before an earthquake. The parameter investigated in this method is the total electron content (TEC). The cases studied in this paper are the Ahar-Varzaghan earthquake doublet, with magnitudes of 6.5 and 6.3, and the Sarpol-e Zahab earthquake, with a magnitude of 6.3. For the Ahar-Varzaghan earthquake the observations of 6 GPS stations, and for the Sarpol-e Zahab earthquake the observations of 5 GPS stations of the IGS network, were used to calculate the ionospheric TEC. The short-time Fourier transform (STFT), together with the statistical parameters of mean and standard deviation, was used to detect anomalies in the ionospheric time series. In addition, the geomagnetic and solar indices Kp, Dst, F10.7, Vsw (solar wind plasma velocity), Ey (electric field) and IMF Bz (interplanetary magnetic field) were examined and analyzed to characterize the conditions of the days before the earthquakes.

Materials & Methods
In recent years, the spectral analysis of ionospheric anomalies using the STFT method and its application in earthquake forecasting has become popular. The research results show that spectral methods can be a useful and reliable tool for further analysis, and the STFT method can be regarded as a successful approach for detecting ionospheric anomalies that is also compatible with the classical methods. Moreover, STFT is a powerful tool for processing a time series without the need for mean or median values, so it can also be used in other fields such as navigation, geophysics, geology and climatology. STFT is a modified version of the classical Fourier transform used to obtain the frequency information of a signal in the time domain.
This method analyzes a small portion of the signal at a given time by windowing the signal. In STFT, the signal is divided into smaller parts with a constant time-frequency resolution, using the same window length at all frequencies; the Fourier transform is applied to each part, and the output is presented in the two dimensions of time and frequency. As a result, it is possible to determine when, and at what frequency, each component of the signal occurred.

Results & Discussion
For the Sarpol-e Zahab earthquake, both the classical and the STFT methods detected anomalies on DOYs 309, 314 and 323, before the earthquake. The magnitude of these anomalies in the ionospheric time series was in the range of 0.058 to 5.44 TECU. The parameters related to solar and geomagnetic activity were also investigated in the days before and after the earthquake. Since the solar and geomagnetic activity (an important factor in creating anomalies in the ionospheric time series) was calm in the days before the earthquake, the detected anomalies can be attributed to the earthquake. For the Ahar-Varzaghan earthquake, both methods detected anomalies of about 0.13 to 1.4 TECU in the 5 to 15 days before the earthquake. In the days before the Ahar-Varzaghan earthquake, conditions were almost undisturbed on most, but not all, days, and therefore it cannot be said with certainty that the observed anomalies are entirely related to the earthquake. The results of this paper showed that the STFT method is a powerful tool for spectral analysis without the need for values such as the mean or median. This feature of STFT is its strength compared to classical methods, because independence from these values minimizes the related sources of error (abnormalities and sudden variations in the ionosphere, such as annual, semi-annual and seasonal variations).
It is worth mentioning that the STFT method is more accurate under calm solar and geomagnetic conditions, where it provides highly accurate results.

Conclusion
The results show that for the Ahar-Varzaghan earthquake there are anomalies 5, 11, 12 and 13 days before the earthquake, while for the Sarpol-e Zahab earthquake anomalies can be seen 6, 7, 13 and 21 days before the earthquake. The analyses in this paper show that if all the geomagnetic and solar parameters before an earthquake are examined, the existing anomalies can be observed directly by analyzing the ionospheric time series with the STFT method. It is important to note that on days when calm geomagnetic and solar conditions do not prevail, an impending earthquake cannot be considered the cause of the anomalies detected in the ionospheric time series.
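The mean-plus-2σ bound on windowed spectra described above can be sketched as follows; the synthetic TEC series, the injected disturbance, and the window length are all illustrative assumptions, not the paper’s GPS-derived data:

```python
import numpy as np
from scipy.signal import stft

# Synthetic hourly TEC series (TECU): a diurnal cycle plus one injected
# disturbance standing in for a pre-seismic anomaly (values illustrative).
t = np.arange(24 * 30)                      # 30 days of hourly samples
tec = 10 + 5 * np.sin(2 * np.pi * t / 24)   # diurnal variation
tec[500:506] += 5.0                         # injected disturbance

# Short-time Fourier transform: spectra of 48-hour windows with a 24-hour hop.
f, seg_times, Z = stft(tec, fs=1.0, nperseg=48, noverlap=24)
power = np.abs(Z).sum(axis=0)               # total spectral magnitude per window

# Flag windows whose power exceeds the mean + 2*std bound.
bound = power.mean() + 2 * power.std()
flagged = seg_times[power > bound]
print(flagged)
```

The flagged window times play the role of the anomalous DOYs in the abstract; a real analysis would, as the paper stresses, first check that solar and geomagnetic indices were calm on those days.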
Mahdi Modiri; Zahra Alibakhshi; Faramarz Khoshakhlaq; Ali Hanafi
Volume 21, Issue 84 , February 2013, , Pages 7-20
Abstract
In order to identify the synoptic systems effective in the creation of moderate and severe frosts in Tehran, and the conditions under which severe and moderate frosts transform into one another, a synoptic analysis was carried out using mean sea level, 850 hPa and 700 hPa maps over a period of 45 years (1961-2005). The synoptic results indicated that the transformation of a moderate frost into a severe and destructive one is caused by cold air advection from higher latitudes, the establishment of a trough over the region, or terrestrial radiative cooling. Examination of the 850 hPa maps showed that the temperature decrease in the study region arises from the cold advection of various systems from the northwest to the northeast. On the mean sea level maps, the extension of the Siberian high pressure ridge, and at the 700 hPa level, the establishment of a trough over the eastern part of the district, have the greatest frequency and share in the formation of moderate and severe frosts.
Faeze Eslamizade; Heidar Rastiveis
Abstract
Extended abstract

Introduction
Given population growth and increasing urbanization, the occurrence of natural disasters such as earthquakes can cause heavy losses and damages and interrupt the development of cities and countries. Among these disasters, the earthquake is of particular importance for Iran because of its unpredictability and, given the country’s location on the seismic belt, its high frequency relative to other events. According to last year’s estimate, Iran has been one of the 6 countries with high mortality rates in earthquakes. Therefore, finding a way to minimize the losses is critical. After an earthquake, crisis managers need quick information from the affected area in order to minimize the fatalities and financial losses. The destruction map is one piece of information that helps crisis managers: such maps show the destroyed buildings or roads together with their degree of destruction, so that they can be located quickly.

Materials & Methods
Many data sources are used to prepare destruction maps, such as aerial/satellite images, LiDAR data, etc. This information can be used to determine the destroyed buildings automatically or by visual interpretation. Visual interpretation of the degree of destruction requires a human operator; although this approach has high accuracy, it is less favored because it is time-consuming and needs specialists to interpret the data. Therefore, researchers have focused on automated processing techniques for the production of destruction maps. Various automatic change detection techniques are used to evaluate earthquake-induced destruction by comparing satellite and aerial images of the pre- and post-earthquake periods. LiDAR data is one of the most important sources of information for determining destroyed buildings with high accuracy and speed, as it enables a 3-D representation of the destroyed region.
This information is a great help in preparing the destruction map automatically. The recent expansion of LiDAR technology is due to the high spatial resolution of these data; as a result, many researchers have focused on developing automatic destruction mapping using LiDAR data. In addition, textural information derived from the LiDAR data, such as homogeneity in the destroyed region, can be effective in distinguishing between destroyed and intact buildings. In this paper, a new algorithm is proposed to prepare the post-earthquake destruction map by integrating post-event high resolution satellite images and post-event LiDAR data. In the proposed method, after the necessary preprocessing of the post-event satellite image and LiDAR data, different textural descriptors are extracted from both. In the next step, using the building layer extracted from the map, the building areas and their descriptors are extracted from the satellite image and the LiDAR data. The textural descriptors extracted from the satellite image and the LiDAR data are then combined. After that, the points inside each building area are categorized into two classes, "debris" and "intact", by a support vector machine. Finally, based on the area of the debris class of each building and a chosen threshold, destroyed and intact buildings are identified. This algorithm is executed on each building to produce the final destruction map.

Results & Discussion
In order to evaluate the proposed method, the data set was selected from the city of Port-au-Prince, the capital of Haiti, after the 2010 earthquake. According to the USGS reports, 97,294 buildings were damaged and 188,383 were destroyed in Port-au-Prince and most of the southern parts of Haiti. Furthermore, reports show that 222,570 people were killed, 300,000 were injured, and 1.3 million people were displaced.
The sample data set includes post-event WorldView II satellite images as well as post-event LiDAR data. The WorldView II image was acquired on January 16, 2010, and the LiDAR data, obtained from a topographic data website, were acquired between January 21 and January 27, 2010. The vector map of the selected test area was generated in the ArcGIS environment. Evaluating the proposed method on these data yielded an overall accuracy of 97% and a Kappa coefficient of 92%, which proves the reliability of this technique.

Conclusion
In this paper, a new method for the generation of damage maps based on the integration of high resolution satellite images and LiDAR data was proposed. The results show the ability of this method to generate destruction maps from high resolution satellite images and LiDAR data, and they are satisfactory in comparison with similar studies. The selection of appropriate descriptors, correct training data, the elimination of non-building areas from the sample data, and the integration of satellite images and LiDAR data can be considered the reasons behind these results.
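The per-building decision rule described above can be sketched with a two-class SVM; the two descriptor clusters, the 0.5 debris-fraction threshold, and all numbers are illustrative assumptions, not the paper’s trained model:

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)

# Synthetic "textural descriptors" (think homogeneity and entropy) for pixels
# of intact roofs vs debris; the two clusters and all values are illustrative.
intact = rng.normal([0.8, 0.2], 0.05, size=(100, 2))   # homogeneous, low entropy
debris = rng.normal([0.3, 0.7], 0.05, size=(100, 2))   # heterogeneous, high entropy
X = np.vstack([intact, debris])
y = np.array([0] * 100 + [1] * 100)                    # 0 = intact, 1 = debris

clf = SVC(kernel="rbf").fit(X, y)                      # two-class SVM

# Per-building decision: call a building "destroyed" when the fraction of its
# pixels classified as debris exceeds a threshold (0.5 here, an assumption).
building_pixels = rng.normal([0.32, 0.68], 0.05, size=(50, 2))
debris_fraction = clf.predict(building_pixels).mean()
destroyed = debris_fraction > 0.5
print(destroyed)
```

In the real pipeline the descriptors come from the fused image/LiDAR texture features, and the threshold would be tuned against reference damage data.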
Hamid Akbari; Vahab Nafisi; Jamal Asgari
Abstract
Extended Abstract
The troposphere is the layer of the atmosphere that consists of dry gases and water vapor; it causes a time delay in the propagation of electromagnetic waves, resulting in an error in precise positioning. In space geodesy, normally, the travel time of the wave between an emission source in space (a satellite for GNSS or a quasar for VLBI) and a receiver located at a geodetic station on the surface of the Earth is measured. This time is then converted to distance using the speed of light in vacuum. There are two ways to manage atmospheric delays in space geodetic data analysis: either external measurements of the atmospheric delays are used to correct the observations, or the atmospheric delays are estimated as unknowns in a least squares adjustment. To model this error using the first strategy, several methods have been proposed, the most prominent of which are three-dimensional (3-D) ray tracing and the use of mapping functions. 3-D ray tracing is a direct method: the actual, curved path of the wave is estimated using the Eikonal equation and compared with the theoretical straight path, which yields the amount of correction that should be applied. In contrast, the use of mapping functions is an indirect method: using suitable functions, the delay (hydrostatic and non-hydrostatic) in the zenith direction is mapped to the desired direction. The basic assumption in this method is azimuthal symmetry of the troposphere, so that the delay depends only on the elevation angle. In this paper, comparisons have been made between ray tracing and the commonly used mapping functions.
Based on this research, it is determined which method should be used under different conditions to achieve the desired accuracy for space geodetic techniques such as GNSS and VLBI, and which method has priority. In this comparison, the VMF (Vienna Mapping Function) and GMF (Global Mapping Function), which are widely used in space geodetic techniques, were considered. The coefficients of these mapping functions are obtained from Numerical Weather Models (NWMs), especially ECMWF. VMF is based on the older Isobaric Mapping Function (IMF), and GMF was derived from VMF with changes that allow it to be used offline in some software packages, including the VieVS software. To this end, data from two observation campaigns managed by the IVS (International VLBI Service), named CONT08 and CONT11, were used for VLBI stations in the years 2008 and 2011. These two campaigns each involve 15 days of continuous observations. The results of this study show that the station with the highest humidity (KOKEE) requires ray tracing at all accuracy levels, and mapping functions cannot produce reasonable accuracy there. Also, in order to achieve an accuracy better than 10 mm in the station height, it is necessary to use ray tracing at almost all elevation angles; to achieve an accuracy better than 20 mm, ray tracing should be used at elevation angles of approximately 0 to 35 degrees, and mapping functions can be used for the remaining elevation angles.
Based on the diagrams and tables presented in this paper, the following results can also be extracted:
-Use ray tracing method at the station with the highest humidity (KOKEE)
-Use ray tracing to achieve an accuracy of better than 10 mm for the stations’ elevation at all angles of elevation
- Use ray tracing to achieve an accuracy of better than 20 mm for the station’s elevation at angles of elevation of approximately 0° to 35°; mapping functions can be used for the rest of the angles of elevation.
- Use ray tracing to achieve an accuracy of better than 30 mm for the station’s elevation at angles of elevation of 0° to 26°; mapping functions can be used for the rest of the elevation angles.
-At angles of elevation which do not require ray tracing, mapping functions are used. It can be said that in 2008, finally, GMF mapping function achieved a better accuracy than the VMF, while in 2011, the performance of VMF mapping function was better than that of GMF.
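Both VMF and GMF rest on the same three-coefficient continued-fraction form, which can be sketched as follows; the coefficients below are rough orders of magnitude only (the real a, b, c come from numerical weather models or spherical-harmonic expansions), so the numbers are illustrative:

```python
import math

def mf_continued_fraction(elev_deg, a, b, c):
    """Three-term continued-fraction mapping function (Marini/Herring form),
    normalized to 1 at the zenith; a, b, c are site- and epoch-dependent."""
    s = math.sin(math.radians(elev_deg))
    num = 1 + a / (1 + b / (1 + c))
    den = s + a / (s + b / (s + c))
    return num / den

# Illustrative hydrostatic coefficients (order of magnitude only; real VMF/GMF
# coefficients are derived from ECMWF numerical weather model data).
a, b, c = 1.2e-3, 2.9e-3, 62.6e-3

zhd = 2.3   # zenith hydrostatic delay in metres (typical sea-level value)
for elev in (90, 30, 10, 5):
    slant = zhd * mf_continued_fraction(elev, a, b, c)
    print(elev, round(slant, 3))
```

The rapid growth of the slant delay at low elevation angles is exactly why the study finds ray tracing necessary below roughly 35° when millimetre-level station heights are required.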
Mehrdad AhangarCani; Mahdi Farnaghi
Abstract
Introduction
Introduction and Objectives: Cutaneous Leishmaniasis (CL) is a vector-borne disease endemic to the Middle East. The spread of CL is strongly associated with the socio-ecological interactions of vectors, hosts and environmental conditions. CL is the most frequent vector-borne disease in Iran, especially in the north-eastern province of Golestan, which has long been known as one of the most important endemic areas of CL. Therefore, Golestan province was selected as the study area of this research. The main objectives of the study are to analyze the annual spatial distribution of CL, to investigate the relations between environmental/climate factors and the incidence rate of CL, and to provide a model to predict the CL distribution at the rural district level in Golestan province.
Materials and methods
Data: CL incidences, census data, environmental and climate factors have been used in this study to provide a model and produce a map to predict the CL distribution. The CL incidences are continuously recorded by the Center for Disease Control and Prevention (CDC) of Golestan province. The population and census data for 2013-2015 period were obtained from Iranian Statistical Center. Environmental and climate data such as vegetation, average humidity, average temperature, precipitation, number of rainy days, number of freezing days, maximum wind speed and evaporation rate were used as parameters affecting the model.
Methodology
The statistical and geo-statistical analyses were used, respectively, to investigate the relation between environmental/climate factors and the CL incidence rate, and to investigate the existence of spatial autocorrelation between CL cases. Additionally, a multilayer perceptron (MLP) neural network was used to model the relation between the distribution of CL incidences and the environmental/climate factors, and to generate the CL risk maps. The MLP is a type of neural network consisting of multiple layers of neurons, or processing elements, connected in a feed-forward fashion, with a unidirectional flow of information through three types of layers: input, hidden, and output. Information flow starts at the input layer, which contains one neuron per input variable. The hidden neurons, arranged in one or more hidden layers, receive, process and encode the information within the network and pass it on. The output layer, which contains the target output vector, provides the response of the network to the input stimuli. The number of hidden layers and the number of neurons within each layer affect the accuracy and performance of the network. In this study, the effective parameters, along with the CL incidence rates of 2013-2014, were fed to the MLP as training data. The trained MLP was then used to generate the risk map of 2015 and to test the accuracy of the model. In order to determine the optimal parameters of the MLP, the grid-search and cross-validation techniques were applied to 25% of the training dataset in the training phase. The performance of the MLP was assessed using the root mean square error (RMSE), the mean absolute percentage error (MAPE) and the area under the curve (AUC) of the receiver operating characteristic (ROC). Sensitivity analysis was also used to determine the most effective variables for the predictive mapping of the CL distribution.
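The tuning-and-evaluation step described above can be sketched with scikit-learn; the synthetic predictors, the hypothetical target relation, and the grid of hidden-layer sizes are all assumptions standing in for the paper’s real data and search space:

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import GridSearchCV
from sklearn.metrics import mean_squared_error

rng = np.random.default_rng(1)

# Synthetic stand-ins for the eight environmental/climate predictors and the
# CL incidence rate; the target relation below is a hypothetical example.
X = rng.uniform(0, 1, size=(300, 8))
y = 2 * X[:, 0] - 1.5 * X[:, 1] + rng.normal(0, 0.05, 300)

X_train, X_test = X[:225], X[225:]
y_train, y_test = y[:225], y[225:]

# Grid search with cross-validation over the hidden-layer size, mirroring the
# tuning step described above (the grid values themselves are assumptions).
grid = GridSearchCV(
    MLPRegressor(max_iter=3000, random_state=0),
    {"hidden_layer_sizes": [(8,), (16,), (8, 8)]},
    cv=3,
)
grid.fit(X_train, y_train)

rmse = mean_squared_error(y_test, grid.predict(X_test)) ** 0.5
print(grid.best_params_, round(rmse, 3))
```

In the study, MAPE and ROC/AUC were computed alongside RMSE; the held-out year (2015) plays the role of `X_test` here.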
Results and Discussion
The results of the global Moran’s I index indicated that there is spatial autocorrelation among the CL cases, and that the distribution of CL cases in Golestan province is clustered in each of the three years. Moreover, the statistical analyses showed that the majority of the incidences belonged to the rural districts of Gonbad-Kavos and Maraveh-Tappeh. Based on the results of the statistical analyses (including Pearson correlation and Spearman rank correlation), positive correlations were observed between the CL incidence rate and average temperature, maximum wind speed and evaporation, while negative correlations were found between the CL incidence rate and average humidity, precipitation, the number of rainy days, the number of freezing days and vegetation. According to the evaluation criteria, including RMSE, MAPE and AUC, the trained MLP model was able to generate the CL risk maps of 2013-2015 for each rural district with acceptable accuracy. Additionally, the results of the sensitivity analysis indicate that vegetation and average humidity are the most influential variables in the incidence of CL and in the predictive mapping of the CL distribution in Golestan province.
Conclusion and Future works
In this study, the global Moran’s I index indicated the presence of spatial autocorrelation among CL cases, and clustered distribution of disease in the study area. The statistical analyses showed that environmental and climate factors greatly affect the spatial distribution of CL. The MLP method, used to generate CL distribution risk maps, was able to generate the study area risk maps with acceptable accuracy. Results highlight the potential high risk areas requiring special plans and resources for monitoring and control of the disease. As a future work, we suggest that the effects of other environmental and socio-economic parameters should be evaluated to improve the accuracy of the model. It is also recommended that other methods such as regression and other neural network techniques be used to generate CL risk maps.
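Global Moran’s I, used above to test for spatial autocorrelation, can be computed directly from its definition; the toy chain of districts, the rook-style weights, and the value patterns below are illustrative:

```python
import numpy as np

def morans_i(x, w):
    """Global Moran's I for values x and a spatial weights matrix w (n x n)."""
    z = np.asarray(x, float) - np.mean(x)
    return (z.size / w.sum()) * (z @ w @ z) / (z @ z)

# Toy chain of 6 districts with rook-style neighbour weights; a clustered
# pattern (highs next to highs) gives a positive index, while an
# alternating pattern gives a negative one.
w = np.zeros((6, 6))
for i in range(5):
    w[i, i + 1] = w[i + 1, i] = 1.0

clustered = [9, 8, 9, 1, 2, 1]
alternating = [9, 1, 9, 1, 9, 1]
print(round(morans_i(clustered, w), 3), round(morans_i(alternating, w), 3))
```

A significantly positive index, as found for the CL cases, is what justifies calling the province-level distribution "clustered".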
Mohammad Ali Sharifi; Abbas Bahroudi; Saleh Mafi
Abstract
Extended Abstract

Introduction
Determining the attitude of fault planes and the slip movements occurring on these planes are among the topics of interest to geoscientists. Among the methods introduced so far to determine the attitude of fault planes are the use of geological tools to interpret the geometry of faults with surface outcrops, and the examination of changes in the stress field and of the displacements appearing on the Earth’s surface. The slip rate is calculated using the displacement of the sedimentary rock layers relative to the displacement time, together with simulation models.

Materials & Methods
In this research, a geometric method is presented to calculate the slip rate of the Zagros faults. We consider each fault as a continuous set of fault fragments whose surface positions are known. Given that most of the Zagros faults are hidden, the faults are located using the geological map of Iran’s faults. The first step in these calculations is to determine the attitude of the fault planes in the Zagros seismogenic layer. The seismogenic layer is the part of the Earth’s crust whose deformation is elastic and in which the major earthquake-generating fractures occur. In order to determine the attitude of each fault’s focal plane, we use the focal coordinates of the earthquakes occurring around each fault segment. In these calculations, the focal locations of the earthquakes are transferred to the geodetic coordinate system, and the equation of the fault plane is calculated using the least squares method in the Cartesian coordinate system. The azimuth of the strike of each plane relative to the astronomical north can be obtained from the coefficients of the fault plane. To determine the azimuth, we first obtain the unit vector of the strike line as the cross product of the geodetic z-axis (the normal vector of the horizontal plane) and the normal vector of the fault plane.
The fault plane azimuth is then the angle between the strike line vector and the north vector, where the north vector is determined by connecting the point at the center of each fault fragment to the intersection point of the horizontal plane and the z-axis. Variation in the dynamic mechanisms of the faults in the region creates fractures with different directions on the ground. We obtain the slip angle (rake) of each fault from the difference between the fault direction and the direction of the surface fracture, together with the type of the fault (strike-slip, dip-slip or oblique). Having calculated the slip angle, we compute the unit vector of the slip direction by rotating the strike line vector by the rake angle.

Results & Discussion
In order to calculate the slip rate of each fault, we consider the Zagros crust as an integrated body that deforms uniformly under the imposed stress. Based on this assumption, we project the velocity vectors of the Zagros geodynamic network onto the fault planes and calculate the slip rates using the slip direction vectors. It should be noted that the velocity vectors of the geodynamic network are defined in the navigation coordinate system; given the definition of the fault plane equations, it is necessary to transfer them to the geodetic coordinate system. The resulting slip rate is a parameter calculated for each fault fragment individually. Because of the systematic errors in the focal positions of the earthquakes (including errors in the focal depth and the epicenter location), the slip rates obtained for the fault fragments always contain systematic errors. Therefore, we define an average slip rate for each fault in order to reduce the error effect. In this study, the velocity vectors of seventeen permanent stations of the Zagros geodynamic network, provided by the National Cartographic Center (NCC), are used.
The focal positions of the earthquakes are those published by the International Institute of Earthquake Engineering and Seismology (IIEES).

Conclusion
The results showed that regions with high fault slip rates usually have dense earthquake activity. In addition, the seismicity potential of any region can be assessed by comparing the slip rate of each fault with the density of its earthquakes in the region. According to the slip rate changes obtained for Zagros, the faults in the western part of Zagros, especially in Ilam province, have low slip rates; however, in terms of earthquake density, this province is considered one of the seismic areas of the country. This means that most of the slip movements occurring on the faults of the western region have been accompanied by vibration.
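The least-squares plane fit and the cross-product construction of the strike line described above can be sketched as follows; the synthetic hypocentres, the plane coefficients, and the local coordinate convention (y pointing north) are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic hypocentres (local Cartesian, km) scattered about a dipping plane
# z = a*x + b*y + c; the plane coefficients and noise level are illustrative.
a_true, b_true, c_true = 0.4, -0.2, -8.0
x = rng.uniform(-10, 10, 50)
y = rng.uniform(-10, 10, 50)
z = a_true * x + b_true * y + c_true + rng.normal(0, 0.1, 50)

# Least-squares fit of the plane coefficients, one fault segment at a time.
A = np.column_stack([x, y, np.ones_like(x)])
(a, b, c), *_ = np.linalg.lstsq(A, z, rcond=None)

# Strike line = cross product of the vertical unit vector and the plane
# normal; its azimuth (clockwise from north, here the +y axis) is the strike.
normal = np.array([-a, -b, 1.0])
strike = np.cross([0.0, 0.0, 1.0], normal)
azimuth = np.degrees(np.arctan2(strike[0], strike[1])) % 360
print(round(azimuth, 1))
```

In the actual method the points are earthquake foci transferred to a geodetic frame, and the fitted normal would also feed the rake rotation and the projection of the GPS velocity vectors.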
Mehrdad Kaveh; Mohammad Saadi Mesgari
Abstract
Extended Abstract Introduction Site selection for health centers and hospitals in proper locations and the allocation of population to them is an important issue in urban planning. The location and allocation of health and medical facilities including hospitals, have long been an important issue for urban planners that has become more complicated with the growth of population. Location and allocation of hospitals is basically planned to ensure the availability of proper and comprehensive health services as well as the reduction of the establishment costs. Improper planning of the health centers has created multiple problems for big cities in developing countries in recent years. In the present study, the Genetic Algorithm (GA), Hybrid Particle Swarm Optimization algorithm (HPSO), Geospatial Information System (GIS) and Analytic Hierarchy Process (AHP) have been used for selecting proper sites of hospital and allocating the demanded locations to these centers in District 2 of Tehran. Materials & Methods The main goal of this research is to compare and evaluate the performance of the Genetic Algorithm (GA) and Hybrid Particle Swarm Optimization algorithm (HPSO) for determining the optimal locations of hospital centers and allocating the population blocks to them. In order to limit the search space, the analyzing capabilities of the Geospatial Information System (GIS) and Analytic Hierarchy Process (AHP) have been used to select the candidate sites satisfying the initial conditions and criteria. The locations of such candidate centers are the input of the optimization section. The accuracy of the entire process strongly depends on the selection of these candidate sites. Hence, in this paper, the Analytic Hierarchy Process (AHP) method has been used to select the candidate centers. 
Then, the two optimization algorithms were applied to choose six optimal sites from the candidate locations and to allocate the population to them by minimizing the overall distance between the centers and their allocated blocks. In this study, to improve the Particle Swarm Optimization, a simple neighborhood search is proposed for better exploitation of the elite particles. The main purpose of this neighborhood search is to increase the convergence rate of the algorithm without reducing the random search. Since a neighborhood search must be defined specifically for each problem, and location-allocation is a spatial problem, the geographic principle of appropriate distribution of the centers in space has been used to define it (the distance between centers should not be less than a certain amount). In an elite particle, the two centers with the smallest distance between them are selected and one of them is replaced by a new, randomly selected center; if this change yields a better objective function value, the newly created solution replaces the elite particle. To calibrate the parameters of the algorithms, a simulated data set was used; with proper values for those parameters, the algorithms were then tested on the real data of the study area.

Results & Discussion
Given the results of the algorithms on real data, the performance of both algorithms is highly dependent on the initial population and the allowed number of iterations. In general, fewer iterations with larger populations bring better results than more iterations with smaller populations. The results show that the Hybrid Particle Swarm Optimization (HPSO) performs better than the Genetic Algorithm (GA). The convergence rate of the HPSO algorithm is faster than that of the GA, which can be attributed to the particles’ motion toward the best personal and global experiences.
Furthermore, the proposed neighborhood search has caused the HPSO algorithm to converge earlier. To evaluate the repeatability of the algorithms, each was run 40 times on both the simulated and the real data. Both algorithms displayed high levels of repeatability, but the HPSO algorithm is more stable; the GA, however, showed more stability on the simulated data than on the real data. For both the simulated and the real data, the HPSO algorithm runs faster than the GA.

Conclusion
Simplicity and repeatability are among the factors that matter most from the user’s point of view. In this research, the HPSO algorithm has not only been repeatable and simple, but has also performed faster than the GA. Therefore, considering these criteria and the specific case of this research, the HPSO seems more promising than the GA.
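The elite-particle neighborhood move described above can be sketched as follows; the objective function, the candidate set, and all coordinates are illustrative stand-ins, and the function names are ours:

```python
import math
import random

def total_assignment_distance(centres, blocks):
    """Objective: sum over population blocks of the distance to the nearest centre."""
    return sum(min(math.dist(b, c) for c in centres) for b in blocks)

def neighbourhood_search(solution, candidates, blocks, rng):
    """Replace one of the two closest centres in an elite solution with a random
    candidate, keeping the change only if the objective improves."""
    i, j = min(
        ((i, j) for i in range(len(solution)) for j in range(i + 1, len(solution))),
        key=lambda p: math.dist(solution[p[0]], solution[p[1]]),
    )
    trial = list(solution)
    trial[rng.choice([i, j])] = rng.choice(candidates)
    if total_assignment_distance(trial, blocks) < total_assignment_distance(solution, blocks):
        return trial
    return list(solution)

rng = random.Random(3)
blocks = [(rng.random() * 10, rng.random() * 10) for _ in range(60)]
candidates = [(rng.random() * 10, rng.random() * 10) for _ in range(20)]
solution = candidates[:6]
improved = neighbourhood_search(solution, candidates, blocks, rng)
print(total_assignment_distance(improved, blocks) <= total_assignment_distance(solution, blocks))  # → True
```

Because the move only ever accepts improvements, it speeds up convergence while the underlying PSO retains its random exploration, which is the design intent stated above.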
Mir Reza Ghaffari Razin; Behzad Vosooghi
Abstract
Extended Abstract
Introduction
Development of reliable models for estimation and prediction of changes in the Total Electron Content (TEC) of the ionosphere is still considered a real challenge for geodesists and geophysicists. This is partly due to the nonlinear behavior of the physical and geophysical parameters affecting TEC variations, as well as the difficulty of accurately measuring some of these parameters. Owing to its specific nature and its physical and geophysical properties, TEC exhibits spatio-temporal variations attributable to daily and seasonal variations, various anomalies, and periods of solar activity. TEC is the quantity used to study ionospheric activity as well as the spatio-temporal variations in the electron density of this layer. In fact, TEC is the total number of free electrons in a column of one square meter cross-section along the path between the satellite and the receiver. The measurement unit of TEC is the TECU, which is equivalent to 10^16 electrons/m^2. Due to the inappropriate spatial distribution of GPS receivers, their limited number, and observational discontinuity in the time domain, TEC values and electron densities obtained from GPS measurements are spatially and temporally constrained. To calculate TEC in areas lacking observations or an appropriate station distribution, the TEC values obtained from GPS measurements must be interpolated or extrapolated in a suitable manner.
Materials and Methods
By combining wavelet localization features with standard neural networks, Wavelet Neural Networks (WNN) have emerged as a new mathematical method for modeling and predicting the behavior of different phenomena. In a WNN, the output parameter is usually calculated by:

y(x) = Σ_{i=1}^{λ} ω_i Ψ_i(x)     (1)

where x is the input observation vector, Ψ_i is the multi-variable wavelet calculated as the tensor product of m single-variable basic wavelet functions, λ is the number of neurons in the hidden layer, and ω_i denotes the network weights. Unlike the Backpropagation (BP) algorithm, PSO is a global search algorithm that can optimize the initial weights and introduce an appropriate structure for the network. The equations used in this algorithm are:

v_i(t+1) = w·v_i(t) + c1·r1·(pbest_i − x_i(t)) + c2·r2·(gbest − x_i(t))     (2)
x_i(t+1) = x_i(t) + v_i(t+1)     (3)

in which w is the inertia weight, v_i(t) is the velocity of particle i in iteration t, c1 and c2 are the particle acceleration coefficients, r1 and r2 are random numbers in [0, 1], x_i(t) is the current position of particle i in iteration t, pbest_i is the best position found by particle i, and gbest is the best position found by the swarm. The present study used a smoothing algorithm to determine the STEC observations:

STEC = (f1²·f2²)/(40.3·(f1² − f2²))·(P2 − P1)     (4)

To obtain the TEC value along the zenith direction (VTEC), the following mapping function can be used:

MF = STEC/VTEC = 1/cos(z')     (5)

from which we will have:

VTEC = STEC·cos(z'),  z' = arcsin(R_E·cos(Elev)/(R_E + h))     (6)

Elev. in relation (6) is the satellite's elevation angle, R_E is the Earth's radius, and h is the height of the single-layer ionosphere.
Results and Discussion
Observations of 37 stations of the Iranian Permanent Geodynamic Network on 2012.08.11 (day of year 224) were used to evaluate the efficiency of the WNN and the PSO training algorithm in modeling and predicting the spatio-temporal variations of TEC over Iran. Of the 37 stations, 5 were used as test stations, 2 were used to evaluate the wavelet neural network, and the rest were used to train the network. Four different combinations of input observations are examined in this paper.
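The PSO velocity and position updates of equations (2) and (3) can be sketched for scalar network weights as follows; the parameter values (w, c1, c2) are common illustrative defaults, not those of the study:

```python
import random

def pso_step(positions, velocities, pbest, gbest, w=0.7, c1=1.5, c2=1.5, rng=random):
    # Equation (2): velocity drawn toward each particle's best personal
    # position (pbest) and the swarm's best position (gbest).
    # Equation (3): position advanced by the new velocity.
    new_pos, new_vel = [], []
    for x, v, p in zip(positions, velocities, pbest):
        r1, r2 = rng.random(), rng.random()
        v_next = w * v + c1 * r1 * (p - x) + c2 * r2 * (gbest - x)
        new_vel.append(v_next)
        new_pos.append(x + v_next)
    return new_pos, new_vel
```

In the study's setting each "position" would be a full vector of WNN weights; the scalar form above keeps the two update rules visible.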
The number of input observations selected from the Iranian Permanent Geodynamic Network (IPGN) to train the WNN using the PSO algorithm was 25, 20, 15 and 10, respectively. Table 1 shows the characteristics of the different combinations evaluated in this paper.
Table 1. Characteristics of the observations used in the different combinations evaluated
To evaluate the accuracy of the results obtained from the IRI and WNN models, all results were compared with the TEC observations obtained from GPS. Table 2 shows the correlation coefficient for the different scenarios.
Table 2. Correlation coefficient for the different scenarios
According to Table 2, the first scenario of the WNN method has the highest correlation coefficient with GPS. Even when the number of observations in the database decreases in the third scenario, the WNN method still has a higher correlation coefficient than the IRI2012 model. In the fourth scenario, the correlation coefficient of the WNN method is somewhat reduced. The average relative and absolute error values at the 5 test stations were calculated for the four scenarios and are presented in Table 3.
Table 3. Comparison of mean relative and absolute error values at the 5 test stations for the four scenarios
Statistical analysis of the relative and absolute errors shows the superiority of the WNN method over IRI2012 in TEC modeling.
Conclusion
To model the total electron content of the ionosphere, 4 combinations of observations were evaluated: 25, 20, 15 and 10 stations were used to train the wavelet neural network, corresponding to 300, 240, 180, and 120 observations (latitude, longitude, and observation time) in the database, respectively. The analysis indicated that as the number of observations in the database decreases, the absolute and relative errors increase while the correlation coefficient decreases. This degradation was not evident down to 180 observations, but the relative and absolute errors reached up to twice their values with 120 observations.
It should be noted that even with 120 observations (10 stations for training), the results of the wavelet neural network model are more accurate than those of the IRI2012 model.
Geographic Information System (GIS)
Mina Karimi; Mohammad Saedi Mesgari
Abstract
Extended Abstract
1. Introduction
In GIScience, spatial information has usually been presented in the form of space. However, human reasoning, behavior, and perception are mainly based on place, not space. Places are usually ambiguous and context-dependent and are related to the human experience of the world. Place functionality, as a context in place descriptions, is one of the main and distinguishing features of place. Today, with the increasing use of social networks, volunteered geographic information (VGI) and crowdsourced information have grown significantly. However, information obtained from social networks, e.g. check-ins, often does not convey a complete and clear view of the concept of place and does not include the spatial relations between phenomena, land uses, and points of interest (POI), which ultimately limits its usefulness for working with the concept of place. In this case, GIS should detect the place functionality that does not necessarily exist explicitly in the stored data.
2. Materials and Methods
To address these issues, this paper aims to extract place functionality based on the analysis of user-generated textual content. To achieve this goal, places and users' reviews about places on the TripAdvisor website were first collected through web crawling. The advantage of these data over other place-based data is their independence from formal descriptions of place. The data were collected in October 2020, and only English reviews were considered. New York City (NYC) was selected as the case study area. First, for each place type, all corresponding places were extracted. Then, for each place, a maximum of 1000 top reviews were extracted. To prepare the data, places without geographic coordinates, places outside the study area, duplicates, and places of unknown type were removed. There are five place categories on TripAdvisor: Attraction, Food Serving Place, Hotel, Shop, and Vacation Rental.
Then, different natural language processing (NLP) methods were used to preprocess the reviews. First, each review was converted to lower case and tokenized; then punctuation and stop words were removed. Afterward, all tokens were stemmed and lemmatized. In the next step, proper features had to be selected for knowledge discovery. We used a bag-of-words (BoW) representation whose feature values are weighted with TF-IDF scores for each user's review. Finally, in a supervised setting, these values and the place functionalities were used to train a logistic regression classifier to predict place functionality on the test dataset.
3. Results and Discussion
We randomly assigned 75% of the data set to train the model and 25% to test the results. The results were evaluated with common machine learning measures computed from the confusion matrix. The evaluation demonstrates that the overall accuracy of the proposed method is about 96%, which is remarkable. For Food Serving Places, the predictions are so close to reality that in 98% of cases the algorithm correctly predicted them, while about 0.8% were classified as Attractions. For Hotels, the accuracy is 97%; however, about 1.8% of Hotels were incorrectly categorized as Food Serving Places. Attractions were 93% correctly predicted, with about 3.8% mistaken for Food Serving Places. For Shops, the accuracy is about 74%, because the number of reviews for this functionality is lower, although this issue was partially addressed by weighting the samples. Moreover, in many cases people visit shopping malls for entertainment and not just shopping, which led to about 15% of Shops being classified as Attractions. About 11% of Shops were considered Food Serving Places; one of the most important reasons is that buying food in these places is itself a kind of purchase.
In addition, some shopping malls contain places that serve food and drink. Since Vacation Rentals had fewer reviews than the other functionalities, the lowest accuracy (about 65%) is related to them. In 25% of cases, Vacation Rentals were classified as Hotels. This result is not far-fetched, as Vacation Rentals and Hotels are very similar in function and both typically accommodate travelers and tourists. Also, 4.8% and 4.6% of them were classified as Attractions and Food Serving Places, respectively. The maximum precision and F1-score were achieved for Food Serving Places, while Vacation Rentals showed the lowest precision and F1-score since their functionality is similar to Hotels; nevertheless, their results are still reliable and satisfactory.
4. Conclusion
In this study, we tried to extract place functionality by analyzing user-generated textual content shared on the TripAdvisor website. To this end, different NLP methods were used to prepare and preprocess the data. The bag-of-words constructed for each user's review was then fed to a logistic regression classifier, and place functionality was predicted on the test data. In future work, the efficiency of other feature selection methods as well as other classifiers in extracting place functionality can be evaluated and compared. In addition, place functionality should be extracted in more detail, so that different types of attractions can be distinguished.
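The pipeline described above (TF-IDF-weighted bag-of-words fed to a logistic regression classifier) can be sketched with scikit-learn. The toy reviews and labels below are invented stand-ins for the crawled TripAdvisor data, used only to show the shape of the pipeline:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Toy reviews standing in for the crawled TripAdvisor corpus.
reviews = [
    "great pasta and friendly waiters",
    "delicious burger tasty fries",
    "clean room comfortable bed quiet",
    "lovely suite helpful front desk",
]
labels = ["Food Serving Place", "Food Serving Place", "Hotel", "Hotel"]

# Bag-of-words features weighted by TF-IDF, then a logistic regression
# classifier, mirroring the supervised method described above.
model = make_pipeline(
    TfidfVectorizer(lowercase=True, stop_words="english"),
    LogisticRegression(max_iter=1000),
)
model.fit(reviews, labels)
pred = model.predict(["tasty pizza and great service"])[0]
```

In the actual study the corpus is far larger and the preprocessing also includes stemming and lemmatization before vectorization.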
Geographic Data
Zahra Moradi; Mohammad Sadi Mesgari
Abstract
Extended Abstract
Introduction
The growing importance of housing, in terms of its profound effects on the various social, political, and economic dimensions of countries, is hidden from no one; therefore, accurate and reliable price estimation definitely facilitates policy-making in this field. Hundreds of structural, spatial, and socio-economic factors may affect property prices in different situations, so property pricing should take these factors into account efficiently. Due to the complex nature of the real estate market, research has used common deep learning algorithms such as DNN, RNN, and CNN, but these algorithms are not well suited to tabular data. Moreover, existing deep learning models for property pricing are entirely deterministic and do not account for data uncertainty.
Materials & Method
In this article, we attend to the tabular structure of real estate data when applying deep learning methods, using the new TabNet deep architecture for this purpose. TabNet additionally performs feature selection during the learning process in a fully interpretable way. In this study, fuzzy logic is also combined with the deep learning algorithms, using existing hybridization techniques, so that complex problems are learned faster and more accurately and the deterministic character of deep learning models, which ignores the inherent uncertainty of the data, is overcome. A geographic information system (GIS) is further used to provide a clearer evaluation, to fully visualize the spatial pattern of property attributes, and to include the relationships between these attributes, pricing, and spatial variables in the valuation model.
To evaluate the proposed methods, real estate data of District 5 of Tehran were used.
Results & Discussion
The ranking of feature impacts on the pricing of Tehran residential properties produced by the TabNet algorithm indicates the significant impact of spatial factors: after the floor area, the two spatial characteristics of latitude and longitude hold the second and third ranks, respectively. Essentially, latitude and longitude capture the character of neighborhoods, the type and prestige of different places in the city, and the social class of different streets and neighborhoods, which clearly influence the price. Finally, the TabNet, DNN, CNN, RNN, LSTM, and Autoencoder deep learning algorithms, as well as the XGBoost machine learning algorithm, were applied to the Tehran data set and compared using the RMSE and MAE evaluation criteria; a 5% improvement in accuracy was achieved by using TabNet. The RMSE of the hybrid FuzzyTabNet algorithm on the Tehran data decreased by 4.65% compared with the basic TabNet algorithm, and the fuzzy Autoencoder network improved by 5.52% compared with the common Autoencoder network.
Remote Sensing (RS)
Nazanin Hassanzadeh; Reza Hassanzadeh; Mahdieh Hosseinjanizadeh; Mehdi Honarmand
Abstract
Extended Abstract
Introduction
Air pollution is one of the most crucial environmental problems on the globe, and its impact on human life and ecosystems is undeniable. The International Agency for Research on Cancer has introduced air pollution as one of the main causes of cancer. Monitoring air pollution is therefore a necessity in industrialized cities. Air quality indices evaluate the amounts of NO2, SO2, O3, CO, and aerosols in the air. As ground stations have a limited ability to assess the amount and distribution of these harmful gases in urban and rural areas, remote sensing technology has become a popular tool for shedding light on this subject. The current study evaluates the air pollution caused by the Khatoonabad Copper Smelting Factory using Sentinel-5P satellite images.
Materials & Methods
This research investigates the air pollution created by the Khatoonabad Copper Smelting Factory and determines its impact radius, using the Google Earth Engine platform and Sentinel-5P satellite images. The Khatoonabad Copper Smelting Factory is located in the northwest of Kerman province, between latitudes 29°59' and 30°32' and longitudes 54°52' and 55°55'. By coding in Google Earth Engine, images of the average SO2 and NO2 air pollution within 50 km of the factory over a period of 30 months, from 07/04/2018 to 12/30/2021, were obtained.
The amounts and distributions of the pollutants were examined over one-day, seven-day, fourteen-day, one-month, two-month, three-month, six-month, and twelve-month periods starting from December 2020, to assess the concentration of pollution in the cold months of the year, and over the same periods starting from June 2021, to assess the concentration of pollution in the warm months of the year. To map the distribution of each pollutant, Natural Breaks classification and Hot Spot Analysis were performed in GIS on the images obtained from Google Earth Engine. The Natural Breaks method is based on Jenks optimization and classifies spatial data according to the statistical properties of the input so that the variance between classes is maximized. Hot Spot Analysis is a spatial-statistical method that considers the spatial autocorrelation among the data and classifies them according to the statistical significance of each class. Points that are surrounded by high values and are statistically significant are called hot spots, while areas surrounded by low values, with large negative Z scores and low P values (P < 0.05), are called cold spots.
Results & Discussion
The results based on the image averaged over the 30-month period indicated that the amount of SO2 ranged from 0.0000987 to 0.000698 mol/m2 and the amount of NO2 from 0.00005854 to 0.00006932 mol/m2 in the study area, and that the amounts of SO2 and NO2 decreased with increasing distance from the factory. Furthermore, the average amounts of SO2 and NO2 over daily, weekly, two-week, and monthly periods showed dispersed spatial distributions in both the warm and cold seasons of the year.
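Hot Spot Analysis as described above is commonly implemented with the Getis-Ord Gi* statistic, a z-score comparing each cell's weighted local sum with its expectation under spatial randomness. A minimal sketch of that statistic, assuming a binary spatial-weights row per cell (the specific weighting scheme is our assumption, not stated in the study):

```python
import math

def getis_ord_gi_star(values, weights, i):
    # Gi* z-score for cell i: positive and large => hot spot,
    # negative and large => cold spot.
    n = len(values)
    xbar = sum(values) / n
    s = math.sqrt(sum(v * v for v in values) / n - xbar ** 2)
    w = weights[i]                      # row i of the spatial weights matrix
    wsum = sum(w)
    local = sum(wj * xj for wj, xj in zip(w, values))
    denom = s * math.sqrt((n * sum(wj ** 2 for wj in w) - wsum ** 2) / (n - 1))
    return (local - xbar * wsum) / denom
```

A cell surrounded by high pollutant values yields Gi* above ~1.96 (hot spot at P < 0.05), while one surrounded by low values yields a comparably negative score (cold spot), matching the description above.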
Therefore, Sentinel-5P data over short periods such as a day, a week, two weeks, or even one month cannot provide accurate information on the spatial distribution of NO2 and SO2 in the study area. In the data obtained over two-month, three-month, six-month, and one-year intervals, the sulfur dioxide concentration shows less dispersion than over the short-term intervals, and as the interval lengthens, the images show less dispersion of sulfur dioxide in the polluted areas. The results therefore indicate that Sentinel-5P images with intervals of two months or longer can provide more accurate and logical information about the sulfur dioxide concentration in the area, whereas for nitrogen dioxide, images covering more than two weeks can provide accurate information on its spatial distribution. Hot spot analysis was also performed on the images obtained over one-day, seven-day, fourteen-day, one-month, two-month, and three-month intervals from June, in order to investigate the concentration and dispersion of pollution during the hot days of the year. The maps obtained for the hot months were then compared with maps of the same periods in the cold months of the year. This comparison showed that in the maps from the short-term intervals of the hot months, the density of hot spots was observed more in areas prone to the presence of sulfur dioxide. For example, the one-day image from December showed a great deal of dispersion, while the one-day image from June indicated less dispersion and more concentration of the gases in polluted areas. In addition, in the one-week, two-week, and one-month maps from December, hot spots and cold spots show much greater dispersion than in the maps of the same periods from June.
However, comparing the two-month and three-month hot spot maps of the cold months with those of the hot months yielded almost similar results; indeed, greater density was observed in the hot spot maps of the longer periods (more than two months) in winter. The same trend appeared when analyzing nitrogen dioxide in the study area.
Conclusion
The classification of the sulfur dioxide images showed that the concentration of sulfur dioxide is highest in the area around the factory and decreases as the distance from the factory increases. According to the minimum and maximum sulfur dioxide concentrations in the study area, more sulfur dioxide is observed in the cold months of the year than in the warm months, and in the cold months its concentration has a greater range of variation. The dispersion of the sulfur dioxide concentration in short intervals such as a day, a week, a fortnight, and even one month was very high; as a result, Sentinel-5P images cannot provide logical and accurate information about the distribution of atmospheric sulfur dioxide at these intervals. To obtain accurate and logical information, images with intervals longer than one month should be used, and the longer the interval, the more reliable the results. The hot spot analysis of the sulfur dioxide images likewise indicated a high concentration of sulfur dioxide in the area around the factory.
According to the results, the activity of the studied factory can explain the increased sulfur dioxide concentration in this area, which has affected a radius of about 4 to 6 kilometers and an area of about 10,700 hectares around the factory. The classification of the nitrogen dioxide images shows that the nitrogen dioxide concentration is also higher in the area around the factory. From the minimum and maximum concentrations of this gas in the study area, it can be concluded that the nitrogen dioxide concentration is higher in the hot months of the year than in the cold months. Considering the rapid spread of nitrogen dioxide in the atmosphere by wind, due to the high dynamics of this gas (Vîrghileanu et al., 2020), images covering intervals of two weeks or more can provide more information about the atmospheric nitrogen dioxide concentration. The hot spot analysis of the nitrogen dioxide images showed that at intervals of two weeks to two months in the cold months of the year, hot spots indicate the presence of nitrogen dioxide in the atmosphere above the factory, whereas at longer intervals such as three months, six months, one year, and thirty months, in both the cold and hot months, hot spots are observed toward the northwest, at a distance from the factory. The results of this research can assist environmentalists and researchers in using and interpreting Sentinel-5P data over different periods in the cold and warm seasons to make informed decisions.
Extraction, processing, production and display of geographic data
Misagh Sepehry amin; Hassan Emami
Abstract
Extended Abstract
Introduction
A digital orthophoto is a reliable, accurate, and low-cost map for acquiring knowledge about geolocation, distance, area, and changes in imaged features. It is now considered one of the most widely used and sophisticated digital photogrammetry products. Orthophoto map creation is substantially faster than traditional topographic map production thanks to the development of powerful algorithms for processing aerial, drone, ground, and satellite imagery. A conventional orthophoto is a photogrammetric product based on the Digital Terrain Model (DTM), as is common in classic aerial photogrammetry. In such orthophotos one frequently notices that the terrain is represented very accurately, but buildings and other tall structures appear tilted; this is caused by the use of the DTM, which maps only the natural shape of the earth, excluding vegetation and all man-made objects and structures. A true orthophoto, by contrast, provides a vertical view of the earth's surface, eliminating building tilt and providing access to practically any location on the ground. Traditionally, measuring digital surface models has been highly complex and costly, generally requiring LiDAR or ground measurements. The end result of drone photogrammetry is known as an orthomosaic. In practice, an orthomosaic is comparable to a true orthophoto (since it is formed using a digital surface model), but it is usually not based on a metric camera with an accurate focal length and internal geometry, as such cameras are expensive and not readily available for UAVs. Furthermore, orthomosaics may be generated from both nadir and oblique images. Drone-based orthomosaics are created from the digital surface model rather than from a separate survey as in traditional aerial photogrammetry; the DSM is produced by drone photogrammetry from the 3D point cloud, which is the initial output of data processing.
Materials & Methods
The huge success of online services like Google Earth, Google Maps, and Bing Maps has increased demand for orthophotos, resulting in the development of new algorithms and sensors. It is commonly understood that orthophoto quality is determined by image resolution, camera calibration, orientation accuracy, and DTM accuracy. Because digital cameras produce high-resolution imagery, one of the most important factors in orthophoto generation is the spatial resolution of the DTM: standing objects, such as buildings and vegetation, exhibit radial displacement in the final orthophoto. In practical applications, orthophotos are used as small- and medium-scale maps; for updating earth surface maps; in three-dimensional urban scene reconstruction; in village surveying; in land planning; in precision agriculture; in desertification monitoring; in land use surveying; and in other sectors. True orthophotos are orthophotos that have been improved to minimize tilt inaccuracy and projection discrepancies. True orthophoto generation places stringent demands on the original imagery: the heading (forward) overlap and side overlap should be at least 80% and 60%, respectively. Owing to the removal of displacements produced by camera tilt and height differences, the orthophoto, as a spatial data format with high geometric accuracy, has found growing application in recent years. With the growing relevance of geographic information systems, particularly in metropolitan areas, the use of orthophotos in conjunction with spatial data has grown. Because an orthophoto contains correct spatial and textural information about features, it is feasible to produce virtual reality by integrating it with 3D models, making it possible to properly measure the height and planimetric location of features during 3D viewing.
In this research, a novel approach for generating orthophotos from Google Earth imagery for specific purposes was developed and compared qualitatively and quantitatively with orthophotos created from UAV images.
Results and Discussion
The results showed total orthomosaic-generation errors of 0.124 and 0.059 m/pixel for the Google Earth imagery and the UAV data, respectively. Moreover, the visual findings reveal that the edges of low obstacles in the orthophoto generated from Google Earth images are superior to those in the orthophoto generated from drone imagery, but the edges of tall obstacles, particularly those casting noticeable shadows, are of poor quality. Statistical analysis of randomly selected points in non-building regions revealed that the errors in the orthophoto derived from Google Earth data are 1.10 m and 1.34 m in terms of mean error and root mean square error (RMSE), respectively. In addition, the orthophotos generated from UAV data and Google Earth showed a 95% correlation and a 91% coefficient of determination. In contrast, in building regions, the mean height error and RMSE of the orthophoto generated from Google Earth data relative to the UAV data were around 9 m and 5 m, respectively; the statistical metrics in these locations also revealed a lower correlation of 80% and a coefficient of determination of 65%.
Conclusions
As the height of obstacles and the length of their shadows increase, so does the error in the height component of the orthophoto derived from Google Earth imagery. It is therefore advisable to create orthophotos from Google Earth images for special applications, flat regions, and hills.
Additionally, Google Earth data offers the following advantages: it is free of charge; historical imagery can be used to generate orthophotos; and processing time and data volume are nearly four times lower.
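The quantitative comparison above rests on standard error metrics. A minimal sketch of how mean error, RMSE, correlation, and (assuming a linear relationship) the coefficient of determination could be computed between UAV-derived and Google Earth-derived heights; the sample values are illustrative, not the study's checkpoints:

```python
import math

def error_stats(reference, estimated):
    # Mean error, RMSE, Pearson correlation, and R^2 (taken here as the
    # squared correlation) between reference (UAV) and estimated
    # (Google Earth) heights.
    n = len(reference)
    diffs = [e - r for r, e in zip(reference, estimated)]
    mean_err = sum(diffs) / n
    rmse = math.sqrt(sum(d * d for d in diffs) / n)
    mr = sum(reference) / n
    me = sum(estimated) / n
    cov = sum((r - mr) * (e - me) for r, e in zip(reference, estimated))
    corr = cov / math.sqrt(sum((r - mr) ** 2 for r in reference)
                           * sum((e - me) ** 2 for e in estimated))
    return mean_err, rmse, corr, corr ** 2
```

Applied to checkpoint heights sampled from both orthophotos, these four numbers correspond directly to the mean error, RMSE, correlation, and determination coefficient reported above.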
Reza Mansouri; Ezzatollah Ghanavati; Mohammad Reza Servati
Abstract
The vast country of Iran has diverse geographical conditions, with 11 of the 13 known climates in the world. This has resulted in many environmental, ecotourism, recreational, and economic capabilities. The tourist infrastructure of any region is the nature of that area, and as one branch of nature-based tourism, geotourism has in recent years experienced a dramatic and significant upsurge worldwide and has had a great influence on regional development. Visits to geomorphological and geological sites are one of the main aspects of geotourism. Ilam province, with a surface area of 19,086 km2, extends from 31°58' to 34°15' north latitude and from 45°24' to 48°10' east longitude. The province is considered one of the areas most suited to the development of geotourism, with geotourist, geomorphological, and geological attractions such as caves, mountains, rivers, waterfalls, and springs. In this regard, attention to these natural capacities and the conservation, presentation, and utilization of geotourism attractions can play an important role in the sustainable development of the province, while also helping to solve problems such as unemployment by creating employment. This research studies the characteristics of Ilam province using library, field, and descriptive-analytical methods and by means of maps and satellite images.
Hosein Nesari; Reza Shah-Hosseini; Amirreza Goodarzi; Soheil Sobhan Ardakani; Saeed Farzaneh
Abstract
Extended Abstract
Introduction
Atmospheric aerosols are a colloid of solid particles or liquid droplets suspended in the atmosphere. Their diameters range from 10⁻² to 10³ micrometers. They directly and indirectly affect the global climate by absorbing and scattering solar radiation, and they also ...
Read More
Extended Abstract
Introduction
Atmospheric aerosols are a colloid of solid particles or liquid droplets suspended in the atmosphere. Their diameters range from 10⁻² to 10³ micrometers. They directly and indirectly affect the global climate by absorbing and scattering solar radiation, and they also pose a serious risk to human health through the harmful substances they carry. In addition, high local concentrations of aerosols, caused by natural or human activities, have adverse effects on human health, including cancers, pulmonary inflammation, and cardiopulmonary mortality. Monitoring the temporal and spatial variability of high aerosol concentrations requires regular measurement of their optical properties, such as aerosol optical depth (AOD).
Materials & Methods
Algeria is a large country for which the spatial and temporal variability of AOD is poorly known, and the low spatial resolution of existing products makes it very difficult to estimate aerosol loads at the local scale, especially in the arid southern regions. As a result, retrieving AOD from data with higher spatial resolution is crucial for deriving air pollution and air quality information. Several AERONET stations have been installed in Algeria; the Tamanrasset_INM station was selected based on its location and the availability of historical AOD data for 2015-2016.
In this study, Landsat-8/OLI imagery from path/row 192/44 was used. To this end, 23 cloud-free, TOA-corrected L1G-level Landsat-8/OLI scenes acquired over the study area between January 2015 and December 2016 were downloaded. DN values were converted to TOA reflectance using the scaling-factor coefficients in the Landsat-8 OLI metadata file. The minimum monthly reflectance technique was used to retrieve AOD in this area; accordingly, land surface reflectance (LSR) images were used in the retrieval process for the different months of 2015 and 2016. The selection of reference LSRs began with choosing clear-sky (haze- and cloud-free) images. The selected images were then used to construct synthetic images in which each pixel holds the second-lowest surface reflectance of all selected monthly images, forming the LSR pixel for the respective month. The AOD retrieval method developed in this study is based on a look-up table (LUT) built with the 6S radiative transfer model. The advantage of the 6S model is its ability to estimate the direct and scattered components using a limited number of inputs for each spectral band over the entire solar domain. The effect of viewing angle is limited because Landsat data are usually acquired at a fixed viewing angle. Surface reflectance can be estimated from a pre-calculated LSR database. The accuracy of AOD retrieval depends on using an appropriate aerosol model; a continental model was selected from the available aerosol models. Other atmospheric parameters such as ozone, carbon dioxide, carbon monoxide and water vapor were set to their defaults. The AOD values used to build the LUT were: 0.0, 0.05, 0.1, 0.15, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9, 1.0, 1.2 and 1.5. The sun and sensor zenith angles range from 0 to 70 degrees in steps of 5 degrees, and the relative azimuth angles from 0 to 180 degrees in steps of 12 degrees.
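The DN-to-TOA conversion mentioned above is a linear rescaling followed by a sun-elevation correction. A minimal sketch, using the standard Landsat-8 OLI reflectance rescaling coefficients (the exact values for a given scene come from its MTL metadata file, so the constants below are illustrative defaults):

```python
import math

# Typical Landsat-8 OLI reflectance rescaling coefficients from the MTL
# metadata file (REFLECTANCE_MULT_BAND_x / REFLECTANCE_ADD_BAND_x).
M_RHO = 2.0e-5   # multiplicative scale factor
A_RHO = -0.1     # additive offset

def dn_to_toa_reflectance(dn, sun_elevation_deg):
    """Convert a quantized DN to TOA reflectance, corrected for sun angle."""
    rho = M_RHO * dn + A_RHO                      # scaled TOA reflectance
    return rho / math.sin(math.radians(sun_elevation_deg))
```

For example, a DN of 10000 with a sun elevation of 30 degrees yields a TOA reflectance of 0.2; in practice this is applied pixel-wise to each band of the scene.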
Using these parameters, the radiative transfer equation was run in the forward direction to obtain TOA reflectance, and the different combinations of input parameters and TOA outputs were stored in the LUT. AOD retrieval is then based on comparing the model-simulated TOA reflectances with the observed ones using a best-fit approach: the retrieved AOD is the LUT entry whose simulated TOA reflectance minimizes the distance to the observation.
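The best-fit step can be sketched as a one-dimensional search over LUT nodes for one fixed sun/sensor geometry (the table values below are made up for illustration; the study's LUT additionally spans the zenith and azimuth angle grids described above):

```python
def retrieve_aod(observed_toa, lut):
    """Best-fit AOD retrieval against a precomputed look-up table.

    lut: (aod, simulated_toa) pairs for one fixed geometry, e.g. produced
    by forward runs of a radiative transfer model such as 6S. TOA
    reflectance is assumed to increase monotonically with AOD here.
    """
    nodes = sorted(lut, key=lambda e: e[1])
    # Interpolate linearly between the two LUT nodes bracketing the observation.
    for (a0, t0), (a1, t1) in zip(nodes, nodes[1:]):
        if t0 <= observed_toa <= t1:
            w = (observed_toa - t0) / (t1 - t0)
            return a0 + w * (a1 - a0)
    # Observation outside the table: fall back to the nearest node.
    return min(nodes, key=lambda e: abs(e[1] - observed_toa))[0]

# Hypothetical LUT slice: AOD nodes and their simulated TOA reflectances
lut = [(0.0, 0.05), (0.1, 0.07), (0.2, 0.09)]
aod = retrieve_aod(0.08, lut)   # falls midway between the 0.1 and 0.2 nodes
```

This is only a sketch of the minimum-distance idea; a full implementation would interpolate in all LUT dimensions (AOD, sun and sensor zenith, relative azimuth) simultaneously.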
Results & Discussion
In this study, the AODs retrieved at 550 nm were averaged over a 5-by-5-pixel window around the AERONET site. The AERONET values considered are the average of all measurements taken within ±30 minutes of the image acquisition time. Regression of the observations (AOD from Landsat-8 images against AERONET station data) showed a correlation coefficient of about 84%, indicating a good fit of the model to the research data and demonstrating its high capability. The retrieved AOD agreed with the AERONET data at a level of more than 70%. The differences can be attributed to the limited number of points or to assumptions in the aerosol model used in this study. The assumption of a pre-calculated LSR does not limit the accuracy of the method, because we have shown that in arid regions, where land cover changes little between months, a pre-calculated LSR image can represent the surface reflectance contribution in the radiative transfer model throughout the month.
Conclusion
In this study, a high-resolution satellite-derived AOD product at the urban scale was produced for the city of Tamanrasset, Algeria. The developed method assumes that land cover change is minimal and that the temporal variation of LSR is not significant. A pre-calculated LSR image was created to represent surface reflectance in the retrieval process. Based on the 6S radiative transfer model, a LUT was constructed to simulate the TOA reflectance of the constructed LSRs under a set of geometric and atmospheric parameters. The retrieved AODs were compared with AERONET ground data. The results show that this approach achieves reasonable accuracy in AOD retrieval, reaching about 70.9%. In addition, this approach is better suited to estimating AOD in urban areas than existing AOD products with low spatial resolution. The results of this study show a 4% improvement over those of Omari et al. (2019) and indicate that ignoring monthly changes in LSR values still yields good results in AOD retrieval.
Seyyed Yahya Safavi
Volume 9, Issue 35, November 2000, Pages 4-6
Abstract
The importance of each geographic region is considered in dealing with seasonal, cyclic, accidental and sudden changes, and commanders and military experts of high rank should make assessments according to missions, situations, forces of either side, and specialties and functions of technology. The influence ...
Read More
The importance of each geographic region must be assessed in dealing with seasonal, cyclical, accidental and sudden changes, and high-ranking commanders and military experts should make assessments according to missions, situations, the forces of either side, and the capabilities and functions of technology. The influence of natural, human and military/political geography on plans, programs and military operations at every level indicates the necessity of comprehensive examination, quick analysis and ready access to geographic information.
Seyyed Yahya Safavi
Volume 8, Issue 32, February 1999, Pages 4-7
Abstract
Military geography is one of the branches of geographical sciences that examines all natural, cultural (human) and environmental effects on policies, military plans and combat/support operations at the global, regional and local levels. It is a science that expresses the effects ...
Read More
Military geography is one of the branches of geographical sciences that examines all natural, cultural (human) and environmental effects on policies, military plans and combat/support operations at the global, regional and local levels. It is a science that expresses the effects of the geographic factors of a country or area on military movements; it is a part of military sciences that concerns the environmental characteristics of the area of operations; and it includes the application of geographic analysis methods. It is worth noting that various definitions have been presented for military geography, but two major points must be considered in order to reach a comprehensive definition.
A. The scale and application of military geography: its use by political leaders, national security planners, commanders and the military hierarchies that play major roles in formulating military strategy, which gives it a specific definition and a certain scope. B. The value of military geography against internal and external threats and the varying geopolitical conditions of any region: military geography confronts threats by indicating their direction and by exploiting geographic factors against them. Military geography is an effective guide in national and macro-level planning, including land preparation and allocation, so that the siting of development plans and constructive measures such as infrastructure and industrial establishments takes defense-security considerations into account in all circumstances.
Mahdi Modiri
Volume 6, Issue 22, August 1997, Pages 4-6
Abstract
Remote sensing is a science that provides valuable information about objects and land features through measurements made from a distance, without physical contact.
In remote sensing, information can be obtained by measuring and recording the reflections of electromagnetic waves from the atmosphere and the ground surface, ...
Read More
Remote sensing is a science that provides valuable information about objects and land features through measurements made from a distance, without physical contact.
In remote sensing, information is obtained by measuring and recording the reflections of electromagnetic waves from the atmosphere and the ground surface; these reflections are received by sensors installed on satellites, and after analysis the necessary information is extracted.
When electromagnetic waves strike any phenomenon, three major processes occur: reflection, absorption and transmission. Each depends on the wavelength of the radiated energy and on the physical and chemical properties of the phenomenon; the energy reflected from any phenomenon on the Earth is a function of wavelength, the molecular properties of the phenomenon, and the other physical characteristics of the objects under measurement. Satellite data inherently contain various geometric and radiometric errors arising from satellite, sensor and atmospheric conditions, as well as from errors in recording and transmitting the data and other related issues.
Satellite data become valuable and useful only after geometric and radiometric corrections are applied; once geometric corrections are performed, satellite information is ready for analysis and use.
Fatemeh Razi'ee
Volume 6, Issue 21, May 1997, Pages 4-7
Abstract
Soil and erosion information are among the most valuable data for environmentalists and supporters of development of agricultural activities. However, there are still no standards regarding data collection. Currently, a project is being conducted that is aimed at gaining data related to the world’s ...
Read More
Soil and erosion information are among the most valuable data for environmentalists and supporters of the development of agricultural activities. However, there are still no standards for data collection. Currently, a project aimed at compiling data on the status of the world's soils is underway, and the European and Hungarian SOTER (HunSOTER) databases are also under preparation and implementation.
Various efforts have been made over a period of one hundred and fifty years to record soil data in Hungary; the Kreybig project (1932-51) is one of the most prominent of these attempts. This project, which in practical terms is similar to later methods and especially to GIS, paved the way for launching SOTER in 1986. This international project, conducted to present the main numerical data on soil and land features at a scale of 1:1,000,000, combines data from conventional and modern technologies in order to make digital maps. The main SOTER pilot data are presented around the world in the hope that they will eventually lead to the creation of stable, homogeneous soil-erosion data that can be easily updated.
Rahim Sarvar
Volume 5, Issue 20, February 1996, Pages 4-15
Abstract
Today regional imbalances within the national space have made it necessary to investigate urban network and hierarchy system in order to address the factors affecting the location of cities, the pattern of distribution of urban population and, finally, the balance and imbalance in the urban hierarchy ...
Read More
Today, regional imbalances within the national space have made it necessary to investigate the urban network and hierarchy system in order to address the factors affecting the location of cities, the pattern of distribution of the urban population and, finally, the balance or imbalance of the urban hierarchy in each region, and to suggest solutions based on knowledge of the status quo. The purpose of this paper is to investigate the spatial distribution of population and the balance of the urban hierarchy along the southern coast of Iran. First, the position and natural characteristics of the region are introduced; then the location of cities, urban population developments, and the spatial distribution and dispersion pattern of the urban population are studied; finally, the urban hierarchy system is considered.
Saeed Movahedi; Mohammad Mosayyebi
Volume 5, Issue 18, August 1996, Pages 4-18
Abstract
Chaharmahal and Bakhtiari province is located at the heart of the Zagros Highlands between north latitudes of 31 degrees 14 minutes and 32 degrees 47 minutes, and between east longitudes of 49 degrees 51 minutes and 51 degrees 34 minutes (refer to maps 1 and 2). The altitude of the province is very ...
Read More
Chaharmahal and Bakhtiari province is located at the heart of the Zagros Highlands between north latitudes of 31 degrees 14 minutes and 32 degrees 47 minutes, and between east longitudes of 49 degrees 51 minutes and 51 degrees 34 minutes (refer to maps 1 and 2). The altitude of the province is very high: Shahrekord lies at 2066 m, Zaman Khan bridge at 2000 m and Hamgin at 2150 m above sea level. The temperature in the region increases from north to south, but in general it is one of the coldest parts of the country. For example, the mean annual temperature in Shahrekord is 12.1 °C, and the average minimum temperature in five months of the year in this city is below zero (see Figures 1 and 2). The average annual temperature of the Adl Borujin station is 10.4 °C, and the average minimum temperature during the five coldest months of the year is below zero. Rainfall in the region is high in comparison with the rest of the country; for example, annual rainfall is 323 mm in Shahrekord, 469.5 mm in Borujen and 530 mm in Lordegan (see Figures 3 and 4).
In this paper, while identifying the climatic factors of Chaharmahal and Bakhtiari province and their effects on the design of residential spaces, we tried to answer the following questions.
• What are the characteristics of the province's climate, and what is its condition in terms of temperature?
• What is the impact of climatic conditions on residential space design?
• How much cooling and heating energy is required in the different seasons, and under which conditions can energy requirements be brought to a minimum?
• How can residential spaces, streets and alleys be arranged to receive maximum solar energy during the cold season and minimum solar energy during the warm season?
Alireza Azmudeh Ardalan
Volume 3, Issue 11, November 1994, Pages 4-11
Abstract
GPS (Global Positioning System) is a NAVSTAR (Navigation System with Timing and Ranging) system. It is being developed by the U.S. Department of Defense. The creation of this system began in 1973 by combining U.S. Air Force and Navy projects. The plans of these two projects ...
Read More
GPS (Global Positioning System) is a NAVSTAR (Navigation System with Timing and Ranging) system developed by the U.S. Department of Defense. The creation of this system began in 1973 by combining U.S. Air Force and Navy projects, whose plans had been approved separately since 1960 for the creation of one-way navigation systems. GPS is formed of three parts: the satellites, the system control, and the system users.
Alireza Azmudeh Ardalaan
Volume 1, Issue 4, May 1992, Pages 4-5
Abstract
Landsat's recent images from Kuwait suggest that firefighting teams are ahead of the determined schedule. At the beginning of firefighting operations on burning oil-wells, oil-fire experts estimated the time needed for full control of wells’ fire to be five years. According to recent Landsat imaging, ...
Read More
Landsat's recent images of Kuwait suggest that firefighting teams are ahead of schedule. At the beginning of the firefighting operations on the burning oil wells, oil-fire experts estimated that full control of the well fires would take five years. According to recent Landsat imaging, one third of the burning wells are now either extinguished or under control, a fact confirmed by the Kuwaiti Oil Minister.
Hasan Alayii
Volume 1, Issue 3, August 1991, Pages 4-9
Abstract
Man has to find a means for communicating with others, transferring knowledge and findings and expressing his wishes. The first thing humans found for fulfilling these needs was a direct dialogue. The conversation had the disadvantage that its audience was limited because a different language was used ...
Read More
Man has always needed a means of communicating with others, transferring knowledge and findings, and expressing his wishes. The first means humans found for fulfilling these needs was direct dialogue. Conversation had the disadvantage that its audience was limited, because a different language was used in each region and, moreover, it was not possible to store information and findings for later generations and for users in other places. Therefore, human beings were forced to invent writing in order to publish information and thoughts. Although writing eliminated some of these problems, it too was a function of the language of the authors, so books could not be used by the people of other areas and again could not reach everyone. With the advancement of technical knowledge and the construction of boats and ships, the first means of human travel, the need to know the geographical and climatic conditions of remote locations became more tangible; therefore, some travelers began to draw basic maps and write books about the geographic conditions of other parts of the world. For the reasons mentioned, these geographic books were also not easy for everybody to use, so a device had to be devised that would not require deep technical knowledge and would be easily accessible to everyone.
Volume 1, Issue 2, February 1990, Pages 4-11
Abstract
An abridgment of the speech by Major General Zahirnejad, Head of the Department of Military Advisers to the Supreme Leader of the Armed Forces, delivered at the first seminar on surveying, remote sensing and geography under the title "Map in defense, map in developmental construction", which was held in May ...
Read More
An abridgment of the speech by Major General Zahirnejad, Head of the Department of Military Advisers to the Supreme Leader of the Armed Forces, delivered at the first seminar on surveying, remote sensing and geography under the title "Map in defense, map in developmental construction", which was held in May 1990 at the Geographic Organization of the Armed Forces.
Hasan Shamsi
Volume 1, Issue 1, May 1990, Pages 4-15
Abstract
The history of maps and surveying in the world and in Iran is scattered and written in different perspectives, which should be gradually developed and completed by the efforts of the experts. The following two articles by Dr. Hasan Shamsii are efforts in this regard. Professor Shamsii is one of the veterans ...
Read More
The history of maps and surveying in the world and in Iran is scattered and written from different perspectives, and it should gradually be developed and completed through the efforts of experts. The following two articles by Dr. Hasan Shamsii are efforts in this regard. Professor Shamsii is one of the veterans and well-known figures in the field of surveying; he taught theoretical and practical surveying at the Faculty of Engineering of the University of Tehran for many years. His book on surveying, published in years when the compilation and publication of technical books was difficult, shows his interest and diligence in serving the engineers and development projects of the country. After him, the surveying curriculum of the Faculty of Engineering of the University of Tehran was assigned to Mr. Shams Malekara, who in turn left valuable works and services.
The writings of Shamsii are simple, honest and well documented, so that while reading them you feel you are encountering the history of mapping in Iran directly and at first hand. This adds to the authenticity and validity of this history, which is essentially the Master's memoirs, and it will be a reliable and valuable reference for young researchers.
Soheila Irankhah; Mojtaba Ghadiri Masum; Masud Mahdavi
Volume 23, Issue 89, May 2014, Pages 5-16
Abstract
Rural development is the most sustainable policy and development strategy of every country. This policy is closely related with intellectual and attitude system. There are different tools and methods for the institutionalization of this policy. Issuance of ownership document for residential or rural ...
Read More
Rural development is the most sustainable policy and development strategy of any country. This policy is closely tied to prevailing systems of thought and attitudes. There are different tools and methods for institutionalizing this policy; the issuance of ownership documents for residential or rural real estate, alongside other development policies in rural areas, is among them. In this regard, the second, third and fourth economic, social and cultural development programs introduce rural construction as one of their goals and regard the issuance of ownership documents as an executive means of reaching it. Land holds buried treasures and wealth, so it is economically valuable and satisfies many needs of human society (including construction). Houses and premises in rural areas are therefore among the capital assets of rural societies, and owners need to feel legally secure about their properties. On the other hand, the need for land-related information is a basis for the development and control of land resources, which gives priority to issuing an ownership document for each unit (Tarshizian & Athari, 2010). This article scrutinizes different countries' experiences and performance in issuing ownership documents for rural residential units and investigates their success rates. To do so, analysis and evaluation were performed at two different geographical points using field and case studies, which yielded different results.
Seyyed Yahya Safavi
Volume 13, Issue 51, November 2004, Pages 5-8
Abstract
The road network and access routes are among the most important elements of military geography. Classification of road network, type of access and appropriate transportation conditions for transfer of human forces and freight on the one hand, and reliable access to road lines and passageways of bridges, ...
Read More
The road network and access routes are among the most important elements of military geography. The classification of the road network, the type of access and suitable transportation conditions for the transfer of personnel and freight on the one hand, and reliable access to roads and the passage of bridges, tunnels and road structures on the other, are of great importance. The status of roads, railways, ports, airports and inland waterways, which facilitate military operations and allow for the support of forces, has a special place in military geography.
Seyyed Yahya Safavi
Volume 11, Issue 44, February 2002, Pages 5-8
Abstract
Military geography is one of the branches of geographical sciences that examines all natural, cultural (human) and environmental effects on policies, military plans and combat/support operations at the global, regional and local levels. It is a science that expresses the effects ...
Read More
Military geography is one of the branches of geographical sciences that examines all natural, cultural (human) and environmental effects on policies, military plans and combat/support operations at the global, regional and local levels. It is a science that expresses the effects of the geographic factors of a country or area on military movements; it is a part of military sciences that concerns the environmental characteristics of the area of operations; and it includes the application of geographic analysis methods. It is worth noting that various definitions have been presented for military geography, but two major points must be considered in order to reach a comprehensive definition.
A. The scale and application of military geography: its use by political leaders, national security planners, commanders and the military hierarchies that play major roles in formulating military strategy, which gives it a specific definition and a certain scope. B. The value of military geography against internal and external threats and the varying geopolitical conditions of any region: military geography confronts threats by indicating their direction and by exploiting geographic factors against them. Military geography is an effective guide in national and macro-level planning, including land preparation and allocation, so that the siting of development plans and constructive measures such as infrastructure and industrial establishments takes defense-security considerations into account in all circumstances.
Seyyed Yahya Safavi
Volume 9, Issue 33, May 2000, Pages 5-9
Abstract
Military geography is a science that is defined as a branch of geography according to one of the comprehensive definitions of this field. It studies the effect of natural and cultural environment on military / political attitudes, plans, programs and various kinds of military and support operations on ...
Read More
Military geography is a science defined, according to one of the comprehensive definitions of the field, as a branch of geography. It studies the effect of the natural and cultural environment on military/political attitudes, plans, programs and various kinds of military and support operations on global, regional and local scales. The key factors discussed in this science directly or indirectly affect vast areas of military activity, including: strategy; military tactics, techniques and deployments; doctrinal foundations; command and control; organizational structures; the optimal combination of land utilization; sea, air and ground forces; targeting; the collection of operational and intelligence data; research and development; the provision, repair and allocation of armaments and equipment; and, of course, overhaul, medical support and training.
Hamid Malmirian
Volume 8, Issue 30, August 1999, Pages 5-13
Abstract
Remote sensing is the art of obtaining information about an object, area or phenomenon through analysis of data gained by tools that are not in physical contact with the case studied. In many ways, remote sensing can be considered as a “reading” process. Using different sensors, data that ...
Read More
Remote sensing is the art of obtaining information about an object, area or phenomenon through the analysis of data acquired by instruments that are not in physical contact with the object studied. In many ways, remote sensing can be considered a "reading" process. Using various sensors, data that can be analyzed to obtain information about the phenomena studied are collected remotely. Such data may take different forms, including variations in force distributions, the propagation of sound waves, or electromagnetic energy. Finally, these data are processed for users who need them for their decision-making systems. In this paper, the basic rules of the field are studied under the title "remote sensing processing". The discussion begins with the fundamentals of electromagnetic energy, and then the interaction of this energy with the earth's atmosphere and surface features is examined. In addition, the role of reference data in analysis methods is evaluated. These fundamentals will help us identify an ideal remote sensing system, and the limitations of real remote sensing systems can be studied within this framework. The fundamentals of GIS are discussed briefly as well. It is hoped that the reader of this paper will gain a general understanding of the principles, concepts and applications of remote sensing and of the close connection between this technology and GIS.
Mohammad Bagher Choukhachizadeh Moghaddam
Volume 3, Issue 10, August 1994, Pages 5-10
Abstract
The Persian Gulf is a warm-water sea with an area of 240,000 square kilometers and a volume of 6,000 cubic kilometers, connected to the Sea of Oman by the Strait of Hormuz. The length of the Iranian shores of the Persian Gulf from Bandar Abbas to the mouth of the Shatt al-Arab is 1259 km and its length ...
Read More
The Persian Gulf is a warm-water sea with an area of 240,000 square kilometers and a volume of 6,000 cubic kilometers, connected to the Sea of Oman by the Strait of Hormuz. The length of the Iranian shores of the Persian Gulf from Bandar Abbas to the mouth of the Shatt al-Arab is 1259 km, and its length from the mouth of the Arvand River in the northwest to the Strait of Hormuz in the southeast is about 805 km. Its length from the Shatt al-Arab to the shores of Abu Dhabi is 830 km, and the length of the Arabian coast is about 1740 km. The average width of the Persian Gulf is 210 kilometers, with a minimum width of 185 kilometers and a maximum of 355 kilometers.
Mohammad Bagher Chukhachizadeh Moghaddam
Volume 3, Issue 9, January 2018, Pages 5-11
Abstract
The movements of the earth's crust can be divided into two general categories of orogenic and epeirogenic movements. The orogeny movement refers to those crust movements that cause rapid deformation of large masses of rocks with short duration of its impact on a geological scale and high intensity; these ...
Read More
The movements of the earth's crust can be divided into two general categories: orogenic and epeirogenic. Orogenic movements are those crustal movements that cause rapid deformation of large masses of rock, with high intensity and a short duration of impact on the geological time scale; these movements produce faults, folds and mountains. Epeirogenic movements are crustal movements whose duration of impact is long and whose intensity is low, such as the subsidence of the crust and the formation of basins, as well as the uplift of parts of the crust. Epeirogenic movements cause seas to recede and advance.
Abdalkarim Gharib
Volume 2, Issue 8, February 1993, Pages 5-16
Abstract
The magnetic properties of some iron minerals have been known since ancient times. Chinese legends refer to a very simple magnetic needle that was used to determine directions on the earth four thousand years ago. In Europe, the properties of the magnetic needle were known from the 11th century, and the first simple compasses were built in the 13th century. The invention of the compass, in an era of numerous geographical explorations, helped geographers greatly, and the study of the earth's magnetism has received attention ever since.
Akbar Torkan
Volume 2, Issue 6 , February 1992, , Pages 5-16
Abstract
The following is the first part of an excerpt from the speeches of the Minister of Defense and Support of the Armed Forces, delivered with a comprehensive look at the developments of the past two hundred years of the world. The second part is presented in the next issue of the journal for students and enthusiasts of historical and political geography.
Seyyed Yahya Safavi
Volume 15, Issue 59 , November 2006, , Pages 6-9
Abstract
The Persian Gulf has a prominent geographic position. Natural conditions such as shallow depth, water salinity and high evaporation on the one hand, and limited connection with the open waters of the world on the other, have created a special ecosystem within the Persian Gulf and along its coasts. The Persian Gulf has long been regarded as one of the most important strategic regions of the world. It is a geopolitical unit located in the wider geostrategic basin of the Indian Ocean, and the region acts as one of the most active economic centers in the world. Its major exports are oil and gas, and its imports consist mainly of industrial and food products. Natural potential and vast oil and gas resources have contributed to the establishment of single-product economic systems in the coastal countries.
Seyyed Yahya Safavi
Volume 15, Issue 58 , August 2006, , Pages 6-9
Abstract
Familiarity with the geographical situation of neighboring countries is essential in several respects, such as the interaction of geographical factors (human and natural), the relative position of neighboring states' territories, the type of government, international and regional obligations, participation in politico-military treaties, and dependence on imperialist powers. Other important issues, such as cultural commonalities; trading conditions and relations; common economic resources; determination of foreign policy goals; recognition of spatial conditions and the strategic importance of threatening positions; positions and conditions important to imperialist powers; and matters such as a region's inclination toward crisis and the imbalance of its political system, which brings problems such as the flow of displaced people into neighboring countries, all emphasize the importance of attending to the geographic considerations of neighboring countries. In this paper, the fundamental role of spatial relations in understanding the geography of neighboring countries is discussed, with emphasis on position, size, shape and neighborhood.
Seyyed Yahya Safavi
Volume 13, Issue 49 , May 2004, , Pages 6-10
Abstract
Bridges, passages, tunnels and underpasses are weak links that can be dangerous in times of war or peace; their classification and identification therefore require particular attention to their characteristics.
Seyyed Yahya Safavi
Volume 11, Issue 41 , May 2002, , Pages 6-8
Abstract
The surface of the sea is continuously dynamic due to the Earth's rotation, the gravitational pull of the Sun and the Moon, the density of the water, temperature, seismic activity, and the effects of the Earth's magnetism. Currents, tides, waves, flooding and sea ice are among the most significant physical phenomena considered by naval and military strategists when planning, preparing and directing military operations.
Rasul Ghorbani (Translator)
Volume 10, Issue 39 , November 2001, , Pages 6-12
Abstract
Rosario, Argentina, exhibits marked differences in housing quality and in access to physical and social infrastructure. To evaluate the needs of recently decentralized departments, new management tools are needed to analyze facilities and constraints, and GIS can contribute to this assessment. It can serve public housing programs by revealing problematic areas and improving resource allocation, thereby contributing to the efficiency of housing efforts and to the identification of spatial inequalities in social infrastructure that afflict the most deprived groups. It can also help estimate apparent demand and compare it with the demand derived from indicators. According to the assessment, the city of Rosario contains fundamental disparities among its internal districts. The combined application of indicator-derived and apparent housing demand shows that in some cases the indices suggest less than the real demand, or ignore it altogether. The growing demand for GIS tools and access to statistical data in digital form will improve and unify assessments of housing need and of its impacts on spatial inequalities.
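The comparison the abstract describes, setting demand derived from deprivation indicators against apparent demand, can be sketched in a few lines. This is purely illustrative: the district names, figures and function are hypothetical and are not taken from the Rosario study.

```python
# Hypothetical sketch of comparing indicator-derived housing demand with
# apparent demand (e.g., registered applications) per district.
# All names and figures below are invented for illustration.

districts = {
    # district: (indicator_derived_demand, apparent_demand) in housing units
    "north":  (1200, 900),
    "center": (300, 450),
    "south":  (2000, 700),
}

def understated_by_indices(data):
    """Return districts where the indicator-derived estimate falls short
    of apparent demand, i.e., where the indices may hide real need."""
    return {name: apparent - derived
            for name, (derived, apparent) in data.items()
            if derived < apparent}

gaps = understated_by_indices(districts)  # here: only "center" is flagged
```

In a real GIS workflow the same comparison would be run on georeferenced census and registry layers, so the flagged gaps can be mapped directly onto the districts.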
Seyyed Yahya Safavi
Volume 9, Issue 34 , August 2000, , Pages 6-13
Abstract
In military geography studies, the emphasis is on the regional analysis method, but in most cases the studied areas are examined according to the political, province-based divisions of the country.
Khosrou Khajeh (Translator)
Volume 8, Issue 31 , November 2009, , Pages 6-8
Abstract
Today, managers and planners need three-dimensional (3-D) displays of objects, especially in urban environments, as an inherent part of their work. To meet this need, a digital camera is mounted on a total station theodolite, enabling accurate video recording of vertical surfaces such as building facades. In this paper, an example of a theodolite equipped with a digital video camera is examined, and the associated research problems and constraints are discussed.
Ali Akbar Rasuli
Volume 8, Issue 29 , May 1999, , Pages 6-13
Abstract
Geographic information system (GIS) is a phrase expressing a deep and broad concept. Basically, a geographic information system is a computer technology, comprising hardware and software, that has emerged in recent decades for the reception, organization and analysis of data and, finally, for the design and production of various models (such as maps of underground resources); it is still evolving toward completion. The determining factor that distinguishes this technology from other systems of information storage and retrieval is its attention to geographic location. This quality has helped GIS, as a new technology, become a dynamic industry in the analysis of quantitative and qualitative data and the design of various types of geographic images.
In this paper, we briefly consider spatial imaging using GIS and introduce the principles of geographic image design.