Creation and Spatial Analysis of 3D City Modeling based on GIS Data

The 3D city model is one of the crucial topics still under investigation by many engineers and programmers because of the great advancements in data acquisition technologies and 3D computer graphics programming. It is one of the best visualization methods for representing reality. This paper presents different techniques for the creation and spatial analysis of 3D city modeling based on Geographical Information System (GIS) technology using free data sources. To achieve that goal, the Mansoura University campus, located in Mansoura city, Egypt, was chosen as a case study. The minimum data requirements to generate a 3D city model are the terrain and 2D spatial features such as buildings, landscape areas and street networks. Moreover, building height is an important attribute in the 3D extrusion process. The main challenges during the creation process are the dearth of accurate free datasets and the time-consuming editing. Therefore, different data sources are used in this study to evaluate their accuracy and find suitable applications for the generated 3D model. Meanwhile, an accurate data source obtained using traditional survey methods is used for validation purposes. First, the terrain was obtained from a digital elevation model (DEM) and compared with grid levelling measurements. Second, 2D data were obtained from manual digitization of (30 cm) high-resolution imagery and from deep learning algorithms that detect the 2D features automatically using an object instance segmentation model; the results were compared with total station survey observations. Different techniques are used to investigate and evaluate the accuracy of these data sources. The procedural modeling technique is applied to generate the 3D city model. The TensorFlow and Keras frameworks (Python APIs) were used in this paper; moreover, Global Mapper, ArcGIS Pro, QGIS and CityEngine software were used.
The precision metrics from the trained deep learning model were 0.78 for buildings, 0.62 for streets and 0.89 for landscape areas. Although the manual digitizing results are better than the deep learning results, the accuracy of the extracted features is acceptable and they can be used in the creation process in cases that do not require a highly accurate 3D model. A flood impact scenario is simulated as an application of spatial analysis on the generated 3D city model.


Introduction
Applications of 3D city models have increased steadily due to significant improvements in data acquisition technologies, such as spaceborne and airborne images, UAVs and lidar data. Meanwhile, there has been a leap in computer graphics programming that allows every feature in the 3D model to be generated with high accuracy. 3D city modeling is the spatial representation of each feature on the Earth's terrain at a 3D scale [1]. Spatial representation and generalization of 3D models are the keys to efficient visualization of multiscale 3D city models [2]. In recent times, it has become imperative to produce a large number of accurate 3D city models in a short period of time. The 3D model of a city plays a key role in the transition from a traditional city to a smart city, where 3D simulation of the current situation helps to develop sustainable, future case-based scenarios [3]. Moreover, developing digital twins or smart cities is a main mission for governments. Therefore, constructing a 3D city model is necessary, as it will be the base map for digital twin or smart city development [4].
To satisfy this massive increasing demand for 3D city models, there is a requirement to make city models in a quick, accurate, detailed and economical way [5]. Therefore, a variety of algorithms and methods have been developed to meet the challenges of 3D city models, and efficient visualization is one of the most important issues. Generalization methods are commonly applied to 3D city models to improve the efficiency of visualization by minimizing the amount of data and highlighting objects that matter to users. The realization of 3D city models is based on a variety of technologies, which can be divided into two categories: general computer technology and 3D city modeling [6]. The application areas of 3D GIS vary according to the required models, the related database systems and their purpose of use. Biljecki (2015) reviewed the present status and statistics of the application of 3D city models and categorized the usage and applications of 3D city models into 29 use cases [7]. From the statistics mentioned in this research, it is clear and evident that 3D modeling has become a trending research topic in most journals. Combining 3D modeling and geographic information systems can provide a wide range of applications, such as machine learning, restoration works, shadow analysis, urban planning and landscape planning, and enables users to deal with the limitations and restrictions of 2D GIS.
Al-Hanbali and Rawashdeh (2006) discussed the workflow used for generating 3D models for different use cases with variant levels of detail (LODs) to show the effective utility of 3D GIS modeling using photogrammetry techniques. A digital terrain model (DTM) is used to generate the orthophoto and extract the buildings' footprints. The 3D scene is generated using ArcScene and SOCET SET software [8]. Piccoli (2013) reconstructed the ancient city of Koroneia using a procedural modeling technique, combining all the available data to visualize Koroneia's development over time.
The main aim of the 3D visualization model was to test the reconstruction hypotheses on the ancient city layout in the virtual environment. CityEngine software was used, and all of its functionalities were evaluated.
Another, reversed, method could be implemented for reconstructing the 3D building model depending on the types of data available, such as aerial photography, LiDAR, or vector GIS data such as building footprints [9]. In general, LiDAR is better suited for smaller, more accurate surveys, and photogrammetry is best suited for large-scale, human-readable surveys.
Gavankar and Ghosh (2018) proposed a novel morphological-based automatic approach for extracting building footprints from high-resolution satellite imagery using a morphological top-hat filter and the K-means algorithm to extract buildings with bright and dark rooftops. Furthermore, the extracted bright and dark rooftop building segments were combined to obtain the final output containing the extracted building segments. The weakness of the approach is that some non-building objects have a reflection factor equal to that of buildings and are extracted as false detections [10]. Kahraman and Abdul-Rahman (2013) developed a web-based 3D informative system and investigated many functionalities in this system, such as identifying, querying and analyzing functions. The data sources used in the creation process were CAD files with 2D features, in addition to 3D models generated manually using SketchUp software. The 3D city model components are converted to CityServer3D [11]. Kale and Al-Donus (2019) used aerial photogrammetry to generate a 3D model of a mosque using PIX4D software. Study area data were obtained using an unmanned aerial vehicle (UAV) [12]. Jovanovic (2020) presented the process of developing a 3D model for the campus area of Novi-Sad University using lidar survey data. He followed the CityGML database standards and applied a noise map application to the generated model [13].
Girindran and Robinson (2020) developed a 3D model for two different case studies, Shanghai in China and Nottingham in the UK, from open data. The data used in this study were OSM, the AW3D DSM and a DTM. These two cases were chosen because they are contradictory in their topographical and urban morphologies [14]. Moser and Kosar (2010) highlighted the wide range of applications of 3D analyses and the benefit of integrating 3D city models in the working processes of communities. Four realistic case studies are simulated to describe possible 3D analysis functions that are likely to be applied in 3D city modeling. The analysis functions mentioned in these case studies are categorized as "proximity", "spread analyses", "3D density" and "visibility analysis" [15]. The analysis functions are used to support decisions for the development of virtual city models. One recently proposed procedural technique is computer-generated architecture (CGA). CGA can generate visually convincing buildings. This technique saves time and cost, as the generation of geometry is done in an imperative programming language. Geometry is created by calling procedures on data; loops and conditionals are allowed. Like any other programming language, the attribute data are declared as parameters. The customization of any generated model can be done easily, with lower time consumption than creating it from scratch. Despite all these advantages of procedural modeling, its interface needs improvement, as the textual grammar interface is not well suited for users who are not comfortable with scripting [16]. Artificial intelligence GIS (GeoAI) technology is currently an important research area because it combines AI technology with spatial functions, including spatial data processing and analysis. It is a generic term for a series of interoperable technologies for artificial intelligence and geographic information systems.
Recently, GeoAI has gradually become the main focus of geoscience research and application [17].
Before deciding on the method applied to create the 3D city model in this study, some significant factors must be taken into consideration, such as the geographic area, map scale, required accuracy and quality, and the purpose of the resulting 3D map. Data availability is a major obstacle, especially in developing countries, because acquiring data requires a large budget. Many techniques greatly simplify the creation of architecture. Therefore, the main objectives of this research are to:
- Apply different techniques in the creation process of the 3D city model for the study area based on satellite imagery and GIS data.
- Investigate different scenarios of data availability, evaluate the input spatial data obtained from the satellite imagery and the DEM, and compare them with accurate geodetic data resulting from field observations to be used in generating a 3D city model.
- Study the viability of deep learning algorithms as an effective alternative to traditional techniques for the automatic extraction of the study area features and discuss their accuracy, especially for large-scale areas such as cities.
- Validate the generated 3D city model by applying several applications, such as publishing the 3D model to the cloud for use in a web application and applying a flood-impact scenario to the study area to determine the impacted buildings.

Study Area (Mansoura University Campus)
The study area of this research is the Mansoura University campus, which is located in the western part of Mansoura city, Egypt (Figures 1 and 2). Mansoura city is the capital of Dakahlia Governorate, with a total area of 25 km² and an average elevation of 15 m above sea level. It is located 111.7 km northeast of Cairo, Egypt, and lies on the east bank of the Damietta Nile branch within the Delta region. The study area (Mansoura University campus) has a latitude of 31° 2' 31.329"N and a longitude of 31° 21' 28.2888"E according to the World Geodetic System (WGS1984) and lies in UTM grid zone 36 N. The Mansoura University campus was selected as a small prototype for a 3D city model with a total area of 253 acres (1.012 km²). The study area has several types of buildings with various land uses, different heights and different ground levels. It contains 19 faculties, four hospitals and approximately 11 medical centers.

Methodology of the Study
The 3D city model needs to be generated prudently so that applications built on it in the future can operate properly. In some cases, traditional geodetic observations are not suitable for generating 3D models, especially for large-scale areas such as cities. Therefore, alternative methods must be investigated to collect the data used in the creation process. In this study, a 3D model is generated at LOD2 based on the digital elevation model (DEM), building footprints extracted from high-resolution satellite imagery and building heights. To evaluate the accuracy of the DEM and the 2D features extracted from the satellite imagery, a comparison with geodetic observations (using a total station instrument) was performed. The modeling process then starts after comparing the data and calculating their accuracy and efficiency for generating a 3D city model. Comparing the accuracy of the input data is significant in determining the applicable scenario applications based on the 3D city model. All the data used in this study are freely available on the internet, to overcome the problem of data availability in developing countries. This study is divided into four main approaches: 1- data acquisition and evaluation, 2- data processing and storage in a geodatabase, 3- the modeling process, and 4- applications of the generated model.

Terrain
The DEM is compared with the grid levelling observations to evaluate its efficiency and to determine whether it meets the accuracy required by the applications that will operate on the generated model in the future.

2D Features
To evaluate the accuracy of the data used, two techniques are applied to obtain the 2D geometries, and both are compared with traditional field observations obtained using a total station instrument.
The two techniques used are:
- Manual digitizing from satellite imagery;
- Deep learning algorithms to automatically extract the features.
The software used in this approach is:
- Global Mapper, used to download the satellite imagery;
- QGIS and ArcGIS Pro, used to manually digitize the satellite imagery and for the spatial functionalities;
- The TensorFlow and Keras frameworks (Python deep learning APIs), used to deploy the satellite classification deep learning model.

Buildings' Height
Building heights are calculated using an equation combining the ground floor height, the typical floor height and the number of floors.

Data Processing and Storage as a Geodatabase
The spatial data are stored, and the non-spatial attributes are assigned to the geometries, in a geodatabase; the software used in this stage is the ArcGIS geodatabase.

Modeling Process
Procedural modeling techniques are used in the creation process. In this study, CityEngine software is used, with written shape grammar rule files assigned to the geometries. The 3D model generated using CityEngine can be converted into other formats using the Feature Manipulation Engine (FME). Thus, the creation process in this study is done in CityEngine, as the generated model is compatible with other environments.

Applications of the Generated Model
Since the resulting model proposes an interactive visual support system for decision-makers, two applications are simulated: a flood impact analysis scenario and indoor mapping. Figure 3 shows the overall methodology and steps, starting from data preparation, data processing and spatial analysis functionality until a realistic 3D city model is created.
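A flood-impact query of this kind reduces to comparing each building's ground elevation with a simulated flood level. The sketch below is illustrative only; the attribute names (e.g. "ground_level") are assumptions, not the study's actual geodatabase schema.

```python
# Sketch of a flood-impact selection over building attributes.
# Attribute names and values are hypothetical.

def impacted_buildings(buildings, flood_level_m):
    """Return names of buildings whose ground level lies below the flood level."""
    return [b["name"] for b in buildings if b["ground_level"] < flood_level_m]

campus = [
    {"name": "Faculty of Engineering", "ground_level": 14.2},
    {"name": "Main Hospital",          "ground_level": 16.1},
    {"name": "Faculty of Science",     "ground_level": 15.0},
]

print(impacted_buildings(campus, flood_level_m=15.0))  # -> ['Faculty of Engineering']
```

In a real workflow this comparison would be run as a spatial query against the 3D scene's attribute table rather than a Python list.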

Data Acquisition
To generate a 3D city model, it is necessary to have the terrain model, 2D spatial data and feature elevations; these are the minimum requirements to build the 3D city model. Two approaches are applied to get the terrain data. First, grid levelling was set out for the study area using a LEICA digital level instrument model (DNA03v) with an accuracy of 2 mm. Second, the digital elevation model (SRTM 12.5 m DEM) was downloaded. The 2D data are obtained from various sources. One of them uses a total station to obtain the coordinates of building boundary points, and another data source is manual digitization from downloaded satellite imagery with a resolution of 30 cm. The total station used in the survey works is a SOKKIA CX-103, which measures angles with an accuracy of 3 arc-seconds and measures distances up to 1,500 meters with an accuracy of 3 mm ± 3 ppm. The final method used to obtain the 2D data is computer vision for automatic building extraction. The above data are the essential requirements for building a 3D city model.

Terrain of the Study Area
There are several techniques for obtaining the detailed terrain of the study area, as follows:

a) Using Digital Elevation Model (DEM)
The digital elevation model (DEM) is a digital representation of the terrain surface using densely distributed points that carry detailed information about the terrain characteristics. The NASA SRTM digital elevation model was downloaded from the NASA dataset (site: https://earthdata.nasa.gov/earth-observation-data). The downloaded raster is 56 columns by 32 rows, covering the whole area. Each cell carries data such as longitude, latitude, altitude, land use category and spectral value. The terrain model obtained is represented in a 3D topographic map in Figure 4. Spectral values are used in satellite and aerial images and stored in the raster dataset to represent the light reflection.

b) Using Grid Levelling
Grid levelling is the determination of points' altitudes on the Earth's surface relative to mean sea level (MSL). The grid is plotted according to the map scale, the required accuracy and the terrain condition. The grid levelling is done to provide accurate reference data against which the freely available internet data are compared.
The study area is divided into squares, taking a grid node every 30 meters, and the reduced level is measured for each point. After applying the necessary linear misclosure corrections, the altitude of every point is obtained. The data are then exported to ArcGIS software and the contour map is drawn with a suitable contour interval. The measured points with (X, Y, Z) represent the topology of the area, but they do not cover the whole terrain surface. To convert them to a surface, the altitude of each point on the surface must be known, so the points are interpolated using one of the surface interpolation methods to generate an elevation surface from the set of point data. After the interpolation, the terrain surface is created.
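One common surface interpolation method is inverse distance weighting (IDW). A minimal sketch under the assumption of (X, Y, Z) grid-levelling samples (the sample values below are illustrative, not the study's measurements):

```python
import math

def idw(points, x, y, power=2):
    """Inverse-distance-weighted elevation at (x, y) from (px, py, z) samples."""
    num = den = 0.0
    for px, py, z in points:
        d = math.hypot(x - px, y - py)
        if d == 0:
            return z  # query point coincides with a sample point
        w = 1.0 / d ** power
        num += w * z
        den += w
    return num / den

# 30 m grid-levelling samples (illustrative reduced levels in metres)
grid = [(0, 0, 15.0), (30, 0, 15.4), (0, 30, 14.8), (30, 30, 15.2)]
print(round(idw(grid, 15, 15), 2))  # elevation interpolated at the cell centre
```

GIS packages such as ArcGIS expose IDW (and alternatives such as kriging or natural neighbor) as ready-made surface interpolation tools; the snippet only illustrates the underlying weighting.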

a) Using Total Station
A total station is used to establish control networks (traverse networks). Traverse networks have been established to place survey stations over the study area, which are then used as a base for observing the buildings' footprint points. After finishing the traverse works, the angular and linear closures are calculated and distributed with the least squares error method. Figure 5 shows the surveying works for landscape areas and buildings' footprints related to the established control network at a scale of 1:2000.

b) Using Manual Digitizing
The features are manually digitized from the downloaded high-resolution imagery of 0.3 m (HRI). The georeferencing and the digitizing process were done using ArcMap software. The study area includes various urban features, such as roads, vegetation and buildings. The digitized buildings are categorized according to the land use purpose (Figure 6). Manual digitizing accuracy does not meet our requirements in some cases, as some buildings are not clearly classified: specialists may interpret a feature as one complex building, while others interpret it as separate but overlapping buildings.

c) Using Deep Learning
The massive production of satellite and aerial imagery and drone maps makes them difficult to analyze with traditional human work [18]. Significant progress has been observed in recent years in the field of artificial intelligence (AI). AI is the ability of a computer to carry out functions and tasks that demand the same level of intelligence as humans. Deep learning is an approach in AI that applies computer vision to the input data, which improves the accuracy of automatic feature extraction from remote sensing images; it meets or transcends human accuracy in some cases. In recent years, significant progress has been made in automatic feature extraction, especially where GIS meets deep learning. Automatic extraction of land features such as roads, buildings and landscape areas is very useful, especially for developing countries that do not have up-to-date data or a budget for accurate real-life data. Moreover, it is a difficult mission to obtain field data for areas where natural disasters occur. Deep learning therefore solves many obstacles that existed before. Using deep learning to extract features from satellite images saves the effort, time and cost spent on manual work, but the accuracy is a questionable point [19,20]. In our use case, buildings and landscape areas are extracted and their geometries found using object instance segmentation, a new approach developed to overcome previously faced challenges. It combines the object detection task, which extracts features with bounding boxes, and the semantic segmentation task, which classifies each pixel according to categorized classes.
In this study, a deep learning model is generated to extract the buildings' footprints from multiband satellite imagery. The Mask R-CNN model architecture is implemented to detect the features in the image and generate a highly accurate segmentation mask for each instance. Mask R-CNN has two stages (Figure 7), starting from the input image until the bounding boxes, classes and masks for each similarly categorized object are produced. In the first stage, the feature map is scanned and proposals are generated about the regions that may contain objects. In the second stage, another neural network takes the regions proposed by the first stage, assigns them to several specific areas of a feature map, predicts the class and bounding box of the detected object, and generates a pixel-level mask for each object based on the first-stage proposal. Some steps must be completed before generating the model. First, the training data are exported and prepared for deep learning. Second, a deep learning instance segmentation model is trained using a pre-defined class schema. In the generated model, features are classified as buildings, roads and landscape areas. Figure 8 shows the classification results for the map of the studied area. Finally, in the model deployment step, the detection process is performed. The generated model is generic and can be applied to areas that have the same style as Egyptian buildings, but not European or American styles. The model is expected to work on 8-bit, three-band high-resolution imagery of 10-55 cm.
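When evaluating an instance segmentation model such as Mask R-CNN, a predicted mask is typically counted as a correct detection when its intersection-over-union (IoU) with a ground-truth mask exceeds a threshold. A simplified sketch of this matching logic (an illustration of the general technique, not the study's actual evaluation code):

```python
def mask_iou(a, b):
    """IoU between two binary masks given as sets of (row, col) pixels."""
    union = len(a | b)
    return len(a & b) / union if union else 0.0

def match_instances(preds, truths, thresh=0.5):
    """Count TP/FP/FN by greedily matching predicted masks to ground truth."""
    unmatched = list(truths)
    tp = 0
    for p in preds:
        best = max(unmatched, key=lambda t: mask_iou(p, t), default=None)
        if best is not None and mask_iou(p, best) >= thresh:
            unmatched.remove(best)  # each ground-truth mask matches once
            tp += 1
    fp = len(preds) - tp   # predictions with no matching ground truth
    fn = len(unmatched)    # ground-truth objects the model missed
    return tp, fp, fn

# Toy pixel-set example: one correct detection, one false alarm, one miss
truth = [{(0, 0), (0, 1)}, {(5, 5)}]
pred = [{(0, 0), (0, 1)}, {(9, 9)}]
print(match_instances(pred, truth))  # -> (1, 1, 1)
```

The TP/FP/FN counts produced this way feed directly into the precision, recall and F1 metrics reported later in the paper.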

Building's Height
Buildings' elevation is calculated based on the number of floors of the building. The elevation is calculated through this equation:

Bld_Height = GFl_Height + Fl_Height × (N − 1)   (1)

where Bld_Height is the height of the building, GFl_Height is the building's ground floor height, Fl_Height is the building's typical floor height, and N is the number of floors. The campus geodatabase is generated and the calculated 'Bld_Height' value is assigned to the elevation attribute. Additional attributes are also created, such as the university name and the category of each faculty, i.e., whether it is a theoretical or practical faculty or related to medical facilities, etc.
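A minimal sketch of this height calculation; the 4 m ground floor and 3 m typical floor heights are illustrative assumptions, not the study's measured storey heights:

```python
def building_height(n_floors, ground_floor_h=4.0, floor_h=3.0):
    """Bld_Height = GFl_Height + Fl_Height * (N - 1); all heights in metres.
    Default storey heights are illustrative assumptions."""
    return ground_floor_h + floor_h * (n_floors - 1)

print(building_height(5))  # -> 16.0 (one 4 m ground floor + four 3 m floors)
```

In the geodatabase, this value would be computed per building from its floor-count attribute and written to the elevation field used for extrusion.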

Generation of 3D Model
There are two techniques used in the creation of the 3D city model: 1- procedural modeling and 2- manual modeling.
(Figure 7 legend: C = convolutional layer, P = pooling layer.)

Procedural Modeling
Procedural modeling is an umbrella term for a number of techniques in computer graphics that create 3D models using algorithms and sets of shape grammar rules [21]. One approach emerges when the term "procedural" is taken literally, as in "procedural programming": the generation of geometry is done in an imperative programming language [22,23]. Geometry is created by calling procedures on data; loops and conditionals are allowed. One example of such an approach is the Generative Modeling Language (GML).
The main idea is to assist content creation by providing ways to automate common modeling steps. These ways are known as procedural modeling techniques. Procedural modeling uses CGA (computer-generated architecture) shape grammar files to programmatically generate the city with high, realistic visual quality and geometric detail, with spatial queries to help with visual and spatial analysis. Procedural modeling saves time and cost, as the rule file can be applied to different geometries. As a result of the parametric attributes and conditions in the rule file, the created model is easy to update and edit dynamically by changing the model parameters. It is also reusable for other applications that have the same features and theme. The 3D model generated in CityEngine is flexible and adaptable to other 3D model formats. It can use static SketchUp and Revit models and accepts many other formats, such as kmz, dxf, dae, obj, osm, shp, gdb, kml, etc. It is very flexible with other environments. After generating the model, it can be exported to game engine software such as Unreal Engine or Unity for more realism, or uploaded to other platforms like Google Earth.

Manual Modeling
Manual modeling is the creation of a three-dimensional object using polygonal tools, representing the object surface with polygon meshes, which is suitable for real-time computer graphics. The polygonal shape creation toolset is the core of manual modeling: polygon shapes are created and extruded, and their scales and orientations are edited to generate the model, after which textures are applied. A lot of software can be used to generate a 3D model manually, but it takes a long production time and is not adaptable to changes like procedural modeling.
The process of generating the city model starts after the completion of the geodetic database [24]. There is much software that can help produce 3D city models, such as Cesium, Blender and CityEngine (CE). CityEngine software is used in this study, where the 2D GIS data are transformed into accurate, visualized 3D features in the generated model, as it depends on three main inputs: feature geometry, feature attributes and the pre-defined rules [26]. There are two methods to create the 3D city model: the first is manual modeling, and the second is procedural modeling, which is used in this study.
As mentioned earlier, CGA is the best method for 3D city model generation. Therefore, CGA rule files are used, as they are a significant part of the 3D creation process. The terrain model, 2D GIS data and CGA rules are the major elements for producing the 3D city model. The creation process workflow is shown in Figure 9, starting with the terrain model and 2D spatial data until a realistic model meeting the requirements is generated. Aligning the terrain to buildings' footprints and streets gives realistic 3D visualizations.

Generation of 3D Block Model of Building
To simplify the process of generating the 3D buildings, each building is divided into several shapes, and shape operations are applied to each of them to obtain the final required model. There are some basic operations for generating the 3D model, such as extrude, comp, split and texture. The model hierarchy followed to generate the building model after dividing it into shapes is shown in Figure 10:

Figure 10. The functionalities used to generate building-block
To apply the previous hierarchy, a very important concept in 3D modeling must be known: the level of detail (LOD) [19]. The LOD shows the proximity and usability of the model in the real world and facilitates efficient visualization and data analysis for the building model. LOD focuses primarily on the geometry of buildings; geometric accuracy increases as the level of detail increases. The first level of detail (LOD0) defines the 2D coordinate information represented as the footprint of the buildings. The second level of detail (LOD1) defines blocks of the building model with a flat roof structure and without any textures (Figure 11a). The third level of detail (LOD2), including doors and windows for the exterior, is the most similar to reality (Figure 12a). After the un-textured 3D blocks are created using the extrude operation (Figure 12), every floor is split into walls, windows, beams and doors. Finally, the texturing operation is applied. Texturing with the shape grammar consists of three functions: first, setupProjection(), which defines the UV coordinate space; second, set(material.map), which sets a texture file; and third, projectUV(), which applies the UV coordinates. After applying this operation, a realistic 3D model similar to reality is created (Figure 13). All textures used were taken on-site.

WEB-GIS
Web GIS is an architectural technique for implementing present-day GIS. It is powered by web services that deliver data and communicate between components such as the GIS server and the client side, whether web, desktop or mobile. Web GIS is not new; it has been developing for a long time, but it has now reached a tipping point where innovation in GIS and related technologies has made it not only possible but also essential [25]. After generating the 3D model, it can be shared on the cloud and published. The model is then sharable and can be accessed easily via its link on the cloud. Many other capabilities can be used; besides visualization, queries and analysis are further functionalities that can be applied to the model. The main objective of this technology is to allow users to dynamically interact with, access, share and process geospatial data on the web regardless of the platform, the protocol used, or whether the user is an expert or not.

Publishing Online
After the process of generating the model is finished, the 3D model, with the slpk (scene layer package) extension or as a web scene, can be uploaded to one of the GIS platforms, namely ArcGIS Online (scene viewer). The published scene allows the user to dynamically interact with and identify any feature; the feature's information stored in the geodatabase is displayed when the feature is identified. The published scene can be embedded in any web application, with the scene view included through the platform's API.

Data Analysis
The purpose of this section is to present the results and accuracy analysis of the data obtained from the various techniques; the following analysis and results are drawn.

Terrain
The interpolated terrain surface from the grid levelling is subtracted from the DEM. Figure 14 shows the calculated difference for each cell. The differences range from a minimum value of zero m, which means that the altitudes from the DEM and the grid levelling for that cell are equal, up to a maximum difference of 0.85 m. After removing the outlier values, it is found that 33% of the cells have differences under 0.2 m, 49% under 0.4 m, 60% under 0.65 m and 90% under 0.85 m (Figure 15).
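The per-cell differencing and the share of cells under each threshold can be sketched as follows; the elevation values below are illustrative, not the study's measurements:

```python
def diff_stats(dem, levelling, thresholds=(0.2, 0.4, 0.65, 0.85)):
    """Per-cell |DEM - levelling| differences and the share below each threshold."""
    diffs = [abs(a - b) for a, b in zip(dem, levelling)]
    share = {t: sum(d < t for d in diffs) / len(diffs) for t in thresholds}
    return diffs, share

# Illustrative co-registered cell elevations in metres
dem       = [15.1, 15.6, 14.9, 15.8, 15.3]
levelling = [15.0, 15.3, 14.8, 15.1, 15.2]
diffs, share = diff_stats(dem, levelling)
print([round(d, 2) for d in diffs])  # absolute per-cell differences
```

In practice this subtraction is done raster-to-raster in GIS software, with outlier cells removed before the percentages are computed.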

Manual Digitizing vs. Field Survey Measurements
To evaluate the accuracy of the manually digitized vector data obtained from the satellite image against the building footprints from the site survey measurements, the georeferencing function is used. The root mean square error (RMSE), shown in Equation 2, is used as the accuracy measure:

RMSE = √(Σ dᵢ² / n)   (2)

where d is the difference between the site survey measurements and the satellite imagery, and n is the number of measurements. The RMSE value of the objects produced from the satellite imagery, when georeferenced to the field survey measurements, was approximately 3.7 m.
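A minimal sketch of this RMSE computation, with illustrative coordinate values rather than the study's survey data:

```python
import math

def rmse(site, imagery):
    """Root-mean-square error between field-survey and digitized measurements."""
    diffs = [s - m for s, m in zip(site, imagery)]
    return math.sqrt(sum(d * d for d in diffs) / len(diffs))

# Illustrative 1D offsets (metres) between surveyed and digitized points
print(round(rmse([100.0, 200.0, 300.0], [103.0, 196.0, 305.0]), 2))  # -> 4.08
```

For 2D footprints the per-point difference d would itself be the planimetric distance between the surveyed and digitized positions.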

Deep Learning Metrics
The results of the satellite imagery classification are obtained with the following model specifications. The dataset is divided into an 80% training set and a 20% validation set. The calculated precision, recall and F1 score are the metric values of the trained model. Precision represents the proportion of predicted instances that are correct, recall represents the proportion of ground-truth instances that are correctly detected, and the F1 score is the harmonic mean of precision and recall. The equations for these metrics are provided below in Equations 3 to 5:

Precision = TP / (TP + FP)   (3)
Recall = TP / (TP + FN)   (4)
F1 = 2 × (Precision × Recall) / (Precision + Recall)   (5)

where TP is true positive, FP is false positive, and FN is false negative. Whether an instance is true or false, negative or positive, is determined according to the following rule.
- Retrieved data: positive || not retrieved: negative
- Relevant data: true || irrelevant data: false
Table 1 shows the per-class metrics of the model (after 8 epochs). Figure 16 shows the labelled classes detected by the model for each object according to the pre-defined classes. Figure 17 shows the ground truth and the predictions after deployment of the model; the output of the trained model is a bounding box and an accurate instance mask for each detected object. Figure 18 shows the loss function diagram for both the training and validation sets, as the loss function represents how well the model performs on these two sets. Unlike accuracy, loss is not a percentage; it is a summation of the errors that occurred in the training or validation set.
From the previous results, the features manually digitized from high-resolution satellite imagery, when georeferenced to those from the geodetic field work, have a total root mean square error (RMSE) of 3.7 m, as there is a displacement between the building footprints (Figure 19) because the digitized footprint is drawn at the top of the building rather than at its base. Comparing the automatically extracted building footprints and those from manual digitizing with the buildings drawn from the survey work, the distortion for the digitized buildings is about 4.6% (Table 2). It was caused by the oblique capturing angle of the satellite and by human error; in addition, the resolution of the satellite image left some edges not clear enough to be digitized accurately. Comparing the buildings extracted using deep learning with those from the survey work, the distortion is about 17% (Table 2). The main reason for the large distortion percentage in the deep learning results is the shadow of the buildings, which makes the model recognize the shadow as part of some buildings.
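Equations 3 to 5 translate directly into code. The sketch below uses hypothetical TP/FP/FN counts for a single class; the study's actual confusion counts are not reproduced here.

```python
def precision(tp, fp):
    """Fraction of predicted instances that are correct (Equation 3)."""
    return tp / (tp + fp)

def recall(tp, fn):
    """Fraction of relevant instances that were detected (Equation 4)."""
    return tp / (tp + fn)

def f1_score(tp, fp, fn):
    """Harmonic mean of precision and recall (Equation 5)."""
    p, r = precision(tp, fp), recall(tp, fn)
    return 2 * p * r / (p + r)

# Hypothetical counts for one class.
tp, fp, fn = 78, 22, 30
print(round(precision(tp, fp), 2),
      round(recall(tp, fn), 2),
      round(f1_score(tp, fp, fn), 3))
```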
A shadow can also be recognized separately as part of the road, because its colour is similar to the colour of the asphalt. The resolution of the satellite imagery also plays a fundamental role in the magnitude of the distortion rate: the higher the resolution of the imagery, the lower the distortion rate, and vice versa. Together, the shadows and the imagery resolution changed the geometry of the building footprints.

Applications of 3D City Model
The application areas of the 3D city model vary according to the model, the accuracy of the data sources, the database systems, and the purpose of use. The following applications are presented:

Development of New Cities
In recent years, the Egyptian government has tended to establish many new cities in order to increase the urbanized share of the country from 6% to 12% and accommodate the annual population increase until 2030. Designing these cities requires considerable time and effort; however, creating a model using the procedural modeling method takes about a quarter of the time estimated for traditional modeling methods. A CGA rule file is applied to the geometries of the 2D features of the city of New Mansoura, which is divided into four stages, each with its own theme. Since all buildings in the first stage share the same characteristics, a single rule file is created for all of them. The creation of the first stage takes just 70 seconds to be fully rendered (Figure 20). Modifying or recreating the generated model is easy: if any change occurs to the design parameters, one simply modifies that property in the rule file and re-creates the model in seconds.
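The extrusion step at the heart of such a rule can be illustrated outside CityEngine. This minimal Python sketch assumes a building footprint given as a list of (x, y) vertices and a height attribute, and generates the 3D vertices of the extruded block; it is an illustration of the principle, not the study's CGA rule file.

```python
def extrude(footprint, height):
    """Extrude a 2D footprint (list of (x, y) tuples) into a 3D block:
    base vertices at z = 0 and roof vertices at z = height."""
    base = [(x, y, 0.0) for x, y in footprint]
    roof = [(x, y, float(height)) for x, y in footprint]
    return base + roof

# Hypothetical rectangular footprint (metres) with a 12 m building height.
block = extrude([(0, 0), (20, 0), (20, 10), (0, 10)], height=12)
print(len(block))  # 8 vertices: 4 base + 4 roof
```

In CityEngine the same parameter-driven idea applies: changing the height attribute and re-running the rule regenerates the model.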

Flood Simulation
Flood impact analysis is a considerable issue that governments and municipalities seek to understand, working to model possible scenarios in advance. Providing such a model ahead of time enables an effective response to a flood event. Preparing proactive data in a systematic way gives a clearer picture of the extent of the flood, its impact on the infrastructure, and the changes that may be required in emergency response plans. The flood simulation model is generated using the contour map of the area of interest. The contour map is converted to a raster surface, and the raster is then converted to polygons; all the operations are carried out with a geoprocessing tool created in ArcGIS ModelBuilder (Figure 21). After the flood shapefile is created (Figure 22), it is imported into CityEngine to build the flood model.
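The paper's geoprocessing chain runs through ArcGIS ModelBuilder; as a simplified, tool-independent illustration, the "bathtub" sketch below marks every cell of a hypothetical elevation grid that lies below a chosen water level as flooded.

```python
import numpy as np

def flood_mask(elevation, water_level):
    """Boolean mask of flooded cells: True where the terrain lies below
    the given water level (a simple 'bathtub' model that ignores
    hydraulic connectivity)."""
    return elevation < water_level

# Hypothetical elevation grid in metres above mean sea level.
elevation = np.array([[13.0, 14.5],
                      [12.2, 15.1]])
print(flood_mask(elevation, water_level=14.0))
```

The flooded cells would then be vectorized to polygons, analogous to the raster-to-polygon step in the ModelBuilder tool.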

Application Analysis
For the flood simulation application, Figure 23 shows the buildings affected when the mean sea level is 14 m. As the water level variable changes, the simulated model adapts: we can increase or decrease the mean sea level (MSL) attribute and check how many buildings are affected.

Figure 23. Number of affected buildings at various MSL (Mean Sea Level) values
Validating the model using the CGA shape grammar file is straightforward: a short piece of code defines the required validation condition. A few lines of code display the buildings affected at a mean sea level of 14 m. Buildings affected by the flood impact are highlighted in red, and unaffected buildings are highlighted in yellow (Figure 24).
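Since the CGA listing itself is not reproduced in this excerpt, the validation condition can be sketched as a Python analogue: each building is coloured red or yellow by comparing a hypothetical base-elevation attribute with the MSL parameter.

```python
def classify_buildings(buildings, msl):
    """Colour each building by the flood validation condition: red if its
    base elevation is below the mean sea level, yellow otherwise.
    `buildings` maps a building id to its base elevation in metres."""
    return {bid: ("red" if elev < msl else "yellow")
            for bid, elev in buildings.items()}

# Hypothetical base elevations (metres) for three campus buildings.
colours = classify_buildings({"B1": 12.5, "B2": 15.0, "B3": 13.9}, msl=14.0)
print(colours)
```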

Indoor Mapping
Internal building models are also created by rule-driven generation. After the external building models are produced, each building is divided into floors according to the number-of-floors and floor-height attribute fields. Each floor is divided into blocks according to the type of labs in that block. As Figure 25 shows, identifying an element in the internal model displays the floor number, room number, and room use of that element. More data can be stored in the database to provide further spatial query tasks.
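The rule-driven subdivision described above can be mirrored as a simple data structure. This sketch, assuming hypothetical attribute values, splits a building into floors from the number-of-floors and floor-height attributes and attaches room records that a spatial query could later filter.

```python
def build_floors(no_of_floors, floor_height):
    """Divide a building into floors; each floor records its number and
    the elevation range it occupies, with an initially empty room list."""
    return [{"floor_no": i,
             "base_z": i * floor_height,
             "top_z": (i + 1) * floor_height,
             "rooms": []}
            for i in range(no_of_floors)]

# Hypothetical attributes: 3 floors, 3.5 m floor height.
floors = build_floors(no_of_floors=3, floor_height=3.5)
floors[1]["rooms"].append({"room_no": "101", "use": "lab"})
print(len(floors), floors[1]["base_z"])
```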

Conclusions
This study presented different techniques for generating and analyzing a 3D city model based on GIS spatial data and high-resolution satellite imagery. Procedural modeling was applied to the 2D digital map to create a realistic 3D model of the Mansoura University campus, Egypt. Based on the results obtained, the following conclusions can be drawn:
- The 3D city model is generated using the procedural modeling approach integrated with GIS spatial data in Esri CityEngine, where the written rule file is assigned to the database features of the study area. Generating the 3D city model consumed 20% of the time required by the traditional modeling process. The procedural modeling approach is well suited to generating large-scale data, as it speeds up the modeling process and, since the 3D creation process is written in a rule file, the model is reusable and easy to update.
- The strength of Esri CityEngine is its strong connection with GIS data, but it also has a weak point: exported files, such as OBJ and Collada (DAE) files, are very large. This may be an obstacle if the model is updated in real time, especially for a large-scale area containing many details.
- Multiple scenarios can be applied to the generated model as a visual support tool for analysis, such as flood impact analysis or validating the maximum building height in areas near airports. The generated city model is published to the cloud and used for online visualization and analysis by non-specialists, or for on-site navigation.
- According to the results of the trained model, which used the Mask R-CNN algorithm, the model is good at landscape-area detection (precision 0.89), has moderate accuracy in detecting building footprints (precision 0.78), and is weaker at detecting streets (precision 0.62).
- The distortion in the buildings' area is 4% for manual digitizing and 17% for the same features when using deep learning. Analysis of this distortion shows that lower-resolution satellite imagery causes a higher distortion percentage. Building shadows cause the model to detect them as part of map features; they are therefore the main reason for the large distortion percentage when deep learning is used to detect map features automatically.

Data Availability Statement
All data, models, or code that support the findings of this study are available from the corresponding author upon reasonable request.