Even for small area collections, lidar data are broken up into tiles, much like orthoimagery, to reduce the computational burden of handling the data. Not surprisingly, visualizing and processing these datasets can be difficult. Geographic Information System (GIS) and remote sensing software packages, though, continue to improve at integrating these datasets with other geospatial information. Because of these challenges, conversion from point cloud to raster format remains a common way to simplify the data into a more usable format (also, most analysis tools process raster data rather than raw point clouds).
Point clouds, without any processing, provide powerful visualizations that enable geospatial professionals and nonexperts alike to view and better understand the 3D layout of the built‐up environment in an urban space. For example, Figure 2.2 shows both the raw point cloud data for downtown Austin, Texas in 2015 as well as a simplified version consisting of extruded buildings derived from 2006 lidar data. These visualizations and underlying datasets enable analyses related to urban planning such as solar radiation/interaction (Yu et al. 2009), potential for solar panel placement on building rooftops (Lukac et al. 2014), and more. Other researchers use the point cloud data directly to algorithmically detect and characterize specific built‐up shapes (Dorninger and Pfeifer 2008; Golovinskiy et al. 2009; Babahajiani et al. 2015). This type of analysis remains difficult in terms of algorithm development and computational needs. Further, for change detection, point cloud comparison analyses are becoming more commonly supported by open‐source software such as CloudCompare. Even in these cases, though, point cloud data are often transformed into 3D mesh models (such as triangular irregular networks, or TINs, in GIS) before analysis; a minimal version of this step is sketched below. Related to this and other approaches, Figure 2.3 summarizes the common lidar data workflows, data products, and eventual analyses conducted within urban remote sensing. Notice that when analyzing the point cloud directly, point cloud filtering is often still required.
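As a minimal illustration of the point‐cloud‐to‐mesh step, the sketch below builds a 2.5D TIN by Delaunay‐triangulating the horizontal (x, y) coordinates of lidar points with SciPy; the point array here is synthetic, standing in for real lidar returns.

```python
# Minimal sketch: build a 2.5D TIN (triangulated irregular network) from
# lidar points by triangulating their x/y coordinates; z values ride along
# as per-vertex heights. The point array is a synthetic placeholder.
import numpy as np
from scipy.spatial import Delaunay

rng = np.random.default_rng(0)
points = rng.uniform(0, 100, size=(500, 3))  # columns: x, y, z (meters)

tin = Delaunay(points[:, :2])        # triangulate in the horizontal plane
triangles = points[tin.simplices]    # shape: (n_triangles, 3 vertices, xyz)
print(f"{len(tin.simplices)} triangles from {len(points)} points")
```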
FIGURE 2.2 3D lidar‐derived visualizations of downtown Austin, Texas looking northwest using raw point cloud data from 2015 (a), and extruded building footprints from 2006 (b).
Point cloud filtering is a process whereby all individual points within the point cloud are assigned to a class to better differentiate the point data (Shan and Toth 2018). The basic approach assigns points to either ground or nonground classes using a filtering algorithm based on trends in point heights (Axelsson 1999). Return number for individual points can also be utilized to aid this filtering effort. LAS file specifications allow points to be assigned to many other classes (e.g. high vegetation, building, low point noise, etc.) through more nuanced algorithms and/or manual efforts. Once filtered and assigned class designations, points can be analyzed directly as discussed above (upper‐right of Figure 2.3) or further processed to create raster Digital Elevation Models (DEMs).
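As a concrete example, the sketch below separates ground from nonground points in a classified LAS file using the Python laspy library and the standard ASPRS class codes (2 = ground, 5 = high vegetation, 6 = building); the input file name is hypothetical.

```python
# Minimal sketch, assuming a classified LAS/LAZ tile at a hypothetical path:
# split points by the ASPRS classification codes stored in the file.
import laspy
import numpy as np

las = laspy.read("tile.las")                     # hypothetical input tile
cls = np.asarray(las.classification)             # ASPRS class code per point

ground_mask = cls == 2                           # 2 = ground
print(f"ground points:    {ground_mask.sum()}")
print(f"nonground points: {(~ground_mask).sum()}")
print(f"building points:  {(cls == 6).sum()}")   # 6 = building
print(f"high veg points:  {(cls == 5).sum()}")   # 5 = high vegetation

# Heights of ground returns, e.g. as input to DTM interpolation
ground_z = np.asarray(las.z)[ground_mask]
```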
FIGURE 2.3 Lidar data processing workflows, data products, and analysis approaches for urban remote sensing. For the purposes of this figure, DSM = Digital Surface Model.
Lidar‐derived raster surfaces, referred to generally as DEMs, provide a more approachable way in which to utilize lidar data. Note that DEMs can also be created from other elevation data sources and are not lidar‐specific datasets. Specific types of DEMs include the following:
Digital Terrain Model (DTM): a raster representing the bare Earth surface. Absolute elevation values from mean sea level are stored in pixels.
Digital Surface Model (DSM): a raster representing the bare Earth surface as well as all surface features such as buildings, tree canopies, etc. Absolute elevation values from mean sea level are stored in pixels. For this chapter, we elect not to use the DSM acronym for this dataset because it conflicts with another acronym we use in upcoming sections.
Digital Height Model (DHM): a raster surface containing all features like the DSM but with relative elevation values from ground‐level stored in pixels. DHMs are also referred to as Normalized Digital Surface Models (nDSMs).
DTMs are generated using heights of ground‐classified points (which may or may not require spatial interpolation to fill data gaps, depending on point density and the types of features within the area), while DSMs utilize heights of all points (ground and nonground) to create the raster surface. The DHM is calculated by subtracting the DTM from the DSM (see Eq. (2.1)):

DHM = DSM − DTM (2.1)
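The sketch below implements Eq. (2.1) with the Python rasterio library, assuming the DSM and DTM rasters share the same grid (extent, resolution, and coordinate reference system); the file names are hypothetical.

```python
# Minimal sketch of Eq. (2.1), DHM = DSM - DTM, assuming co-registered
# rasters on the same grid; file names are hypothetical.
import numpy as np
import rasterio

with rasterio.open("dsm.tif") as dsm_src, rasterio.open("dtm.tif") as dtm_src:
    dsm = dsm_src.read(1).astype("float32")
    dtm = dtm_src.read(1).astype("float32")
    profile = dsm_src.profile
    profile.update(dtype="float32", nodata=-9999.0)

dhm = dsm - dtm
dhm = np.where(dhm < 0, 0, dhm)  # clamp small negative artifacts to ground level

with rasterio.open("dhm.tif", "w", **profile) as dst:
    dst.write(dhm, 1)
```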
Figure 2.4 provides examples of each of these datasets at a 1 m spatial resolution for Detroit, Michigan. The DSM (Figure 2.4b) and DHM (Figure 2.4d) appear similar because they both contain surface features, but a difference can be spotted between the two moving inland (to the north), where the low‐lying areas of the DHM (i.e. streets, residential yards) appear black rather than gray. The DTM is a smoother surface representing the bare Earth without the surface features (Figure 2.4c), and in this case includes artifacts such as highway overpasses and bridges, attesting to the complexity of point cloud filtering.
As for built‐up analyses using lidar‐derived raster data (refer back to Figure 2.3), the DHM provides the ideal dataset because it is normalized, conveying building heights from ground level. Pixel values within building extents therefore directly represent building heights. Using building footprints (vector polygons), individual building heights can be extracted and extruded to visualize only the built‐up environment as solid objects (see Figure 2.2); one way to extract per‐building heights is sketched below. Building footprints are highly useful ancillary data for urban analyses and are often freely available through local cadastral mapping sources, or they can be generated from the DHM (and other data such as aerial imagery) through Object‐Based Image Analysis (OBIA). OBIA segmentation provides a semi‐automatic procedure to create vector polygons of ground features. In the urban environment, especially where buildings are quite tall and protrude from the surrounding landscape features, OBIA segmentation is effective (Teo and Shih 2013). Imagery‐lidar fusion (i.e. adding the DHM data as an additional band within an image stack) improves the accuracy of OBIA classification results within urban areas compared to imagery alone (Ellis and Mathews 2019). Lidar intensity information is also useful as an additional band for further differentiation of surface features in OBIA analyses.
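One common way to extract per‐building heights from a DHM is zonal statistics over the footprint polygons; the sketch below uses the Python rasterstats library with hypothetical file names for the footprint layer and the DHM produced above.

```python
# Minimal sketch, assuming a building-footprint polygon layer and a DHM
# raster (file names hypothetical): summarize per-building heights with
# zonal statistics, e.g. for extrusion or attribute joins.
from rasterstats import zonal_stats

stats = zonal_stats(
    "building_footprints.shp",   # vector polygons, one per building
    "dhm.tif",                   # normalized height raster (meters)
    stats=["mean", "max", "count"],
)

for i, s in enumerate(stats[:5]):
    print(f"building {i}: {s}")  # e.g. mean/max height and pixel count
```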
Building footprint data are also helpful in the calculation of built‐up volume. As Figure 2.5 illustrates, the input point cloud data (a) are used to create a DHM raster that is further altered by extracting only pixels within building footprint extents – notice the tree canopies along the streets (b) are no longer visible within the clipped DHM (c). Importantly, this removes all nonbuilt‐up pixels from the analysis for accurate volume estimation. At this stage, volume calculation is conducted at a per‐pixel or per‐building scale (refer to Figure
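A per‐building version of this volume calculation can be sketched by masking the DHM to each footprint and summing pixel heights multiplied by pixel area; the example below uses the Python fiona and rasterio libraries, again with hypothetical file names, and assumes the DHM is in meters.

```python
# Minimal sketch of per-building volume from a clipped DHM: mask the raster
# to each footprint, then volume = sum(pixel heights) * pixel area.
# File names are hypothetical; heights are assumed to be in meters.
import fiona
import rasterio
from rasterio.mask import mask

with fiona.open("building_footprints.shp") as shp:
    footprints = [feature["geometry"] for feature in shp]

with rasterio.open("dhm.tif") as src:
    px_area = abs(src.res[0] * src.res[1])        # pixel area in map units^2
    for i, geom in enumerate(footprints[:5]):     # first few buildings
        heights, _ = mask(src, [geom], crop=True, nodata=0.0)
        volume = float(heights[0].sum()) * px_area  # m^3 if units are meters
        print(f"building {i}: {volume:.0f} cubic units")
```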