photographs and then solving least squares adjustment formulas. In the 1990s, the advent of accurate GPS positioning of the aircraft effectively added control points in the air, again reducing the number of ground control points required. The advent of lower-cost, precise inertial measurement units (IMUs) has reduced that number even further, so that for many applications sufficient accuracy can be achieved using only highly accurate GPS and IMU measurements, an approach referred to as direct georeferencing. These orientation parameters are used in image orthorectification (see chapter 6) to geometrically correct the images so that coordinates in the imagery accurately represent coordinates on the ground.
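      As a purely illustrative aside (not from the text), the link between GPS/IMU orientation parameters and image geometry can be sketched with the photogrammetric collinearity equations. The short Python example below projects a ground point into photo coordinates from an assumed sensor position (GPS) and attitude (IMU); the coordinates, angles, focal length, and rotation convention are all invented example choices, and real systems handle many additional calibration terms.

```python
# Illustrative sketch of direct georeferencing via the collinearity equations.
# All numbers below (position, attitude, focal length) are invented examples.
import numpy as np

def rotation_matrix(omega, phi, kappa):
    """Compose elementary rotations about the x, y, and z axes
    (one of several omega-phi-kappa conventions used in photogrammetry)."""
    Rx = np.array([[1, 0, 0],
                   [0, np.cos(omega), -np.sin(omega)],
                   [0, np.sin(omega),  np.cos(omega)]])
    Ry = np.array([[ np.cos(phi), 0, np.sin(phi)],
                   [ 0,           1, 0          ],
                   [-np.sin(phi), 0, np.cos(phi)]])
    Rz = np.array([[np.cos(kappa), -np.sin(kappa), 0],
                   [np.sin(kappa),  np.cos(kappa), 0],
                   [0,              0,             1]])
    return Rz @ Ry @ Rx

def ground_to_image(ground_pt, camera_pos, attitude, focal_length_mm):
    """Project a ground point into photo coordinates (mm) using the
    sensor position (from GPS) and attitude (from the IMU)."""
    R = rotation_matrix(*attitude)
    d = R @ (np.asarray(ground_pt) - np.asarray(camera_pos))  # offset in the camera frame
    x = -focal_length_mm * d[0] / d[2]   # perspective projection (collinearity form)
    y = -focal_length_mm * d[1] / d[2]
    return x, y

# Example: sensor roughly 1,500 m above the ground point, nearly level attitude.
print(ground_to_image(ground_pt=(500.0, 300.0, 120.0),
                      camera_pos=(480.0, 310.0, 1620.0),
                      attitude=(0.001, -0.002, 0.05),   # omega, phi, kappa (radians)
                      focal_length_mm=153.0))
```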

       Agility

      Agility refers to the ability of the platform to change position and can be characterized by 1) reach, or the ability of a platform to position itself over a target, sometimes referred to as field of regard; 2) dwell time, or how long the platform can remain working over the target area; and 3) the ability to slew across the target area.

      Fixed platforms such as a traffic-light pole above a street intersection have no agility. Satellites are tied to their orbits, which restricts their agility. However, some satellites are pointable (e.g., able to slew off nadir), which makes them much more agile than nonpointable satellites. This, coupled with their ability to quickly orbit the earth, provides them with a long-range reach around the globe, which is not available to aircraft.

      Within their range, aircraft and fixed-wing UASs are more agile than satellites, and helicopters are more agile than fixed-wing aircraft. The hovering abilities of helicopters and rotor-winged UASs allow them to obtain more target-specific data than fixed-wing aircraft can collect, and they can more easily reach targets in congested airspace. Blimps and remote-controlled balloons are more maneuverable than hot-air balloons because they have engines.

       Power

      Power refers to the power source that runs the platform. The more powerful the engine or engines, the faster and higher the platform can travel and the greater the payload it can carry. Satellites are propelled into space by launch vehicles to escape the earth's gravity. Afterward, they use electric power derived from solar panels for operation and stored fuel for orbital maneuvering. Of critical importance is the amount of power remaining after launch for the sensor to operate. Size, weight, and power, coupled with communication bandwidth (the ability to offload the image from the focal plane), are the biggest drivers in satellite sensor design.

      Fixed-wing aircraft are powered by piston engines, turbocharged piston engines, turboprops, or jet engines in single- or twin-engine configurations. High-altitude piloted aircraft platforms are usually powered by twin jet engines or turboprops. The high power of these aircraft and their ability to fly at high altitudes with large payloads result in high operational costs, but this can be offset by their broad spatial coverage and fast data collection (Abdullah et al., 2004). Single-engine platforms are lighter and have fewer logistical concerns and lower operational costs, while twin-engine platforms offer more power and weight capacity for larger payloads (Abdullah et al., 2004). Many low-altitude platforms employ a dual-sensor configuration for collecting multiple types of data (e.g., lidar and optical), but aircraft with less powerful engines are less likely to be able to carry multiple sensors because the power requirements are too high and the combined payload becomes too heavy for the plane. However, over the last 10 years the weight, size, and power requirements of many sensors have rapidly decreased, making multiple-sensor configurations more feasible.

       Collection Characteristics

      The components of sensors and the features of platforms combine to determine the collection characteristics of an image: its spectral resolution, radiometric resolution, spatial resolution, viewing angle, temporal resolution, and extent. Table 3.1 provides definitions of commonly used categories of the three most important collection characteristics: spatial, spectral, and temporal resolution.
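      As a loose illustration only (the sensor and every value below are hypothetical, not drawn from the book), these characteristics can be thought of as a small metadata record that accompanies an image collection:

```python
# Illustrative only: a record of the collection characteristics discussed in
# this chapter, for a hypothetical four-band sensor. All values are invented.
from dataclasses import dataclass

@dataclass
class CollectionCharacteristics:
    spectral_bands: int            # number of bands sensed
    radiometric_resolution: int    # bits per pixel per band
    spatial_resolution_m: float    # ground sample distance in meters
    viewing_angle_deg: float       # degrees off nadir
    revisit_days: float            # temporal resolution
    extent_km: float               # width of the area covered by one image

example = CollectionCharacteristics(
    spectral_bands=4,
    radiometric_resolution=11,
    spatial_resolution_m=0.5,
    viewing_angle_deg=0.0,
    revisit_days=3.0,
    extent_km=15.0,
)
print(example)
```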


       Spectral Resolution

      The spectral resolution of an image is determined by the sensor and refers to the following:

       The number of bands of the electromagnetic spectrum sensed by the sensor

       The wavelengths of the bands

       The widths of the bands

      Panchromatic sensors capture only one spectrally wide band of data, and the resulting images are shades of gray, regardless of the portion of the spectrum sensed or the width of that portion. Panchromatic bands always cover more than one color of the electromagnetic spectrum. Multispectral sensors capture multiple bands across the electromagnetic spectrum. Hyperspectral sensors collect 50 or more narrow bands. Traditionally, multispectral bandwidths have been quite large (usually 50 to 400 nanometers), often covering an entire color (e.g., the red portion). Conversely, hyperspectral sensors measure the radiance or reflectance of an object in many narrow bands (usually 5 to 10 nanometers wide) across large portions of the spectrum, similar to imaging spectroscopy in a chemistry laboratory.
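      A minimal sketch of how band count and bandwidth distinguish these sensor classes: the 50-band threshold and the notion of bandwidth follow the definitions above, but the specific wavelength limits are invented example values in nanometers.

```python
# Illustrative sketch: describing spectral resolution as a list of bands and
# classifying the sensor type from the band count, following the definitions
# in the text. The band limits below are invented example values (nanometers).

# Each band is (name, lower wavelength in nm, upper wavelength in nm).
multispectral_bands = [
    ("blue",          450, 510),
    ("green",         530, 590),
    ("red",           630, 690),
    ("near-infrared", 770, 895),
]

def bandwidth_nm(band):
    """Width of a band in nanometers."""
    _, lower, upper = band
    return upper - lower

def sensor_type(bands):
    """Classify a sensor by its number of bands, per the definitions above."""
    if len(bands) == 1:
        return "panchromatic"
    if len(bands) >= 50:
        return "hyperspectral"
    return "multispectral"

print(sensor_type(multispectral_bands))                # multispectral
print([bandwidth_nm(b) for b in multispectral_bands])  # widths of 60 to 125 nm
```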

      Film images are stored as negative or positive film or as paper prints. Remotely sensed digital data files are stored in a raster, or rectangular grid, format. During imaging, each picture element, or pixel, records a digital number (DN) corresponding to the intensity of the energy sensed at that pixel in each band of the electromagnetic spectrum. Panchromatic data is stored in a single raster file. Figure 3.13 shows example infrared DNs for a small area.


      Figure 3.13. Example infrared digital number (DN) values
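      A minimal sketch of the same idea in code (Python with NumPy): one band stored as a rectangular grid of DNs, with the value read back for a single pixel. The DN values are invented and are not those shown in figure 3.13.

```python
# A single band stored as a raster (rectangular grid) of digital numbers.
# The 8-bit DN values below are invented for illustration.
import numpy as np

infrared_band = np.array([[ 34,  36,  41,  55],
                          [ 38,  44,  90, 120],
                          [ 52, 101, 143, 151],
                          [ 77, 128, 150, 158]], dtype=np.uint8)

row, col = 2, 3                      # pixel position within the grid
print(infrared_band[row, col])       # DN recorded at that pixel -> 151
print(infrared_band.shape)           # grid dimensions (rows, columns) -> (4, 4)
```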

      Multispectral images store each band as a separate raster. Each band is monochromatic, but when the bands are combined they can be displayed in color. Figure 3.14 shows four separate monochromatic bands of airborne digital imagery collected over a portion of Sonoma County, California. Figure 3.15 combines the bands to create true color and color infrared displays.


      Figure 3.14. Red, green, blue, and near infrared bands of airborne multispectral imagery captured over Sonoma County, California (esriurl.com/IG314)


      Figure 3.15. True color and infrared combination of bands of airborne multispectral imagery collected over Sonoma County, California (esriurl.com/IG315)

      The bands shown in figures 3.14 and 3.15 are in the red, green, blue, and near-infrared portions of the electromagnetic spectrum. Each pixel of the imagery contains four numbers, one for the DN recorded in each of the four bands. Table 3.2 presents the range of DN values for each band of the different land-cover types depicted in figure 3.15.
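      The sketch below illustrates this structure with invented data (the band arrays, the land-cover mask, and every DN are synthetic, not the Sonoma County values): four co-registered band rasters are stacked so that each pixel holds four DNs, the true color and color infrared displays of figure 3.15 are produced simply by choosing different band orderings, and per-band DN ranges like those summarized in table 3.2 are computed for the pixels of one hypothetical land-cover type.

```python
# Stacking four band rasters, building display composites, and summarizing
# per-band DN ranges for one land-cover class. All values are invented.
import numpy as np

rng = np.random.default_rng(0)
shape = (100, 100)

# One raster per band, as in figure 3.14 (synthetic 8-bit DNs).
blue  = rng.integers(30,  90, shape, dtype=np.uint8)
green = rng.integers(40, 110, shape, dtype=np.uint8)
red   = rng.integers(35, 120, shape, dtype=np.uint8)
nir   = rng.integers(60, 200, shape, dtype=np.uint8)

# Each pixel now holds four DNs, one per band.
stack = np.stack([red, green, blue, nir], axis=-1)   # rows x cols x 4

true_color     = stack[..., [0, 1, 2]]   # red, green, blue
color_infrared = stack[..., [3, 0, 1]]   # near-infrared, red, green

# Hypothetical mask marking the pixels of one land-cover type (e.g., forest).
forest_mask = nir > 150

for name, index in [("red", 0), ("green", 1), ("blue", 2), ("near-infrared", 3)]:
    values = stack[..., index][forest_mask]
    print(f"{name}: DN range {values.min()}-{values.max()}")
```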

