Imagery and GIS - Kass Green

the photos. In the 1960s, digital sensors were developed to record electromagnetic energy as a database of numbers rather than a film image. This enabled the development of sensors that can sense electromagnetic energy across the range from ultraviolet to radio wavelengths. Now, most remote sensing systems use digital arrays instead of film. Because the values of the reflected and emitted energy are stored as an array of numbers, computers can be trained to turn the imagery data into map information by discovering correlations between variations in the landscape and variations in electromagnetic energy. While manual interpretation is still very important, objects that are spectrally distinct from one another can be readily mapped using computer algorithms.

      The imaging surface of a digital camera is an array of photosensitive cells that capture energy from incoming photons. Each of these cells corresponds to a pixel in the resulting image. The pixels are arranged in rectangular columns and rows. Each pixel contains one to three photovoltaic cells, or photosites, which use the ability of silicon semiconductors to translate electromagnetic photons into electrons. The higher the intensity of the energy reaching the cells during exposure, the greater the number of electrons accumulated. The number of electrons accumulated in each cell is recorded and then converted into a digital signal.
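      To make the readout step concrete, the following minimal Python sketch simulates the conversion of accumulated electrons into digital numbers (DNs). The full-well capacity and 12-bit depth are illustrative assumptions, not the specification of any particular sensor.

```python
import numpy as np

# Minimal sketch of photosite readout: accumulated electrons are
# quantized to digital numbers (DNs) by an analog-to-digital converter.
FULL_WELL_ELECTRONS = 20_000   # max electrons a cell can hold (assumed)
BIT_DEPTH = 12                 # ADC resolution in bits (assumed)

def electrons_to_dn(electrons: np.ndarray) -> np.ndarray:
    """Clip accumulated electrons at full well, then quantize to DNs."""
    clipped = np.clip(electrons, 0, FULL_WELL_ELECTRONS)
    max_dn = 2**BIT_DEPTH - 1
    return np.round(clipped / FULL_WELL_ELECTRONS * max_dn).astype(np.uint16)

# A brighter scene accumulates more electrons and yields higher DNs;
# the last cell saturates at the maximum DN of 4095.
electrons = np.array([[500, 5_000], [15_000, 25_000]])
print(electrons_to_dn(electrons))  # [[ 102 1024] [3071 4095]]
```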

      The size of the array and the size of each cell in the array affect the resolving power of the sensor. The larger the array, the more pixels captured in each image. Larger cells accumulate more electrons than smaller cells, allowing them to capture imagery in low-energy situations. However, the larger cells also result in a corresponding loss of spatial resolution across the image surface because fewer cells can occupy the surface.
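      The tradeoff can be shown with simple arithmetic. In the sketch below, the 36 mm sensor width and the cell pitches are assumed values chosen only to show that cell count and per-cell light-gathering area move in opposite directions.

```python
# Illustrative tradeoff: for a fixed imaging surface, larger cells mean
# fewer pixels but more light gathered per cell. The sensor width and
# cell pitches are assumed values, not any real sensor's specification.
SENSOR_WIDTH_MM = 36.0

for pitch_um in (3.0, 6.0, 12.0):
    cells_across = int(SENSOR_WIDTH_MM * 1000 / pitch_um)
    relative_area = (pitch_um / 3.0) ** 2  # area grows with pitch squared
    print(f"{pitch_um:4.1f} um pitch -> {cells_across:5d} cells across, "
          f"{relative_area:4.0f}x light-gathering area per cell")
```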

       Source of Energy: Active versus Passive Sensors

      Passive sensors collect electromagnetic energy generated by a source other than the sensor. Active sensors generate their own energy, and then measure the amount reflected back as well as the time lapse between energy generation and reception. Figure 3.5 illustrates the difference in how active and passive sensors operate.


      Figure 3.5. Comparison of how passive and active sensors operate

      Most remote sensors are passive sensors, and the most pervasive source of passive electromagnetic energy is the sun, which radiates electromagnetic energy onto objects on the earth; these objects absorb (and later emit), transmit, or reflect that energy. Passive energy can also be directly emitted from the earth, as from the eruption of a volcano or a forest fire. Examples of passive remote sensors include film aerial cameras, multispectral digital cameras, and multispectral/hyperspectral scanners. Passive sensors are able to sense electromagnetic energy in wavelengths from ultraviolet through radio waves.

      Passive sensors fall into three types: framing cameras, across-track scanners, and along-track scanners. Framing cameras use either film or matrices of digital arrays (e.g., UltraCam airborne sensors, PlanetLabs satellite sensors). Each frame captures the portion of the earth visible in the sensor’s field of view (FOV) during exposure. Often, the frames are captured with greater than 50 percent overlap, which enables stereo viewing. Each image of a stereo pair is taken from a slightly different perspective as the platform moves. When two overlapped images are viewed side by side, each eye automatically takes the perspective of one image, enabling us to “see” the overlapped areas in three dimensions. With stereo frame imaging, not only can distances be measured from the aerial images, but so can elevations and the heights of vegetation and structures, as discussed in detail in chapter 9.
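      Heights are commonly recovered from a stereo pair with the parallax height relation h = H × dP / (P + dP), where H is the flying height above ground, P the absolute stereo parallax measured at the object's base, and dP the differential parallax between the object's top and base. The sketch below applies this relation with assumed example values.

```python
# Classic parallax height equation for overlapping stereo frames:
#   h = H * dP / (P + dP)
# All input values below are assumed, for illustration only.
def height_from_parallax(flying_height_m: float,
                         base_parallax_mm: float,
                         differential_parallax_mm: float) -> float:
    """Object height from flying height and measured photo parallaxes."""
    return (flying_height_m * differential_parallax_mm /
            (base_parallax_mm + differential_parallax_mm))

# e.g., a tree measured on a stereo pair flown 1,200 m above ground
print(height_from_parallax(1200.0, 90.0, 2.3))  # ~29.9 m
```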

      Most across-track scanners (also called whisk broom scanners) move an oscillating mirror with a very small instantaneous field of view (IFOV) side to side as the platform moves. Each line of the image is built, pixel by pixel, as the mirror scans the landscape. Developed decades before the digital frame camera, across-track scanners were the first multispectral digital sensors and were used in multiple systems including the Landsats 1-7, GOES, AVHRR, and MODIS satellite sensors, and NASA’s AVIRIS hyperspectral airborne system.
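      The ground footprint of each IFOV sample grows with platform altitude. The sketch below approximates the nadir footprint as 2·H·tan(IFOV/2); the 705 km altitude and 42.5 µrad IFOV are assumed, Landsat-like values used only for illustration.

```python
import math

# Approximate nadir ground footprint of one IFOV sample.
# Altitude and IFOV below are assumed, Landsat-like values.
def ground_ifov_m(altitude_m: float, ifov_rad: float) -> float:
    return 2.0 * altitude_m * math.tan(ifov_rad / 2.0)

print(ground_ifov_m(705_000, 42.5e-6))  # ~30 m per pixel
```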

      Along-track scanners (also called push broom scanners) rely on a linear array to sense entire lines of data simultaneously. Rather than mechanically building an image pixel by pixel or by groups of pixels, the along-track scanner builds an image line by line. Along-track scanners have higher spectral and radiometric resolution than across-track scanners because the sensor can spend more time (termed dwell time) over each area of ground being sensed. Like across-track scanners, along-track scanners often also use a dispersing element to split the incoming beam of electromagnetic energy into distinct portions of the electromagnetic spectrum, enabling the collection of multispectral imagery. A more recent development than across-track scanners, along-track sensors are used in many multispectral satellite systems (e.g., WorldView-3, Landsat 8) as well as in the Leica Airborne Digital Sensors.
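      The dwell-time advantage can be estimated with back-of-the-envelope arithmetic: while the platform advances one ground line, a whisk broom mirror must visit every cross-track pixel in turn, whereas a push broom array stares at the entire line. All values in the sketch below are assumed.

```python
# Assumed, illustrative values for a satellite scanner
ground_speed_m_s = 7_000.0   # platform ground speed
pixel_size_m = 30.0          # ground pixel size
pixels_per_line = 6_000      # cross-track pixels in the swath

# Time available while the platform advances one ground line
line_time_s = pixel_size_m / ground_speed_m_s

# A whisk broom mirror splits the line time across every pixel;
# a push broom detector dwells for the whole line time.
whisk_dwell_us = line_time_s / pixels_per_line * 1e6
push_dwell_us = line_time_s * 1e6

print(f"whisk broom dwell: {whisk_dwell_us:8.2f} us per pixel")
print(f"push broom dwell:  {push_dwell_us:8.2f} us per pixel")
```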

      Active sensors send out their own pulses of electromagnetic energy, and the sensor measures the echoes or returns of the energy as they are reflected by objects in the path of the pulse. For example, consumer cameras with flash attachments are active systems. Active remote sensors include lidar (light detection and ranging) systems, which generate laser pulses and sense electromagnetic energy in the ultraviolet to near-infrared regions of the spectrum, and radar (radio detection and ranging) systems, which generate and sense energy in the microwave range. An advantage of active systems is that they do not rely on the sun, so acquisitions can be made at times when the sun angle is low or at night. An additional advantage of radar systems is that the long wavelengths of microwaves can penetrate clouds, haze, and even light rain.
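      The ranging in these systems follows from the fact that the pulse travels the sensor-to-target distance twice, so range R = c·t/2 for a measured return time t. A minimal sketch, with an assumed return time:

```python
C = 299_792_458.0  # speed of light in a vacuum, m/s

def range_from_return_time(t_seconds: float) -> float:
    """Range to target from round-trip pulse travel time: R = c*t/2."""
    return C * t_seconds / 2.0

# A lidar return arriving 6.67 microseconds after pulse emission
print(range_from_return_time(6.67e-6))  # ~1,000 m
```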

       Wavelengths Sensed

      Passive Sensors

      Most images are collected by panchromatic or multispectral passive sensors that are able to sense electromagnetic energy in the visible through infrared portions of the electromagnetic spectrum. To separate optical and mid-infrared wavelengths from one another, passive remote sensors place filters or dispersing elements between the opening and the imaging surface, splitting the incoming energy into distinct wavelength “bands.” Filters are usually used with framing cameras and include the following:

       Employing a Bayer filter over the digital array, which restricts each pixel to one portion of the electromagnetic spectrum but alternates pixels in the array to collect at different wavelengths. The computer then interpolates values for the unsensed bands at each pixel from the surrounding pixels, a process called demosaicing (sketched in code after this list). This is how consumer cameras and many of the high-resolution small satellite constellations (e.g., Planet Doves) collect multispectral imagery.

       Placing separate filters on multiple cameras, each filtered to accept energy from a distinct portion of the electromagnetic spectrum, allows each focal plane to be optimized for that portion of the spectrum. Many four-band (red, green, blue, and infrared) airborne image sensors (e.g., Microsoft Ultracam and Leica DMC sensors) use this approach, which requires that the images simultaneously captured with the separate cameras be coregistered to one another after capture.

       Placing a spinning filter wheel in front of one camera so that each exposure of the image surface is in one portion of the electromagnetic spectrum. This approach is very useful for fixed platforms; however, it requires very complex postcollection registration for systems with moving platforms and is rarely used in remote sensing systems.
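      To illustrate the Bayer approach noted in the first item above, the following sketch performs simple bilinear demosaicing of an RGGB mosaic: each missing band at a pixel is estimated as the mean of the same-band values in the surrounding 3 × 3 neighborhood. The RGGB pattern and the interpolation scheme are illustrative assumptions, not any particular camera's pipeline.

```python
import numpy as np

def demosaic_rggb(mosaic: np.ndarray) -> np.ndarray:
    """Interpolate a 3-band image from a single-band RGGB Bayer mosaic."""
    h, w = mosaic.shape
    rows, cols = np.mgrid[0:h, 0:w]
    # Which band each photosite sensed, for an RGGB pattern (assumed)
    masks = {
        "R": (rows % 2 == 0) & (cols % 2 == 0),
        "G": (rows % 2) != (cols % 2),
        "B": (rows % 2 == 1) & (cols % 2 == 1),
    }
    out = np.zeros((h, w, 3))
    padded = np.pad(mosaic.astype(float), 1)
    for band, mask in enumerate(masks.values()):
        pmask = np.pad(mask, 1).astype(float)
        vals = padded * pmask
        # 3x3 box sums of sensed values and of sensed-pixel counts
        sums = sum(vals[i:i + h, j:j + w] for i in range(3) for j in range(3))
        counts = sum(pmask[i:i + h, j:j + w] for i in range(3) for j in range(3))
        # Keep the sensed value where this band was measured; elsewhere
        # use the local mean of same-band neighbors.
        out[..., band] = np.where(mask, mosaic, sums / counts)
    return out

# A tiny assumed 4x4 mosaic, just to show the shapes involved
mosaic = np.arange(16, dtype=float).reshape(4, 4)
print(demosaic_rggb(mosaic).shape)  # (4, 4, 3)
```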

      Alternatively, a dispersing/splitting element can be placed between the lens and a series of CCD arrays to split the incoming energy into its discrete portions of the electromagnetic spectrum. Many multispectral and most hyperspectral sensors employ dispersing/splitting elements (e.g., Leica Airborne Digital Sensors, NASA AVIRIS).

      Figures 3.6 to 3.8 illustrate how Bayer filters, framing cameras, and dispersing elements are typically used to create multispectral images.

