Full-waveform lidar requires 30 to 50 times the amount of data storage as discrete return lidar.
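To make that multiplier concrete, here is a back-of-envelope sketch; the 28-byte record size matches LAS point data record format 1, but the project size is an illustrative assumption, not a figure from the text.

```python
# Rough storage estimate: discrete-return vs. full-waveform lidar.
# The 30-50x multiplier comes from the text; the point count is
# hypothetical, and 28 bytes matches LAS point data record format 1.

POINTS = 500_000_000      # hypothetical project: 500 million returns
BYTES_PER_RETURN = 28     # LAS point data record format 1

discrete_gb = POINTS * BYTES_PER_RETURN / 1e9
print(f"Discrete return: {discrete_gb:,.0f} GB")   # ~14 GB
for multiplier in (30, 50):
    print(f"Waveform ({multiplier}x): {discrete_gb * multiplier:,.0f} GB")
```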
Historically, lidar systems have been able to transmit energy in only one wavelength. However, recent advancements in lidar technology allow for transmitting energy in multiple wavelengths, making multispectral lidar images possible (e.g., the Teledyne Optech Titan system). Additionally, new technologies such as Geiger-mode (Harris) and Single Photon (SigmaSpace/Hexagon) have been introduced that significantly improve the rate of data collection and resulting point density by increasing the sensitivity of the lidar sensors.
Lenses
Objects emit or reflect electromagnetic energy at all angles. The angles between an object and an imaging surface change as the imaging surface moves closer to or farther from the object. The purpose of a lens in a camera or in an eyeball is to focus the electromagnetic energy being emitted or reflected from the objects being imaged onto the imaging surface. By moving the lens back and forth relative to the imaging surface, we can affect the angle of electromagnetic energy entering and exiting the lens, and thereby bring the objects of interest into focus.
Most remote sensing systems capture electromagnetic energy emitted or reflected from objects at a great distance from the sensor (i.e., at an effectively infinite distance), from hundreds of feet for a sensor in an aircraft to hundreds of miles for a sensor in a satellite. Because these distances approach infinity relative to the focal length, the lenses have a fixed focus.
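The fixed focus follows from the thin-lens equation of basic optics (a standard result, consistent with but not stated in the text): 1/f = 1/d_o + 1/d_i, where f is the focal length, d_o is the distance to the object, and d_i is the distance from the lens to the imaging surface. As d_o grows toward infinity, the 1/d_o term vanishes and d_i converges to f, so every distant object focuses at the same image distance and the lens can be locked in place one focal length from the imaging surface.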
The combination of the sensor’s lens and the resolution of the imaging surface will determine the amount of detail the sensor is able to capture in each image: its resolving power. The resolution of a digital image is determined by the format of the digital array on the imaging surface, that is, the number of detector elements and their physical size.
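On the ground, resolving power is usually expressed as ground sample distance (GSD), the footprint of one detector element. A minimal sketch of the standard pinhole relationship follows; the pixel pitch, focal length, and flying height are illustrative values, not figures from the text.

```python
def ground_sample_distance(pixel_pitch_m: float,
                           focal_length_m: float,
                           flying_height_m: float) -> float:
    """Ground footprint of one detector element for a nadir-pointing
    frame camera: GSD = pixel pitch * flying height / focal length."""
    return pixel_pitch_m * flying_height_m / focal_length_m

# Illustrative values: 5-micron pixels, 100 mm lens, 3,000 m altitude.
gsd = ground_sample_distance(5e-6, 0.100, 3000.0)
print(f"GSD: {gsd:.2f} m")   # 0.15 m, i.e., 15 cm pixels on the ground
```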
Openings
The purpose of a sensor opening is to manage the photons of electromagnetic energy reaching the imaging surface. Too large an opening saturates the imaging surface with photons, overexposing the image. Too small an opening lets through too few photons to create an image.
Our irises manage the amount of light reaching our retinas by expanding and contracting to let in more or less light. In a camera, the diameter of the opening that allows electromagnetic energy to reach the imaging surface is called the aperture, and the length of time that opening admits energy is called the shutter speed. Together, aperture and shutter speed control the exposure of the imaging surface to electromagnetic energy. In a digital camera, the CCD array is read and cleared after each exposure.
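Photographers summarize this trade-off with the standard exposure value formula, EV = log2(N^2/t), where N is the f-number of the aperture and t is the shutter time in seconds. The sketch below, with illustrative settings, shows two aperture/shutter combinations yielding nearly the same exposure; the formula is standard photography, not from the text.

```python
import math

def exposure_value(f_number: float, shutter_s: float) -> float:
    """Standard exposure value: EV = log2(N^2 / t).
    Higher EV means less light reaches the imaging surface."""
    return math.log2(f_number ** 2 / shutter_s)

# The same exposure can be reached by trading aperture for shutter time:
print(exposure_value(8.0, 1/125))   # ~13.0
print(exposure_value(5.6, 1/250))   # ~12.9: one stop wider, twice as fast
```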
Bodies
Remotely sensed imagery can be used for visualization—to obtain a relative concept of the relationship of objects to one another—or to measure distances, areas, and volumes. For either visualization or measurement, the geometry of the lenses, opening, and imaging surface within the camera body must be known. In addition, for measurement the location and rotation of the imaging surface when the image is captured must also be known.
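Photogrammetry formalizes these requirements as interior orientation (the geometry inside the body, chiefly focal length and principal point) and exterior orientation (the position and rotation of the imaging surface at exposure). The sketch below projects a ground point into image coordinates with a simple pinhole camera model; the NumPy framing and all numbers are illustrative assumptions, not the book's method.

```python
import numpy as np

def project(ground_pt, camera_pos, R, focal_m):
    """Pinhole projection: rotate a ground point into the camera frame,
    then scale by focal length / depth (the collinearity condition)."""
    p_cam = R @ (np.asarray(ground_pt) - np.asarray(camera_pos))
    x = -focal_m * p_cam[0] / p_cam[2]
    y = -focal_m * p_cam[1] / p_cam[2]
    return x, y   # image coordinates on the focal plane, in meters

# Nadir-looking camera 1,000 m above a point 50 m east of the ground target.
R = np.eye(3)   # no rotation: camera axes aligned with map axes
x, y = project([50.0, 0.0, 0.0], [0.0, 0.0, 1000.0], R, 0.100)
print(x, y)     # (0.005, 0.0): the point images 5 mm off the principal point
```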
Sensor Summary
While remote sensor components share similarities with our eyes and consumer cameras, they differ in the following fundamental ways:
Imaging surfaces must be absolutely flat to minimize any geometric distortion.
The energy sensed may be passively received by the sensor from another source (commonly the sun) or actively created by the sensor and then received back by the sensor.
Because most remotely sensed images are taken from high altitudes, their lenses are commonly designed for an infinite object distance; i.e., the lenses have a fixed focus.
Shutter speeds are usually extremely fast because most platforms are moving at high speeds.
Remote sensor camera bodies must be able to withstand the extreme temperatures and vibrations encountered by the vehicle, boat, aircraft, or satellite platform. Additionally, for mapping purposes, the precise internal geometry of the sensor components within the body must be known, as well as the location of the imaging surface when an image is collected, so that the imagery can be accurately terrain corrected and georeferenced to the earth (a minimal georeferencing sketch follows this list).
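Georeferencing ultimately ties pixel coordinates to map coordinates. A minimal sketch follows, using the six-parameter affine geotransform convention familiar from GDAL; the coefficients are illustrative, and full terrain correction additionally requires an elevation model and a sensor model.

```python
# Six-parameter affine geotransform (GDAL convention):
# (origin_x, pixel_width, row_rotation, origin_y, col_rotation, pixel_height)
gt = (350000.0, 0.5, 0.0, 4100000.0, 0.0, -0.5)   # illustrative UTM values

def pixel_to_map(col: float, row: float, gt) -> tuple:
    """Map a pixel (col, row) to map coordinates with an affine transform."""
    x = gt[0] + col * gt[1] + row * gt[2]
    y = gt[3] + col * gt[4] + row * gt[5]
    return x, y

print(pixel_to_map(1000, 2000, gt))   # (350500.0, 4099000.0)
```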
Platforms
This section reviews remote sensing platforms by examining platform features. Six major features distinguish platforms from one another: whether they are manned or unmanned, and their altitude, speed, stability, agility, and power.
Different Types of Platforms
Geosynchronous — 22,236 miles
Satellites that match Earth’s rotation appear stationary in the sky to ground observers. While most commonly used for communications, geosynchronous orbiting satellites like the hyperspectral GIFTS imager are also useful for monitoring changing phenomena such as weather conditions. NASA’s Syncom, launched in the early 1960s, was the first successful “high flyer.”
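The 22,236-mile altitude falls directly out of Kepler's third law: a satellite whose period matches Earth's sidereal day must orbit at one particular radius. A quick check using standard orbital mechanics (not from the text):

```python
import math

MU = 3.986004418e14       # Earth's gravitational parameter, m^3/s^2
SIDEREAL_DAY = 86164.1    # seconds: one rotation relative to the stars
EARTH_RADIUS = 6378137.0  # equatorial radius, m

# Kepler's third law solved for the orbit radius: a = (mu*T^2 / 4pi^2)^(1/3)
a = (MU * SIDEREAL_DAY**2 / (4 * math.pi**2)) ** (1/3)
altitude_miles = (a - EARTH_RADIUS) / 1609.344
print(f"{altitude_miles:,.0f} miles")   # ~22,236 miles
```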
Sun synchronous — 375-500 miles
Satellites in this orbit keep the angle of sunlight on the surface of the earth as consistent as possible, which means that scientists can compare images from the same season over several years, as with Landsat imagery. This is the bread-and-butter zone for earth observing sensors.
Atmospheric satellite — 100,000 feet
Also known as pseudo-satellites, these unmanned vehicles skim the highest edges of detectable atmosphere. NASA’s experimental Helios craft measured solar flares before crashing in the Pacific Ocean near Kauai.
Jet aircraft — 30,000-90,000 feet
Jet aircraft flying at 30,000 feet and higher can be flown over disaster areas in a very short time, making them a good platform for certain types of optical and multispectral imaging applications.
General aviation aircraft — 100-10,000 feet
Small aircraft able to fly at low speed and low altitude have long been the sweet spot for high-quality aerial and orthophotography. From Cessnas to ultralights to helicopters, these are the workhorses of optical imagery.
Drones — 100-500 feet
Drones are the new kid on the block. Their ability to fly low, hover, and be remotely controlled offers attractive advantages for aerial photography, with resolutions finer than 1 inch. Military UAVs can be either small drones or full-sized airplanes.
Ground based/handheld — ground level
Increasingly, imagery taken at ground level is finding its way into GIS workflows. Street-level services such as Google Street View, HERE, and Mapillary; handheld multispectral imagers; and other terrestrial sensors are being applied in areas such as pipelines, security, tourism, real estate, natural resources, and entertainment.