different information about the world in which they sit. This diagram is based upon the tubular-shaped eyes found in owls. (Diagram by Nigel Hawtin, nigelhawtin.com.)
It can be seen immediately that there is much scope for changing the overall image-forming properties of an eye by virtue of small changes in the absolute size, curvatures, and relative positions of these two optical components.
Although we cannot know the optical properties of the very first, relatively crude, camera eyes, it is easy to understand that they must have varied in a number of parameters. Two were key: the brightness of the image (how much light is captured to make it) and the quality of the image. The precision with which light from a point in the world is brought to a focus in the image determines how faithfully the image reproduces the world that it represents. Surprisingly small variations in optical structure result in marked differences in the way optics represent the world, and in eyes these small variations have been a rich source of material for natural selection. Selection for subtle differences in optics was the beginning of the process by which eyes have evolved to match the demands of different tasks and different light environments. Today we can identify eyes with marked differences in the brightness and quality of their optical images. Examples will be discussed in later chapters, which look at the sensory ecology of birds facing different perceptual challenges.
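To make these two parameters concrete, here is a minimal Python sketch, added for illustration and not taken from the book. It uses the standard thin-lens lensmaker's equation; the refractive index, surface curvatures, and pupil diameter are hypothetical values chosen only to show how a small change in curvature shifts the focal length, and how image brightness scales with the ratio of aperture to focal length.

```python
# Illustrative sketch (not from the book). Thin-lens lensmaker's equation:
#   1/f = (n - 1) * (1/R1 - 1/R2)
# All numbers below are hypothetical, chosen only for illustration.

def focal_length(n, r1, r2):
    """Focal length of a thin lens: refractive index n, surface radii r1, r2
    (in mm; the second radius is negative for a biconvex lens)."""
    return 1.0 / ((n - 1.0) * (1.0 / r1 - 1.0 / r2))

def relative_brightness(aperture, f):
    """Image brightness scales as (aperture / focal length) squared,
    i.e. the inverse square of the photographer's f-number."""
    return (aperture / f) ** 2

f_a = focal_length(n=1.4, r1=8.0, r2=-8.0)  # symmetric lens, 8 mm radii
f_b = focal_length(n=1.4, r1=7.0, r2=-7.0)  # slightly more curved surfaces

for label, f in (("lens A", f_a), ("lens B", f_b)):
    print(f"{label}: focal length {f:.2f} mm, "
          f"relative brightness (4 mm pupil) {relative_brightness(4.0, f):.2f}")
```

On these assumptions a one-millimetre change in the radii shortens the focal length from 10.0 mm to 8.75 mm and brightens the image by about 30 per cent, because the same pupil's light is concentrated into a smaller image; the cost is that each part of the world is imaged at lower magnification.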
Variation in image properties
By definition an image is never perfect. It is a simulacrum that always lacks some information about the world. The quality of the image, and hence the information that it contains, usually varies across the image surface. Image quality usually comes closest to perfection along, or near, the optic axis of the lens system. This is the line about which the optical elements of the system are arranged; in camera eyes it is the direction about which the cornea and lens are symmetrically aligned (Figure 3.2).
Moving away from the optic axis results in an image of progressively poorer quality. It is here, in the more peripheral parts of the image, where obvious distortions and aberrations occur. This is something that is readily apparent in simple hand lenses or in camera lenses at the cheaper end of the market. To correct for these peripheral distortions requires elaborations and refinements of the optical system, hence the high prices asked for camera lenses which maintain high quality across a broad section of the image.
The image produced by peripheral optics is often masked out in human-made cameras and not presented for analysis by the film or photodiode array. However, peripheral optics cannot be ignored when trying to understand the visually guided behaviour of many vertebrate animals, including most birds. In these species the eyes are placed on the sides of the head and the visual field is often maximised, which requires the entire image to be used (Figure 3.3).
FIGURE 3.3 A diagrammatic section through the head of a bird showing a typical arrangement of eyes in the skull and how the visual fields of each eye combine. In all birds the eyes project laterally, so the axes of the eyes always diverge; no birds have forward-facing eyes. The fields of view of the two eyes are combined to give the total field of view, with a sector in front of the head where the two fields overlap to give a binocular field. A wide range of variation in these basic arrangements is found in birds, resulting in different degrees of overlap and blind areas of different widths behind the head (some birds have no blind area at all). Even small variations in the width of the field of view of each eye, and in eye position in the skull, can result in large differences in visual fields between species. (Diagram by Nigel Hawtin, nigelhawtin.com.)
In some species, full use is made of an image from each eye that is more than 180 degrees wide. This gives the birds maximum visual coverage of the space around them. In many birds the width of the visual field behind the head is maximised in order to enhance the chances of detecting a predator, but this again can be achieved only by using peripheral optics (Figure 3.4).
FIGURE 3.4 Examples of the extremes of visual fields found in birds. In a Tawny Owl Strix aluco the axes of the eyes project laterally and forwards. The field of view of each eye is relatively narrow, and the eyes sit in the skull to give a relatively large degree of binocular overlap and an extensive blind region behind the head. In a Pink-eared Duck Malacorhynchus membranaceus the fields of each eye are extensive, a little over 180 degrees, and they project laterally to give the bird a small degree of binocular overlap both in front of and behind the head. This means that it sees all around its head in the horizontal plane. In fact the binocular region extends right above the head, and the duck has panoramic vision of the hemisphere around and above its head.
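The two extremes in Figure 3.4 can be captured with simple horizontal-plane geometry. The sketch below is an added illustration, not the author's method: it assumes each eye sees a single arc of width w degrees centred on its optic axis, with the two axes diverging by d degrees, and the owl-like and duck-like values are hypothetical numbers chosen only to echo the caption (the duck's per-eye field "a little over 180 degrees", its axes close to fully lateral).

```python
# Illustrative sketch: visual fields in the horizontal plane.
# Each eye is assumed to see one arc of width w (degrees) centred on its
# optic axis; the two axes diverge by d degrees. Values are hypothetical.

def visual_field(w, d):
    """Return frontal binocular overlap, rear binocular overlap, and the
    blind sector behind the head, all in degrees."""
    front_overlap = max(0.0, w - d)           # both arcs cover the region ahead
    rear_overlap = max(0.0, (w + d) - 360.0)  # arcs meet again behind the head
    blind = max(0.0, 360.0 - (w + d))         # sector covered by neither eye
    return front_overlap, rear_overlap, blind

for label, w, d in (("owl-like", 124.0, 76.0),    # narrow fields, near-frontal axes
                    ("duck-like", 182.0, 180.0)): # wide fields, lateral axes
    front, rear, blind = visual_field(w, d)
    print(f"{label:9s}: front {front:.0f}, rear {rear:.0f}, blind {blind:.0f} deg")
```

With these illustrative values the owl-like eye gives a broad frontal overlap and a large blind area behind, while the duck-like eye, because w + d exceeds 360 degrees, yields narrow binocular strips both in front of and behind the head and no blind sector at all.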
This lateral placement of the eyes in the skull is quite unlike the situation in ourselves. The optic axes of our eyes, and hence the best-quality optics, project directly forwards. We do not try to look in the direction that we are travelling out of the sides of our eyes. This is, however, what all birds do to some extent. No bird species, not even an owl, has eyes positioned to face directly forwards, and many birds look forwards with the very periphery of their eyes’ optical systems. The consequence of this arrangement is that the best-quality optics of all birds project laterally, away from the axis of the head, in some species markedly so. This has important consequences for understanding both the foraging behaviour of birds and the role of vision in the control of their locomotion. It will be discussed in Chapter 5.
Another important property of the image is how much of the world is imaged at any one instant. Does the imaging device have a wide or a narrow field of view? This matters because it determines how much of the world around an animal’s head can provide information at any instant.
The image analysis system
It is the retina that starts the process of image analysis. It extracts the essential information that the image contains, encodes it in neural signals, and sends it via the optic nerve for further analysis by the brain.
When looking into any eye we see the pupil as a black void. We are looking through the optical system, and through the thin, transparent neural layers of the retina, to the uniformly black surface, the pigmented epithelium, that lies behind it. As outlined above, each retina contains many millions of individual nerve cells arranged in distinct layers. The most prominent of these layers contains the photoreceptor cells, the well-known rods and cones, which are discussed in detail below.
When light photons reach a photoreceptor, a neural signal is generated and relayed to a ganglion cell which in turn relays that information through the optic nerve to the brain. Of crucial importance at this first level of image analysis are the number, density, and distribution of photoreceptor and ganglion cells across a retina. The actual numbers of photoreceptors are very high. For example, in the eyes of eagles (and humans) the total number of cells in the whole retina probably exceeds 100 million. In all retinas, however, the number and density of photoreceptor cells are far from uniform. In some locations photoreceptor cells are packed close together, in others they are more sparse, and large differences in density can occur between locations less than 1 mm apart. In an eagle’s retina density can peak at about 450,000 photoreceptors per square millimetre, and in humans it reaches about 200,000, but less than 1 mm from the site of peak receptor density it drops to 16,000 photoreceptors per square millimetre. However, these changes in receptor density do not occur randomly across the retina; they occur in distinct patterns in the eyes of different species.
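As a rough indication of what such densities mean for image sampling, the following sketch, added here as an illustration rather than taken from the book, converts density into the average spacing between neighbouring receptors. It assumes an idealised hexagonal mosaic; real retinal mosaics are less regular, so the spacings are order-of-magnitude estimates only.

```python
import math

# Rough illustration: photoreceptor density (cells per square mm) to
# centre-to-centre spacing, assuming an idealised hexagonal mosaic where
# density D = 2 / (sqrt(3) * s**2), hence s = sqrt(2 / (sqrt(3) * D)).

def spacing_um(density_per_mm2):
    """Receptor spacing in micrometres for the given density."""
    s_mm = math.sqrt(2.0 / (math.sqrt(3.0) * density_per_mm2))
    return s_mm * 1000.0

for label, d in (("eagle peak", 450_000),   # figures quoted in the text
                 ("human peak", 200_000),
                 ("~1 mm away", 16_000)):
    print(f"{label:10s}: {d:>7,} cells/mm^2 -> {spacing_um(d):.1f} um apart")
```

On the hexagonal assumption, the quoted peak densities correspond to receptors roughly 1.6 µm (eagle) and 2.4 µm (human) apart, opening out to about 8.5 µm where density falls to 16,000 per square millimetre; the closer the packing, the more finely the image is sampled.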
The patterns of photoreceptors can be revealed and characterised using isodensity contour maps (Figure 3.5). These link locations across a retina which have the same cell densities. In the same way that contour maps link locations of the same elevation and allow us to appreciate the topography of a landscape at a glance, these density maps of retinal cells provide a ready means of comparing retinas, and hence some basic aspects of vision, between species. A number of examples