Haptic Visions. Valerie Hanson



practice and production create rhetorical possibilities.

      The STM as Visualization Technology: Context, Characteristics, and Rhetorical Significance

      Instruments are not only important because of their everyday use in the laboratory. Instruments are also important because they are situated within broader, complex trends in scientific and technical development. Situating an instrument within broader trends can illuminate cultural influences that are common to more than one instrument; therefore, analysis of one instrument may provide insights for the study of other instruments. The STM shares some rhetorically significant characteristics with other recent scientific and medical visualization technologies, such as PET scans, CAT scans, MRI, fMRI, and ultrasound. These rhetorically significant characteristics include the presence of complex mediation and interpretation practices, as well as the use of images to arrange and deliver large amounts of data.

      A brief summary of how the STM works illustrates the complexity that Galison and Crary discuss, and introduces characteristics common to recent digital visualization technologies. To use the STM, researchers apply a small voltage to the conductive metal tip of the STM. The tip is then brought close to the surface of a conductive substance (i.e., the sample). Although a gap exists between the tip and the surface atoms, when the voltage is applied to the tip, electrons of the surface atoms “tunnel” through the space (frequently a vacuum) between the tip and the surface to interact with the atoms on the tip (hence, the “tunneling” in the microscope’s name). The tunneling, a quantum effect, thus produces a measurable current at the tip because the tunneling electrons, like all electrons, are charged. The electron cloud becomes exponentially sparser the farther the electrons are from the atom’s nucleus, and the current changes according to the density of the cloud, so that the change in current corresponds to the tip’s distance from the surface atoms. The tip passes just above the surface of the sample by means of a piezoelectric element (or piezo) that expands or contracts slightly when voltage is applied, and that produces an electric current when pushed, enabling fine control of the tip and measurement of the changes in current produced by the different densities of electrons at each spot. The STM operates in one of two modes: a constant-current mode, which keeps the current at the tip constant by allowing the tip to move up and down, depending on the electron densities the tip encounters at the surface; or a constant-height mode, which measures the current at a tip that does not move up and down, but instead keeps a fixed distance from the sample. The STM then collects measurements, at fixed intervals as the tip scans the surface, of changes either in the movement of the tip or in the current.
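
      To make the steps above concrete, the following short Python sketch simulates a constant-current scan over a synthetic one-dimensional surface. It is a minimal illustration, assuming a simplified exponential dependence of the tunneling current on the tip-sample gap; the constants, function names, and proportional feedback scheme are illustrative assumptions rather than details of any actual instrument.

import math

# Illustrative constants (assumptions, not values from the text): the
# tunneling current falls off roughly exponentially with the tip-sample gap,
# I ~ I0 * exp(-2 * kappa * gap).
I0 = 1.0          # current at zero gap (arbitrary units)
KAPPA = 10.0      # decay constant, 1/nm (hypothetical)
SETPOINT = 0.5    # target current in constant-current mode
GAIN = 0.05       # nm of tip motion per unit of current error

def tunneling_current(gap_nm):
    """Exponential decay of current with distance, mirroring the text's point
    that the signal tracks the density of the electron cloud at each spot."""
    return I0 * math.exp(-2.0 * KAPPA * gap_nm)

def constant_current_scan(surface_heights, start_gap=0.5):
    """Scan a 1-D synthetic surface: at each sample point a simple feedback
    loop raises or lowers the tip (standing in for the piezo) until the
    current matches the setpoint; the recorded tip height is the data point."""
    tip_z = surface_heights[0] + start_gap
    recorded = []
    for h in surface_heights:
        for _ in range(200):                 # crude feedback iterations
            error = tunneling_current(tip_z - h) - SETPOINT
            tip_z += GAIN * error            # too much current -> retract tip
        recorded.append(tip_z)               # measurement at this scan position
    return recorded

if __name__ == "__main__":
    # A hypothetical surface with a single atomic-scale bump (heights in nm).
    surface = [0.0, 0.0, 0.05, 0.12, 0.05, 0.0, 0.0]
    print(constant_current_scan(surface))

      Run as written, the printed tip heights rise and fall with the bump, which is the sense in which the recorded movement of the tip stands in for the topography of the sample; constant-height mode would instead hold tip_z fixed and record the current values themselves.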

      To create an image from these measurements, the STM arranges the data in a matrix in the order of the sampled measurements and assigns each measurement a value. Values are then sent to a computer monitor that assigns each value to a pixel.18 The measurements are expressed in pixels as different values in a gray scale or false color scale (“false” because atoms do not have colors: light waves are too large to optically register atoms) to create more visible variation; the accumulated matrix of values, presented in pixels, constitutes the image. Pixels can be arranged in numerous formations to construct and communicate data in spatial arrangements (what we generally think of as a digital image) or as histograms that graph the numerical frequencies of the values assigned to the pixels. The imaging of data by the STM highlights the differences between measurements, and so makes visible the topographic or electronic properties of the sample.
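
      The arrangement of values into pixels and histograms can be sketched in the same illustrative spirit. The short Python example below maps a hypothetical matrix of tip-height measurements onto a 256-level gray scale and then counts those values as a histogram; the sample data, the number of gray levels, and the function names are assumptions for demonstration, not details of any particular STM software.

from collections import Counter

def to_grayscale(matrix, levels=256):
    """Map each measurement in a scan matrix to a gray-scale value. The shades
    are "false" in the sense the text describes: they encode relative
    differences between measurements, not optical color."""
    flat = [v for row in matrix for v in row]
    lo, hi = min(flat), max(flat)
    span = (hi - lo) or 1.0                  # avoid division by zero
    return [[round((v - lo) / span * (levels - 1)) for v in row]
            for row in matrix]

def histogram(pixel_matrix):
    """Count how often each pixel value occurs: the same data arranged as
    numerical frequencies rather than as a spatial image."""
    return Counter(v for row in pixel_matrix for v in row)

if __name__ == "__main__":
    # Hypothetical tip-height measurements (nm) sampled on a 3 x 4 grid.
    scan = [[0.00, 0.01, 0.05, 0.02],
            [0.01, 0.12, 0.15, 0.03],
            [0.00, 0.04, 0.06, 0.01]]
    pixels = to_grayscale(scan)
    for row in pixels:
        print(row)
    print(dict(histogram(pixels)))

      Mapping to a false color scale would follow the same pattern, with each normalized value indexing an entry in a color lookup table rather than a shade of gray.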

      Figure 3. “How an STM Works.” David Beck, (c) Exploratorium, www.exploratorium.edu.

      Mediation and Interpretation Practices

      As this summary of the operation of the STM suggests, the instrument uses extensive processes of mediation to convert measurements of phenomena into data, and data into images. Many scientific and medical visualization technologies that produce digital images engage in similarly complex processes of mediation to visualize the invisible—whether invisibility is due to location, such as within a living body (as ultrasound, CAT, PET, and fMRI technologies visualize); to size, such as below the threshold at which light operates (as probe microscopes such as the STM visualize); or to some other cause.19 Digital scientific visualization technologies frame phenomena (such as tunneling) as measurable and mathematically describable—an important component of scientific work (Lynch, “Externalized Retina” 170). This is often accomplished through non-lens methods that measure non-optical attributes, such as using radioactive tracers to follow molecule or atom flows (e.g., PET scans), magnetic resonance signals from atomic nuclei in tissue (e.g., MRI, fMRI), ultrasound waves to measure tissue density (e.g., ultrasound), or probes to measure non-visual properties such as atom interactions (e.g., STM) or friction (e.g., atomic force microscope).

      The mediation practices of the STM, like the mediation practices used in the visualization technologies mentioned above, operate within broader scientific practices of using visualizations. Drawing from an ethnographic study of image-making practices in a scientific laboratory, science studies scholars Klaus Amann and Karin Knorr Cetina document the process of producing visual data in science as first seeing, then deciding what the data is, and finally determining what the evidence is. Each of Amann and Knorr Cetina’s three modes of practice involves different goals and practices that are complex and socially organized. Amann and Knorr Cetina comment, “just as scientific facts are the end product of complex processes of belief fixation, so visual ‘sense data’—just what it is scientists see when they look at the outcome of an experiment—are the end product of socially organized procedures of evidence fixation” (86). The mediation practices that researchers use to operate the STM and other visualization technologies fit within Amann and Knorr Cetina’s modes of practice. The practices that Amann and Knorr Cetina identify include multiple processes of mediation that are not only grounded in scientific practice but may also include broader social practices, thus indicating one way in which visualization technologies form points of intersection for the “mechanical techniques, instrumental requirements, and socioeconomic forces” that Crary mentions (8). Therefore, the practices of mediating between data and image also form a site for analyzing the rhetorical work of instruments.

      Along with extensive mediation practices, inscriptions tend to require extensive interpretive practices to understand what is being shown, even though visual inscriptions may present phenomena in ways that look simple or apparent. Amann and Knorr Cetina’s study underscores the complexity of the processes of visualization and interpretation in making scientific knowledge. The apparent simplicity of the image as a form for presenting data sometimes leads to confusion about how to read visual inscriptions made by instruments, even among experts.20 For example, in the early days of the STM, researchers occasionally misinterpreted what the images showed (Mody, Instrumental 12–13; Woodruff 75). In a study of PET scans, Joseph Dumit explains that some of the difficulties researchers have in interpreting scans involve not only habits of seeing but also theoretical questions of interpretation (68–69).21 Thus, practices of mediation and interpretation of the productions of instruments form two aspects of the complexity of digital visualization technologies used to create inscriptions; interpretive practices, like mediation practices, suggest possible directions for analyzing rhetorical functions.

      Images of Data

      Galison’s account of the merging of two traditions of presenting evidence in microphysics in the early 1970s points to an origin for the development of recent scientific and medical visualization technologies that present large amounts of data in image form, such as the STM, MRI, and PET scans (Image and Logic 570). Galison argues that the development and use of electronic images in physics for the first time allowed researchers from these two traditions to combine methods and see images as evidence. Because large amounts of data could compose images, researchers who had focused on images like photographs for evidence could use the same methods—images—as those researchers who relied on statistical data for evidence. The use of images to present evidence authoritatively within the field was so important, Galison argues, that “the controllable image” has become the main form that data has taken in the sciences (Image and Logic 810).22 Indeed, many scientific and medical visualization technologies that rely on data in image form were developed and used in the 1970s.23 The STM, too, was developed within the trend of using data to compose images: The inventors of the STM first filed for patents in 1978 in Switzerland,

