the decade between 1965 and 1975. Moore included a graph (see Figure 2.3) plotted on a logarithmic scale that demonstrated this doubling of components on a chip from 1962 to 1965 and then extended the plot into the future. I have reversed the X and Y axes in the version below (with time on the Y axis) for the sake of clarity. Note that this calculation was based on just four confirmed data points (1962 to 1965), and it was quite a bold prognostication given the predicted doubling of components at yearly intervals. Yet Moore’s prediction of this remarkable technological feat proved prescient, even though the actual doubling intervals turned out to be closer to 18 to 24 months and are now approximately 30 months.
Figure 2.3 Moore’s Law Re-plotted. Source: Modified by author after the original in Electronics, Vol. 38, No. 8.
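To make the sensitivity of these doubling intervals concrete, the following minimal sketch (an illustrative aside, not from Moore’s paper) projects component counts with the standard exponential formula N(t) = N0 · 2^(t/T); the 1965 baseline of 64 components and the ten-year horizon are assumptions chosen for illustration.

```python
# Illustrative sketch: projecting component counts under Moore's doubling
# rule N(t) = n0 * 2**(t / T), where T is the doubling interval in months.
# The baseline of 64 components and the 120-month horizon are assumed
# values for illustration only.

def projected_components(n0: int, months: int, doubling_interval: float) -> float:
    """Components on a chip after `months`, doubling every `doubling_interval` months."""
    return n0 * 2 ** (months / doubling_interval)

if __name__ == "__main__":
    n0 = 64        # hypothetical 1965 baseline
    horizon = 120  # ten years, echoing Moore's 1965-1975 projection
    for interval in (12, 24, 30):  # yearly, revised two-year, and current ~30-month pace
        count = projected_components(n0, horizon, interval)
        print(f"doubling every {interval} months -> {count:,.0f} components after 10 years")
```

Run as written, the sketch shows how much the interval matters: a 12-month doubling yields 65,536 components after a decade, while 24- and 30-month intervals yield only 2,048 and 1,024 respectively.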
Three years later, Moore left his position as director of the research and development laboratories at Fairchild to start a new company with partners Robert Noyce and Andrew Grove. Its name was short and memorable – the Intel Corporation. In 1975, in a speech to the Institute of Electrical and Electronics Engineers (IEEE), Moore revised the projected doubling interval from one year to two.10 For several decades, Moore modestly declined the honor of having the law named after him, attributing the name to California Institute of Technology computer scientist Carver Mead.11 The law provided a prescient frame of reference for the exponential growth in the power of integrated circuits over the following 60 years. It is important to note that the meaning of the law has evolved over time, from a literal count of the transistors on a chip to a contemporary interpretation related to the processing power and speed of the components on a multifunction chip.12
Computer users understand and appreciate the improvements in processing speed and the reduced power demands of these chips, especially those developed in the last decade. Other uses of IC technology are less obvious. Automobiles, for example, contain a number of computer chips that govern critical functions such as fuel injection, collision avoidance, and the wireless links that automatically sync to a mobile phone for hands-free use, reducing driver distraction. The development of driverless vehicles by Tesla, Google, and other manufacturers relies on a host of sensors mounted around the car or truck that track barriers and obstacles and relay this information instantaneously to AI-driven processors controlling steering and braking. This sensor-processor network can respond to a potential collision much faster than a distracted human can react and take evasive action.
Implications for Computing and the Digital Universe
Computer scientists commonly use the term “ubiquitous computing” to describe a world filled with “intelligent” devices. The increase in integrated circuit speed and power, combined with the dramatic drop in price per transistor, has made it possible to embed powerful chips in almost every device or tool that uses electricity. These embedded chips make it possible to add a remarkable variety of intelligent functions to what were previously “dumb” tools and appliances. The telephone is an ideal example. What was once a very simple device that could be used intuitively by raising the handset to one’s ear and dialing a number with a rotary wheel or keypad is now a much more complex instrument. My triple-camera-equipped mobile phone with a hi-res screen came with a 79-page instruction book. In the near future, mobile phone users may have to take a short course in phone feature programming to learn how to use all the functions built into their mobile phone/computer/camera. The new generation of chips used in mobile phones after 2022 will be as powerful as those used in desktop and tablet PCs.13
There was a time when a person could walk into an unfamiliar home and easily make a phone call, turn on the television, or perform a simple task such as boiling a kettle of water. We are confronted today by appliances with astonishing capabilities and with equally complex operational learning curves. The future will see greater applications of artificial intelligence (AI) applied in product design to ease the stress on users, but as the cliché states, “there is a great future for complexity.” The challenge for engineers and product designers in coming decades will be to create “smart” devices that have great functional power, but are also intuitive to operate.
The implications of Moore’s law will be significant for citizens of nations that use advanced digital technology. Internet access is now available to more than 60 percent of the global population of over 7.7 billion, up from 29 percent in 2011, so these implications extend to a significant portion of humanity.14 Chip performance will increase while device prices continue to fall. Storage of digital content on chips is now so cheap that electronic devices, especially phones and cameras, can have enormous storage capacity. Chips will be embedded in a wide range of products with remarkable levels of intelligence, a trend known as the Internet of Things (IoT). The complexification of the telematic world will increase at a steady pace, with consumers happy if these devices are easy to use and maintain, and disgruntled if they are not.
In addition to complexification, concern over the diminishment of privacy in this digital universe will become a significant issue in many nations of the world. With cameras embedded in every mobile phone and surveillance systems observing almost every commercial transaction, there are already well-publicized concerns about the negative effects on personal privacy. Advances in digital face-recognition technology that enable users to log into their phones quickly and tag friends in social media posts have taken on dystopian applications in nations that use the technology for social control and surveillance. We will examine these and related digital privacy issues in Chapter 12.
Technological Determinism
Technological determinism is the view that a society’s technology determines its history, social structure, and cultural values. The term is often used to criticize, as overly “reductionist,” those who credit technology as the central governing force behind social and cultural change. Author Thomas Friedman, in his book The World is Flat, freely admits to being a technological determinist, stating that “capabilities create intentions” in regard to the role that technology plays in shaping how we live.15 Examples he cites are the internet facilitating global e-commerce, and work-flow technologies (and the internet) making possible the off-shoring and out-sourcing of disaggregated tasks around the world. Friedman states:
The history of economic development teaches this over and over: If you can do it, you must do it, otherwise your competitors will (and) … there is a whole new universe of things that companies, countries, and individuals can and must do to thrive in a flat world.16
It is rare to find an observer of modern life willing to go on record in this regard, and I commend Friedman’s courage in doing so. His perspective is worth our critical consideration. While it is clear that a wide range of factors influence social change, including culture, economics, and politics, among many others, Friedman advances technology to a privileged position due to its ubiquity in contemporary life, and he is correct in his assessment that “capabilities create intentions.” The development of the MP3 compression format for music files offers a good case study. When recorded music was only available on vinyl records, there were few options for copying songs. As technology evolved, one could make a cassette tape of a record, but the copy was of poor quality and one had to fast-forward and rewind the tape to find a desired song. Once digital technology arrived with the advent of music on compact discs, users could “rip” individual songs onto a computer’s hard drive as digital files.
Copyright holders such as record companies and musicians were not immediately concerned, since users still had to buy the CD to copy the music. However, with the rapid spread of the MP3 file format,17 users of this technology built large libraries of songs in this format. It wasn’t long until a company, Napster, developed a unique technology that let users of its service copy music files to their own computers from other users who had the desired songs. Then