interwoven with interdependent complex systems.
If AI intrudes into the realms of personal life, lifestyle change and the self, one development which is especially prominent is the ever-increasing automation of large tracts of everyday life. ‘Automated society’ and ‘automated life’ are intimately interwoven. In contemporary algorithmic societies, the automation of forms of life and sectors of experience is driven by an apparently invincible socio-technical dynamic. Automation in this sense has a profoundly transformative impact on almost everyone, a phenomenon which carries both positive and negative consequences. On the positive side of the equation, the promises of automated life include significantly improved efficiency and new freedoms. In the area of healthcare, for example, more and more people now wear self-tracking devices, which monitor their bodies and provide data on sleep patterns, energy expended, heart rate and other health information. Medical sensors worn by patients provide medical practitioners with biometric information – such as glucose levels in diabetics – that is vital for the management of chronic diseases. Advances in medical imaging facilitate the automated exchange of data from hospitals to doctors anywhere in the world. Medical robots can be used to conduct operations using real-time data-collection across vast distances and time zones.
A parallel set of developments is occurring in education. International collaborative projects can now be conducted with researchers and students communicating with each other anywhere in the world through real-time language translation using applications like Microsoft Teams and OneNote. Personalized learning that deploys AI to adapt teaching methods and pedagogic materials to students studying at their own pace has been rolled out by various online higher education institutions. Automated grading software that relieves schoolteachers of the repetition of assessing tests is now commonplace, freeing up time for educators to work more creatively with students.
Such developments have obvious advantages, and many commentators argue that algorithmic intelligent machines bring consistency and objectivity to public service delivery, thus creating huge benefits for society as a whole. Automated systems also bring revolutionary changes, it is argued, to many routine tasks of everyday life. AI is used to automatically craft personalized email and write tweets or blog posts. Smart homes are thoroughly automated environments, from climate control and air conditioning to personal security systems. At work, professionals and senior managers increasingly make decisions powered by automated tools, including software allocating tasks to subordinates as well as the automated evaluation of their performance. In retail, shoppers scan barcodes and pay at the checkout with their smartphones, consumers reserve products and arrange delivery without ever having to interact with store staff, and customer queries and complaints – fuelled by ever-rising expectations – are processed by automated customer-care centres. Indeed, the rise of AI in reshaping everyday life has led Stanford computer scientist John Koza to speak of the age of ‘automated invention machines’. Koza underscores the arrival of a world where smart algorithms don’t just replicate existing commercial designs but ‘think outside the box’, creating new lifestyle options and driving consumer life in entirely new directions.
The recent explosion in data-gathering, data-harvesting and data-profiling underlies not only challenges confronting everyone in terms of lifestyle change and the politics of identity, but also institutional transformations towards a new surveillance reality. The quest for the collection, collation and coding of ever-larger amounts of data, especially personal data, raises a number of interesting questions about the rise of digital surveillance. What are the tacit assumptions that underpin contemporary uses of AI technologies on the one hand and questions about data ownership on the other hand? Are people right to be worried that the digital collection of public and private data – by companies and governments alike – appears to become ever more intrusive? How are AI technologies marshalled by companies to manipulate consumer choice? How have governments deployed AI to control citizens? What are the human rights implications inherent in the current phase of AI? What implications follow from AI-powered data-harvesting for self, social relationships and lifestyle change? How can AI and other new technologies be used to counter the unfair disadvantages people routinely encounter on the basis of their race, age, gender and other characteristics? What are the emergent connections between data-collection on the most intimate aspects of personal experience and the changing nature of power in the contemporary world? Chapter 7 addresses all of these issues.
When looking at the institutional dimensions of digital surveillance, certain immanent trends are fairly clear. In the contemporary period, ‘surveillance’ refers to two related forms of power. One is the accumulation of ‘mass data’ or ‘big data’, which can be used to influence, reshape, reorganize and transform the activities of individuals and communities about whom the data is gathered. AI has become strongly associated with this codification of data – from the predictive analytics of consumer behaviour to the tracking and profiling of various minority groups. The increasingly automated character of data traces has led many critics to warn of the erosion, undermining and even collapse of human rights in the contemporary age. Bruce Schneier, in Data and Goliath, argues that the capacity of corporations and governments to expand the range of data available on citizens, consumers and communities is greater than it has ever been. An example of this is the kinds of data collected by tech giants such as Google, Facebook, Verizon and Yahoo; such large masses of digital data pertaining to the lives of individuals are used, increasingly, to generate profit as well as future commercial and administrative value. Digital surveillance is thus central not only to the AI-powered data knowledge economy, but also to government agencies and related state actors.
Another aspect of digital surveillance is that of the control of the activities of some individuals or sections of society by other powerful agents or institutions. In AI-powered societies, the concentration of controlled activities arises from the deployment of digital technologies to watch, observe, trace, track, record and monitor others through more-or-less continuous surveillance. As discussed in detail in chapter 7, some critics follow Michel Foucault in his selection of Jeremy Bentham’s Panopticon as the prototype of social relations of power – adjusted to digital realities with ‘prisoners’ of today’s corporate offices or private residences kept under a form of twenty-four-hour digital surveillance. Certain kinds of technological monitoring – from CCTV cameras in neighbourhoods equipped with facial recognition software to automated data-tracking through Internet search engines – lend support to this notion that digital surveillance is ever-present and increasingly omnipotent.
Yet in fact the control of social activities through digital surveillance is by no means complete in algorithmic modernity, where data flows are fluid, liquid and often chaotic, and many forms of self-mobilizing and decentred contestation of surveillance appear. This development has not come about, as followers of Foucault contend, because of panoptical power advancing digitalization-from-above techniques of surveillance. Rather, what occurs in the present-day deregulated networks and platforms of digital technology is users sharing information with others, uploading detailed personal information, downloading data through search, retrieval and tagging on cloud computing databases, and engaging in a host of other behaviours which contribute to the production, reproduction and transformation of the dense nets that I call disorganized surveillance. This development should not be understood in terms of organized, control-from-above, Panopticon-style administered surveillance being phased out and decommissioned. Disorganized surveillance is not so much a control of the activities of subordinates by superiors as the dispersion and liquidization of the monitoring of self and others in coactive relation to automated intelligent machines.
Notes
1 There have been some notable exceptions to the mainstream retelling of AI history, and important contributions which seek to narrate alternative histories of AI. The work of Genevieve Bell is of special significance in this connection. See, for example, Paul Dourish and Genevieve Bell, ‘“Resistance is Futile”: Reading Science Fiction and Ubiquitous Computing’, Personal and Ubiquitous Computing, 18 (4), 2014, pp. 769–78; and Genevieve Bell, ‘Making Life: A Brief History of Human–Robot Interaction’, Consumption Markets & Culture,