Making Sense of AI. Anthony Elliott
In seeking to demonstrate the power interests realized in and through artificial intelligence, it is necessary to characterize the complex systems of AI. Over the course of the twentieth century and into the twenty-first century, a number of interdependent complex systems served to create a major field of AI, spun off from economic, bureaucratic, industrial and military forces, and each typically providing major resources for the advancement of AI in the contemporary world. The interdependent complex systems, as I discuss at length in chapter 4, include:
1 the scale, scope and extensity of AI in terms of research and innovation, industry and enterprise, as well as technologies and consumer products;
2 the intricate interplay of ‘new’ and ‘old’ technologies, and of the role of established technologies persisting or transforming within many modes of more recent AI and automated intelligent machines;
3 the globalization of AI and the centrality of AI technologies and industries in high-tech digital cities;
4 the growing diffusion of AI in modern institutions and everyday life;
5 the trend towards complexity, at once technological and social;
6 the intrusion of AI technologies into lifestyle change, personal life and the self;
7 the transformation of power as a result of AI technologies of surveillance.
The complex systems in which AI is enmeshed in the contemporary world are at once economic, social, political, material and technological. These interconnected complex systems, as I seek to show, should not be reduced to separate ‘factors’ or ‘processes’. There are no automated intelligent machines without complex systems. As a result, AI is a field characterized by transformation, unpredictability, innovation and reversal. The interdependent complex systems of AI are continually adapting, evolving and self-organizing.
In the early decades of the twenty-first century, there have been two major debates about technology and the general conditions of society and world order. One concerns a possible ‘autonomization’ of society, perhaps even of culture and politics. The other concerns broad, massive changes in technological systems, sometimes labelled the coming AI revolution. AI is often presented as an alternative to existing society, which is represented by some critics as politically limited or by other critics as fundamentally flawed. The new, complex systems underpinning the stunning technological advances of AI are often pictured as a utopian pathway to a better world and a more equitable society. Advances in AI, especially powerful predictive algorithms, promise an ever-greater digitalized measure of the world. According to some critics, AI is nothing if not mathematical precision. If we return to complexity theory, however, things are not so clear-cut. Utopian forecasts which emphasize precision or control (of people, of systems, of societies) fail to take into account that such interventions – even the so-called exquisitely precise technological interventions of AI – can generate unanticipated, unintended and opposite, or almost opposite, impacts. One reason for this is the force field of tiny but potentially major changes often described as ‘the butterfly effect’. In 1972, Edward Lorenz posed the question: ‘Does the flap of a butterfly’s wings in Brazil set off a tornado in Texas?’ Lorenz had been studying computer modelling of weather predictions, and he discovered that certain systems – not only meteorological systems, but traffic systems and transport systems – are intrinsically unstable and unpredictable.
Notwithstanding the gigantic transformations and combinations of new technology today, some critics invoke the butterfly effect thesis – of highly improbable and unexpected events – to argue that AI technologies, no matter how powerful and advanced, will always fall short of their predictive mark. James Gleick, in Chaos: Making a New Science, argues that AI is unable to secure the goal of precision control – or, we might add, controlled precision – because the smallest variations in measurement may dramatically disrupt the results.
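The sensitivity invoked here can be made concrete with a minimal numerical sketch. The following illustration (my own, using the standard textbook parameters for Lorenz's 1963 equations, not values given in the text) integrates two trajectories whose starting points differ by one part in a billion; within a short simulated time they diverge completely, which is the phenomenon Lorenz identified and Gleick popularized.

```python
# A minimal sketch of sensitive dependence on initial conditions
# (the 'butterfly effect') using the Lorenz system. Parameter values
# (sigma=10, rho=28, beta=8/3) are the standard textbook choices.

def lorenz_step(state, dt=0.01, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    """Advance the Lorenz equations one forward-Euler step."""
    x, y, z = state
    return (x + sigma * (y - x) * dt,
            y + (x * (rho - z) - y) * dt,
            z + (x * y - beta * z) * dt)

a = (1.0, 1.0, 1.0)
b = (1.0 + 1e-9, 1.0, 1.0)   # differs by one part in a billion

max_sep = 0.0
for _ in range(3000):        # 30 time units at dt = 0.01
    a = lorenz_step(a)
    b = lorenz_step(b)
    sep = sum((p - q) ** 2 for p, q in zip(a, b)) ** 0.5
    max_sep = max(max_sep, sep)

# The microscopic initial difference grows to the size of the
# attractor itself: prediction beyond a short horizon is impossible.
print(max_sep)
```

No refinement of measurement rescues the forecast: shrinking the initial perturbation only delays, never prevents, the divergence — which is precisely the limit on 'precision control' discussed above.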
It has been argued previously that separating right and wrong predictions of the future is a task that not even computational analysis will solve; any such attempt is bound to fail. Our complex world, as well as our opaque lives and social interactions, are far more labyrinthine, and even chaotic, than the mathematical precision of AI allows. This does not mean, however, that all predictive algorithms circulate in a self-referential, sealed-off technical domain; from the fact that AI cannot explain, or even reveal, the complexity that shapes social events and global trends, it does not follow that automated intelligent machines do not influence global complexity or the engendering of catastrophic change. Perhaps instead of talking about the long-dreamt-of controlled precision, or precise control, of AI, it would be more in keeping with the conditions of current global systems to speak of algorithmic cascades: a never-ending, always incomplete, open-ended and unfinished process whereby the consequences of human–machine interactions spread quickly, irreversibly and often chaotically throughout interdependent global systems. These algorithmic cascades might consist of abrupt switches, sudden collapses, system trips, phase transitions or chaos points. A recent example of such an algorithmic cascade has been the dramatic rise of automated weapons systems, such as parasite unmanned aerial vehicles (UAVs). These UAVs are in effect tiny flying sensors, with automatically operating algorithms processing information, and they have significantly disturbed the assumption that the nation-state holds a monopoly on the means of violence, as well as contributing to the proliferation of new wars. Similar algorithmic cascades can be identified throughout the fields of healthcare, education and social welfare, as well as work, employment and unemployment.
The point is that a new cloud of uncertainty appears with the emergence, spread and dissemination of algorithmic cascades. Such AI-driven change is non-linear; there is no easy connecting line between causes and effects. Moreover, algorithmic cascades do not stand in opposition to the complexity, or even chaotic feedback loops, of social organization and social systems; they are, rather, a newly added dimension of complex global systems and, far from arresting their dynamics, add fuel to the fire.
The term ‘interdependent complex systems’ can be misleading, since it leads many people to think of either the cold, detached world of bureaucratic administration or the technical terrain of computational classification. Discussions of technological innovation, as we will see, often tend to assume that AI operates as an ‘enhancement’ for already formed individuals to deploy in their lifestyles, careers, families and wider social interactions. This is perhaps true at some trivial level, but what such writers often tend to miss is that AI technologies are supporting an equally profound transformation of cultural identity. Smartphones, self-driving cars, automated office environments, chatbots, face-recognition technology, drones and now the integration of all these as ‘smart cities’ reconfigure ways of doing things and forms of activity so as to cultivate new configurations of personhood. Just think, for example, of smartphones. Is it right to say that people have these intelligent machines, or are people thoroughly absorbed into the machine? Licklider spoke of a ‘man–machine symbiosis’, as we have seen. Whilst we cannot speak of man any more in such a universal form, Licklider’s general argument arguably holds good. My contention throughout this book is that a critical understanding of AI technologies requires a re-evaluation of the kinds of subjecthood they foster, while an outline of newly emergent cultural identities must include an elaboration of their relation to AI and automated intelligent machines. But, again, it is essential to see that the emergence of new individual identities or lifestyle options does not operate only according to personal preference or consumer choice – as much of the discussion of the culture of AI tends to assume.
This brings us back to interdependent complex systems. AI is not simply ‘external’ or ‘out there’; it is also ‘internal’ or ‘in here’. AI technologies intrude into the very centre of our lives, deeply influencing personal identity and restructuring forms of social interaction. To say this is to say that AI powerfully impacts how we live, how we work, how we socialize and how we create intimacy, as well as countless other aspects of our public and private lives. But this is not to say, however, that AI is simply a private matter or personal affair. If AI cultivates new configurations of cultural identity, these emergent algorithmic forms of identity are structured, networked and enmeshed in economies of technology. That is to say, today’s