We Humans and the Intelligent Machines. Jörg Dräger

specific tasks. People have to define the tasks and train the devices, because an algorithm does not know on its own whether a photo depicts a dog or a house or whether a poem was written by Schiller or a student in elementary school. The more specific the task and the more data the algorithm can learn from, the better its performance will be.

      In contrast to human intelligence, however, AI is not yet able to transfer what it has learned to other situations or scenarios. Computers like Deep Blue can beat any professional chess player, but would initially have no chance in a game on a larger board with nine times nine instead of eight times eight squares. Another task, such as distinguishing a cat from a mouse, would completely overwhelm these supposedly intelligent algorithms. According to industry experts, this ability to transfer acquired knowledge will remain the purview of humans for the foreseeable future.6 Strong AI, also called superintelligence by some, which can perform any cognitive task at least as well as humans, remains science fiction for the time being. When we talk about AI in this book, we therefore mean what is known as weak or narrow AI, which can achieve a limited number of goals set by humans.

      The debate about artificial intelligence is full of myths. Digital utopians and techno-skeptics both sketch out visions of the future which are often diametrically opposed. Some consider the emergence of superintelligence in the 21st century to be inevitable; others say it is impossible. At present, nobody can seriously predict whether AI will ever advance to this “superstate.”7 In any event, the danger currently lies less in the superiority of machine intelligence than in its inadequacy. Algorithms that are not yet mature make mistakes: Automated translations produce nonsense (hopefully not too often in this book), and self-driving cars occasionally cause accidents that a person at the wheel might have avoided.

      Instead of painting a dystopian caricature of AI and robots, we should put our energy into the safe and socially beneficial design of existing technologies. Where humans and machines interact successfully, the strengths and weaknesses of both sides can be meaningfully balanced. This is exactly the subject examined in the following two chapters.

       3 People make mistakes

      “Artificial intelligence is better than natural stupidity.” 1

      Wolfgang Wahlster, Former Director of the German Research Center for Artificial Intelligence

      To err is human. This well-known saying provides consolation when something fails; at the same time, it seems to dissuade us from pursuing perfection. A mistake can even have a certain charm, especially when a person is self-deprecating about her own fallibility. But the original Latin phrase from which the saying derives is longer than just those first words. Written by the theologian Saint Jerome more than 1,600 years ago, the complete quotation is: Errare humanum est, sed in errore perseverare diabolicum. To err is human, but to persist in error is diabolical.

      As forgivable as a small lapse that does not entail any serious consequences might seem, systematic misjudgments are tragic when they relate to existential questions. Cancer diagnoses, court decisions, job hires – generosity should not be the watchword here when it comes to avoidable mistakes.

      Algorithms can help when people reach their cognitive limits. There is an increasing need for algorithmic support, especially in areas that are particularly important to society, such as medicine or the judiciary. On the one hand, psychological research has shown that the quality of human decisions is suboptimal even when the decisions are of great significance and made by experts. On the other, big data and the computing power to process it have led to new ways of optimizing diagnoses, analyses and judgments.

      While scientists have become more adept at understanding the limits of our cognitive abilities, advances in IT are making ever more information available to us. Evaluating that information, however, is becoming increasingly challenging, even overwhelming, for human brains. To deny ourselves the support machines can provide would mean persisting in error. By accepting such support, we could overcome our intellectual limitations, which manifest themselves as information overload, flawed reasoning, inconsistency and the feeling of being overwhelmed when dealing with complex situations. Refusing to do so would not be human in Saint Jerome’s sense, but diabolical.

       Information overload: Drowning in the flood of data

      The radiology department at the University Hospital in the German city of Essen is nothing but a huge data-processing machine. It is big enough that visitors can take an extended stroll through the premises. The rooms on the right and left of the long corridor are, even now, on a sunny afternoon, dim and dark. With the blinds closed, radiologists sit in front of large monitors and process data. They are the central processing units of radiology. The specialists click through information: patient files, x-rays, scans, MRIs. In one room, images of the brain of a stroke patient flicker across the monitors while, next door, cross-sectional images of a lung with metastases are examined.

      The radiologists at the hospital look at a good 1,000 cases per day. The amount of information they have to process has multiplied in recent years – and not only in Essen. Researchers at Mayo Clinic in the United States have evaluated 12 years’ worth of the organization’s data and duty rosters. During that time, not only did the number of annual examinations almost double, but the volume of recorded images also increased rapidly. In 1999, one doctor examined 110 images per patient, compared to 640 in 2010. Mayo Clinic hired additional staff, but not as fast as the data to be analyzed grew. The result is a challenge: While in 1999 a doctor viewed and evaluated three images per minute, in 2010 she had to look at more than 16 images per minute – one every three to four seconds – in order to cope with the information flooding in over the course of an eight-hour work day.2
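
      A rough back-of-the-envelope check of the Mayo Clinic figures cited above – the rates and the eight-hour work day come from the study, the arithmetic is merely an illustrative sketch, and the daily total is a derived estimate rather than a figure reported there:

$$
\frac{60\ \text{s}}{16\ \text{images}} \approx 3.75\ \text{s per image}, \qquad 16\ \frac{\text{images}}{\text{min}} \times 60\ \frac{\text{min}}{\text{h}} \times 8\ \text{h} \approx 7{,}700\ \text{images per working day}.
$$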

      For patients, the extra data can be life-saving. When Michael Forsting, Director of Radiology at the University Hospital in Essen, looked at cross-sectional images of the brain as a young doctor in the 1980s, each one showed a section 10 to 12 millimeters thick. There was a significant probability of overlooking a metastasis seven millimeters in diameter. Today, each image depicts one millimeter of the brain. The seven-millimeter metastasis, which used to remain undetected between images, is now visible in seven pictures. New technical processes are capturing reality in much greater detail. Hospitals, however, no longer have the human resources to take full advantage of the quality of their findings. As Forsting says: “We have 10 times more pictures. A CT of the brain used to consist of 24 images, now it’s 240. And someone has to take a look at them.”3

      The challenge facing radiology is typical of those in other areas as well, such as identifying the fastest route through urban traffic or coping with the mass of scientific literature on any given subject. Technical advances are increasing the amount and improving the quality of data, and technology must help determine which parts of this flood of information are relevant. Doctors can now create images of the body down to the smallest cell using computed tomography. Instead of palpating for tumors, radiologists use CT or MRI scans to search for abnormal cellular changes. These days, more data are available than a physician can effectively process using traditional methods. Even the best radiologists would not be able to evaluate 160 images per minute instead of today’s 16. Any attempt to achieve better results in this way is doomed to fail, since the quality of a physician’s judgment declines as he or she grows tired.

      An increase in personnel would not be a solution. Apart from the question of how to fund such a move in today’s already expensive healthcare system, the race against the constantly growing amount of data cannot be won with new hires. Algorithmic tools are needed instead, and doctors should be open to them. After all, monotonously processing x-rays in a darkened room is not what humans do best, nor is it the core competence of highly trained radiologists – and it is certainly not the reason why someone chooses this profession.

       Flawed reasoning: Making mistakes and discriminating

      Tim Schultheiss and Hakan Yilmaz have a lot in common. Both are looking for an apprenticeship.

