weight, cost, and so on—and the tool generates options and evaluates each one using simulation tools. The designer simply picks the option that best meets their needs—perhaps the one that's cheapest to make, easiest to manufacture, or lightest in weight.
Generative AI will help to design lighter aircraft, more crash-resistant cars, and stronger, lighter robots. Generative architecture will improve the structural integrity and design of new buildings.
Researchers at the University of California, Berkeley, working in partnership with Glidewell Dental Lab, use GANs to design dental crowns. The AI uses digital x-rays of a patient's upper and lower jaw to design a crown that perfectly fills the gap in the patient's tooth line, optimizes bite contact, and looks aesthetically pleasing. Researchers claim that AI-generated crowns outperform those designed by humans. The approach should speed crown production, reduce costs, and free dentists to spend more time generating revenue by working in patients’ mouths, rather than designing crowns on a CAD machine in a back office.
Generative AI is an example of a broader category of AI that I refer to as “collaborative AI.” Collaborative AIs operate in partnership with humans in a creative process. Humanity's use of tools distinguishes us from most other species. Traditional tools are subordinate—we wield a hammer, drive a car, and program a computer. Collaborative AI changes our relationship with tools. These systems are no longer subordinate; they co-create with us. Collaborative AIs aren't just tools; they're partners.
Collaborative AI will co-create visuals for presentations, advertisements, and marketing brochures. Collaborative email software will auto-compose responses. Collaborative management software will co-create plans for complex projects. Many job functions will benefit from collaborative AI in the coming years.
Future Uses of AI
In a fast-moving field, new use categories, beyond the eight listed earlier, are bound to emerge. GANs, which are central to several of these application categories, were invented relatively recently. As research embraces techniques beyond deep learning—cause-and-effect AI, common-sense AI, capsules, and others—artificial intelligence will solve even more business problems than it can today.
AI is a big deal. Every leader should pay close attention. Every organization must understand how AI will shape product development, business operations, customer service, and workforce management.
How AI Works
You don't have to understand how AI works to use it. But such insight can help you to understand the capabilities and limitations of today's technology. While the following description is designed to be accessible to nontechnical types, feel free to skip to the next section if it gets too far into the weeds for you.
Neural Networks, Training, and Models
Neural networks underpin most of today's artificial intelligence. They operate quite differently from traditional digital computers. Traditional computers are glorified adding machines. Neural nets are organized more like the highly interconnected structures found in our brains.
Neural nets are made up of connected “nodes,” which act like neurons. Each node holds a numerical value. Unlike binary computers that work with zeros and ones, each node can have a range of values; the range depends on the application. Nodes are arranged into layers. The first layer is known as the input layer and the final layer is known as the output layer. All of the layers in between are known as hidden layers (see Figure 1.1).
Figure 1.1 A simple neural network.
Typically, the more layers there are, and the more nodes in each layer, the more capable the neural network. Neural networks with many layers are known as “deep” neural networks. This is where the term deep learning comes from.
Every node in the hidden layers has both inputs and outputs. Each node is connected to every node in the previous layer and every node in the next layer. The value of each node is influenced by the values of all the nodes it is connected to in the previous layer. Here's the tricky bit: some nodes have a stronger influence than others on the value of the nodes that follow them; their influence is weighted. The value of each node is the weighted sum of the values of the nodes in the previous layer. These weightings are determined during the training phase and collectively make up what is known as “the model.” The model determines the functionality of the neural network: different weightings, different functionality. Information passes across the network from the input layer to the output layer via this complex web of weighted interconnections.
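If you're comfortable with a little code, the sketch below shows that layer-by-layer flow of values in Python. The layer sizes and random weightings are made-up examples, and the sigmoid “squashing” step is a practical detail the description above glosses over.

```python
import numpy as np

def sigmoid(x):
    # Squashing step used in practice; the description above omits it to keep things simple.
    return 1.0 / (1.0 + np.exp(-x))

def forward(inputs, layer_weights):
    """Carry values layer by layer from the input layer to the output layer."""
    values = inputs
    for weights in layer_weights:
        # Each node's value is a weighted sum of every node in the previous layer.
        values = sigmoid(weights @ values)
    return values

# Hypothetical network: 3 input nodes, one hidden layer of 4 nodes, 1 output node.
rng = np.random.default_rng(seed=0)
layer_weights = [rng.normal(size=(4, 3)),  # input layer -> hidden layer
                 rng.normal(size=(1, 4))]  # hidden layer -> output layer
print(forward(np.array([0.2, 0.7, 0.1]), layer_weights))
```

Swap in a different set of weightings and the same structure computes something entirely different—which is exactly why the weightings, collectively, are the model.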
Neural networks are trained with a process known as backpropagation, or “backprop” as it's known in the business. The details of how backprop works are beyond the scope of this book. At a high level, backprop is a computationally intensive statistical approach that compares the desired output of a neural network with its actual output and then tweaks the weightings in the network to improve the accuracy of its results. When the network produces the right result, the weightings of the pathways that led to that correct result are strengthened. When it produces an incorrect result, the pathways that led to the wrong answer are weakened. Over time, with exposure to more and more data, the model becomes increasingly accurate. The network “learns” the correct complex associations between inputs and outputs.
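For the curious, here is a sketch of that training loop in Python using the PyTorch library, which handles the backprop arithmetic automatically. The tiny network, the made-up data, and the learning rate are all illustrative assumptions.

```python
import torch
from torch import nn

# Toy fully connected network: 3 inputs -> 4 hidden nodes -> 1 output (sizes are arbitrary).
model = nn.Sequential(nn.Linear(3, 4), nn.Sigmoid(), nn.Linear(4, 1), nn.Sigmoid())
loss_fn = nn.BCELoss()                 # measures how far the actual output is from the desired output
optimizer = torch.optim.SGD(model.parameters(), lr=0.5)

# Made-up training data: 100 examples, each tagged with a desired output of 0 or 1.
inputs = torch.rand(100, 3)
desired = (inputs.sum(dim=1, keepdim=True) > 1.5).float()

for step in range(200):
    optimizer.zero_grad()
    actual = model(inputs)             # actual output of the network
    loss = loss_fn(actual, desired)    # compare actual output with desired output
    loss.backward()                    # backprop: work out how each weighting contributed to the error
    optimizer.step()                   # tweak the weightings to improve accuracy
```

Each pass through the loop nudges every weighting slightly in the direction that reduces the error—the strengthening and weakening described above.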
Example: A Radiology AI
To train a neural network to read radiology charts and look for tumors, you would expose it to many example charts (the input), each tagged with a radiologist's diagnosis—tumor or no tumor (the desired output). The output of the network is a single number, the probability that an image contains a tumor. Each time the neural net is exposed to a new image, the output of the network is compared with the correct result. If an image of a tumor is presented, the result should be close to 100%. If there's no tumor, the result should be close to 0%. The backprop process is used to tweak the network's model (the weightings of the connections between the nodes), strengthening the weightings of links that lead to the correct result, and weakening those that don't. Once trained with enough data, the neural network will predict the right diagnosis with impressive accuracy. A more complex network might have several outputs. One could be the percentage chance of a tumor, another the probability of an embolism, another the probability of a broken bone, and so on.
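As a rough illustration, here is what such a multi-output image network could look like in PyTorch. The architecture, the 64×64 grayscale input, and the three condition names are assumptions made for the example; a real diagnostic model would be far larger and would need to be trained, using a loop like the one sketched earlier, on a large set of labeled images.

```python
import torch
from torch import nn

# Hypothetical multi-output image network; layer sizes and labels are illustrative only.
model = nn.Sequential(
    nn.Conv2d(1, 8, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Conv2d(8, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Flatten(),
    nn.Linear(16 * 16 * 16, 3),  # three output nodes: tumor, embolism, broken bone
    nn.Sigmoid(),                # each output becomes a probability between 0 and 1
)

scan = torch.rand(1, 1, 64, 64)  # stand-in for one digitized image; an untrained model's answers are meaningless
tumor_p, embolism_p, fracture_p = model(scan)[0].tolist()
print(f"tumor: {tumor_p:.0%}, embolism: {embolism_p:.0%}, broken bone: {fracture_p:.0%}")
```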
If this all seems too difficult to understand, that's okay. The key point is that neural nets can infer how to perform a task from examples, without the need for a domain expert to supply explicit rules on how to perform it.
Radiologists train for many years to read x-rays, computed tomography (CT), magnetic resonance imaging (MRI), and positron emission tomography (PET) images. After medical school, radiologists complete additional training, often involving a four-year residency. Some specialize further after that. Reading images to look for tumors and other ailments draws on all of the radiologist's skill, experience, and training. Yet this task is within the reach of a neural network. Given enough training data, an AI can be built with diagnostic abilities similar to those of a human radiologist, a person with about a decade of intense education behind them. As we train the neural network, we are essentially codifying the collective knowledge and decades of professional experience of hundreds of thousands of radiologists. Their experience and diagnostic insight are captured in the model that's generated.
Some radiologists already use AI-based tools to offer a “second opinion” as they read charts. As the accuracy of these tools surpasses that of human radiologists on routine charts, radiologists will be able to focus their attention on more