The Digital Big Bang. Phil Quade

AUTONOMOUS VEHICLES

      Autonomous vehicles, which have started cruising the streets of major cities, offer a long list of benefits, including safer streets, more efficient use of vehicles leading to less congestion, increased mobility for seniors, and more. Data collection and analysis speeds are again critical, especially as vehicles analyze data on roads crowded with other cars, bikes, pedestrians, and traffic lights. It is also important that such technologies leverage the cloud (where data from multiple sources can be stored and analyzed) while still making “local” decisions without the latency caused by transmitting data to and from the cloud (this is called the “intelligent cloud/intelligent edge” model).
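      The “intelligent cloud/intelligent edge” split can be sketched in a few lines of Python. The sketch below is illustrative only: the EdgeController class, its latency budget, and the decide_locally/handle_frame helpers are hypothetical names invented for this example, not any vehicle platform's API. Time-critical decisions happen locally; only non-urgent telemetry is queued for cloud-side analysis.

```python
import queue
import time


class EdgeController:
    """Illustrative edge-side controller: decide locally, sync to the cloud later."""

    LATENCY_BUDGET_MS = 50  # assumed budget for a local driving decision

    def __init__(self) -> None:
        self.telemetry = queue.Queue()  # buffered records destined for the cloud

    def decide_locally(self, sensor_frame: dict) -> str:
        """Make the driving decision on the vehicle, with no cloud round trip."""
        if sensor_frame.get("obstacle_distance_m", float("inf")) < 5.0:
            return "brake"
        return "cruise"

    def handle_frame(self, sensor_frame: dict) -> str:
        start = time.monotonic()
        action = self.decide_locally(sensor_frame)
        elapsed_ms = (time.monotonic() - start) * 1000
        if elapsed_ms > self.LATENCY_BUDGET_MS:
            print(f"warning: local decision took {elapsed_ms:.1f} ms")
        # Only telemetry crosses to the cloud, where data from many sources
        # can be stored and analyzed; the decision itself never waits on it.
        self.telemetry.put({"frame": sensor_frame, "action": action,
                            "latency_ms": elapsed_ms})
        return action


if __name__ == "__main__":
    edge = EdgeController()
    print(edge.handle_frame({"obstacle_distance_m": 3.2}))  # prints "brake"
```

      The point of the split is that the control loop never blocks on a network round trip; the cloud's role is to store and analyze data aggregated from many sources, as described above.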

      At the same time, we will be presented with new risks as the technology matures and perhaps comes under attack. Indeed, as of December 17, 2018, the California Department of Motor Vehicles had received 124 Autonomous Vehicle Collision Reports, and there have been reports of researchers hacking automated vehicles. Still, while autonomous vehicles may not be completely “safe,” they will, if correctly designed, undoubtedly be “safer”: Autonomous vehicles can process data far more quickly than humans, they will not panic in an emergency situation, and they are unlikely to be distracted by cell phones.

      CONTEXT: AUTONOMOUS LETHAL WEAPONS

      In the last century, intercontinental ballistic missiles shortened military response times from days to minutes. Since then, the introduction of cyberwarfare capabilities has arguably reduced response times further. As General Joseph Dunford, the outgoing Chairman of the Joint Chiefs of Staff, noted in 2017, “The speed of war has changed, and the nature of these changes makes the global security environment even more unpredictable, dangerous, and unforgiving. Decision space has collapsed and so our processes must adapt to keep pace with the speed of war [emphasis added].” To take advantage of the remaining decision space, Dunford noted, there must be “a common understanding of the threat, providing a clear understanding of the capabilities and limitations of the joint force, and then establishing a framework that enables senior leaders to make decisions in a timely manner.” This suggests the need for greater pre-decisional planning, from collecting better intelligence about adversaries' capabilities and intentions to better scenario planning so that decisions can be made more quickly.

      At the same time that we attempt to adapt, we must also grapple with a fundamental question: As the need for speed increases, will humans delegate decision making—even in lethal situations—to machines? That humans will remain in the loop is not a foregone conclusion. With the creation of autonomous lethal weapons, some countries have announced new policies requiring a “human-in-the-loop” (see Department of Defense Directive Number 3000.09, November 12, 2012). Additionally, concerned individuals and organizations are leading calls for international treaties limiting such weapons (see https://autonomousweapons.org/). But as we have seen with cybersecurity norms, gaining widespread international agreement can be difficult, particularly when technology is new and countries do not want to quickly, and perhaps prematurely, limit their future activities. And as we have seen recently with chemical weapons, ensuring compliance even with agreed-upon rules can be challenging.

      THE RISK

      The risk here is twofold. The first relates to speed: If decision times collapse and outcomes favor those who are willing to delegate to machines, those who accept delay by maintaining adherence to principles of human control may find themselves disadvantaged. Second, there is the reality that humans will, over time, come to trust machines with life-altering decisions. Self-driving cars represent one example of this phenomenon. At first, self-driving vehicles required a human at the wheel “for emergencies.” Later, automated vehicles with no steering wheel and no human driver were approved. While this may be, in part, because humans have proven to be poor at paying attention and reacting quickly in a crisis, experience and familiarity with machines may have also engendered complacency with automated vehicles, much as we have accepted technology in other areas of our lives. If this is correct, then humans may ultimately conclude that with decisional time shrinking, machines will make better decisions than the humans they serve, even when human lives are at stake.

      ABOUT THE CONTRIBUTOR

      Scott Charney – Vice President of Security Policy, Microsoft

      Scott Charney is vice president for security policy at Microsoft. He serves as vice chair of the National Security Telecommunications Advisory Committee, as a commissioner on the Global Commission for the Stability of Cyberspace, and as chair of the board of the Global Cyber Alliance. Prior to his current position, Charney led Microsoft's Trustworthy Computing Group, where he was responsible for enforcing mandatory security engineering policies and implementing security strategy. Before that, he served as chief of the Computer Crime and Intellectual Property Section (CCIPS) at the U.S. Department of Justice (DOJ), where he was responsible for implementing DOJ's computer crime and intellectual property initiatives. Under his direction, CCIPS investigated and prosecuted national and international hacker cases, economic espionage cases, and violations of the federal criminal copyright and trademark laws. He served three years as chair of the G8 Subgroup on High-Tech Crime, was vice chair of the Organization for Economic Cooperation and Development (OECD) Group of Experts on Security and Privacy, led the U.S. Delegation to the OECD on Cryptography Policy, and was co-chair of the Center for Strategic and International Studies Commission on Cybersecurity for the 44th Presidency.

      “The convenience of IoT devices comes at a cost: a vastly expanded attack surface.”

       Brian Talbert, Alaska Airlines

[Figure: two oval rings joined by a connector, symbolizing connectivity.]

      “The drive to connect is an unstoppable force within cyberspace.”

       Chris Inglis, Former Deputy Director, NSA

      Enabling and protecting safe connectivity is the core mission of cybersecurity. At its most basic, cybersecurity is about allowing or denying access to information; that is how information is protected. And while the extraordinary adoption of the Internet has certainly been powered by recognition of the incredible benefits of connectivity, that connectivity comes with risk.
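      To make “allowing or denying access to information” concrete, here is a minimal Python sketch of a default-deny access check. The ACL contents, the group and resource names, and the is_allowed helper are illustrative assumptions, not any particular product's mechanism.

```python
# Minimal default-deny access check (illustrative resource and group names).
ACL = {
    "customer-records.db": {"support-team": {"read"}, "admins": {"read", "write"}},
}


def is_allowed(principal_groups: set, resource: str, action: str) -> bool:
    """Deny by default; allow only if one of the principal's groups grants the action."""
    grants = ACL.get(resource, {})
    return any(action in grants.get(group, set()) for group in principal_groups)


print(is_allowed({"support-team"}, "customer-records.db", "read"))   # True: allowed
print(is_allowed({"support-team"}, "customer-records.db", "write"))  # False: denied
```

      The deny-by-default posture reflects the point above: protection comes down to deciding who may access which information.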

      The triumph of collaboration and connectivity coded into the core of the Internet has been manipulated to attack it. As the connectivity of the early Internet broadened, presenting new targets, so too did the breadth and depth of the attacks. Every cyberattacker has at least one substantial advantage: threat actors can choose when and where to strike. As Sun Tzu succinctly stated in The Art of War, “In conflict, direct confrontation will lead to engagement and surprise will lead to victory.”

      Each attacker may learn from other attacks about what worked, what didn't, and where the valuable data resides. This is one reason attackers often hold an advantage over defenders.

      An integrated defense—a staple of high-end security strategies in all other areas and fields of protection—is an often-neglected cybersecurity

