
In Army of None, a field guide to the coming world of autonomous warfare

The Silicon Valley-military industrial complex is increasingly in the crosshairs of artificial intelligence engineers. A few weeks ago, Google was reported to be backing out of a Pentagon contract for Project Maven, which would use image recognition to automatically evaluate photos. Earlier this year, AI researchers around the world signed petitions calling for a boycott of any research that could be used in autonomous warfare.

For Paul Scharre, though, such petitions barely touch the deep complexity, nuance, and ambiguity that will make evaluating autonomous weapons a major concern for defense planners this century. In Army of None, Scharre argues that merely defining these machines will take enormous effort to work out among nations, let alone managing their effects. It's a sobering, thoughtful, if at times protracted, look at this critical topic.

Scharre should know. A former Army Ranger, he joined the Pentagon, working in the Office of the Secretary of Defense, where he developed some of the Defense Department's first policies around autonomy. After leaving in 2013, he joined the Center for a New American Security, a DC-based think tank, where he directs a program on technology and national security. In short, he has spent about a decade on this emerging technology, and his expertise shows clearly throughout the book.

The first challenge that complicates these petitions on autonomous weapons is that such systems already exist and are already deployed in the field. Technologies like the Aegis Combat System, the High-speed Anti-Radiation Missile (HARM), and the Harpy already include sophisticated autonomous features. As Scharre writes, "The human launching the Harpy decides to destroy any enemy radars within a general area in space and time, but the Harpy itself chooses the specific radar it destroys." The weapon can loiter for 2.5 hours while it hunts for a target with its sensors. Is it autonomous?

Scharre repeatedly uses the military's OODA loop (observe, orient, decide, act) as a framework for determining a given machine's level of autonomy. Humans can be "in the loop," where they determine the actions of the machine; "on the loop," where they retain control but the machine works mostly independently; or "out of the loop," where machines operate entirely independently of human decision-making.

The framework helps clear up some of the confusion between different systems, but it is not sufficient. When machines fight machines, for instance, the speed of battle can become so great that humans may well do more harm than good by intervening. A drone could run through millions of cycles of the OODA loop before a human even registers what is happening on the battlefield. Taking the human out of the loop, therefore, could well lead to safer outcomes. It's exactly these kinds of paradoxes that make the subject so difficult to analyze.

Alongside the paradoxes, constraints are a major theme of the book. Speed is one; the price of military equipment is another. Dumb missiles are cheap, while adding autonomy consistently drives up the cost of hardware. As Scharre notes, "Modern missiles can cost upwards of a million dollars apiece. As a practical matter, militaries will want to know that there is, in fact, a valid enemy target in the area before using an expensive weapon."

Another constraint is simply culture. The author writes, "There is intense cultural resistance within the U.S. military to handing over jobs to uninhabited systems." Not unlike automation in the civilian workforce, people in power want to place flesh-and-blood humans in the most complex assignments. These constraints matter, because Scharre foresees a classic arms race around these weapons as dozens of countries pursue them.

Humans “in the loop” may be the default today, but for how long?

At a higher level, about a third of the book is devoted to the history of automation, (generalized) AI, and the potential for autonomy, topics that should be familiar to any regular reader of TechCrunch. Roughly another third is a meditation on the challenges of the technology from a dual-use and strategic perspective, as well as on the dubious path toward an international ban.

Yet what I found most valuable was the chapter on ethics, lodged fairly late in the book's narrative. Scharre does a superb job covering the various schools of thought around the ethics of autonomous warfare, and how they intersect and compete. He extensively analyzes and quotes Ron Arkin, a roboticist who has spent significant time thinking about autonomy in warfare. Arkin tells Scharre that "We put way too much faith in human warfighters," and argues that autonomous weapons, unlike humans, could theoretically be programmed never to commit a war crime. Other activists, like Jody Williams, believe that only a comprehensive ban can ensure that such weapons are never developed in the first place.

Scharre regrets that more of these conversations don't take into account the strategic position of the military. He notes that international discussions of a ban are led by NGOs rather than by nation-states, whereas successful bans have historically worked the other way around.

Another challenge is simply that antiwar activism and anti-autonomous-weapons activism are increasingly being conflated. Scharre writes, "One of the challenges in weighing the ethics of autonomous weapons is untangling which criticisms are about autonomous weapons and which are really about war." Citing William Tecumseh Sherman's destructive march through the U.S. South during the Civil War, the author reminds the reader that "war is hell," and that militaries don't choose weapons in a vacuum, but relative to the other tools in their own and their competitors' arsenals.

The book is a compendium of the various issues around autonomous weapons, although it suffers a bit from the classic problem of dwelling too long on some subjects (drone swarms) while offering limited information on others (arms control negotiations). The text is also marred at times by typos, such as "news rules of engagement," that detract from otherwise direct and active prose. Tighter editing would have helped in both cases. Given the inchoate nature of the subject, the book works as an overview, although it fails to present an opinionated narrative on where autonomy and the military should go in the future, an unsatisfying gap given the author's extensive and unique background on the subject.

All that said, Army of None is a one-stop guidebook to the debates, the challenges and, yes, the opportunities that can come from autonomous warfare. Scharre ends on exactly the right note, reminding us that ultimately, all of these machines are owned by us, and what we choose to build is within our control. "The world we are creating is one that will have intelligent machines in it, but it is not for them. It is a world for us." We should continue to engage, and petition, and debate, but always with a vision for the future we want to realize.



