Tech as the Battlefield

[Image: an AI-generated image of a technology battlefield.]

The technology industry and the defense industry have always shared a close relationship. Many foundational technologies, from semiconductors to the internet, originated with massive investment by the U.S. federal government. And America’s technological superiority, from better combat communications to more accurate missiles, has been decisive in establishing our military dominance.

But today, with the rapid rise of AI, we seem to be headed into an era in which these industries are more than just interrelated. Technology is becoming the primary battlefield on which modern geopolitical conflicts will be fought. That emerging reality has drawn comparisons to the race to develop atomic weapons during World War II, with Alexander Karp, CEO of the defense intelligence analytics company Palantir, declaring the creation of AI weapons our “Oppenheimer Moment” in a recent op-ed in The New York Times.

Although much of the piece is delivered in tiresome, anti-woke rhetoric (e.g., “The preoccupations and political instincts of coastal elites may be essential to maintaining their sense of self and cultural superiority but do little to advance the interests of our republic.”), the core of Karp’s argument highlights the dilemma the United States faces: “Our adversaries will not pause to indulge in theatrical debates about the merits of developing technologies with critical military and national security applications. They will proceed. This is an arms race of a different kind, and it has begun.”

Karp wrote the piece in response to the public letter, signed by many industry experts, warning that AI presents a “risk of extinction,” and to a recent White House event at which top AI companies committed to safeguards around the technology. He is espousing a distinctly conservative philosophy: that escalation is the best form of deterrence, and that not only developing but being willing to use next-generation weaponry will somehow ensure our safety. It is the argument one would expect from a purveyor of such weapons, analogous to gun manufacturers calling for more guns to defend ourselves from gun violence.

While such an “if-we-don’t-do-it-our-enemies-will” philosophy severely blurs the line between defender and aggressor, the argument certainly has historical precedent. The primary rationalization for one of the most horrific acts of war of all time, the atomic bombings that indiscriminately killed an estimated 200,000 Japanese civilians in Hiroshima and Nagasaki, was that they abruptly ended the war in the Pacific and ushered in a prolonged era of peace, albeit one defined by a decades-long Cold War, devastating regional conflicts, and the continuous escalation of a nuclear arms race. Regardless, the one (and, thankfully for humanity, thus far only) decision to use nuclear weapons was made in the midst of the deadliest armed conflict in human history, in which over 60 million people were killed. The possibility that nuclear weapons would destroy humanity seemed a gamble worth taking when humanity already seemed to be on that course.

To be clear, we are not in the midst of such a conflict.

In fact, the decision to deploy AI technology for lethal military purposes seems, at best, unnecessary and, at worst, apocalyptic. It was the expense and expertise required to develop nuclear weapons, controlled by governments and military officials acutely aware of the likelihood of mutually assured destruction, that constrained most military conflicts over the last 80 years to conventional warfare. Human beings, with our innate tendencies toward self-preservation, our empathy for one another, and a general mooring of ethics, have decided not to deploy nuclear weapons. Even the most despotic leaders have resisted their use. An untethered AI algorithm would not be constrained by a guilty conscience or its own mortality. The purpose and promise of AI is to leverage data to learn autonomously how to achieve an objective with increasing efficiency. Without the inconvenient constraints placed on it by humans, AI is notorious for pursuing its objectives in unintended, and potentially catastrophic, ways. Indeed, AI seems less likely to end a war than to start one.

Another reason history seems unlikely to repeat itself in the race to weaponize AI is the role of corporations in its development. Rather than one government-funded Manhattan Project in Los Alamos, there are hundreds of venture-funded AI companies in Palo Alto and around the world. While we might quibble over their corporate structures, companies like Huawei and Palantir are mechanisms for advancing the strategic aims of their respective countries while making their founders and executives fabulously rich. Palantir made Karp a billionaire, just as it did co-founder and lead investor Peter Thiel. Such private-sector financial incentives make it exceedingly difficult to put the genie back in the bottle. While we debate the ethics of AI or even enact regulatory policies, there will always be a new company in another country, with eager investors, unbound by such ethical constraints.

This rapidly escalating battle for technological dominance will almost certainly be the defining geopolitical confrontation of our time. As the technological skirmishes and subterfuge continue, one thing is certain: it makes for great storytelling. Just as Nazi Germany during WWII, the Cold War against the Soviet Union, and the so-called “war on terror” in the Middle East have been the basis of innumerable novels, movies, and TV shows, AI is emerging as the next supervillain. It has all the traits of a perfect antagonist: something ubiquitous and familiar, designed with the best intentions, that goes awry at the hands of nefarious and unseen actors, spiraling out of our control and turning against us to threaten the existence of our species and the planet as a whole.

My third novel, which I am currently writing (working title: Outage), is built on just such a premise. In the story, told from the perspective of one family in suburban Virginia, a state-sponsored AI algorithm learns to exploit vulnerabilities in communications chips and disables all internet and mobile phone communications in the U.S. The book continues the overarching theme of all my work: exploring how technology is shaping our lives, not always for the better. Storytelling is frequently a forum for expressing, sharing, and analyzing our deepest fears. And given the speed and uncertainty with which AI is evolving, it is sure to be the subject of many more stories for a long time to come.

Michael Trigg