By Nathan Fleischaker
I applaud Destination Unknown’s editors and many contributors for the creativity and artistic skill they demonstrate in this imaginative approach to envisioning future war. Their work provides a fruitful starting point from which we can reason backwards to discuss how we should prepare in the present. The result is a collection that is both entertaining and thought-provoking (not to mention free and easy to access). I hope this means it will reach a larger audience than similar topics covered in more mundane journal and blog articles.
After reading the stories, however, one of my concerns is that the creativity and imagination are largely clustered around common themes that are already quite familiar: coup d’oeil and the irreducibly unquantifiable, human aspects of war; the need for greater lethality in conducting missions that look eerily similar to those we execute today (small-unit and special operations raids, some kind of conventional force-on-force assault, combat air patrols); and the tension between individual initiative and guidance from higher headquarters. If warfare over time is a study in continuity and change, then Destination Unknown’s implicit theme is clearly continuity: artificial intelligence (AI), space, and other future technologies will not solve our problems; they will only perpetuate or potentially aggravate them. In the best tradition of science fiction, these graphic novels take existing problems and give us a fresh look by imagining them in an unfamiliar setting – here, dressed in the new garb of AI, space, and other future technology.
Yet in stressing continuity so heavily, I fear we miss potential changes – sets of novel problems that are not merely newly clothed forms of pre-existing problems. These too demand our attention. Certainly, the nature of war is unchanging and some problems will remain intractable, but we should expect other aspects of the character and context of warfare to change dramatically. The reach of artificial intelligence and other future technology seems to promise more than a variation on existing forms of combat – it promises to change the make-up of the societies it touches. I briefly outline a few such possible problems in the hope that they might inspire topics for a potential sequel to Destination Unknown and discussion elsewhere in our professional dialogue.
The Context of War: Why Are We Fighting Again?
My first concern is that the stories provide little to no context for any of the military operations. I’m left to wonder – what is it we are fighting about, again? Only one story, “The Last Fighter Pilots,” attempts more than a passing explanation of the issues that sparked the war (and its brief description of multiple businesses creating the conditions for conflict is certainly intriguing). But if we don’t know why we’re fighting, we have no basis for thinking about how our military activity might solve the underlying problems. The result: all the tactical brilliance and technological superiority will be for naught. Two of the stories focus on small or special units conducting what appear to be direct action raids – apparently the kill/capture, high-value-individual-hunting “strategy” of the last decade hasn’t been refined in the coming half-century. The other stories take place against the backdrop of a U.S.-Sino conventional conflict, but the purpose of the military operations is never described.
The problem with this omission is that it exhibits the same pathology that led the Marine Corps into the MEB 2.0 dogma and the current situation our Commandant is working to fix (see the 2019 Commandant’s Planning Guidance). In both cases, we envision and develop military capabilities apart from any strategic purpose or plausible employment scenario. This is unacceptable given the deterrence-by-denial strategy that serves as the intellectual backbone of the National Defense Strategy and the animating force behind the Commandant’s Planning Guidance. Instead of developing capabilities in a vacuum, deterrence by denial focuses our attention on denying adversaries the objectives they seek. But doing so first requires an acute awareness of our adversaries’ objectives; if we don’t know what we’re trying to deny the adversary, then our capabilities will be unmoored from operational and strategic realities, and we will end up irrelevant.
A related concern is the stories’ lack of interest in conventional-to-nuclear escalation. Surely a limited “conventional” war with China must include the possibility of such inadvertent escalation, and operational plans need to be conscious of this major risk. In at least one of the stories, we have had an on-again, off-again series of wars with China that fortunately have not gone nuclear. Assuming that conventional war with China will look like a WWII rerun, or some version of kill/capture counter-terrorism man-hunting, misses one of the most important discontinuities between recent military operations and present and future great power competition: our great power adversaries retain the ability to employ nuclear weapons. Given this, thinking about future war needs to be sensitive to the dynamics of how we might keep a war with China limited and escape a nuclear escalation spiral.
This omission is certainly not unique to these authors; it is also a major blind spot in our PME and concept development, and it is a topic we are understandably unfamiliar with, since the non-state actors we’ve spent most of the last two decades fighting simply don’t have nuclear weapons. But as we shift to great-power competition, inadvertent conventional-to-nuclear escalation desperately needs to be thought through whenever we envision the role of conventional military forces in hostilities between great (nuclear-armed) powers.
Indirect and Direct Approaches and New Spaces for Conflict
My second set of concerns turns on the idea that AI is primarily being thought of as a tool to make us better at things we already do: small-unit raids are more lethal because of AI-enhanced information, communication, and fire support; our operational plans are analyzed and optimized by computers. These stories all fall into the category of using technology to help us do what we are already doing, just better. But this assumes that the things we are doing now are the same kinds of things we should be doing in the future. Perhaps instead of directly improving lethality, technology creates maneuver space for alternative forms of competition. As a rough analogy, we might think that cyberspace operations are best used to enhance existing military concepts by degrading adversary C2 (“cyber jamming”) and aiding the find/fix portions of a targeting cycle. But this ignores the powerful ways in which cyberspace operations have already been used indirectly for deception, misinformation, and domestic meddling. The competition between opposing wills is constant, but technology also creates new spaces for that competition to occur.
A related idea is to challenge the assumption that AI will be unable to pick up on personality and other intangibles. This is clearly the main theme in both “A Second Chance with ARIA” and “A Matter of Instinct,” in which the basic premise is that AI is unable to identify key human qualities, but fortunately humans pick up on what the AI is blind to and save the day. This framing resonates with our aversion to the “body count” metrics of Vietnam and our distinction between the art and science of war. But it also seems at least somewhat at odds with another present reality: we are already concerned about relatively benign machine algorithms’ ability to exploit human cognitive biases to make us buy more or believe conspiracy theories and disinformation. The current reality seems not to be that machines are ignorant of human personality traits, but that machines might “understand” human nature too well. Social media (and video game and marketing) companies apply cutting-edge research in social psychology and employ small armies of researchers to tune their algorithms. Their goal: applications optimized to capture our attention, distract us from other things, create addictive habits, and influence our behavior. Given this, it seems reasonable that AI in the future could “understand” and exploit other cognitive biases. We generally think that big tech firms are motivated by the relatively benign goal of increased market share – they are not intentionally manipulating people for nefarious ends. Yet the negative second-order effects of even this relatively benign end appear to be political polarization and social fragmentation. What if other algorithms were intentionally tuned for ends more nefarious than market share, and then used by one state to gain advantage over another? This kind of AI – and the competition to control it – is frightening. It is also a discontinuity from conceiving of AI as simply something that provides quantitative increases to existing capabilities.
Don’t We Think Social Changes Will Affect Anything?
Finally, it would be useful to think about how war might differ as the underlying society on which it is based changes. The paradigm here might be the Napoleonic Wars, set within the social upheaval of the French Revolution: the technology available to Napoleon was not radically different from that of the recent past, yet the social changes of the French Revolution unleashed mass national armies on continental Europe, drastically changing the character of these “more total” wars. We might expect AI and other technology to have substantial disruptive effects on society, and perhaps it is these changes to society – more than the direct effects of technology – that will create the most substantial variations in future conflict.
Clausewitz tells us that war is the extension of politics by other means. We should expect technology to affect every aspect of this relationship: the character of war itself, the ways in which war and politics intersect, and the development of new “other means” by which politics can be extended and with which war will have to interact. I applaud all who were involved in Destination Unknown and encourage others to read it, as it helps start conversations about how the character of future war may reflect the past, albeit with some twists. I urge us to take this seriously, while also employing our creative minds to think about the other ways in which technology will affect the character and context of war.