Dr. Lipsky or: How I Learned to Stop Worrying and Love the Ones and Zeroes


Jeremy Kofsky

The recent paired release of Meta’s open-sourced artificial intelligence (AI) large language model, LLaMA, and its creation of custom AI chips has created a moment akin to the 1945 nuclear tests in the New Mexico desert in terms of the potential destruction the technology can produce, despite the ‘moral filters’ built into these programs. Noted luminaries such as Elon Musk and Steve Wozniak have warned as much, while Bill Gates and others have called it a global good. While that debate will continue, what this technology offers militaries, transregional cartels, and terrorist groups is of more immediate concern than existential theoretical discussions. There are areas wherein AI can produce devastating effects. Understanding the pros and cons of AI in military applications, and the potential counters to it on future battlefields, is crucial, because AI is coming. It may be up to the Marines and soldiers on the ground to determine whether it is the end, or whether they can be the refrigerator-box counter to this threat.

The Sky is Falling!

The initial days of the Russian invasion of Ukraine provided a portent of how AI can be used to generate actual effects on the battlefield. After a Russian hacking group gained access to the broadcast of Ukraine’s Channel 24, it uploaded both a ticker and an AI-generated deepfake video of Ukrainian President Volodymyr Zelenskyy stating that the Ukrainian Armed Forces should lay down their arms and that he had fled Kyiv. While the rather amateurish quality of the video led to its being quickly debunked, the technology to create ever more realistic videos, smoothing out the ‘uncanny valley’ effect, will only mature. Nation states and their proxies can then mount a ‘suppressive’ attack of ‘paralysis by analysis,’ flooding the space with videos pushing views from across and beyond the Overton Window, blunting real messaging and forcing people to spend their effort countering these statements, a common adversary technique. It will also blunt any morale-boosting Information Operations messaging, as the targeted audience will be unable to fully trust any true message.

The Tactics, Techniques, and Procedures (TTPs) of the United States military’s individual services and the Joint Warfighting construct are readily available online. While the ingenuity and unpredictability of America’s warfighters has often been lauded, there is still an osmotic adherence to these keystone documents within the military. Feeding an AI the doctrinal and tactics manuals of past and present, along with the record of how they were applied in real campaigns and with what technology on each side, would give an adversary a generally accurate picture of how the United States military would attack a given area. That, in turn, provides a ground plan for countering American actions there, to include tailored counter-TTPs.

The engine that will feed many of these AI-fueled scenarios is the parallel maturation of quantum computers, with their ability to solve in seconds problems that would take ordinary computers thousands of years. Fully realized quantum computers will be able to brute force encryption and make currently secure applications such as WhatsApp, Signal, and Instant Messenger readable to any entity with the appropriate prioritization and resources. As the technology proliferates, it could extend into networks requiring tailored access, such as the military’s Secret Internet Protocol Router Network (SIPRNet), wherein most operational matters and burgeoning military programs are discussed. Beyond decryption, the sheer mass of calculations quantum machines can perform would yield the most accurate extrapolations from an adversary’s exploited TTPs and doctrine.
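To put rough numbers on the decryption claim, the sketch below is a minimal, illustrative Python calculation (not drawn from the article) comparing classical brute-force key search against the quadratic speedup Grover’s algorithm is believed to provide; the sharper threat to the public-key exchanges underpinning apps like WhatsApp and Signal comes from Shor’s algorithm, which solves the underlying factoring and discrete-log problems in polynomial time and is not modeled here.

```python
import math

# Rough, illustrative scaling only; not a cryptanalysis tool.
# Assumption: Grover's algorithm gives a quadratic speedup on unstructured
# key search, so an n-bit symmetric key offers roughly n/2 bits of security
# against a quantum searcher. Shor's algorithm (public-key) is not modeled.

def classical_guesses(key_bits: int) -> int:
    """Worst-case brute-force guesses for a key of key_bits bits."""
    return 2 ** key_bits

def grover_queries(key_bits: int) -> float:
    """Approximate quantum search queries under Grover's quadratic speedup."""
    return math.sqrt(2 ** key_bits)

for bits in (128, 256):
    print(f"{bits}-bit key: ~{classical_guesses(bits):.2e} classical guesses, "
          f"~{grover_queries(bits):.2e} quantum queries under Grover")
```

Even this crude arithmetic shows why the concern centers less on symmetric ciphers, which can simply double their key length, and more on today’s public-key infrastructure.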

The ability to quickly synthesize these effects, an understanding of intentions and goals, and the means of countering them in the information space will create a new paradigm in conflict. Much as the Cold War turned on ascertaining the intentions and capabilities behind nuclear technology, the upcoming (or current) conflicts between peer adversaries will turn on gaining information-dissemination supremacy and understanding the algorithms making decisions for the competing entities. The new ‘Golden Ticket’ for nations will be the code behind the applications of other nations and non-state actors: if they control the Orient step of an opponent’s decision-making cycle, they will be able to dominate the steps that follow, above all the Act.

The Sky is Falling?

While the previous section, and the featured articles, portend a terrifying Skynet or HAL future for AI, the future is not finished until it is written. The viral story about an unmanned combat aerial vehicle attacking its human operator proved to be a ‘thought experiment’ and a ‘misspeak’ by an Air Force colonel, but it still shows how much is unknown about AI’s application in warfare. The story also shows the importance of John Boyd’s reform principle of People, Ideas, and Things: in the thought experiment, a human was the lynchpin of decision making, and the thing came last in the order of priority. The wars and conflicts of the future will be fought by People, not weapons, for weapons are a means, not an end in themselves.

The Ideas of strategy matter because the mental models of yesterday will not be how the next generation of strategists looks at problems. The ideas of Clausewitz and Sun Tzu are applicable to today’s warfighters, but so are the studies of John Boyd, David Kilcullen, and Jim Mattis. AI can mostly only draw on existing data (known doctrine and history) to build its mental map. That leaves mental maneuver space in which the next generation can create, experiment with, and ultimately succeed through new and evolving techniques. The Things, the hardware, exist to provide the material with which to build those patterns of success, and ‘hacking’ their capabilities often yields unforeseen advantages that cannot be taught to a program until after the fact.

They say one of God’s greatest inventions is the Marine lance corporal, a source of both scorn and endless humor, and both were on display during a DARPA test of a camera designed to identify people walking. After six days of ‘teaching’ the system to recognize walking, all eight of the selected Marines were able to cross 300 meters and reach the camera undetected, simply by not looking like humans walking. From a moving bush to Cirque du Soleil to a Benny Hill skit, inspiration abounded in the Marines, inspiration the program had no ability, or even reason, to account for, because it had been trained on one specific task: a human walking. This distributional shift, exposed by an algorithm’s rigidity, lets innovation be the deciding factor in ‘getting around’ AI.
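As a minimal sketch of that rigidity (hypothetical data, not the DARPA system itself), the Python below tunes a crude ‘walking detector’ on one narrow distribution of motion and then shows it failing on movement it has never seen, the same distributional shift the Marines exploited.

```python
import numpy as np

# Hypothetical illustration of distributional shift; the numbers are invented
# and this is not the DARPA system. A detector tuned only to upright walking
# rejects any motion outside the range it was trained on.

rng = np.random.default_rng(seed=0)

# Training distribution: stride lengths (meters) of people walking normally.
walking_strides = rng.normal(loc=0.75, scale=0.08, size=1_000)

# The "detector" simply learns the range of strides it saw in training.
low, high = np.percentile(walking_strides, [1, 99])

def looks_like_walking(stride_m: float) -> bool:
    """Flag motion only if its stride falls inside the training range."""
    return low <= stride_m <= high

# In-distribution test: ordinary walkers are detected almost every time.
print(sum(looks_like_walking(s) for s in walking_strides[:100]))

# Out-of-distribution test: crawling, somersaulting, shuffling in a box.
novel_strides = [0.05, 1.80, 0.10]
print([looks_like_walking(s) for s in novel_strides])  # all False
```

The point is not the toy threshold but the failure mode: anything the model never saw in training is invisible to it, which is exactly the seam a creative adversary, or a creative lance corporal, will find.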

The Marine Corps’ continued focus on Talent Management and Force Design places great emphasis on the Boyd principles of People, Ideas, and Things. It also creates a need for a more educated force. While this is typically advertised as more technologically savvy personnel, the ability to be, as former Commandants have put it, ‘disruptive thinkers’ is another critical way to get around the apparent omniscience of AI. Creating distrust in AI is yet another means of getting inside an adversary’s decision cycle: if an adversary does not trust its AI to perform its functions, it will be less likely to rely on that AI’s guidance and tailored outcomes. Until the day humankind achieves a singularity with AI, humans will be the ones who actually ‘flip the switch’ on an AI algorithm. Stop that action, and all the portents of doom from AI become moot.

The Sky is Falling.

As many probably deduced from both the introduction and the title of this article, the allegory of Dr. Strangelove comes to mind. The movie and its title, loved for their satire and irony, are fitting in that by the end of the film no one is left to worry because everyone is dead. While AI will have some devastating downstream effects, many of which are likely to surprise us over the next twenty years because we cannot even fathom them now, the overall approach should be to embrace AI as a learning and modeling tool that drives better decision making by the People in charge of conducting warfare. The ability of nations and non-state actors to use AI for nefarious and illegal purposes should not be discounted, but it can also be seen as a railroading of adversary intentions onto predictable tracks. If they stay rigid and we stay flexible, the worrying will fall on the adversary, not on us, and we can embrace the Ones and Zeroes to amplify, not create, our imagination.

