Has the DOD Thought Through Weaponized Robots?

Dave Patterson

National Security Correspondent at LibertyNation.com. Dave is a retired US Air Force pilot with over 180 combat missions in Vietnam. He is a former Principal Deputy Under Secretary of Defense (Comptroller) and has served in executive positions in the private-sector aerospace and defense industry. In addition to Liberty Nation, Dave’s articles have appeared in The Federalist and DefenseOne.com.

All too often, when a new and formidable weapon technology emerges, the US military sees the capability as a panacea to defeat an enemy. Suddenly, the new technology takes center stage for research, development, and rapid deployment at the expense of tested, tried, and successful armaments. It’s a lot like three-year-old soccer players all converging on the ball wherever it goes. Whether it’s hypersonic missiles, unmanned aerial vehicles, or artificial intelligence-guided naval vessels, the Defense Department’s research, development, and procurement will focus on that advanced capability, often to the exclusion of less sexy but still effective systems.

There Are Consequences to Robots on the Battlefield

Liberty Nation brought this phenomenon to its readers’ attention in the article “Drones – Is the DOD Just Chasing the Next Shiny Thing?” What was not considered were the battlefield consequences of using a “killer robot” in an autonomous role. Is it wise to stake US service members’ lives on the vagaries of setting loose autonomous lethal artificial intelligence (AI) against an enemy on the battlefield? “Powerful AI systems should be developed only once we are confident that their effects will be positive and their risks will be manageable,” AI technology experts, including Elon Musk, wrote in an open letter published March 22, 2023, by the Future of Life Institute.

This topic is bubbling up now because, as Brad Dress wrote for The Hill:

“As the Defense Department is pushing aggressively to modernize its forces using fully autonomous drones and weapons systems, critics fear the start of a new arms race that could dramatically raise the risk of mass destruction, nuclear war and civilian casualties. The Pentagon and military tech industry are going into overdrive in a massive effort to scale out existing technology in what has been dubbed the Replicator initiative.”

The “Replicator initiative” will bring an aggressive fielding approach to self-deciding weapon systems in the battlespace on the ground, in the air, and at sea, beginning with “accelerating the scaling of all-domain attritable autonomous systems,” Deputy Secretary of Defense Kathleen Hicks told an audience at the Defense News Conference on September 6. These are systems ostensibly so inexpensive they can be treated as consumables, like ammunition. Hicks is also the Defense Department official who announced the US military would transition all 170,000 of its non-tactical vehicles to electric vehicles (EVs) by 2030 – a knee-jerk reaction to the Defense Department’s climate hysteria, with little thought given to where the lithium for the batteries would come from: China. Returning to Replicator, Hicks explained the initiative would “change the game.” Additionally, the deputy secretary asserted, “We are embarking on audacious change – fast – using the means we have.”

Note to the wise: Whenever any government official uses the words “game” and “change” in the same sentence, the questions that should come to mind are: Change what, precisely, and change it to what, exactly? It’s a throwaway line officials use when they want to sound forward-thinking. It also covers up the fact that they aren’t quite sure what the resulting end state will look like. When it comes to autonomous weapons sent out to kill and destroy, those officials should know.

Defense Officials Must Know What They Are Getting Into

But those defense officials don’t know. That is disconcerting. How do weapons that move around the battlefield on their own, identify a target as an enemy, and then elect to engage and destroy that target make those decisions without a human in the loop? In a 2019 attack on Saudi Arabian oil field and refinery facilities, Yemen’s Houthi rebels used a formation of 18 Iranian-produced attack drones and seven cruise missiles that penetrated the Saudis’ sophisticated air defenses. Though each unmanned aerial vehicle had preplanned targets and was not fully autonomous, today’s AI could accomplish the same mission with only the most rudimentary directions, allowing the self-directed weapons to pick and choose among targets.

The stampede toward autonomous weapons has spread to US allies. NATO is putting on a full-court press to detect and stop underwater sabotage like the destruction of the Nord Stream pipeline. “Fourteen nations from the alliance, along with Sweden, are testing sea drones, sensors, and the use of AI in a 12-day exercise off the coast of Portugal that ends Friday,” Joao Lima and Natalia Drozdiak wrote recently in Stars and Stripes. NATO felt obliged to move into the AI technology space as Russia identifies and maps US and allied underwater cables and pipelines.

“…China is working on AI-powered tanks and warships, as well as a supersonic autonomous air-to-air combat aircraft called Anjian, or Dark Sword, that can twist and turn so sharply and quickly that the g-force generated would kill a human pilot,” Noel Sharkey, professor emeritus of artificial intelligence and robotics at the University of Sheffield in Britain, wrote in Scientific American. “Humans are outsourcing the decision to kill to a machine – with no one watching to ascertain the legitimacy of an attack before it is carried out,” Sharkey warned.

These weaponized robots will supposedly be equipped with software algorithms capable of discriminating good guys from bad guys. As the AI subject matter experts warned in their open letter, ever more capable analytical processing machines threaten an “out-of-control race to develop and deploy ever more powerful digital minds that no one – not even their creators – can understand, predict, or reliably control.” Americans should be skeptical of the Replicator initiative and of any project that hands control of human destiny, in domestic life or on the battlefield, to artificial intelligence. It’s called “artificial” for a reason. So, take as a lesson the tagline from the movie The Fly: “Be afraid. Be very afraid.”
