Does the US military have an obligation to develop weapons systems with AI capabilities? That’s a question that Prof Robert J. Marks tackles in The Case for Killer Robots: Why America’s Military Needs to Continue Development of Lethal AI.
It’s a nice little book that makes for a fascinating read. A free digital copy is already available for download, but all conference attendees will receive a physical copy. Most importantly, Prof Marks will deliver a keynote during Algorithm Conference, so you’ll have an opportunity to interact with him in person if you stick around for his talk and his discussion panel appearance.
Robert J. Marks is the Director of the Walter Bradley Center for Natural and Artificial Intelligence; a Distinguished Professor of Electrical and Computer Engineering at Baylor University; and a Fellow of both the IEEE and the Optical Society of America. He served as editor-in-chief of the IEEE Transactions on Neural Networks.
His research has been funded by the Army Research Lab, the Office of Naval Research, the Naval Surface Warfare Center, the Army Research Office, NASA, JPL, NIH, NSF, Raytheon, and Boeing. He has also consulted for Microsoft and DARPA.
He is co-author of Neural Smithing: Supervised Learning in Feedforward Artificial Neural Networks and Introduction to Evolutionary Informatics, and the author of The Case for Killer Robots: Why America’s Military Needs to Continue Development of Lethal AI. His keynote will touch on the subject of the military and lethal AI.
No matter how fast computers compute, the Church-Turing thesis dictates that certain AI limitations of yesterday and today will also apply tomorrow. This includes quantum computing. Alan Turing showed that there exist problems no computer can solve, because those problems are nonalgorithmic.
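The classic example of such a problem is the halting problem. A minimal sketch of Turing's diagonal argument in Python, where `halts` is a hypothetical oracle assumed only for the sake of contradiction:

```python
def halts(program, argument):
    """Hypothetical oracle: returns True iff program(argument)
    eventually halts. Turing's argument shows that no such
    total, correct function can exist."""
    raise NotImplementedError("Turing proved no such function exists")

def paradox(program):
    # Do the opposite of whatever `halts` predicts for `program`
    # when it is run on its own source.
    if halts(program, program):
        while True:      # halts() said it halts -> loop forever
            pass
    return "halted"      # halts() said it loops -> halt immediately

# Feeding paradox to itself is the contradiction: if paradox(paradox)
# halts, halts() predicted it loops, and vice versa. So halts()
# cannot be implemented, no matter how fast the hardware.
```

The point of the sketch is that the contradiction depends only on being able to feed a program its own description, not on any property of the machine running it, which is why the limitation survives every hardware advance, quantum computing included.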
Sentience, creativity and understanding are human properties that appear to be nonalgorithmic. The sentient property of qualia is possibly the most obvious example of uncomputability.
The inability of computers to understand is nicely explained through the allegory of Searle’s Chinese Room. For AI to be creative, it must pass the Lovelace test proposed by Selmer Bringsjord, and no AI has yet passed it.
With an understanding of the limitations of AI, we can soberly address use of AI in potentially lethal applications like autonomous military weapons.
In a previous assignment, Michael Kanaan was the first Chairperson of Artificial Intelligence for HQ U.S. Air Force, where he authored and guided research, development, and implementation strategies for AI and machine-learning activities across its global operations. Prior to that, he was the Enterprise Lead for Artificial Intelligence, HQ U.S. Air Force ISR & Cyber Effects Operations. He’s now the Director of Operations, U.S. Air Force/MIT Artificial Intelligence.
In recognition of his fast-rising career and broad influence, he was named to the Forbes “30 Under 30” list and has received numerous other honors, including the US Government’s Arthur S. Flemming Award.
His highly anticipated book, T-Minus AI: Humanity’s Countdown to Artificial Intelligence and the New Pursuit of Global Power, is due this August.
As an extra, the speakers listed above, plus a couple more yet to be announced, will participate in a discussion panel titled Is lethal AI for the military unavoidable? Regardless of where you stand on this matter, you’ll want to hang around for this discussion panel.
Registration for the workshop and for the conference itself is now open. The workshop has a limited number of tickets, so register soon if you want to guarantee yourself a spot. To reserve your ticket(s), click the big red button.
Want to register using your favorite cryptocurrency? We’re on your side. Just click that button to email us and begin the process. We’ll get back to you pronto.