Does the US military have an obligation to develop weapons systems with AI capabilities? That’s a question that Prof Robert J. Marks tackles in The Case for Killer Robots: Why America’s Military Needs to Continue Development of Lethal AI.
It’s a nice little book that makes for a fascinating read. You can already download a free digital copy, but all conference attendees will receive a physical copy of the book. Most importantly, Prof Marks will deliver a keynote during Algorithm Conference, so you’ll have an opportunity to interact with him in person if you stick around for his talk and his discussion panel appearance.
Robert J. Marks is the Director of the Walter Bradley Center for Natural and Artificial Intelligence; a Distinguished Professor of Electrical and Computer Engineering at Baylor University; and a Fellow of both the IEEE and the Optical Society of America. Marks served as editor-in-chief of the IEEE Transactions on Neural Networks.
His research has been funded by the Army Research Lab, the Office of Naval Research, the Naval Surface Warfare Center, the Army Research Office, NASA, JPL, NIH, NSF, Raytheon, and Boeing. He has also consulted for Microsoft and DARPA.
He is co-author of Neural Smithing: Supervised Learning in Feedforward Artificial Neural Networks and Introduction to Evolutionary Informatics, and the author of The Case for Killer Robots: Why America’s Military Needs to Continue Development of Lethal AI. His keynote, Nonalgorithmic You: Why AI will never be sentient, creative or understand, will touch on the subject of the military and lethal AI.
No matter how fast computers compute, the Church-Turing thesis dictates that certain AI limitations of yesterday and today will still apply tomorrow. This includes quantum computing. Alan Turing showed that there exist problems unsolvable by computers because those problems are nonalgorithmic.
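As a concrete illustration (not drawn from the book or the keynote), Turing's classic example of a nonalgorithmic problem is the halting problem, and his diagonalization argument can be sketched in a few lines of Python. The `halts` oracle below is hypothetical by construction: the whole point of the argument is that no such function can exist.

```python
# Sketch of Turing's halting-problem diagonalization.
# Assume, for contradiction, an oracle halts(f, x) that returns True
# exactly when f(x) eventually halts. No such total algorithm exists,
# so here it simply raises.

def halts(f, x):
    raise NotImplementedError("no algorithm decides halting for all f, x")

def trouble(f):
    # Do the opposite of whatever halts() predicts about f run on itself.
    if halts(f, f):
        while True:      # loop forever if f(f) is predicted to halt
            pass
    return "halted"      # halt if f(f) is predicted to loop

# trouble(trouble) halts if and only if halts(trouble, trouble) says it
# doesn't -- contradicting the assumed correctness of halts(). Hence the
# halting problem is nonalgorithmic: no program can solve it in general.
```

The contradiction lives entirely in the comments; running `trouble(trouble)` simply raises, because the impossible oracle cannot be implemented.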
Sentience, creativity and understanding are human properties that appear to be nonalgorithmic. The sentient property of qualia is possibly the most obvious example of uncomputability.
The inability of computers to understand is nicely explained through the allegory of Searle’s Chinese Room. And for AI to be creative, it must pass the Lovelace test proposed by Selmer Bringsjord. No AI has yet passed the Lovelace test.
With an understanding of the limitations of AI, we can soberly address use of AI in potentially lethal applications like autonomous military weapons.
Professor Marks will participate in a discussion panel exploring the development of lethal AI by the US military in particular and by other armed forces around the world. Is the development and deployment of lethal AI an established fact?
Panelists: Prof Robert J. Marks plus three others to be announced.
Should an AI system be recognized as an inventor? That’s what an international team of patent attorneys is trying to get patent authorities around the world to decide. The generative AI at the center of that effort was written by Dr Thaler. Click the button to learn more.
To register for a workshop and for the conference itself, click on that big red button.
Thursday, July 16, 2020 (8 a.m. - 10 a.m.)
Thursday, July 16, 2020 (1:30 p.m. - 5 p.m.)
Friday, July 17 - Saturday, July 18, 2020
Thursday, July 16, 2020 (7 p.m. - 9:30 p.m.)
Friday, July 17, 2020
Saturday, July 18, 2020
Subscribe to our newsletter to get the latest updates about Algorithm Conference in your inbox. We won’t spam you. Just the latest Algorithm Conference news.