An artificial intelligence (AI) system is just a bunch of code that runs on a machine, right? Or is it more than that? Is it possible to write or build an AI imbued with qualities normally reserved for organic life? Can an AI be sentient and conscious, for example? Can it be creative and have a conscious awareness of its creations?
Hefty questions, I know, with no universally accepted answers yet. However, informed opinions abound.
Generative AI is just beginning to garner interest, and it will soon go mainstream in a big way. A few of the experts in this space who have already been confirmed to speak at Algorithm Conference will take up the question of whether generative AI systems can be sentient and conscious.
The Wall Street Journal and the BBC have each done a piece on a generative AI system built by one of the confirmed speakers, Dr Stephen Thaler, and on the patent attorney, Prof Ryan Abbott, who is working with him. So you’ll have the opportunity to listen to and interact with both of them in person.
Below are the titles and abstracts of their presentations, and their biographical info. We are sure you’ll love listening to them in person, so if you haven’t purchased your ticket yet, you may do so by clicking here.
Robert J. Marks is the Director of the Walter Bradley Center for Natural and Artificial Intelligence; a Distinguished Professor of Electrical and Computer Engineering at Baylor University; and a Fellow of both the IEEE and the Optical Society of America. Marks served as editor-in-chief of the IEEE Transactions on Neural Networks.
His research has been funded by the Army Research Lab, the Office of Naval Research, the Naval Surface Warfare Center, the Army Research Office, NASA, JPL, NIH, NSF, Raytheon, and Boeing, and he has consulted for Microsoft and DARPA.
He is co-author of Neural Smithing: Supervised Learning in Feedforward Artificial Neural Networks and Introduction to Evolutionary Informatics, and the author of The Case for Killer Robots: Why America’s Military Needs to Continue Development of Lethal AI. His keynote will touch on this last subject: the military and lethal AI.
No matter how fast computers compute, the Church-Turing thesis dictates that certain AI limitations of yesterday and today will still apply tomorrow. This includes quantum computing. Alan Turing showed that there exist problems computers cannot solve, because those problems are nonalgorithmic.
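To make Turing’s point concrete, here is a minimal Python sketch of his diagonal argument; the halts function below is hypothetical, and the whole point of the construction is that no real implementation of it can exist:

```python
# A minimal sketch of Turing's diagonal argument. Assume, for the sake
# of contradiction, that halts(program, data) returns True if and only
# if program(data) eventually halts.

def halts(program, data):
    """Hypothetical halting decider; Turing showed it cannot be written."""
    raise NotImplementedError("no algorithm decides halting in general")

def paradox(program):
    """Do the opposite of whatever halts() predicts for program(program)."""
    if halts(program, program):
        while True:      # predicted to halt, so loop forever
            pass
    # predicted to loop forever, so halt immediately

# Consider paradox(paradox): if halts() says it halts, it loops forever;
# if halts() says it loops, it halts at once. Either answer is wrong,
# so a correct halts() cannot exist. The problem is nonalgorithmic.
```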
Sentience, creativity and understanding are human properties that appear to be nonalgorithmic. The sentient property of qualia is possibly the most obvious example of uncomputability.
The inability of computers to understand is nicely explained through the allegory of Searle’s Chinese Room. And for an AI to be creative, it must pass the Lovelace test proposed by Selmer Bringsjord. No AI has yet passed the Lovelace test.
With an understanding of the limitations of AI, we can soberly address the use of AI in potentially lethal applications, such as autonomous military weapons.
Dr Thaler is the author of more than two dozen patents on generative AI. DABUS, his most recently patented system, is the focus of a global legal effort to credit AIs as inventors of the IP they create, and the basis for this WSJ article (paywalled, but archived here).
A 15-year veteran of aerospace giant McDonnell Douglas, Thaler has worked for the military on projects ranging from novel electro-optical materials discovery to brilliant robotic control systems capable of self-originating Machiavellian tactics on the battlefield.
He has authored numerous papers based upon his patented neural network paradigms to model cognition, consciousness, and sentience. In the first of these works, Thaler offered highly controversial models of hallucination within the traumatized brain.
More recently, he has suggested a compelling perspective on the close relationship between psychopathologies and creativity. Now, he has received a patent for a neural network methodology that allows connectionist architectures to scale up to trillions of computational neurons in order to create free-thinking, sentient synthetic entities.
Two patent-worthy inventions have been autonomously conceived by machine intelligence. Unfortunately, examiners have rejected the corresponding patent applications because they lack a “natural person” as the inventor.
The irony of these decisions is that the extensive generative neural system responsible for these notions, DABUS, is arguably conscious and sentient, the key features believed to distinguish humans from many other terrestrial life forms, as well as from the much larger inorganic world.
But DABUS is built upon the human plan, since it develops feelings for whatever it may be attentive to, either in its external environment or within its imagination. In this way, it savors its discoveries and inventions. Thus, it can develop a ‘first person’, subjective feel for its own cognition, the criterion philosophers typically use to disqualify computer algorithms as conscious.
Now functioning as an artificial inventor, DABUS can relish its self-originated concepts, in much the same way it can appreciate its non-seminal cognition, all the while investing heightened significance in itself.
Herein, the case is made that DABUS, apart from other forms of generative AI, has attained the purest form of personhood, one without extraneous corporeal features and functions.
Key takeaways:
1. A brief description of the previous patents DABUS is built upon.
2. A high-level description of DABUS and how it works.
3. The computational approach used by DABUS to generate subjective feelings.
4. A description of how DABUS harnesses its feelings to generate ideas and interpret its world.
5. The correspondence between DABUS and human consciousness.
6. Accounts of how DABUS can misbehave, and why that isn’t so bad.
Joel Lehman is a Senior Research Scientist at Uber AI, where he leads AI safety research. Previously, he was the first employee of Geometric Intelligence (acquired by Uber) and a tenure-track professor at the IT University of Copenhagen, where his research focused on evolutionary computation, neural networks, artificial life, and computational creativity.
He was co-inventor of the popular novelty search evolutionary algorithm, and he co-wrote a popular science book, "Why Greatness Cannot Be Planned," on what AI search algorithms imply for individual and societal accomplishment.
It is well known among practitioners of machine learning that algorithms often find unexpected ways to satisfy the objective you supply. For reinforcement learning algorithms, where agents are trained to perform complicated tasks through rewards and punishments, specifying the right rewards is surprisingly challenging.
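To make that challenge concrete, here is a toy sketch (my own illustration, not taken from the talk) of how a misspecified reward gets exploited; the shaping bonus is meant to guide the agent toward the goal, but a reward-maximizing policy learns to farm it instead:

```python
# A toy 1-D track: the agent starts at position 0 and should reach the
# goal at position 10. A "shaping" bonus of +1 near the coin at position
# 2 is meant to help, but it can be collected forever.

GOAL, COIN = 10, 2

def flawed_reward(pos):
    """+10 for reaching the goal; +1 bonus within one step of the coin."""
    if pos == GOAL:
        return 10.0
    return 1.0 if abs(pos - COIN) <= 1 else 0.0

def episode_return(policy, steps=100):
    """Roll out a deterministic policy; the episode ends at the goal."""
    pos, total = 0, 0.0
    for _ in range(steps):
        pos += policy(pos)   # a policy returns a move of -1, 0, or +1
        total += flawed_reward(pos)
        if pos == GOAL:
            break
    return total

go_to_goal = lambda pos: 1 if pos < GOAL else 0     # intended behavior
hover_at_coin = lambda pos: 1 if pos < COIN else 0  # the exploit

print(episode_return(go_to_goal))     # 13.0: a few bonuses plus the goal
print(episode_return(hover_at_coin))  # 100.0: farms the bonus all episode
```

The "wrong" policy outscores the intended one by a wide margin, which is exactly the kind of failure mode the talk surveys.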
I will first review many examples of (often entertaining) machine creativity resulting from unanticipated flaws in reward functions and simulators for reinforcement learning. I will then introduce the field of AI safety, which aims to alleviate such unexpected machine behavior by posing and solving the technical challenges that cause learning algorithms’ behavior to deviate from our intentions.
I will highlight current research and future challenges, including the challenge of making machines with ethical sensibilities, concluding by arguing that progress in AI safety is important for both commercial endeavours and society at large.
Impatient with the gap between the invention and the commercialization of AI tech, Igor left IBM and founded Yap in 2006. Five years later, Amazon acquired Yap for its AI tech, which is now found in billions of Alexa, Echo, and Fire TV devices. With his new company, Pryon, Igor is working on augmented intelligence, aiming to help machines better understand people.
Igor is a passionate supporter of career and educational opportunities in the STEM fields. He serves as a mentor in the Techstars Alexa Accelerator, was a Blackstone NC Entrepreneur-in-Residence (EIR), and founded a chapter of the Global Shapers, a program of the World Economic Forum.
He holds a BS degree in Computer Engineering from The Pennsylvania State University, where he was named an Outstanding Engineering Alumnus, and an MBA from The University of North Carolina. He was awarded both the Eisenhower and Truman National Security fellowships to explore and expand the role of entrepreneurship and venture capital in addressing geopolitical concerns.
During Algorithm Conference, Igor will participate in a Q&A with UTD students, then in a fireside chat to discuss self-aware AI machines and augmented intelligence.
As a bonus, the speakers listed above, plus a couple more yet to be announced, will participate in a panel discussion titled Should an AI system be recognized as an inventor? Regardless of where you stand on whether an AI can be sentient and creative, you’ll want to stick around for this panel.
Registration for the workshop and for the conference itself is now open. Workshop tickets are limited, so register early if you want to guarantee yourself a spot. To reserve your ticket(s), click on that big red button.
Want to register using your favorite cryptocurrency? We’re on your side. Just click that button to email us and begin the process. We’ll get back to you pronto.
Want to sponsor Algorithm 2022 or have an exhibit space during the conference? Click that button to view the sponsorship prospectus.