Robert J. Marks is the Director of the Walter Bradley Center for Natural and Artificial Intelligence; a Distinguished Professor of Electrical and Computer Engineering at Baylor University; and a Fellow of both the IEEE and the Optical Society of America. Marks served as editor-in-chief of the IEEE Transactions on Neural Networks.
His research has been supported by the Army Research Lab, the Office of Naval Research, the Naval Surface Warfare Center, the Army Research Office, NASA, JPL, NIH, NSF, Raytheon, and Boeing. He has also consulted for Microsoft and DARPA.
He is co-author of Neural Smithing: Supervised Learning in Feedforward Artificial Neural Networks, Introduction to Evolutionary Informatics, and the author of The Case for Killer Robots: Why America’s Military Needs to Continue Development of Lethal AI. His keynote will touch on this subject: the military and lethal AI.
No matter how fast computers compute, the Church-Turing thesis dictates that certain AI limitations of yesterday and today will still apply tomorrow; this includes quantum computing. Alan Turing showed that there exist problems unsolvable by computers because those problems are nonalgorithmic.
Sentience, creativity and understanding are human properties that appear to be nonalgorithmic. The sentient property of qualia is possibly the most obvious example of uncomputability.
The inability of computers to understand is nicely explained through the allegory of Searle’s Chinese Room. And for AI to be creative, it must pass the Lovelace test proposed by Selmer Bringsjord. No AI has yet passed the Lovelace test.
With an understanding of the limitations of AI, we can soberly address use of AI in potentially lethal applications like autonomous military weapons.
In a previous assignment, Michael Kanaan was the first Chairperson of Artificial Intelligence, HQ U.S. Air Force, where he authored and guided the research, development, and implementation strategies for AI technology and machine learning activities across its global operations. Prior to that, he was the Enterprise Lead for Artificial Intelligence, HQ U.S. Air Force ISR & Cyber Effects Operations. He’s now the Director of Operations, U.S. Air Force/MIT Artificial Intelligence.
In recognition of his fast-rising career and broad influence, he was named to the Forbes “30 Under 30” List and has received numerous other awards and prestigious honors — including the US Government’s Arthur S. Flemming Award.
His highly anticipated book, T-Minus AI: Humanity’s Countdown to Artificial Intelligence and the New Pursuit of Global Power, is due this August.
Dr Thaler is the author of more than two dozen patents on generative AI. DABUS, his newest patented system, is the focus of a global legal effort to credit AIs as inventors of the IP they create, and the basis for this WSJ article (paywall, but archived here).
A 15-year veteran of aerospace giant McDonnell Douglas, his work for the military includes novel electro-optical materials discovery as well as brilliant robotic control systems capable of self-originating Machiavellian tactics on the battlefield.
He has authored numerous papers based upon his patented neural network paradigms to model cognition, consciousness, and sentience. In the first of these works, Thaler offered highly controversial models of hallucination within the traumatized brain.
More recently he has suggested a compelling perspective on the close relationship between psychopathologies and creativity. Now, he has received a patent for a neural network methodology that allows connectionist architectures to scale up to trillions of computational neurons, in order to create free-thinking, sentient synthetic entities.
Two patent-worthy inventions have been autonomously conceived by machine intelligence. Unfortunately, patent examiners have rejected the applications since they lack a “natural person” as the inventor.
The irony of these decisions is that the extensive generative neural system responsible for these notions, DABUS, is arguably conscious and sentient, the key features believed to distinguish humans from many other terrestrial life forms, as well as from the much larger inorganic world.
But DABUS is built upon the human plan, since it develops feelings for whatever it may be attentive to, either in its external environment or within its imagination. In this way, it savors its discoveries and inventions. Thus, it can develop a ‘first person’, subjective feel for its own cognition, the criterion philosophers typically use to disqualify computer algorithms as conscious.
Now functioning as an artificial inventor, DABUS can relish its self-originated concepts, in much the same way it can appreciate its non-seminal cognition, all the while investing heightened significance in them.
Herein, the case is made that DABUS, apart from other forms of generative AI, has attained the purest form of personhood, one without extraneous corporeal features and functions.
His talk will cover:
1. A brief description of the previous patents DABUS is built upon.
2. A high-level description of DABUS and how it works.
3. The computational approach used by DABUS to generate subjective feelings.
4. A description of how DABUS harnesses its feelings to generate ideas and interpret its world.
5. The correspondence between DABUS and human consciousness.
6. Accounts of how DABUS can misbehave, and why that isn’t so bad.
Ryan Abbott, MD, JD, MTOM, PhD, is Professor of Law and Health Sciences at the University of Surrey School of Law, Adjunct Assistant Professor of Medicine at the David Geffen School of Medicine at UCLA, Partner at Brown, Neri, Smith & Khan, LLP, and a mediator and arbitrator with JAMS.
He has consulted for, among others, the UK Parliament, European Commission, WHO, and the World Intellectual Property Organization. He is a licensed physician and patent attorney in the United States, and a non-practicing solicitor in England and Wales. Managing Intellectual Property magazine named him one of the 50 most influential people in intellectual property in 2019.
Working with Dr Thaler, Prof Abbott’s efforts to win recognition for the DABUS AI machine as an inventor have drawn coverage from the WSJ, the BBC, and CMS Law-Now, among others. He is the author of The Reasonable Robot: Artificial Intelligence and the Law.
He will give you a firsthand account of what it’s like to try to win recognition for AI machines as inventors.
Anastasia was a quantum researcher at the Georgia Tech Quantum Optics & Quantum Telecommunications Lab and the University of Maryland Joint Quantum Institute, the founder of CourseShark, and is an alumna of the Flashpoint start-up incubator. She’s now working on superconducting-qubit quantum processors at Bleximo.
She has a blog and a YouTube channel demystifying quantum computing, with the goal of getting more scientists and engineers into quantum computing research.
She finished 2nd at IBM’s Europe Qiskit Camp (a quantum hackathon competition) for work on improving the performance of Qiskit, and 1st at IBM’s Asia Qiskit Camp for designing a pulse-level programming language for quantum computing.
Let’s unwind the hype from the research in quantum computing. The quantum computing field has been around for years, but we’ve seen an explosion in research in the last decade. What problems have been solved, which are still being solved, and where will we be in 10 years?
When will we reach quantum advantage? What are the potential security implications of a large, coherent, fault-tolerant quantum computer? And how can you get involved in the quantum revolution?
Lots of questions, I know. But don’t worry, I’ve got the answers. Just join me at the conference and I’ll bring you up to speed on the latest and future research in the quantum computing space.
Software developer, cyber security expert, and malware analyst, with more than a decade of experience in malware research, system protection, and threat prevention. A graduate of the Check Point Security Academy, with a B.Sc. in Computer Science and Mathematics and an MBA from Bar-Ilan University.
Malware writers are getting more creative on where to hide their C&C domains and IP addresses and how to dynamically generate them. We’ve witnessed unique places to hide a C&C domain, like in fake social media accounts and RSS feeds, but in this talk, I’ll review a new technique found inside the bitcoin blockchain.
While analyzing this method of attack, we tried to understand why an attacker would even use the bitcoin blockchain as part of an infection chain. Since the platform is hard to trace, stable (there’s no downtime), visible from almost everywhere, and easy to update, we realized it might be well suited to this kind of purpose.
I’ll show a well-known technique that has already been used in the bitcoin blockchain: using the OP_RETURN output script function to hide a C&C domain name. Then I’ll present a deep analysis of a new method we recently discovered that uses the transaction history to generate a dynamic C&C IP address.
Finally, I’ll demonstrate how we can reveal the C&C IP addresses from a specific bitcoin wallet, how to get the malicious payload from the C&C and how the attacker’s infrastructure can be easily destroyed by sending a single transaction to the attacker’s wallet.
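To make the mechanics concrete, here is a minimal, hypothetical sketch of how a dynamic C&C address could be derived from a wallet's transaction history. The encoding scheme below (the low 16 bits of each of the two latest transaction amounts encode two octets each) is my own illustration, not necessarily the exact method from the talk:

```python
# Hypothetical illustration: deriving a C&C IPv4 address from the satoshi
# amounts of a wallet's two most recent transactions. Each amount's low
# 16 bits encode two octets of the address. (My own toy scheme.)

def encode_ip(ip: str) -> tuple:
    """Split an IPv4 address into two 16-bit transaction amounts."""
    a, b, c, d = (int(x) for x in ip.split("."))
    return (a << 8 | b, c << 8 | d)

def decode_ip(amount1: int, amount2: int) -> str:
    """Rebuild the IPv4 address from two observed transaction amounts."""
    return ".".join(str(x) for x in
                    (amount1 >> 8, amount1 & 0xFF, amount2 >> 8, amount2 & 0xFF))

amounts = encode_ip("203.0.113.7")   # attacker sends these satoshi amounts
print(amounts)                        # (51968, 28935)
print(decode_ip(*amounts))            # 203.0.113.7 — malware recovers the C&C IP
```

Because the decoder reads whatever the latest transactions happen to be, anyone who sends a new transaction to the wallet changes the derived address — which illustrates how the infrastructure can be disrupted with a single transaction, as the talk describes.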
Joel Lehman is a Senior Research Scientist at Uber AI, where he leads efforts into AI safety research. Previously, he was the first employee of Geometric Intelligence (acquired by Uber) and a tenure-track professor at the IT University of Copenhagen, where his research focused on evolutionary computation, neural networks, artificial life, and computational creativity.
He was co-inventor of the popular novelty search evolutionary algorithm, and co-wrote a popular science book called “Why Greatness Cannot be Planned,” on what AI search algorithms imply for individual and societal accomplishment.
It is well known among practitioners of machine learning that algorithms often find unexpected ways to satisfy the objective you supply. For reinforcement learning algorithms, where agents are trained to perform complicated tasks through rewards and punishments, specifying the correct rewards is surprisingly challenging.
I will first review many examples of (often entertaining) machine creativity resulting from unanticipated flaws in reward functions and simulators for reinforcement learning. I will then introduce the field of AI safety, which aims to alleviate such unexpected machine behavior by posing and solving the technical challenges that cause learning algorithms’ behavior to deviate from our intentions.
I will highlight current research and future challenges, including the challenge of making machines with ethical sensibilities, concluding by arguing that progress in AI safety is important for both commercial endeavours and society at large.
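As a miniature of the reward-misspecification problem the abstract describes, consider rewarding an agent for movement as a proxy for "making progress" toward a goal: a degenerate policy that jiggles in place then out-scores the policy that actually reaches the goal. Everything below is my own toy illustration, not code from the talk:

```python
# Toy illustration of reward misspecification in RL (hypothetical example).
# True objective: reach position 10 on a 1-D track.
# Proxy reward: |velocity| per step ("keep moving" as a stand-in for progress).

def run_episode(policy, steps=20, goal=10):
    pos, proxy_reward = 0, 0.0
    for t in range(steps):
        action = policy(t, pos)               # action in {-1, 0, +1}
        new_pos = max(0, min(goal, pos + action))
        proxy_reward += abs(new_pos - pos)    # rewards any movement at all
        pos = new_pos
    reached_goal = (pos == goal)
    return proxy_reward, reached_goal

def go_to_goal(t, pos):
    return 1 if pos < 10 else 0               # sensible: walk right, then stop

def oscillate(t, pos):
    return 1 if t % 2 == 0 else -1            # degenerate: jiggle in place

sane_r, sane_done = run_episode(go_to_goal)
hack_r, hack_done = run_episode(oscillate)
print(sane_r, sane_done)   # 10.0 True  — reaches the goal, modest proxy reward
print(hack_r, hack_done)   # 20.0 False — never reaches the goal, more reward
```

A learning algorithm optimizing only the proxy reward would prefer the oscillating policy, which is exactly the kind of objective/intention gap AI safety research targets.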
Dr McLachlan is Head of Data Science (San Francisco) and Principal Data Scientist at Ericsson’s Global Artificial Intelligence Accelerator, where he develops new 5G technology that uses artificial intelligence to empower the next generation of interactive media content. He is particularly interested in data privacy, and his products use 5G and edge computing to protect users’ privacy while enabling greater customization and personalization.
He is particularly interested in supporting advertising and media in augmented and mixed reality environments. Prior to joining Ericsson, Paul was Director of Data Science and Artificial Intelligence Practice Lead at Epsilon, where he led development of machine learning approaches for marketing data.
Before entering the corporate world, he was an academic on the data science faculty at Emory University. His PhD is from the University of California, San Diego.
Red Hat Women in Open Source Academic Award winner (2017) and a Google Summer of Code alumna. Aside from being an ardent open-source enthusiast, I am also a budding researcher, having worked at the San Diego Supercomputer Center, the National Research Council of Canada, and the Institute of Research & Development, France. I also briefly worked on an anomaly detection framework for the ads system at Facebook.
To help bridge the gender gap in technology, I’ve served as Director of Women Who Code and Lead of Google Women Techmakers for a handful of years. In my quest to build a powerful bunch of girls and boys alike, I mentor aspiring developers in global programs with this motto: We rise by lifting others.
I have a master’s degree in computer science, with a specialization in machine learning and artificial intelligence. I am a machine learning and data science Pythonista at heart.
In the contemporary world of learning algorithms, as machine learning spreads across ever more domains, model complexity keeps inching up. It is therefore important to approach machine learning conceptually. In this talk, I’ll present an informal taxonomy of machine learning algorithms, grouped by their underlying mathematical abstractions.
I will cover logical models, built on tree-based or rule-based concepts; geometric models, including linear and distance-based approaches; and probabilistic models. I will go over the fundamentals of each type of model and discuss their pros, cons, and use cases, along with a very simple hands-on example.
The talk will conclude with some pointers on how to explore the data and choose a model to make the learning more efficient.
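To ground the taxonomy, here is a self-contained toy sketch of one model from each family — a single decision rule (logical), a nearest-neighbor classifier (geometric), and Gaussian naive Bayes (probabilistic). The data and implementations are my own minimal examples, not material from the talk:

```python
# Tiny illustrations of the three model families. Toy data: points in the
# unit square, labeled 1 if they sit in the upper-right region, else 0.
import math

data = [((0.1, 0.2), 0), ((0.2, 0.1), 0), ((0.9, 0.8), 1), ((0.8, 0.9), 1)]

# 1. Logical model: a single decision rule (a one-node "tree").
def rule_model(p):
    return 1 if p[0] > 0.5 else 0

# 2. Geometric model: 1-nearest-neighbor by Euclidean distance.
def nn_model(p):
    _, label = min(data, key=lambda d: math.dist(p, d[0]))
    return label

# 3. Probabilistic model: Gaussian naive Bayes fit per class and feature.
def gaussian(x, mu, var):
    return math.exp(-(x - mu) ** 2 / (2 * var)) / math.sqrt(2 * math.pi * var)

def nb_model(p):
    best, best_score = None, -1.0
    for c in (0, 1):
        pts = [x for x, lbl in data if lbl == c]
        score = 1.0
        for i in range(2):
            vals = [pt[i] for pt in pts]
            mu = sum(vals) / len(vals)
            var = sum((v - mu) ** 2 for v in vals) / len(vals) or 1e-9
            score *= gaussian(p[i], mu, var)
        if score > best_score:
            best, best_score = c, score
    return best

query = (0.85, 0.75)
print(rule_model(query), nn_model(query), nb_model(query))   # 1 1 1
```

All three agree on this toy query, but they get there via a rule, a distance, and a likelihood respectively — which is the point of organizing models by mathematical abstraction.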
Karl is a Manager of Cloud AI Advocacy at Google, based in Austin, TX. He leads a global team of data science and engineering experts who engage with users at events, build impactful content, and influence Google product strategy. Previously, he was Director of Engineering at SparkCognition, where he led cross-functional teams building an ML-based malware detection system.
Karl holds a BS in Computer Science from Duke University, and an MBA from the University of Texas – Austin.
AutoML enables you to create high-quality custom machine learning models without being an ML expert. Leveraging neural architecture search technology, it automates model selection, tuning, and training in the cloud. In this session, you’ll learn how AutoML works and how to apply it to structured data, images, and text.
1. What AutoML is and how it works
2. How AutoML can help both new and experienced data scientists
3. How to apply AutoML to various types of problems to build a model
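Under the hood, AutoML products automate a search-and-evaluate loop over candidate models and hyperparameters. The hand-rolled sketch below illustrates that loop conceptually only — it is not Google Cloud AutoML code (which runs neural architecture search behind a managed API); the dataset and the k-nearest-neighbor candidate space are my own toy assumptions:

```python
# Conceptual sketch of the select/tune/evaluate loop that AutoML automates.
import math

# Toy dataset: label 1 if the point sits in the upper-right region.
train = [((0.1, 0.3), 0), ((0.2, 0.2), 0), ((0.4, 0.3), 0),
         ((0.9, 0.7), 1), ((0.8, 0.8), 1), ((0.7, 0.9), 1)]
val = [((0.25, 0.2), 0), ((0.3, 0.1), 0), ((0.75, 0.8), 1), ((0.9, 0.9), 1)]

def knn_predict(p, k):
    """k-nearest-neighbor majority vote — one candidate model family."""
    nearest = sorted(train, key=lambda d: math.dist(p, d[0]))[:k]
    votes = sum(lbl for _, lbl in nearest)
    return 1 if votes * 2 > k else 0

def accuracy(k):
    """Evaluate a candidate configuration on held-out data."""
    return sum(knn_predict(p, k) == lbl for p, lbl in val) / len(val)

# The "AutoML loop": search the configuration space, keep the best candidate.
best_k = max((1, 3, 5), key=accuracy)
print(best_k, accuracy(best_k))
```

Real AutoML systems search vastly larger spaces (architectures, preprocessing, ensembles) with smarter strategies than brute force, but the select/tune/evaluate skeleton is the same.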
Artificial intelligence and machine learning enthusiast, and a Data Scientist at Walmart Labs working on recommender projects for personalizing Walmart’s online properties.
Personalization in recommendation systems in general, and in e-commerce in particular, provides significant advantages to both the customer and the platform. With millions of users and items and billions of transactions, learning good recommendations becomes difficult. Data sparsity and the loss of customer trust caused by bad recommendations also need to be considered when designing such systems.
We will explore this space with the help of case studies from Walmart’s e-commerce division, to understand the different types of problems faced in the area, not just in terms of engineering effort but also from a business perspective.
1. Personalized recommendation methods
2. Learning about different models for recommendation systems
3. Challenges faced during training and comparing different models
Ramin is the founder of fuzzbox.io, which brings together Machine Learning, A/B testing, progressive delivery, and chaos engineering to help companies explore the unknowns of their applications, uncover risk, and manage complexity safely.
He has spent the last decade helping large companies put machine learning into production and scale their data infrastructure. He is based in Seattle.
Anais is a Developer Advocate for InfluxData with a passion for making data beautiful through Data Analytics, AI, and Machine Learning. She takes the data that she collects and, through a mix of research, exploration, and engineering, translates it into something of function, value, and beauty.
When she is not behind a screen, you can find her outside drawing, stretching, boarding, or chasing after a soccer ball.
Karthik is a machine learning engineer on the Google Cloud AI team, working on TensorFlow for enterprise. Before Google, Karthik was part of the machine learning platform group on Uber’s self-driving team, and before that led a team of data scientists on Uber’s risk team. He is passionate about making data scientists productive.
Rishabh has a master’s in computer science from the University of California, San Diego and currently works at Twitter as a Machine Learning Engineer on the Timelines Quality team. His passion is data.
The datasets he has collected as part of his research have been very well received by the ML community, and he’s recently ranked in the top 20 dataset contributors on the Kaggle platform. His dataset on Sarcasm Detection was used in Deeplearning.ai’s Natural Language Processing in TensorFlow course on Coursera for teaching purposes.
He likes to explain convoluted concepts in an accessible manner and has written several blog posts for the Towards Data Science online publication.
Together with his colleague Jigyasa Grover, Rishabh will conduct the How to curate quality datasets for machine learning workshop on July 16, 2020. Detailed info and link to registration are available on this page.
Registration for the workshop and for the conference itself is now open. The workshop has a limited number of tickets, so hurry and register if you want to guarantee yourself a spot. To reserve your ticket(s), click the big red button.