Rebooting AI


May 23, 2024


Anastassia Lauterbach

I ask all my students to read “Rebooting AI” by Gary Marcus and Ernest Davis. This book is one of the scarce examples of realistic and sober views on what AI can and can’t do today.

There are five takeaways from the book.

Takeaway number one: Any account in the media of an AI system that sounds either very exciting or terrifying is likely to be unrealistic. When the media, or even some very highly valued companies, push the hype around AI technology, you must see through the noise and identify the real merit behind the claims. Whether you read breathless excitement or very dark scenarios, please treat those speculations with sceptical eyes.

Takeaway number two: Deep Learning is currently the leading approach in Artificial Intelligence. It is genuinely powerful. It handles many narrow tasks well, provided an enormous amount of relevant data is available for each of those tasks.

Deep Learning on its own, however, will never lead to human-level AI.

Takeaway number three: There are real dangers, deliberate and accidental, associated with using Artificial Intelligence. However, the greatest threat is that the enormous potential of AI for good might remain unrealized.

Takeaway number four: Human-level AI will need a rich understanding of the situation and tasks at hand, plus a body of common-sense knowledge. Deep Learning methodologies do not provide such a rich understanding.

Takeaway number five: AI can only be made safe and trustworthy through great engineering, common-sense knowledge, human values, and smart regulation.

Deep Learning works poorly when there is little data to train on. When circumstances and context change, Deep Learning systems can get confused, and the difficulties machines face are often surprising from a human perspective. For example, if you train a self-driving car in one city, that training may be of little value in another town because everything looks different.

A reading program trained on black text on a white background may do poorly when asked to read white text on black.
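To make this concrete, here is a minimal Python sketch of the inverted-text problem, using the small handwritten-digit set that ships with scikit-learn. The model, the data, and the numbers are my own illustration, not an experiment from the book.

# A minimal sketch of the "context change" problem: train on one image
# polarity, then test on the inverted version. Purely illustrative.
from sklearn.datasets import load_digits
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

X, y = load_digits(return_X_y=True)          # pixel values run from 0 to 16
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = LogisticRegression(max_iter=5000).fit(X_train, y_train)

# Same digits, same shapes -- only the "polarity" is inverted,
# like switching from black-on-white to white-on-black text.
X_test_inverted = 16 - X_test

print("accuracy on familiar images:", accuracy_score(y_test, model.predict(X_test)))
print("accuracy on inverted images:", accuracy_score(y_test, model.predict(X_test_inverted)))

A human reader barely notices the inversion; the model, which has only ever seen one polarity, usually loses most of its accuracy.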

Deep Learning systems are also very susceptible to so-called adversarial examples: a slight change to a text or an image, one that seems trivial or even undetectable from a human standpoint, can confuse a Deep Learning system. Such adversarial changes are sometimes crafted deliberately by cybercriminals who use them to penetrate a particular domain and fool these systems.
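To give a sense of how small such a change can be, here is a minimal Python sketch of a gradient-sign perturbation in the spirit of the well-known fast gradient sign method. The two-class digit task, the simple classifier, and the step size are my own illustrative choices, not examples from the book.

# A minimal sketch of an adversarial perturbation: nudge each pixel slightly
# in the direction that increases the model's loss. Purely illustrative.
import numpy as np
from sklearn.datasets import load_digits
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = load_digits(return_X_y=True)
mask = (y == 3) | (y == 8)                   # a simple two-class problem: 3 vs 8
X, y = X[mask], (y[mask] == 8).astype(int)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

clf = LogisticRegression(max_iter=5000).fit(X_train, y_train)
w = clf.coef_[0]

# For logistic regression, the gradient of the loss with respect to the
# input pixels is (predicted probability - true label) * w.
p = clf.predict_proba(X_test)[:, 1]
grad = (p - y_test)[:, None] * w
epsilon = 2.0                                 # pixels range from 0 to 16
X_adv = np.clip(X_test + epsilon * np.sign(grad), 0, 16)

print("accuracy on clean inputs:    ", clf.score(X_test, y_test))
print("accuracy on perturbed inputs:", clf.score(X_adv, y_test))

Each pixel moves only a little, yet the classifier's accuracy usually drops sharply, which is exactly the kind of fragility the authors warn about.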

AI programs of all kinds need basic common sense, which current technologies lack. For example, a robot waiter serving drinks at a party should know, without being told, not to give a guest a broken wine glass. This is common sense for any human waiter, but a machine must possess the same knowledge to perform without flaws.

In other words, AI programs need basic knowledge about greetings, calling, answering, and all the aspects of everyday life that humans take for granted. Getting that kind of knowledge into AI programs has proven tremendously difficult.

A solution to the current flaws of Deep Learning and Machine Learning will require several components.

First, AI programs will have to start with some basic knowledge built in. The fundamental properties of time, space, and causality should be embedded in these programs, along with some knowledge about physical objects, how they interact, and how people interact with one another. This is the first kind of common knowledge that should be built in.

Second, AI programs will need the ability to deal explicitly with concepts and to reason about the relationships between them. It is tremendously important that they can generalize, abstract, and produce abstract knowledge from the beginning.

Third, AI programs need to be structured around a general body of knowledge about the world that they can call on to execute different tasks, rather than learning each task in isolation (a small sketch follows below).
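What might that look like in practice? Here is a tiny, hypothetical Python sketch built around the robot-waiter example above. The facts and the rule are mine, invented purely to illustrate explicit, reusable knowledge; they are not an actual knowledge-representation system.

# A minimal sketch of explicit concepts and relations that any task can reuse.
# Explicit, reusable facts about the world...
is_a = {"wine_glass": "container", "mug": "container"}
properties = {"wine_glass_7": {"kind": "wine_glass", "broken": True}}

def holds_liquid(obj_id: str) -> bool:
    """A broken container cannot hold liquid: a rule stated once,
    applied to any object, in any task, without retraining."""
    obj = properties[obj_id]
    return is_a.get(obj["kind"]) == "container" and not obj["broken"]

def may_serve(obj_id: str) -> bool:
    # The serving task reuses the same body of knowledge instead of
    # learning "don't serve broken glasses" from thousands of examples.
    return holds_liquid(obj_id)

print(may_serve("wine_glass_7"))   # False: common sense, made explicit

The point is not the code itself but the structure: the knowledge is stated once, explicitly, and every task can call on it.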

In one of my pieces on Patreon, I write about Mum being a linguist and discuss Noam Chomsky's theory of Universal Grammar. Chomsky argues that an innate framework for language is built into every human.

On this view, a general body of knowledge already present in an infant's head is what makes learning a language possible.

AIs would also need a learning mechanism to build up common-sense knowledge incrementally by observing and interacting with the world. This is easier said than done. Researchers around the globe must dedicate time and considerable financial resources to investigating alternatives to Deep Learning.

Let me now turn to the concerns and pitfalls around AI, which we must take seriously.

Criminals and other people with bad intentions use Artificial Intelligence for personal gain and to pursue whatever ends they want.

Unless AI is audited carefully by people with a deep understanding of the technology and a broad general body of knowledge, it will perpetuate existing social biases, as many scandals over the last decade have shown.

For example, a few years ago, a widely reported Amazon recruiting program turned out to be persistently biased against female applicants. This was not because Amazon as a company was evil; it was because the historical data the system learned from contained few female engineers and applicants.

In this context, AI can magnify existing biases and produce bad outcomes for individuals and for society.
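To show the mechanism rather than just assert it, here is a minimal Python sketch with entirely synthetic, hypothetical data. It is not the Amazon dataset; it only illustrates how a model faithfully learns the skew baked into historical decisions.

# A minimal sketch of how a skewed training set produces a skewed model.
# The data is synthetic and hypothetical, invented for illustration.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5000
gender = rng.binomial(1, 0.1, n)              # 1 = female; historically rare in the pool
skill = rng.normal(0, 1, n)                   # the signal we actually care about
# Past hiring decisions favoured the majority group regardless of skill.
hired = ((skill + 1.5 * (1 - gender) + rng.normal(0, 1, n)) > 1.0).astype(int)

model = LogisticRegression(max_iter=1000).fit(np.column_stack([gender, skill]), hired)

# Two applicants with identical skill, differing only in the protected attribute:
applicants = np.array([[0, 1.0],              # male, skill = 1.0
                       [1, 1.0]])             # female, skill = 1.0
print(model.predict_proba(applicants)[:, 1])  # the second probability comes out lower

Nothing in the code is malicious; the bias arrives through the labels, which is exactly why careful auditing of training data matters.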

To sum up, the real danger of our times is AI research that fixates on the short-term successes of Deep Learning and Machine Learning while failing to explore other approaches, approaches that might have longer-term payoffs but are not profitable in the short term. The authors of “Rebooting AI” saw this before the Covid pandemic. In my eyes, things have deteriorated further, especially after OpenAI launched ChatGPT in November 2022.

I don’t believe in quick fixes to remedy the current situation and introduce a healthier approach to AI.

Without general technology literacy, we will be at the mercy of a few Big Tech companies and regulators lacking the very technology literacy I am campaigning for.

Hence, I write the “Romy and Roby” series for children and a podcast for their clever parents and grandparents. Of course, more comprehensive solutions exist. But addressing technology education within families is a worthy step on a long journey.


Book 1

Romy & Roby and the Secrets of Sleep
