
Godfathers of AI Speak Out

The three “godfathers” of AI (Artificial Intelligence), winners of the Turing Award (often called the “Nobel Prize of computing”), Geoffrey Hinton, Yoshua Bengio, and Yann LeCun, have recently spoken out about their concerns over the technology they were instrumental in creating.

Introduction

AI is the capability of computers to perform tasks so complex that they have previously required human intelligence to carry out. A recent example is the appearance of AI-powered chatbots (such as ChatGPT), which give the appearance of providing human-like answers to questions. The propensity of these AI text applications to provide misinformation – sometimes deliberately created by their human operators, and sometimes unbeknown to them – has led the EU (European Union) to plan legislation to regulate AI.

However, we need to remember that AI chatbots are just one facet of AI – they just happen to be in the spotlight at the moment. AI powers how streaming platforms pick what you should watch next. AI is being used in hiring to help sort and select job applications, to aid in the diagnosis of medical conditions, and by insurance companies to determine premiums.

The speed with which AI development is progressing has shocked its inventors and developers. Its evolution has been exponential and dramatic since 2012, when Dr. Hinton and his students built their innovative neural network to analyse images.

Sundar Pichai, CEO of Google, said recently that even he did not fully understand everything that Google’s AI chatbot Bard did.

Geoffrey Hinton

On May 4, 2023, Dr. Geoffrey Hinton, a British-Canadian cognitive psychologist and computer scientist, announced his resignation from Google. He said that he wants to be able to speak freely about his concerns, and that he now regrets that his pioneering research on neural networks and deep learning has paved the way for current AI systems like ChatGPT. He had a warning about the growing risks arising from developments in AI.

He said that, in his opinion, some of the risks posed by AI chatbots such as ChatGPT were “quite scary. Right now, they’re not more intelligent than us, as far as I can tell. But I think they soon may be … Right now, what we’re seeing is things like GPT-4 eclipses a person in the amount of general knowledge it has, and it eclipses them by a long way. In terms of reasoning, it’s not as good, but it does already do simple reasoning … given the rate of progress, we expect things to get better quite fast. So we need to worry about that.”

“We’re biological systems, and these are digital systems. And the big difference is that with digital systems, you have many copies of the same set of weights, the same model of the world … all these copies can learn separately but share their knowledge instantly. So it’s as if you had 10,000 people and whenever one person learnt something, everybody automatically knew it. And that’s how these chatbots can know so much more than any one person.”

Yoshua Bengio

On May 31, 2023, Professor Yoshua Bengio, a Canadian computer scientist, became the second “godfather” of AI to speak out. He announced that his life’s work had left him feeling lost. He has allied himself with calls by many leading experts for AI regulation, and has said that militaries should not be given access to AI capabilities. He expressed his worries over the speed and direction of AI development.

Professor Bengio confessed that these possibilities were taking a personal toll. He said that his life’s work, which had previously given him both a sense of identity and direction, no longer felt clear to him. He said, “It is challenging, emotionally speaking, for people who are inside [the AI field].”

Professor Bengio recently signed two statements urging caution about the future dangers of AI. He stated that companies building powerful AI applications should be registered. “Governments need to track what they’re doing, they need to be able to audit them, and that’s just the minimum thing we do for any other sector like building aeroplanes or cars or pharmaceuticals … We also need the people who are close to these systems to have a kind of certification … we need ethical training here. Computer scientists don’t usually get that, by the way.”

Yann LeCun

The third “godfather” of AI, Professor Yann LeCun, VP and Chief AI Scientist at Meta (formerly Facebook), said that, in his opinion, doomsday warnings about AI are exaggerated. He said, “I don’t think AI will try to destroy humanity, but it might put us under strict controls … There’s a small likelihood of it annihilating humanity. Close to zero, but not impossible.”

LeCun agrees about the importance of making AI systems controllable, and of ensuring the accuracy of the information they provide. That said, he differs in his views from Geoffrey Hinton and Yoshua Bengio. In his opinion, believing that future AI systems will be designed in the same way as current models shows a lack of imagination. He believes that innovation will make AI more controllable. The challenge will then be setting design goals that align with human policies and values.

He’s concerned about the possible impact on the economy when the small number of companies developing these applications gain even more influence and power. He believes that good regulation is preferable to halting R&D. He suggests maximising the positive effects of AI, while minimising the negative ones. He feels that AI has the potential to enhance human intelligence, and that this could possibly lead to a new Renaissance – a step forward for humanity.

Their Recommendations & Concerns

Pause AI Development

In March 2023, an open letter co-signed by a number of AI experts called for a pause on the development of any AI system more powerful than GPT-4 (the model behind the latest version of ChatGPT, OpenAI’s AI chatbot), to allow time to design and implement safety measures.

Professor Bengio said that because of the unexpected speed-up in the development of AI systems’ capabilities, “we need to take a step back.”

Dr. Hinton said that he believed AI will deliver many more benefits than risks, and didn’t think AI development should stop – only be paused. However, he also believes that because of international competition to develop superior AI, pausing development will be very difficult to accomplish. He elaborated: “Even if everybody in the U.S. stopped developing it, China would just get a big lead.” He stated that it was the role of governments to make sure that AI development was carried out “with a lot of thought into how to stop it going rogue.”

They all believe that we’re on a speeding train, and are very concerned that it might start building its own track.

Concern About Bad Actors

Dr. Hinton was concerned that “bad actors” would try to use AI for “bad things”. “This is just a kind of worst-case scenario, kind of a nightmare scenario … You can imagine, for example, some bad actor like [Russian President Vladimir] Putin decided to give robots the ability to create their own sub-goals.” He warned that this might eventually “create sub-goals like ‘I need to get more power’.” He’s worried that AI is a very different kind of intelligence from human intelligence.

Professor Bengio said he was also concerned about bad actors using AI – particularly as it becomes more powerful and sophisticated. “It might be military, it might be terrorists, it might be somebody very angry, psychotic. And so if it’s easy to program these AI systems to ask them to do something very bad, this could be very dangerous. … If they’re smarter than us, then it’s hard for us to stop these systems or to prevent damage.”

Other experts and academics have also warned that the speed of AI development might end in malicious AI being deployed by bad actors to actively cause harm – or an AI choosing to inflict harm of its own volition. Some are also afraid that advanced AI technology might be put to destructive purposes – such as chemical weapon development.

Need for a Responsible Approach

Not everyone in the field believes that AI will bring about the end of humanity. These researchers have concerns too, but argue that there are more urgent issues to address.

Dr. Sasha Luccioni, an AI research scientist, said that we need to concentrate on problems such as the spread of misinformation by chatbots, predictive policing, and AI bias – current and concrete dangers.

There are numerous instances of AI’s benefits to human society. Recently, an AI-powered tool uncovered a new antibiotic, and a paralysed man gained the ability to walk just by thinking about doing it – thanks to a microchip that was created using AI technology.

However, there are concerns about the future impact of AI on the world economy. Corporations are already eliminating personnel and replacing them with AI applications – this was a factor in the recent strike by Hollywood writers.
