
Tech Experts Sign Open Letter to Pause AI Development

On March 22, 2023, a number of experts in artificial intelligence called for the training of advanced AI systems to be paused until checks and balances can be put in place. They signed an open letter warning of the possible dangers of AI, claiming that the current race to develop ever more capable AI applications has gotten out of control.

Who Signed?

Elon Musk (tech entrepreneur and owner of Twitter) is one of the signatories calling for the training of AIs beyond a certain capability to be paused for a minimum of six months. Apple co-founder Steve Wozniak and researchers at DeepMind (Google’s AI research lab) also signed.

Another signatory to the letter, Stuart Russell (a computer-science professor at the University of California, Berkeley), said, “AI systems pose significant risks to democracy through weaponised disinformation, to employment through displacement of human skills and to education through plagiarism and demotivation … In the long run, taking sensible precautions is a small price to pay to mitigate these risks.”

Who Created the Letter?

The letter originates from the Future of Life Institute, a non-profit whose mission is to “steer transformative technologies away from extreme, large-scale risks and towards benefiting life.” In the letter, they call on all AI labs to immediately pause the training of AI systems more powerful than GPT-4 for at least six months, warning of the dangers that more advanced systems might pose in the future.

GPT-4 is the most recent model behind ChatGPT, an AI text generator from OpenAI. It represents the current state of the art in this technology, and critics are concerned about its potential for generating convincing misinformation.

What Are Some of the Key Points?

The letter states that, “AI systems with human-competitive intelligence can pose profound risks to society and humanity.” It goes on to say that future AI development must be done with care. Unfortunately, “recent months have seen AI labs locked in an out-of-control race to develop and deploy ever more powerful digital minds that no-one – not even their creators – can understand, predict, or reliably control.”

The letter also warns that AIs might flood information channels with misinformation, and that they might displace existing workers through automation. This concern is backed up by a recent report from Goldman Sachs (a major investment bank), which found that while AI is expected to increase productivity, millions of jobs might become obsolete because of automation.

The letter speculates, “Should we develop non-human minds that might eventually outnumber, outsmart, obsolete and replace us?”
