Experts call for a pause on ‘Giant AI Experiments’

Elon Musk, the CEO of Tesla Inc., and AI pioneer Yoshua Bengio are among the top tech executives and researchers calling for a halt to the rapid development of powerful new AI tools.

The release of GPT-4 by Microsoft-backed OpenAI prompted an open letter that has already gathered more than 1,000 signatures, including those of Elon Musk and Apple co-founder Steve Wozniak. OpenAI says its latest model is significantly more powerful than its predecessor, which powered ChatGPT, a bot that can generate passages of text from the briefest of prompts.

AI systems intelligent enough to compete with humans pose profound risks to society and humanity, according to the open letter, titled “Pause Giant AI Experiments”; powerful AI systems should be developed only once we are confident that their effects will be beneficial and their risks manageable.

According to the letter, a moratorium of six months or longer would give the industry time to establish safety standards for AI design and head off potential harms from the riskiest AI technologies.

The letter, titled “Pause Giant AI Experiments: An Open Letter,” was organized by the nonprofit Future of Life Institute, which lists Mr. Musk as an external advisor, and outlines these concerns along with the proposed pause. Other signatories of the letter, which was released on Wednesday, include Apple co-founder Steve Wozniak, Stability AI CEO Emad Mostaque, and Tristan Harris and Aza Raskin, co-founders of the Center for Humane Technology and prominent critics of social media and AI technology, according to a spokeswoman for the institute.

The letter asks companies to temporarily stop developing AI systems more powerful than GPT-4, the system unveiled earlier this month by Microsoft Corp.-backed startup OpenAI. That would include GPT-5, the next version of OpenAI’s technology.

OpenAI officials say the company has not yet begun training GPT-5. Sam Altman, CEO of OpenAI, said in an interview that the company has long prioritized safety in development and spent more than six months running safety tests on GPT-4 prior to its launch.

Calls for a pause conflict with a widespread desire among tech firms and startups to invest more heavily in so-called generative AI, technology that can create original content in response to human prompts. The buzz around the technology erupted last fall after OpenAI unveiled a chatbot that could perform tasks such as giving detailed answers to questions and writing computer code with humanlike sophistication.

Microsoft has adopted the technology to enhance its Bing search engine and other tools. Google, a subsidiary of Alphabet Inc., has deployed a rival system, and companies including Adobe Inc., Zoom Video Communications Inc., and Salesforce Inc. have also introduced advanced AI tools.

That rush has sparked fresh worries that rapid deployment could bring unintended consequences alongside its benefits. Advances in AI have outstripped what many experts believed was possible just a few years ago, according to Max Tegmark, a co-organizer of the letter, president of the Future of Life Institute, and a physics professor at the Massachusetts Institute of Technology.

At the same time, Mr. Musk has embraced AI tools at Tesla for the company’s advanced driver-assistance features. Last month, Tesla said it was recalling about 362,800 vehicles equipped with its Full Self-Driving Beta software after the top U.S. auto-safety regulator said the technology could in some circumstances violate local traffic laws, potentially increasing the risk of a crash if the driver doesn’t intervene.

Yann LeCun, chief AI scientist at Meta Platforms Inc., tweeted on Tuesday that he declined to sign the letter because he disagreed with its premise. Stability AI’s Mr. Mostaque said in a tweet on Wednesday that although he signed the letter, he did not agree with a six-month pause.

Google did not immediately respond to a request for comment, and Microsoft declined to comment. According to the letter’s authors, experts could use the six-month pause to develop a set of shared safety protocols for advanced AI design, which should be audited and overseen by independent outside experts.
