
Elon Musk wants to pause ‘dangerous’ AI development. Bill Gates disagrees—and he’s not the only one

If you’ve heard a lot of pro-AI chatter in recent days, you’re probably not alone.

AI developers, prominent AI ethicists and even Microsoft co-founder Bill Gates have spent the past week defending their work. That’s in response to an open letter published last week by the Future of Life Institute, signed by Tesla CEO Elon Musk and Apple co-founder Steve Wozniak, calling for a six-month pause on work on AI systems that can compete with human-level intelligence.

The letter, which now has more than 13,500 signatures, expressed apprehension that the “dangerous race” to develop programs like OpenAI’s ChatGPT, Microsoft’s Bing AI chatbot and Alphabet’s Bard could have lasting negative consequences if left unchecked, from widespread disinformation to the ceding of human jobs to machines.

But large swaths of the tech industry, including at least one of its biggest luminaries, are pushing back.

“I don’t think asking one particular group to pause solves the challenges,” Gates told Reuters on Monday. A pause would be difficult to enforce across a global industry, Gates added — though he agreed that the industry needs more research to “identify the tricky areas.”

That’s what makes the debate interesting, experts say: The open letter may cite some legitimate concerns, but its proposed solution seems impossible to achieve.

Here’s why, and what could happen next — from government regulations to any potential robot uprising.

What are Musk and Wozniak worried about?

The open letter’s concerns are relatively straightforward: “Recent months have seen AI labs locked in an out-of-control race to develop and deploy ever more powerful digital minds that no one — not even their creators — can understand, predict, or reliably control.”

AI systems often come with programming biases and potential privacy issues. They can also spread misinformation widely, especially when used maliciously.

And it’s easy to imagine companies trying to save money by replacing human jobs — from personal assistants to customer service representatives — with AI language systems.

Italy has already temporarily banned ChatGPT over privacy issues stemming from an OpenAI data breach. The U.K. government published regulatory recommendations last week, and the European Consumer Organisation called on lawmakers across Europe to ramp up regulations, too.

In the U.S., some members of Congress have called for new laws to regulate AI technology. Last month, the Federal Trade Commission issued guidance for businesses developing such chatbots, implying that the federal government is keeping a close eye on AI systems that can be harnessed by fraudsters.

And multiple state privacy laws passed last year aim to force companies to disclose when and how their AI products work, and give customers a chance to opt out of providing personal data for AI-automated decisions.

Those laws are currently active in California, Connecticut, Colorado, Utah and Virginia.

What do AI developers say?

At least one AI safety and research company isn’t worried yet: Current technologies don’t “pose an imminent concern,” San Francisco-based Anthropic wrote in a blog post last month.

Anthropic, which received a $400 million investment from Alphabet in February, does have its own AI chatbot. It noted in its blog post that future AI systems could become “much more powerful” over the next decade, and building guardrails now could “help reduce risks” down the road.

The problem: Nobody’s quite sure what those guardrails could or should look like, Anthropic wrote.

The open letter’s ability to spark conversation around the topic is useful, a company spokesperson tells CNBC Make It. The spokesperson didn’t specify whether Anthropic would support a six-month pause.

In a Wednesday tweet, OpenAI CEO Sam Altman acknowledged that “an effective global regulatory framework including democratic governance” and “sufficient coordination” among leading artificial general intelligence (AGI) companies could help.

But Altman, whose Microsoft-funded company makes ChatGPT and helped develop Bing’s AI chatbot, didn’t specify what those policies might entail, or respond to CNBC Make It’s request for comment on the open letter.

Some researchers raise another issue: Pausing research could stifle progress in a fast-moving industry, and allow authoritarian countries developing their own AI systems to get ahead.

Highlighting AI’s potential threats could encourage bad actors to embrace the technology for nefarious purposes, says Richard Socher, an AI researcher and CEO of AI-powered search engine startup You.com.

Exaggerating the immediacy of those threats also fuels unnecessary hysteria around the topic, Socher says. The open letter’s proposals are “impossible to enforce, and it tackles the problem on the wrong level,” he adds.

What happens now?

The muted response to the open letter from AI developers seems to suggest that tech giants and startups alike are unlikely to voluntarily halt their work.

The letter’s call for increased government regulation appears more likely to happen, especially since lawmakers in the U.S. and Europe are already pushing for transparency from AI developers.

In the U.S., the FTC could also establish rules requiring AI developers to only train new systems with data sets that don’t include misinformation or implicit bias, and to increase testing of those products before and after they’re released to the public, according to a December advisory from law firm Alston & Bird.

Such efforts need to be in place before the tech advances any further, says Stuart Russell, a Berkeley computer scientist and leading AI researcher who co-signed the open letter.

A pause could also give tech companies more time to prove that their advanced AI systems don’t “present an undue risk,” Russell told CNN on Saturday.

Both sides do appear to agree on one thing: The worst-case scenarios of rapid AI development are worth preventing. In the short term, that means providing AI product users with transparency, and protecting them from scammers.

In the long term, that could mean keeping AI systems from surpassing human-level intelligence, and maintaining an ability to control them effectively.

“Once you start to make machines that are rivaling and surpassing humans with intelligence, it’s going to be very difficult for us to survive,” Gates told the BBC back in 2015. “It’s just an inevitability.”
