A.I. researchers urge regulators not to slam the brakes on its development


LONDON — Artificial intelligence researchers argue that there’s little point in imposing strict regulations on its development at this stage, as the technology is still in its infancy and red tape will only slow down progress in the field.

AI systems are currently capable of performing relatively “narrow” tasks — such as playing games, translating languages, and recommending content.

But they’re far from being “general” in any way and some argue that experts are no closer to the holy grail of AGI (artificial general intelligence) — the hypothetical ability of an AI to understand or learn any intellectual task that a human being can — than they were in the 1960s when the so-called “godfathers of AI” had some early breakthroughs.

Computer scientists in the field have told CNBC that AI’s abilities have been significantly overhyped by some. Neil Lawrence, a professor at the University of Cambridge, told CNBC that the narrative around AI has turned it into something that it isn’t.

“No one has created anything that’s anything like the capabilities of human intelligence,” said Lawrence, who used to be Amazon’s director of machine learning in Cambridge. “These are simple algorithmic decision-making systems.” 

Lawrence said there’s no need for regulators to impose strict new rules on AI development at this stage.

People say “what if we create a conscious AI and it has some sort of free will,” said Lawrence. “I think we’re a long way from that even being a relevant question.”

The question is, how far away are we? A few years? A few decades? A few centuries? No one really knows, but some governments are keen to ensure they’re ready.

Talking up A.I.

In 2014, Elon Musk warned that AI could “potentially be more dangerous than nukes” and the late physicist Stephen Hawking said in the same year that AI could end mankind. In 2017, Musk again warned of AI’s dangers, saying that it could lead to a third world war, and he called for AI development to be regulated.

“AI is a fundamental existential risk for human civilization, and I don’t think people fully appreciate that,” Musk said. However, many AI researchers take issue with Musk’s views on AI.

In 2017, Demis Hassabis, the polymath founder and CEO of DeepMind, agreed with AI researchers and business leaders (including Musk) at a conference that “superintelligence” will exist one day.

Superintelligence is defined by Oxford professor Nick Bostrom as “any intellect that greatly exceeds the cognitive performance of humans in virtually all domains of interest.” He and others have speculated that superintelligent machines could one day turn against humans.

A number of research institutions around the world are focusing on AI safety, including the Future of Humanity Institute in Oxford and the Centre for the Study of Existential Risk in Cambridge.

Bostrom, the founding director of the Future of Humanity Institute, told CNBC last year that there are three main ways in which AI could end up causing harm if it somehow became much more powerful. They are:

  1. AI could do something bad to humans.
  2. Humans could do something bad to each other using AI.
  3. Humans could do bad things to AI (in this scenario, AI would have some sort of moral status).

“Each of these areas is a plausible place where things could go wrong,” said the Swedish philosopher.

Skype co-founder Jaan Tallinn regards AI as one of the most likely existential threats to humanity’s existence. He’s spending millions of dollars to try to ensure the technology is developed safely. That involves making early investments in AI labs like DeepMind (partly so that he can keep tabs on what they’re doing) and funding AI safety research at universities.

Tallinn told CNBC last November that it’s important to look at how strongly and how significantly AI progress will feed back into AI development.

“If one day humans are developing AI and the next day humans are out of the loop then I think it’s absolutely justified to be concerned about what happens,” he said.

But Joshua Feast, an MIT graduate and the founder of Boston-based AI software firm Cogito, told CNBC: “There is nothing in the (AI) technology today that implies we will ever get to AGI with it.”

Feast added that it’s not a linear path and the world isn’t progressively moving toward AGI.

He conceded that there could be a “giant leap” at some point that puts us on the path to AGI, but he doesn’t view us as being on that path today. 

Feast said policymakers would be better off focusing on AI bias, which is a major issue with many of today’s algorithms. That’s because, in some instances, they’ve learned how to do things like identify someone in a photo off the back of human datasets that have racist or sexist views built into them.

New laws

The regulation of AI is an emerging issue worldwide, and policymakers have the difficult task of finding the right balance between encouraging its development and managing the associated risks.

They also need to decide whether to try to govern “AI as a whole” or whether to try to introduce AI legislation for specific areas, such as facial recognition and self-driving cars.  

Tesla’s self-driving car technology is perceived as being some of the most advanced in the world. But the company’s vehicles still crash into things — earlier this month, for example, a Tesla collided with a police car in the U.S.

“For it (legislation) to be practically useful, you have to talk about it in context,” said Lawrence, adding that policymakers should identify what “new thing” AI can do that wasn’t possible before and then consider whether regulation is necessary.

Politicians in Europe are arguably doing more to try to regulate AI than anyone else.

In Feb. 2020, the EU released its draft strategy paper for promoting and regulating AI, while the European Parliament put forward recommendations in October on what AI rules should address with regards to ethics, liability and intellectual property rights.

The European Parliament said “high-risk AI technologies, such as those with self-learning abilities, should be designed to allow for human oversight at any time.” It added that ensuring AI’s self-learning capacities can be “disabled” if it turns out to be dangerous is also a top priority.

Regulation efforts in the U.S. have largely focused on how to make self-driving cars safe and whether or not AI should be employed in warfare. In a 2016 report, the National Science and Technology Council set a precedent to allow researchers to continue to develop new AI software with few stipulations.

The National Security Commission on AI, led by ex-Google CEO Eric Schmidt, issued a 756-page report this month asserting the U.S. is not prepared to defend or compete in the AI era. The report warns that AI systems will be used in the “pursuit of power” and that “AI will not stay in the domain of superpowers or the realm of science fiction.”

The commission urged President Joe Biden to reject calls for a wide-ranging ban on autonomous weapons, saying that China and Russia are unlikely to keep to any treaty they sign. “We will not be able to defend against AI-enabled threats without ubiquitous AI capabilities and new warfighting paradigms,” wrote Schmidt.

Meanwhile, there are also global AI regulation initiatives underway.

In 2018, Canada and France announced plans for a G-7-backed international panel to study the broad effects of AI on people and economies while also guiding AI development. The panel would be similar to the international panel on climate change. It was renamed the Global Partnership on AI in 2019. The U.S. is yet to endorse it.
