
What ‘deepfakes’ are and how they may be dangerous

A comparison of an original and deepfake video of Facebook CEO Mark Zuckerberg.

Elyse Samuels | The Washington Post | Getty Images

Camera apps have become increasingly sophisticated. Users can elongate legs, remove pimples, add on animal ears and now, some can even create false videos that look very real. The technology used to create such digital content has quickly become accessible to the masses, and its products are called “deepfakes.”

Deepfakes refer to manipulated videos, or other digital representations produced by sophisticated artificial intelligence, that yield fabricated images and sounds that appear to be real.

Such videos are “becoming increasingly sophisticated and accessible,” wrote John Villasenor, nonresident senior fellow of governance studies at the Center for Technology Innovation at the Brookings Institution, a Washington-based public policy organization. “Deepfakes are raising a set of challenging policy, technology, and legal issues.”

In fact, anybody who has a computer and access to the internet can technically produce deepfake content, said Villasenor, who is also a professor of electrical engineering at the University of California, Los Angeles.

What are deepfakes?

The word deepfake combines the terms “deep learning” and “fake,” and refers to a form of artificial intelligence.

In simplistic terms, deepfakes are falsified videos made by means of deep learning, said Paul Barrett, adjunct professor of law at New York University.

Deep learning is “a subset of AI,” and refers to arrangements of algorithms that can learn and make intelligent decisions on their own.

But the danger is that “the technology can be used to make people believe something is real when it is not,” said Peter Singer, a cybersecurity and defense-focused strategist and senior fellow at the New America think tank.

Singer is not the only one who’s warned of the dangers of deepfakes.

Villasenor told CNBC the technology “can be used to undermine the reputation of a political candidate by making the candidate appear to say or do things that never actually occurred.”

“They are a substantial new tool for those who might want to (use) misinformation to influence an election,” said Villasenor.

How do deepfakes work?

A deep-learning system can produce a persuasive counterfeit by studying photographs and videos of a target person from multiple angles, and then mimicking that person’s behavior and speech patterns.

Barrett explained that “once a preliminary fake has been produced, a method known as GANs, or generative adversarial networks, makes it more believable. The GANs process seeks to detect flaws in the forgery, leading to improvements addressing the flaws.”

And after multiple rounds of detection and improvement, the deepfake is completed, said the professor.
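The detect-flaws-then-improve loop Barrett describes can be illustrated with a toy sketch. This is not a real GAN and nothing below comes from the article itself: it replaces video frames with a single number, and the “discriminator” simply measures how far the forgery is from a real-data statistic, which the “generator” then shrinks over repeated rounds. All names and values are invented for the example.

```python
# Toy illustration of the adversarial "detect flaws, then improve"
# loop described above. Not a real GAN: the "forgery" is one number,
# and the "flaw" is its distance from a statistic of the real data.

REAL_MEAN = 5.0  # statistic of the "real" data the forger imitates

def discriminator_flaw(fake_sample):
    """Detect how 'off' the fake looks: its distance from real data."""
    return fake_sample - REAL_MEAN

def generator_step(fake_sample, flaw, lr=0.5):
    """Improve the forgery by shrinking the detected flaw."""
    return fake_sample - lr * flaw

fake = 0.0  # the generator's first, crude forgery
for _ in range(20):  # "multiple rounds of detection and improvement"
    flaw = discriminator_flaw(fake)
    fake = generator_step(fake, flaw)

print(abs(fake - REAL_MEAN) < 1e-3)  # the forgery now matches the real statistic
```

After twenty rounds the forgery is nearly indistinguishable from the real statistic, which is the core idea behind why GAN-refined deepfakes become so convincing.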

According to an MIT Technology Review report, a device that enables deepfakes can be “a perfect weapon for purveyors of fake news who want to influence everything from stock prices to elections.”

In fact, “AI tools are already being used to put pictures of other people’s faces on the bodies of porn stars and put words in the mouths of senators,” wrote Martin Giles, San Francisco bureau chief of MIT Technology Review, in a report.

He said GANs didn’t create this problem, but they’ll make it worse.

How to detect manipulated videos?

While AI can be used to make deepfakes, it can also be used to detect them, Brookings’ Villasenor wrote in February. With the technology becoming accessible to any computer user, more and more researchers are focusing on deepfake detection and looking for a way of regulating it.

Large corporations such as Facebook and Microsoft have taken initiatives to detect and remove deepfake videos. The two companies announced earlier this year that they will be collaborating with top universities across the U.S. to create a large database of fake videos for research, according to Reuters.

“Right now, there are slight visual aspects that are off if you look closer, anything from the ears or eyes not matching to fuzzy borders of the face or too smooth skin to lighting and shadows,” said Singer from New America.

But he said that spotting the “tells” is getting harder and harder as the deepfake technology becomes more advanced and videos look more realistic.
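One of the tells Singer mentions, “too smooth skin,” can be made concrete with a crude sketch. Real detection systems are far more sophisticated; this toy, with invented pixel values and an invented threshold, simply flags an image patch whose texture variance is suspiciously low, the statistical signature of an airbrushed or synthesized surface.

```python
# Toy sketch of the "too smooth skin" tell quoted above. Real
# deepfake detectors are far more complex; this just uses low
# pixel variance as a crude proxy for unnaturally smooth texture.

def variance(pixels):
    """Plain population variance of a list of pixel intensities."""
    mean = sum(pixels) / len(pixels)
    return sum((p - mean) ** 2 for p in pixels) / len(pixels)

def looks_too_smooth(skin_patch, threshold=4.0):
    """Flag a patch whose texture variance is suspiciously low."""
    return variance(skin_patch) < threshold

natural_skin = [120, 128, 117, 131, 124, 119, 133, 122]   # varied texture
smoothed_skin = [124, 125, 124, 125, 124, 125, 124, 125]  # airbrushed look

print(looks_too_smooth(natural_skin))   # False
print(looks_too_smooth(smoothed_skin))  # True
```

As Singer notes, these simple cues are disappearing as generation improves, which is why detectors increasingly rely on learned features rather than hand-picked thresholds like this one.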

Even as the technology continues to evolve, Villasenor warned that detection techniques “often lag behind the most advanced creation methods.” So the bigger question is: “Will people be more likely to believe a deepfake or a detection algorithm that flags the video as fabricated?”

Update: This story has been revised to reflect an updated quote by John Villasenor from the Brookings Institution.
