
Deepfake Technology and the Risks It Poses

Because misinformation spreads so easily on social media platforms, it is increasingly difficult to know what to believe, and that uncertainty undermines informed decision-making. Today, we live in what some call the “post-truth” era, characterized by hostile actors trying to sway public opinion through disinformation campaigns, digital deception, and information warfare.

Recent technological breakthroughs have made it simple to generate what are now known as “deepfakes”: hyper-realistic videos that show minimal evidence of manipulation. A deepfake is an artificial intelligence (AI) application that merges, combines, replaces, and superimposes photos and video clips to produce fake videos that appear authentic, usually without the approval of the person whose image and voice are involved.

Deepfake technology can create a humorous, sexual, or political video of a person saying anything at all. Traditionally, image-editing programs such as Adobe Photoshop were used to modify digital photographs, and manually manipulated images could be distinguished fairly easily. With the rapid development of deep learning methods, however, synthetic images are becoming increasingly convincing, which is what has made this technology so popular. These methods can now edit media so that one person’s face replaces another’s while preserving the original facial expressions and movements.
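
To make the face-substitution idea concrete, the sketch below outlines the shared-encoder, two-decoder autoencoder scheme that classic face-swap deepfakes are widely reported to rely on. It is an illustrative outline only, not any particular tool’s implementation; the class names, network sizes, and the 64x64 crop size are all assumptions.

```python
# Minimal sketch (assumptions noted above) of a shared-encoder / per-identity-decoder
# autoencoder, the basic recipe behind many face-swap deepfakes.
import torch
import torch.nn as nn

class Encoder(nn.Module):
    """Compresses an aligned 64x64 face crop into a latent vector."""
    def __init__(self, latent_dim: int = 256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, 4, stride=2, padding=1), nn.ReLU(),    # 64 -> 32
            nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.ReLU(),   # 32 -> 16
            nn.Conv2d(64, 128, 4, stride=2, padding=1), nn.ReLU(),  # 16 -> 8
            nn.Flatten(),
            nn.Linear(128 * 8 * 8, latent_dim),
        )

    def forward(self, x):
        return self.net(x)

class Decoder(nn.Module):
    """Reconstructs a face crop from the shared latent vector."""
    def __init__(self, latent_dim: int = 256):
        super().__init__()
        self.fc = nn.Linear(latent_dim, 128 * 8 * 8)
        self.net = nn.Sequential(
            nn.ConvTranspose2d(128, 64, 4, stride=2, padding=1), nn.ReLU(),   # 8 -> 16
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),    # 16 -> 32
            nn.ConvTranspose2d(32, 3, 4, stride=2, padding=1), nn.Sigmoid(),  # 32 -> 64
        )

    def forward(self, z):
        return self.net(self.fc(z).view(-1, 128, 8, 8))

# One shared encoder learns pose and expression; one decoder per identity learns appearance.
encoder = Encoder()
decoder_a, decoder_b = Decoder(), Decoder()

# Training (not shown) reconstructs person A with decoder_a and person B with decoder_b.
# The "swap" happens at inference: encode a frame of person A, decode with B's decoder,
# so B's face appears with A's expression and head pose.
frame_of_a = torch.rand(1, 3, 64, 64)      # placeholder for an aligned face crop
swapped = decoder_b(encoder(frame_of_a))   # shape: (1, 3, 64, 64)
```

Because the encoder is shared, it only needs to capture what the two faces have in common (pose, lighting, expression), which is why swapping decoders transfers identity while keeping the original performance.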

The extent, scale, and sophistication of the technology behind deepfakes are game-changing, as essentially anyone with a computer can create fake videos that are virtually indistinguishable from authentic footage. While early deepfakes mostly involved weaving the faces of political leaders, actresses, comedians, and entertainers into pornographic videos, the technology will almost certainly also be used for revenge porn, bullying, fabricated video evidence in court, political sabotage, terrorist propaganda, blackmail, market manipulation, and fake news.

Deepfake audio can also be used to produce “voice skins” or “voice clones” of well-known people. In March 2019, the CEO of the UK division of a German energy company transferred approximately £200,000 to a Hungarian bank account after being contacted by a scammer who imitated the voice of the German parent company’s CEO.

Earlier this year, videos of the movie star Tom Cruise playing around in an upscale men’s clothing store and showing off a coin trick started cropping up on TikTok, and the antics were remarkably un-Tom Cruise-like. This phoney Tom Cruise proved extremely popular on TikTok, racking up tens of millions of views and over 1.7 million followers. Chris Ume, a visual effects professional from Belgium, used deepfake technology to create the captivating clips, which look exactly like the celebrity.

Since then, artists, pranksters, and others have used these techniques to build an ever-growing collection of audio and video portraying world leaders such as Donald Trump, Barack Obama, and Vladimir Putin making statements they never made. This trend has raised concerns in the national security community that new breakthroughs in machine learning will increase the effectiveness of malicious media-manipulation campaigns, such as those attempted by Russia during the 2016 presidential election in the United States.

Deepfakes can also be used for good, for example, generating voices for people who have lost theirs or updating scenes in films without reshooting them. The malicious uses of deepfakes, however, far outnumber the beneficial ones. The development of sophisticated deep neural networks, together with the availability of enormous amounts of data, has made falsified photos and videos nearly impossible to detect, for humans and even for sophisticated computer algorithms. Creating such manipulated images and videos is also far easier today, as it requires only an identity photograph or a short video of the target person. Producing astonishingly convincing tampered footage takes less and less effort.

How To Spot A Deepfake
To aid in deepfake identification, researchers are studying soft biometrics, such as how a person speaks, along with other features in videos. This emphasis on soft biometrics matters because you can spot many of these telltale signs on your own.
 

Deepfakes can be identified by several indicators: 

1.    Current deepfakes have trouble animating faces convincingly, resulting in videos in which the person never blinks, or blinks far too frequently or strangely, and the eyes look unnatural. Blinking is hard to reproduce naturally because a real person’s eye movements are hard to replicate; a person’s eyes tend to follow whoever they are talking to. After researchers at the University at Albany published a paper identifying these blinking irregularities, however, new deepfakes appeared that no longer had the problem. (A simple blink-rate check is sketched after this list.)
2.    Look out for skin or hair issues, as well as faces that appear to be blurrier than the environment in which they’re placed. The focus may be unnaturally soft. Hair is extremely difficult for deepfakes to portray accurately, particularly where strands are visible on the fringe.
3.    Slowed-down footage that looks strange. If you watch a video on a screen larger than your smartphone, or have video-editing software that can slow down playback, you can zoom in and inspect frames more closely. Zooming in on the lips, for example, can reveal whether the person is really speaking or just lip-syncing badly.
4.    Uncomfortable body shape or posture. Another clue is a body shape that looks unnatural, or a head and body positioned awkwardly or inconsistently. Because deepfake technology concentrates mainly on facial features rather than the whole body, this can be one of the easier anomalies to spot.
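
On the blinking cue from point 1, one widely cited measure in facial-landmark research is the “eye aspect ratio” (EAR), which drops toward zero when the eyelid closes. The sketch below shows how a rough blink count could be derived from per-frame eye landmarks; the threshold value, the helper names, and the synthetic EAR trace are assumptions for illustration, not a published detector.

```python
# Rough sketch of an EAR-based blink counter (assumed threshold and synthetic data).
import numpy as np

def eye_aspect_ratio(eye: np.ndarray) -> float:
    """eye: array of shape (6, 2) with the six landmarks around one eye.
    EAR = (|p2-p6| + |p3-p5|) / (2 * |p1-p4|); it drops sharply when the eye closes."""
    vertical_1 = np.linalg.norm(eye[1] - eye[5])
    vertical_2 = np.linalg.norm(eye[2] - eye[4])
    horizontal = np.linalg.norm(eye[0] - eye[3])
    return (vertical_1 + vertical_2) / (2.0 * horizontal)

def count_blinks(ear_per_frame, closed_threshold=0.21, min_closed_frames=2):
    """Count blinks as runs of consecutive frames where EAR stays below the threshold."""
    blinks, closed_run = 0, 0
    for ear in ear_per_frame:
        if ear < closed_threshold:
            closed_run += 1
        else:
            if closed_run >= min_closed_frames:
                blinks += 1
            closed_run = 0
    return blinks

# In practice the per-frame eye landmarks would come from a face-landmark detector
# (e.g. dlib's 68-point predictor); here we fake a short EAR trace containing one blink.
ear_trace = [0.30, 0.31, 0.29, 0.15, 0.12, 0.14, 0.30, 0.31]
print(count_blinks(ear_trace))  # -> 1
```

A clip of a talking person that yields zero blinks per minute, or an implausibly high number, is one signal worth a closer look, though newer deepfakes increasingly pass this test.
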
Deepfake Technology’s Potential Dangers Include:
Deepfakes pose a significant threat to our society, political system, and businesses because they: 1) put more pressure on journalists trying to distinguish real news from fake; 2) threaten national security by spreading propaganda and interfering in elections and democratic processes; 3) undermine citizens’ trust in government information and activities; and 4) raise cybersecurity concerns for individuals and organisations, making it even easier for criminals to dupe innocent people.
Governments, universities, and tech companies are all funding deepfake-detection research. The first Deepfake Detection Challenge, backed by Microsoft, Facebook, and Amazon, kicked off in April 2020, with research teams from around the world competing for supremacy in deepfake detection. In the run-up to the 2020 US presidential election, Facebook banned deepfake videos likely to mislead users into believing somebody “said words that they did not actually say.” The policy, however, applies only to falsehoods generated by artificial intelligence.
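
For a sense of what such detection research looks like in practice, the sketch below shows the kind of frame-level classifier that many detection approaches start from: a standard CNN fine-tuned to label face crops as real or fake. The model choice (a stock ResNet-18), the hyperparameters, and the helper function are illustrative assumptions, not the challenge entrants’ actual methods.

```python
# Minimal sketch of a frame-level real/fake classifier (illustrative assumptions only).
import torch
import torch.nn as nn
from torchvision import models

# Start from a stock ResNet-18; in practice you would load ImageNet-pretrained weights
# and train on a labelled corpus of real and deepfaked face crops.
model = models.resnet18()
model.fc = nn.Linear(model.fc.in_features, 2)  # two classes: real, fake

criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

def train_step(frames: torch.Tensor, labels: torch.Tensor) -> float:
    """frames: (batch, 3, 224, 224) face crops; labels: 0 = real, 1 = fake."""
    model.train()
    optimizer.zero_grad()
    loss = criterion(model(frames), labels)
    loss.backward()
    optimizer.step()
    return loss.item()

# Placeholder batch to show the call shape; real training data would come from a
# labelled dataset such as the Deepfake Detection Challenge corpus.
dummy_frames = torch.rand(4, 3, 224, 224)
dummy_labels = torch.randint(0, 2, (4,))
print(train_step(dummy_frames, dummy_labels))
```

At inference time, per-frame scores are typically averaged over a whole clip before deciding whether a video is likely to be a deepfake, since individual frames can be ambiguous.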

