Deepfakes Versus Shallowfakes

By: Dr. Ratan Bhattacharjee

A ‘Forbes’ article asked us to imagine a deepfake video of President Trump announcing a nuclear weapons launch against North Korea. Deepfakes can thus become a tool for spreading rumour. The more insidious impact of deepfakes, along with other synthetic media and fake news, is to create a zero-trust society, where people cannot, or no longer bother to, distinguish truth from falsehood. And when trust is eroded, it is easier to raise doubts about specific events. Shallowfakes, a term coined by Sam Gregory at the human rights organisation WITNESS, are videos that are either presented out of context or doctored with simple editing tools. They are crude but undoubtedly impactful. The terms Artificial Intelligence (AI), Machine Learning (ML), and Deep Learning (DL) are frequently mentioned in relation to deepfakes and other synthetic media.

As noted elsewhere, both machine learning and deep learning may be considered subsets of artificial intelligence and among its enabling techniques. Deepfakes, which grabbed mainstream attention in 2019 with fake videos of Meta CEO Mark Zuckerberg and former US House Speaker Nancy Pelosi, are the 21st century’s alternative to Photoshopping: creating images and videos of celebrities via a form of artificial intelligence (AI) called deep learning. Former US President Barack Obama calling Donald Trump a “complete dipshit”, Zuckerberg boasting of having “total control of billions of people’s stolen data”, and more recently a deepfake video of actor Rashmika Mandanna that went viral on social media all show how dangerous a deepfake video can be.

The AI firm Deeptrace found 15,000 deepfake videos online in September 2019, a near doubling over nine months. As new techniques allow unskilled people to make deepfakes with a handful of photos, fake videos are likely to spread beyond the celebrity world to fuel revenge porn. Governments, universities and tech firms are all funding research to detect deepfakes. Deepfake technology can create convincing but entirely fictional photos from scratch. Poor-quality deepfakes are easier to spot. The lip synching might be bad, or the skin tone patchy. There can be flickering around the edges of transposed faces. And fine details, such as hair, are particularly hard for deepfakes to render well, especially where strands are visible on the fringe.

In 2018, US researchers discovered that deepfake faces don’t blink normally. It takes a few steps to make a face-swap video. First, you run thousands of face shots of the two people through an AI algorithm called an encoder. The encoder finds and learns similarities between the two faces, and reduces them to their shared common features, compressing the images in the process. A second AI algorithm called a decoder is then taught to recover the faces from the compressed images. To perform the swap, a compressed image of person A’s face is fed into the decoder trained on person B. The decoder then reconstructs the face of person B with the expressions and orientation of face A. Deepfakes harass, intimidate, demean, undermine and destabilise. As Danielle Citron, a professor of law at Boston University, puts it: “Deepfake technology is being weaponised against women.” The threat of deepfakes and synthetic media comes not from the technology used to create them but from people’s natural inclination to believe what they see. As a result, deepfakes and synthetic media do not need to be particularly advanced or believable to be effective in spreading misinformation or disinformation.
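To make the encoder and decoder steps concrete, here is a minimal sketch in Python (using PyTorch) of the shared-encoder, twin-decoder autoencoder described above. It is a toy illustration under stated assumptions, not a working deepfake tool: the flattened 64x64 input, the layer widths, the latent size and the random tensors standing in for face photos are all invented for demonstration, and real face-swap software uses convolutional networks trained on thousands of genuine face shots.

```python
# Toy sketch of face-swap training: one shared encoder, two decoders.
# All sizes and the random "photos" are assumptions for illustration.
import torch
import torch.nn as nn

IMG = 64 * 64 * 3   # flattened 64x64 RGB face (assumed resolution)
LATENT = 256        # dimension of the compressed shared features (assumed)

# One encoder is shared by both identities, so it is forced to learn
# the features the two faces have in common (pose, expression, lighting).
encoder = nn.Sequential(
    nn.Linear(IMG, 1024), nn.ReLU(),
    nn.Linear(1024, LATENT),
)

def make_decoder() -> nn.Sequential:
    # Each identity gets its own decoder, which learns to rebuild that
    # person's face from the shared compressed representation.
    return nn.Sequential(
        nn.Linear(LATENT, 1024), nn.ReLU(),
        nn.Linear(1024, IMG), nn.Sigmoid(),
    )

decoder_a, decoder_b = make_decoder(), make_decoder()

loss_fn = nn.MSELoss()
params = (list(encoder.parameters()) +
          list(decoder_a.parameters()) +
          list(decoder_b.parameters()))
opt = torch.optim.Adam(params, lr=1e-4)

for step in range(100):              # tiny loop; real training takes days
    faces_a = torch.rand(8, IMG)     # stand-in for photos of person A
    faces_b = torch.rand(8, IMG)     # stand-in for photos of person B
    loss = (loss_fn(decoder_a(encoder(faces_a)), faces_a) +
            loss_fn(decoder_b(encoder(faces_b)), faces_b))
    opt.zero_grad()
    loss.backward()
    opt.step()

# The swap itself: encode a frame of person A, then decode it with
# person B's decoder, yielding B's identity in A's pose and expression.
with torch.no_grad():
    frame_of_a = torch.rand(1, IMG)
    swapped = decoder_b(encoder(frame_of_a))
```

Because the encoder is shared, feeding person A’s latent code into person B’s decoder is all the “swap” amounts to; the sophistication of real tools lies in the training data and the network architecture, not in any exotic extra step.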

It is hard to make a good deepfake on a standard computer. Most are created on high-end desktops with powerful graphics cards, or better still with computing power in the cloud. Ahead of the 2020 US election, Facebook banned deepfake videos that are likely to mislead viewers into thinking someone said words they did not actually say. The policy, however, covers only misinformation produced using AI, which means shallowfakes are left untouched. The dystopian fantasy is already what the average social media feed looks like today: a war zone of high-tech disinformation operations vying for our attention and support. ‘Big Disinfo’, the loose cartel of think tanks devoted to studying disinformation, warns that democracy is being shot in the chest by Artificial Intelligence; the problem, sceptics counter, is not that such campaigns are actually happening at that scale, but that the claims of mind-warping AI-enabled propaganda are themselves exaggerated. When deepfakes were first developed, the technology scared us with premonitions that society’s grip on reality would slacken. But there is the other side of the picture too. Artificial Intelligence can help to spot fake videos, although many existing detection systems have a serious weakness, and tech firms are now working on detection systems that aim to flag up fakes whenever they appear. Digital watermarks are not foolproof, but a blockchain online ledger system could hold a tamper-proof record of videos, pictures and audio, so their origins and any manipulations can always be checked. Voice-cloning deepfakes can even restore people’s voices when they lose them to disease.
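To show how such a tamper-proof ledger could work in principle, the sketch below, written in Python with only the standard library, chains the SHA-256 hash of each recorded file to the hash of the previous entry. The ProvenanceLedger class, its method names and the file names are hypothetical; a real blockchain additionally replicates the ledger across many independent parties, which is what makes the record hard to rewrite.

```python
# Toy tamper-evident provenance ledger (hypothetical design, not a real
# blockchain). Each entry commits to a file's SHA-256 hash and to the
# previous entry's hash, so edits to media or to history are detectable.
import hashlib
import json
import time

def sha256_hex(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

class ProvenanceLedger:
    def __init__(self):
        self.entries = []

    def record(self, name: str, content: bytes) -> dict:
        prev = self.entries[-1]["entry_hash"] if self.entries else "0" * 64
        body = {
            "name": name,
            "content_hash": sha256_hex(content),
            "prev_hash": prev,
            "timestamp": time.time(),
        }
        # The entry's own hash covers everything above, linking the chain.
        body["entry_hash"] = sha256_hex(
            json.dumps(body, sort_keys=True).encode())
        self.entries.append(body)
        return body

    def verify(self, name: str, content: bytes) -> bool:
        # True only if the whole chain is intact AND the given bytes match
        # a recorded entry; any tampering breaks one of the checks below.
        prev = "0" * 64
        for e in self.entries:
            unhashed = {k: v for k, v in e.items() if k != "entry_hash"}
            if sha256_hex(json.dumps(unhashed, sort_keys=True).encode()) \
                    != e["entry_hash"]:
                return False        # an entry was rewritten after the fact
            if e["prev_hash"] != prev:
                return False        # the chain linkage was broken
            prev = e["entry_hash"]
        return any(e["name"] == name and
                   e["content_hash"] == sha256_hex(content)
                   for e in self.entries)

ledger = ProvenanceLedger()
ledger.record("speech.mp4", b"original video bytes")
print(ledger.verify("speech.mp4", b"original video bytes"))  # True
print(ledger.verify("speech.mp4", b"doctored video bytes"))  # False
```

Here a doctored copy of ‘speech.mp4’ fails verification because its content hash no longer matches the recorded one, while rewriting the ledger entry itself would break the hash chain for every later entry.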

Deepfake videos can enliven galleries and museums, and surrealist artists can take great help from them. For the entertainment industry, deepfake technology may be of great use, as it can resurrect dead actors; it is used to enhance video games and other forms of entertainment; and it is being used to advance medical research. But deepfake technology can also be weaponised to cause harm. It can be used to make people appear to say or do things that they have not actually said or done. With the development of Artificial Intelligence, deepfake technology is reaching new heights and the world is becoming increasingly synthetic. This technology is not going away easily, because its appeal is so great. Deepfakes reveal the dark side of AI, and there are now calls for stringent laws. It is true that deepfakes can be used for entertainment, education and research; however, they pose a range of significant problems across various domains, such as misinformation, political manipulation, propaganda, reputational damage and fraud.

Some of the biggest risks today include consumer privacy, biased programming, danger to humans and unclear legal regulation. The question of how humans and AI would interact was once more of an existentialist query than a practical concern. But now, with AI permeating everyday life, the question becomes more pressing. There are physical safety concerns. In 2018, a self-driving car operated by the rideshare company Uber hit and killed a pedestrian. In that particular case, the court ruled that the backup driver of the self-driving car was at fault, as she was watching a show on her phone instead of paying attention to her surroundings. If companies rely too heavily on AI predictions of when maintenance will be needed, without other checks, machinery malfunctions could injure workers. Models used in healthcare could cause misdiagnoses. And there are further, non-physical ways AI can harm humans if not carefully regulated. One fear is that AI will be programmed to do something devastating; the clearest example is the idea of “autonomous weapons” programmed to kill humans in war.

There are other ways AI could be programmed to harm humans. Another concern, somewhat related to the last, is that AI will be given a beneficial goal but will develop destructive behaviours as it attempts to accomplish that goal. In 2015, many leading technology experts (including Stephen Hawking, Elon Musk and Steve Wozniak) signed an open letter on AI calling for research into its societal impact. At one time we were worried about the selfie and the velfie. But now the deepfake and the shallowfake created by AI have stolen all the limelight. Let us wait for the coming days to see a new sunrise in the technological revolution. (The author is currently attached to Virginia Commonwealth University as Affiliate Faculty in English and is President of the Kolkata Indo American Society. He can be reached at bhattacharjr@vcu.edu)
