By Sadhna Shanker
In 2024, more than 50 countries will go to the polls, covering nearly 4.2 billion people around the globe. In this scenario, the growth of deep fakes and fake videos is a challenge for both the electorate and the candidates.
A few years ago, for most of us an image, a video or an audio bite was solid proof that something was real or had happened. They were used in trials as evidence. But the idea that an image or audio is proof of authenticity has changed drastically. As Artificial Intelligence improves, with computing power and training data increasing exponentially, doubts about the images we see or the audio we hear have become part and parcel of everyday life.
At the dawn of the AI-generated image, it was relatively simple to identify fakes, as fabricated audio or video content was often out of sync or had some peculiar features. But identification is becoming increasingly difficult.
A video that apparently showed musician Taylor Swift holding a flag promoting Donald Trump went viral on X, formerly known as Twitter, where it was seen by more than 4.5 million people. But the video is fake.
Increasingly, generative AI will be able to produce perfect fakes – digital clones of what a genuine image or recording would have been. It is a frightening future in which scamsters will be able to impersonate loved ones, anyone's photographs can be converted into pornography, or important political leaders can be shown saying or doing strange things. Deep fakes and doctored videos and audios are not only causing financial damage, but also poisoning relationships between individuals, groups and communities.
In the race between the generators and the detectors, the forgers seem to be winning.
In this fight for the truth, technology, policy and awareness are all being brought to bear. Many technology-based detection systems are already in use, and others are being experimented with. In the latter category is the 'watermarking' of digital goods. The idea is to embed a subtle signal into text or images that reveals whether the content is machine-generated. Such tweaks are difficult for humans to spot, but can be tracked by machines. Watermarking research is an ongoing effort, because in the battle of true versus fake, it is better to have some safeguards than none.
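For readers curious how such machine-readable signals work, here is a deliberately simplified sketch in Python. It is a toy illustration, not any real watermarking scheme: it imagines a generator that quietly favours words from a pseudo-random "green list", leaving a statistical skew that a detector can measure even though human readers notice nothing. The function names and the 0.75 threshold are illustrative assumptions.

```python
import hashlib

def is_green(word: str) -> bool:
    """Pseudo-randomly assign each word to the 'green' half of the vocabulary.
    Hashing makes the split deterministic and invisible to a casual reader."""
    digest = hashlib.sha256(word.lower().encode("utf-8")).digest()
    return digest[0] % 2 == 0

def green_fraction(text: str) -> float:
    """Fraction of words in the text that fall on the green list."""
    words = text.split()
    if not words:
        return 0.0
    return sum(is_green(w) for w in words) / len(words)

def looks_watermarked(text: str, threshold: float = 0.75) -> bool:
    """Ordinary text should hover near 0.5 green; a strong skew toward the
    green list suggests a biased (watermarked) generator produced it."""
    return green_fraction(text) >= threshold
```

Real schemes for AI text and images are far more sophisticated and harder to strip out, but the principle is the same: the mark is invisible to people and legible to machines.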
While the techies work on watermarking and other methods of detection, what about us?
We are the ultimate users and consumers of all content. Perhaps it is time for us to accept that images, videos or audio do not prove that something happened or did not happen. Applying the 'zero trust approach' of cyber security to online content seems to be the best course: trust nothing by default, and instead verify everything. Online content no longer testifies for itself; who posted it becomes as important as what was posted. The origin and trustworthiness of a source will matter more than ever. Maybe the printed word will regain prominence, as we try to ascertain the truth of what we see or read online.
As AI makes fakes easier and more believable, we need to pause and try to verify what is put before us. That is going to be needed more than ever. Till the machines become smart enough to detect the fakes, it is our own human reason and intelligence that is going to be the ultimate safeguard.
—————————————
Sadhna Shanker is a New Delhi based author of six books. She also contributes to national and international publications.
Disclaimer: The views expressed are not necessarily those of The South Asian Times