As AI's influence grows and its uses multiply, so do its drawbacks. One prominent drawback of today's trending AI is the deepfake.
Research on AI-generated deepfakes has revealed the danger they pose to the present and future of online safety: the number of deepfakes online has increased by over 900% in the last four years. In this context, we will look at how to identify a deepfake and how deepfakes have become one of the root causes of the growing spread of misinformation across the web.
What concerns do deepfakes raise?
Deepfakes are among the many tools used across the web to deceive people, manipulating viewers into accepting fake news as true.
- Spreading misinformation: Deepfaked videos and images that alter real events, such as elections or mass public gatherings, can easily whip people into a frenzy and create chaos, potentially leading to casualties.
- Violation of privacy: Replacing part of someone’s body, most commonly the face, in compromising images or videos violates their privacy and can lead to harassment and embarrassment.
- Political turmoil: When it comes to politics, the consequences can affect millions. One example: fabricated images of Donald Trump’s arrest circulated across the web, leading people to believe the fake news and sparking outrage on social media. Such fabricated situations are among the foremost contributors to hate crimes today.
- Alteration of historical events: Altering records of historical events, such as political speeches or moments that hold immense importance for certain communities, has the potential to reshape people’s recollections of reality.
- Undermining the truth: As deepfake technology advances, people may grow increasingly skeptical of visual media altogether, and trust in facts becomes collateral damage.
How are deepfakes altering people’s memories?
Highly realistic AI-generated deepfake videos of current or historical public figures saying things they never said, or images captioned with fake quotes, can make people believe things that never actually happened. The phenomenon in which people remember an event differently from how it actually occurred is called the Mandela Effect. It was named by paranormal researcher Fiona Broome, who falsely remembered Nelson Mandela dying in prison in the 1980s; he actually died in 2013. Deepfake technology can reinforce this kind of memory distortion, since it makes you believe something that never happened.
How to Spot a Deepfake?
Because deepfakes are superimposed onto original media, there are several ways to spot them.
- Uncommon facial expressions and motions: Paying close attention to eye blinking, lip-syncing, and unnatural facial movements is a good starting point for spotting deepfakes.
- Obscure edges: Look for blurriness around edges such as the ears and elbows, which can appear distorted.
- Dubious audio syncing: Check whether the audio matches the lip movements accurately; deepfakes often suffer from imperfect syncing.
- Verification of the source: Verifying the source of an image or video and cross-checking it against other credible sources can help you spot a deepfake.
- Utilization of detection technology: The latest deepfake detection tools, designed specifically to identify fake content, can help flag manipulated media.
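The source-verification and detection steps above can be partially automated. As a minimal sketch only, the Python snippet below compares a suspect image against a known original using a simple difference hash (dHash), one of the fingerprinting techniques detection pipelines build on. The 8×9 grayscale grids here are hypothetical stand-ins for real downscaled images; production tools combine far richer signals than this.

```python
# Minimal sketch: flag a possibly manipulated image by comparing its
# difference hash (dHash) against a trusted original. The grids below
# are hypothetical stand-ins for images already downscaled to grayscale.

def dhash(pixels):
    """Compute a difference hash from a 2D grid of grayscale values.

    Each bit records whether a pixel is brighter than its right-hand
    neighbour, giving a compact fingerprint of the image's gradients.
    """
    bits = []
    for row in pixels:
        for left, right in zip(row, row[1:]):
            bits.append(1 if left > right else 0)
    return bits

def hamming_distance(h1, h2):
    """Count differing bits; a small distance suggests the same source."""
    return sum(b1 != b2 for b1, b2 in zip(h1, h2))

# Hypothetical downscaled grayscale grids (values 0-255).
original = [[i * 8 + j * 4 for j in range(9)] for i in range(8)]
tampered = [row[:] for row in original]
tampered[3][4] = 255  # simulate a small localised manipulation

dist = hamming_distance(dhash(original), dhash(tampered))
print("Hamming distance:", dist)  # nonzero distance flags a difference
```

Hash-based comparison only works when a trusted original exists to compare against; it cannot judge a standalone video, which is why cross-checking sources remains a manual step.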
As technology advances rapidly, deepfakes present a double-edged sword. They open endless possibilities for creative content and entertainment, from memes to videos. On the other hand, they can be used for malicious purposes, raising concerns about misinformation, privacy invasion, hate crimes, and political manipulation.
As we move forward, addressing the issues deepfakes raise will require a collective effort from technology developers, policymakers, lawmakers, and society at large.