New Delhi: The recent Rashmika Mandanna deepfake controversy underlines the pressing need for a comprehensive legal and regulatory framework in India to curb such brazen and misleading content.
Deepfake Controversy Calls For Legal Framework
Mandanna took to Twitter seeking immediate action against the viral deepfake video.
I feel really hurt to share this and have to talk about the deepfake video of me being spread online.
Something like this is honestly, extremely scary not only for me, but also for each one of us who today is vulnerable to so much harm because of how technology is being misused.
— Rashmika Mandanna (@iamRashmika) November 6, 2023
Bollywood superstar Amitabh Bachchan has now joined the chorus demanding legal action following the circulation of a morphed video featuring his ‘Goodbye’ co-star, Rashmika Mandanna.
AI’s Expanding Capabilities And Its Impact on Society at Large
The ever-increasing prowess of artificial intelligence (AI) to create highly convincing content, such as deepfakes, has raised substantial concerns across society. The rise of AI is now sparking a significant debate on the very nature of content creation and the threat it poses when misused to spread non-consensual imagery, falsehoods, propaganda and misleading news.
Understanding Deepfakes: A Blend of Realism and Deception
Deepfakes are a form of synthetic media meticulously designed to resemble a real person’s voice, appearance, or actions. These technologically advanced creations fall within the realm of generative artificial intelligence (AI), a subset of machine learning (ML). Creating them involves training algorithms to learn the intricate patterns and unique characteristics of a dataset, which can include video footage or audio recordings of a real individual. The goal is to enable the AI to reproduce that person’s voice or likeness with startling precision.
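To make the idea concrete, the sketch below illustrates one common face-swap architecture often described in connection with deepfakes: a shared encoder learns features common to two people, while a separate decoder per person learns to reconstruct that individual; swapping decoders at inference time transfers one person’s expression onto the other’s face. This is a minimal, hypothetical PyTorch example for illustration only, with random tensors standing in for aligned face crops, and is not the tool used in any reported incident.

```python
# Minimal sketch of the shared-encoder / per-identity-decoder idea behind
# classic face-swap deepfakes. Toy random tensors stand in for face crops.
import torch
import torch.nn as nn

class Encoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Flatten(),
            nn.Linear(64 * 64 * 3, 512), nn.ReLU(),
            nn.Linear(512, 128),
        )
    def forward(self, x):
        return self.net(x)

class Decoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(128, 512), nn.ReLU(),
            nn.Linear(512, 64 * 64 * 3), nn.Sigmoid(),
        )
    def forward(self, z):
        return self.net(z).view(-1, 3, 64, 64)

encoder = Encoder()
decoder_a, decoder_b = Decoder(), Decoder()  # one decoder per identity
params = (list(encoder.parameters())
          + list(decoder_a.parameters())
          + list(decoder_b.parameters()))
opt = torch.optim.Adam(params, lr=1e-4)
loss_fn = nn.MSELoss()

# Placeholder batches; real training would use many aligned face crops.
faces_a = torch.rand(8, 3, 64, 64)
faces_b = torch.rand(8, 3, 64, 64)

for step in range(100):
    opt.zero_grad()
    # Each decoder learns to reconstruct its own person from the shared code.
    loss = (loss_fn(decoder_a(encoder(faces_a)), faces_a)
            + loss_fn(decoder_b(encoder(faces_b)), faces_b))
    loss.backward()
    opt.step()

# The "swap": encode person A's face, decode it with person B's decoder.
with torch.no_grad():
    swapped = decoder_b(encoder(faces_a))
```

The key design point is that the single shared encoder is forced to capture identity-independent attributes such as pose and expression, which is what makes the decoder swap produce a convincing forgery once trained on real footage.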
Challenges In Detecting Deepfake Speech And Video
Notably, research has shown that the human capacity to discern artificially generated speech is not entirely reliable. A recent study conducted by researchers at University College London unveiled a surprising finding: humans could identify deepfake speech with an accuracy rate of only 73 percent. Even after participants received training to recognize the distinctive traits of deepfake speech, the improvement was only marginal. This underscores the growing difficulty of distinguishing authentic from manipulated audio content.
Deepfake: The Dual Nature of AI Audio Technology
Generative AI audio technology, while holding potential for positive applications like enhanced accessibility for individuals with speech limitations, also presents escalating concerns. Misuse of this technology by malicious actors, both criminal and nation-state, poses significant threats to individuals and societies at large.
Deepfake: Global Warning on AI’s Existential Risks
In a brief yet striking statement, prominent researchers, experts, and CEOs, including Sam Altman of OpenAI, recently issued a fresh warning about the existential threat posed by artificial intelligence (AI). Their collective voice emphasizes the imperative of addressing the risks associated with AI on a global scale, placing it alongside other global risks such as pandemics and nuclear war.