Malicious actors using deepfakes

With over 2 billion voters across 50 countries going to the polls worldwide, the democratic process faces a formidable foe as AI-generated content and deepfakes threaten to unfairly influence voters' choices


Recently, a movie idol of Indian cinema who passed away in 1980 was revived using technology to create a new movie around him. While this was one of the more innocent uses of artificial intelligence, political parties in India have been producing deepfakes of both deceased and living leaders to influence voters. In neighbouring Pakistan, a deepfake video of former US president Donald Trump has been circulating, promising to get Imran Khan, the former prime minister of that country, released from prison.

With over 2 billion voters across 50 countries going to the polls worldwide – nearly half of them in India – the democratic process faces a formidable foe: artificial-intelligence-generated content and deepfakes that seek to unfairly influence voters' choices. Some incidents show innovative approaches to political campaigning.

Others highlight many of the concerns experts have around mis- or disinformation and fake news: a spam campaign to discredit Taiwan’s president was traced back to an actor associated with the Chinese Communist Party, while deepfake videos showed Bangladeshi candidates withdrawing from the race on election day. Of course, there are also plenty of examples of AI being used for fun. In India, a leading politician was turned into a popular singer, and in Mexico, voters were confronted with spoofed Starbucks cups.

Threats From Nation States

The dominant conversation across global elections so far this year centres on the malicious use of generative AI for propaganda and disinformation. Microsoft has created a process for assessing how foreign influence actors use AI to sway audiences. All three of the authoritarian actors reviewed in its report – Russia, Iran, and China – have leveraged some form of generative AI to create content since last summer. Election influence campaigns are expected to include fakes – some deep, most shallow – and it is the simplest manipulations, not the most complex applications of AI, that will likely be the pieces of content with the most impact.

The recently published Artificial Intelligence Index Report 2024 warns that although progress in AI has been impressive, current AI technology still has significant problems: it cannot reliably deal with facts, perform complex reasoning, or explain its conclusions. The report devotes an entire chapter to “Responsible AI”, a theme that is vital for understanding AI’s social implications, including biases in AI systems, the ethical development and deployment of AI, and efforts to mitigate the risks associated with AI technologies.

Responsible AI

Responsible AI is a critical area of focus that encompasses AI ethics, fairness, accountability, transparency, and the societal implications of AI technology. This area is gaining importance as AI systems become more prevalent in our daily lives, influencing decision-making in critical sectors like healthcare, criminal justice, finance, and employment.

Key Findings on AI Biases and Ethical AI

  1. Lack of Standardised Evaluations for AI Ethics and Bias: The AI Index Report highlights a significant gap in standardised reporting and evaluation methodologies for AI ethics and bias. Leading AI developers test their models against varied benchmarks, making it challenging to compare models systematically. This diversity in evaluation standards complicates efforts to identify and mitigate biases across different AI systems.
  2. Political Deepfakes and Misinformation: The report discusses the ease of generating political deepfakes and the challenges in detecting them. This aspect of AI ethics is particularly concerning due to the potential impact on elections and democratic processes. The report mentions projects like CounterCloud, which exemplify how AI can be used to create and spread fake content, underscoring the ethical implications of generative AI technologies.
  3. Complex Vulnerabilities and Risks: AI systems, especially large language models (LLMs), exhibit complex vulnerabilities that can lead to biased or harmful outcomes. The report notes that researchers have found new strategies to exploit these vulnerabilities, highlighting the ongoing challenge of securing AI systems against ethical and bias-related risks.
  4. Concerns Over AI and Business Ethics: The global survey on responsible AI included in the report indicates that businesses worldwide are concerned about privacy, data security, reliability, and biases in AI systems. While steps are being taken to mitigate these risks, the report suggests that a comprehensive approach to addressing ethical concerns in AI is still needed.
  5. Copyright Issues and Transparency in AI Development: The report points out the ethical dilemma surrounding AI-generated content that may include copyrighted material. Furthermore, it emphasises the low transparency scores of AI developers, particularly regarding the disclosure of training data and methodologies, which poses challenges for understanding and improving the ethical aspects of AI systems.
  6. Debate Over Long-Term vs. Short-Term AI Risks: A significant debate among AI scholars and practitioners focuses on the prioritisation of immediate risks, such as algorithmic discrimination, versus long-term existential threats posed by AI. The report suggests that distinguishing scientifically founded claims is difficult, especially when short-term risks are tangible and immediate.
  7. AI Incidents on the Rise: The report cites data from the AI Incident Database, showing an increase in reported incidents related to AI misuse, including cases of bias and ethical violations. This trend underscores the need for vigilant monitoring and mitigation strategies to address ethical challenges in AI.
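The group biases discussed above can be audited quantitatively. As a purely hypothetical illustration – not a method drawn from the AI Index report – here is a minimal sketch of one common group-fairness metric, the demographic parity difference, which compares a classifier's positive-prediction rates across two groups. The function name and toy data are invented for this example.

```python
# Hypothetical sketch of a simple fairness audit metric.
# All predictions and group labels below are made-up toy data.

def demographic_parity_difference(preds, groups):
    """Absolute gap in positive-prediction rates between groups "A" and "B".

    preds:  list of 0/1 model predictions
    groups: list of group labels ("A" or "B"), same length as preds
    """
    rate = {}
    for g in ("A", "B"):
        selected = [p for p, grp in zip(preds, groups) if grp == g]
        rate[g] = sum(selected) / len(selected)
    return abs(rate["A"] - rate["B"])

# Toy example: the model approves 3 of 4 applicants in group A
# but only 1 of 4 in group B, giving a gap of 0.5.
preds = [1, 1, 1, 0, 1, 0, 0, 0]
groups = ["A", "A", "A", "A", "B", "B", "B", "B"]
print(demographic_parity_difference(preds, groups))  # 0.5
```

A value of 0 would indicate both groups receive positive predictions at the same rate; in practice, standardised evaluations of this kind are exactly what the report finds lacking across AI developers.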

Addressing these challenges requires a multifaceted approach, including the development of standardised evaluation frameworks, increased transparency from AI developers, and ongoing research into effective mitigation strategies. Policymakers, researchers, and industry stakeholders must collaborate to ensure the ethical development and deployment of AI technologies, safeguarding against biases and ensuring that AI benefits society equitably.




© 2023 Praxis. All rights reserved. | Privacy Policy