Rise in AI Threats
Here is an article that explains the rise of deepfakes and how to defend against them. What shocked me is that a job candidate used a deepfake to interview at a security company, which is a big reason I'm sharing this. Here are the highlights:
  • Rising AI Attacks: Generative AI (GenAI) attacks, including deepfakes, are increasing. AI-generated content now makes up around 12% of emails, up from 7% in 2022.
  • OWASP Guidance: The OWASP Top 10 for LLM Applications & Generative AI project has released new guidance to help organizations prepare for deepfakes and other AI-based threats.
  • Motivation for Companies: Scott Clinton, OWASP co-project lead, highlights that companies seek to use AI for competitive advantage and need secure ways to adopt it without hindrance.
  • Real-World Deepfake Example: Exabeam encountered a deepfake job candidate who passed initial screenings but was flagged during a video interview because of digital artifacts and a lack of emotion. The incident led Exabeam to strengthen its HR and security processes for identifying AI-based threats.
  • Increased Concern: A survey by Ironscales found 48% of IT professionals are concerned about deepfakes now, and 74% believe they will become a major threat in the future.
  • Future Deepfake Threats: As AI advances, realistic digital impersonations, or "sock puppets," are likely to emerge, making traditional ways of establishing trust in communications unreliable.
  • Need for Better Defenses: Exabeam’s CISO emphasizes the need for technical solutions that can detect deepfakes reliably as technology improves.
  • OWASP Recommendations: Rather than relying solely on human detection, OWASP suggests building technical infrastructure and processes (e.g., for financial transactions) to authenticate video chats and flag deepfakes effectively; see the sketch after this list for one way such a process could look.
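To make that last point concrete, here is a minimal sketch of an out-of-band approval gate for high-value requests made over a video call. This is my own illustration, not code from the article or from OWASP; names like RequestContext, send_push_challenge, and the dollar threshold are hypothetical placeholders you would replace with your organization's own policy and tooling. The key idea is that the video call itself is never treated as proof of identity.

```python
"""Sketch only: gate high-value video-call requests behind an
out-of-band challenge delivered over a pre-registered channel."""
import secrets
from dataclasses import dataclass

HIGH_VALUE_THRESHOLD = 10_000  # assumed policy threshold, in dollars


@dataclass
class RequestContext:
    requester: str  # identity claimed on the video call
    amount: float   # transaction amount requested
    channel: str    # e.g. "video_call"


def send_push_challenge(requester: str, code: str) -> None:
    """Placeholder: deliver a one-time code over an independent,
    pre-registered channel (authenticator app, phone callback, etc.)."""
    print(f"[out-of-band] sent challenge to {requester}: {code}")


def approve_transaction(ctx: RequestContext, entered_code: str, issued_code: str) -> bool:
    """Approve a high-value video-call request only if the out-of-band
    challenge matches; lower-value requests follow the normal workflow."""
    if ctx.amount >= HIGH_VALUE_THRESHOLD and ctx.channel == "video_call":
        return secrets.compare_digest(entered_code, issued_code)
    return True


if __name__ == "__main__":
    ctx = RequestContext(requester="cfo@example.com", amount=250_000, channel="video_call")
    code = secrets.token_hex(3)  # one-time code issued out of band
    send_push_challenge(ctx.requester, code)
    print("approved:", approve_transaction(ctx, entered_code=code, issued_code=code))
```

The point of the design is simple: even a perfect deepfake cannot complete the transaction without access to the second, independent channel, which is exactly the kind of technical control the article argues should replace "it looked and sounded like them" as the basis for trust.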