MITRE ATLAS Framework - Securing AI Systems

Welcome to a crucial episode where we delve into the MITRE ATLAS (Adversarial Threat Landscape for Artificial-Intelligence Systems) Framework, an exhaustive knowledge base designed to secure our increasingly AI-dependent world.

As AI and machine learning become foundational across healthcare, finance, and cybersecurity, protecting these systems from unique threats is paramount.

Unlike MITRE ATT&CK, which focuses on traditional IT systems, MITRE ATLAS is specifically tailored for AI-specific risks, such as adversarial inputs and model theft. It provides a vital resource for understanding and defending against the unique vulnerabilities of AI systems.

In this episode, we'll break down the core components of MITRE ATLAS:

Tactics: These are the high-level objectives of attackers – the "why" behind their actions.

MITRE ATLAS outlines 14 distinct tactics that attackers use to compromise AI systems, including Reconnaissance (gathering information on the AI system), Initial Access (gaining entry into the AI environment), ML Model Access (gaining access to the machine learning model itself), Persistence (establishing continuous access), Privilege Escalation (obtaining higher-level permissions), and Defense Evasion (bypassing security controls).

Other tactics include Credential Access, Discovery, Lateral Movement, Collection, Command and Control, Exfiltration, Impact, and ML Attack Staging.

Techniques: These are the specific methods and actions adversaries use to carry out their tactics – the "how". We'll explore critical techniques like Data Poisoning, where malicious data is introduced into training sets to alter model behavior; Prompt Injection, manipulating language models to produce harmful outputs; and Model Inversion, which reconstructs sensitive training data from a model's outputs.
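
To make Data Poisoning concrete, here is a minimal sketch (ours, not from the episode) that flips a fraction of training labels and measures the damage to test accuracy; the dataset, model, and flip fractions are all illustrative assumptions:

```python
# Hypothetical label-flip poisoning demo: flip a fraction of training
# labels and compare test accuracy against a cleanly trained model.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

def accuracy_with_poison(flip_fraction: float) -> float:
    """Train on labels where `flip_fraction` of them have been flipped."""
    rng = np.random.default_rng(0)
    y_poisoned = y_tr.copy()
    idx = rng.choice(len(y_tr), size=int(flip_fraction * len(y_tr)), replace=False)
    y_poisoned[idx] = 1 - y_poisoned[idx]          # flip 0 <-> 1
    model = LogisticRegression(max_iter=1000).fit(X_tr, y_poisoned)
    return model.score(X_te, y_te)

for frac in (0.0, 0.1, 0.3):
    print(f"poisoned fraction {frac:.0%}: test accuracy {accuracy_with_poison(frac):.3f}")
```

Even a modest poisoned fraction measurably degrades accuracy, which is why the mitigation advice later in the episode stresses protecting training pipelines.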

Other key techniques to watch out for include Model Extraction, where adversaries reverse-engineer or steal proprietary AI models, and Adversarial Examples, subtly altered inputs that trick AI models into making errors.
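
For intuition on Adversarial Examples, the following sketch applies a Fast Gradient Sign Method (FGSM)-style perturbation to a hand-rolled logistic-regression scorer; the weights, input, and perturbation budget are assumed purely for illustration:

```python
# FGSM-style adversarial perturbation against a toy logistic-regression
# scorer: step the input along the sign of the loss gradient.
import numpy as np

rng = np.random.default_rng(1)
w = rng.normal(size=20)            # assumed "trained" weights
b = 0.0

def predict_proba(x):
    return 1.0 / (1.0 + np.exp(-(x @ w + b)))

x = rng.normal(size=20)
y = 1.0                            # true label of this input
# For logistic loss, the gradient w.r.t. the input is (p - y) * w.
grad = (predict_proba(x) - y) * w
epsilon = 0.25                     # perturbation budget (assumption)
x_adv = x + epsilon * np.sign(grad)  # FGSM: move along the gradient's sign

print("clean score:      ", predict_proba(x))
print("adversarial score:", predict_proba(x_adv))
```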

We'll also examine real-world case studies, such as the Evasion of a Machine Learning Malware Scanner (Cylance Bypass), where attackers studied public documentation and the model's API, then crafted adversarial inputs that bypassed detection.

Another notable example is the OpenAI vs. DeepSeek Model Distillation Controversy, in which extensive querying of a target model was alleged to enable distillation of its capabilities, highlighting the risks of model extraction and intellectual property theft.
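
As a rough illustration of how query-based extraction works, this sketch trains a surrogate purely on labels returned by a black-box "victim" model; the victim/surrogate model choices are assumptions for the demo, not a description of any real incident:

```python
# Query-based model extraction ("distillation") sketch: the attacker
# never sees the victim's training labels, only its predictions.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=3000, n_features=20, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

victim = RandomForestClassifier(random_state=0).fit(X_tr, y_tr)  # "proprietary" model

# Attacker queries the victim on its own inputs and trains a surrogate
# on the returned labels.
stolen_labels = victim.predict(X_tr)
surrogate = LogisticRegression(max_iter=1000).fit(X_tr, stolen_labels)

agreement = (surrogate.predict(X_te) == victim.predict(X_te)).mean()
print(f"surrogate agrees with victim on {agreement:.1%} of test queries")
```

The point is that the attacker needs neither the victim's weights nor its training data; enough queries can yield a close functional copy.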

To safeguard AI systems, MITRE ATLAS emphasizes robust security controls and best practices. Key mitigation strategies include:

Securing Training Pipelines to protect data integrity and restrict access to prevent poisoning or extraction attempts.

Continuously Monitoring Model Outputs for anomalies that indicate adversarial manipulation or extraction attempts (a minimal monitoring sketch follows this list).

Validating Data Integrity through regular audits of datasets and model behavior to detect unexpected changes or suspicious activity.
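
As one possible realization of output monitoring (the sketch referenced in the list above), the following code compares the live distribution of model confidence scores against a recorded baseline using a symmetric KL divergence and alerts on drift; the binning, smoothing, and threshold are assumptions you would tune on real traffic:

```python
# Output-monitoring sketch: alert when the live confidence-score
# distribution drifts too far from a recorded healthy baseline.
import numpy as np

def confidence_drift(baseline: np.ndarray, live: np.ndarray, bins: int = 10) -> float:
    """Symmetric KL divergence between two confidence-score histograms."""
    edges = np.linspace(0.0, 1.0, bins + 1)
    p, _ = np.histogram(baseline, bins=edges)
    q, _ = np.histogram(live, bins=edges)
    p = (p + 1) / (p.sum() + bins)   # Laplace smoothing avoids log(0)
    q = (q + 1) / (q.sum() + bins)
    return float(np.sum((p - q) * np.log(p / q)))

rng = np.random.default_rng(0)
baseline = rng.beta(8, 2, size=5000)   # healthy model: mostly high confidence
live = rng.beta(2, 2, size=500)        # suspicious shift toward uncertainty

score = confidence_drift(baseline, live)
ALERT_THRESHOLD = 0.1                  # assumed; calibrate on your own traffic
print(f"drift score {score:.3f}", "-> ALERT" if score > ALERT_THRESHOLD else "-> ok")
```

Sudden shifts in output distributions are a cheap early signal for both adversarial probing and high-volume extraction queries.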

Join us as we discuss how the MITRE ATLAS Framework transforms AI security, providing practical guidance to defend against the evolving threat landscape.

You'll learn why it's crucial for every organization to embrace this framework, contribute to threat intelligence, and engage with the wider AI security community to secure AI as a tool of innovation, not exploitation.

Connect with Practical DevSecOps:

https://www.linkedin.com/company/practical-devsecops/
https://www.youtube.com/@PracticalDevSecOps
https://twitter.com/pdevsecops
