← Prompts
Reference / Understand Alireza Rezvani Claude Skills

Use when assessing AI/ML systems for prompt injection, jailbreak...

Use when assessing AI/ML systems for prompt injection, jailbreak vulnerabilities, model inversion risk, data poisoning exposure, or agent tool abuse. Covers MITRE ATLAS technique mapping, injection signature detection, and adversarial robustness scoring.

# AI Security

AI and LLM security assessment skill for detecting prompt injection, jailbreak vulnerabilities, model inversion risk, data poisoning exposure, and agent tool abuse. This is NOT general application security (see security-pen-testing) or behavioral anomaly detection in infrastructure (see threat-detection) — this is about security assessment of AI/ML systems and LLM-based agents specifically.

---

## Table of Contents

- [Overview](#overview)
- [AI Threat Scanner Tool](#ai-threat-scanner-tool)
- [Prompt Injection Detection](#prompt-injection-detection)
- [Jailbreak Assessment](#jailbreak-assessment)
- [Model Inversion Risk](#model-inversion-risk)
- [Data Poisoning Risk](#data-poisoning-risk)
- [Agent Tool Abuse](#agent-tool-abuse)
- [MITRE ATLAS Coverage](#mitre-atlas-coverage)
- [Guardrail Design Patterns](#guardrail-design-patterns)
- [Workflows](#workflows)
- [Anti-Patterns](#anti-patterns)
- [Cross-References](#cross-references)

---

## Overview

Sign in to view the full prompt.

Sign In

Classification

Reference Documentation, cheatsheets, setup guides
Reference Understand
Explain or analyze
Scope Global
All AI interactions
Triggered Activates on context match -- file patterns, topics, working state