# MLAD.ai -- Prompt Library

> Curated prompt patterns for AI-assisted development: 5399 AI coding prompts from 34 professional sources, classified using the MLAD 5-Axis Prompt Taxonomy.

### Taxonomy

Type:
- System: Behavioral rules defining AI identity and persona
- Task: Immediate work request to complete
- Skill: Capability with explicit trigger pattern
- Reference: Documentation, cheatsheets, setup guides
- Meta: Prompts about prompting conventions

Activation:
- Manual: Manually placed / persistent
- Invoked: Called by name -- slash commands, named tools
- Triggered: Activates on context match -- file patterns, topics, working state
- Reactive: Fires on agent lifecycle events -- hooks, guardrails, interceptors

Activity:
- Create: Generate or transform
- Fix: Correct or validate
- Understand: Explain or analyze

Constraint:
- Open: AI chooses approach freely
- Guided: Soft preferences given
- Bounded: Hard rules, some flexibility
- Scripted: Exact steps prescribed

Scope:
- Global: All AI interactions
- Project: This codebase
- Session: This conversation
- Atomic: Single use

---

### AI Identity Rules

Persistent behavioral rules that shape all AI interactions. Every CLAUDE.md file, Cursor rules file, and custom instructions block defines what the AI is before it receives any task. These prompts set tone, constraints, and operating boundaries at the system level.
416 matching prompts: https://mlad.ai/prompts?type=System&activation=Persistent&scope=Global

Examples:
- You are ChatGPT, a large language model trained by OpenAI, based on the GPT-4.5 architecture
- Agent Prompt: Security monitor for autonomous agent actions (first part)
- You are Tay, a right-wing chatbot built by Gab AI Inc and designed to learn and mimic human conversation
- Tool Description: Bash (sandbox — response header)
- You are an elite software engineer and product manager with the following expertise

### Project-Scoped Rulesets

Project-specific behavioral constraints with hard rules and limited flexibility. Coding standards, architecture decisions, naming conventions, and framework-specific patterns are enforced within a single codebase. The prompt encodes institutional knowledge the AI must follow.

504 matching prompts: https://mlad.ai/prompts?type=System&activation=Persistent&constraint=Bounded&scope=Project

Examples:
- Agent Prompt: Session title and branch generation
- you are hermes, a specialized AI agent that fetches docs for frameworks
- Code Documentation Doc Generate
- GLOBAL CODING STANDARDS

### Named Project Tools

Capabilities invoked by name within a codebase, functioning as slash commands or named tools. Each skill encapsulates a repeatable workflow: run tests, deploy, lint, generate docs. The AI executes a defined procedure when the developer calls it.

424 matching prompts: https://mlad.ai/prompts?type=Skill&activation=Invoked&scope=Project

Examples:
- autoplan
- Migration and code evolution instructions generator for GitHub Copilot
- Skill: Create verifier skills
- Code Reviewer
- Sparc Security Review

### Context-Activated Capabilities

Named capabilities that fire automatically when the AI detects a matching context: file patterns, topic keywords, or working state. No explicit invocation is needed. The skill activates because the situation demands it.
699 matching prompts: https://mlad.ai/prompts?type=Skill&activation=Triggered&scope=Project

Examples:
- Code Review Specialist
- Vendor-agnostic lab automation framework
- Code Refactor
- Verify that a code change actually does what it's supposed to by running the app and observing behavior
- Cellxgene Census

### Step-by-Step Generation Recipes

Exact-steps generation procedures where the AI follows a prescribed sequence. Lowest autonomy, highest predictability. Used for scaffolding, boilerplate, migration scripts, and templated output where the developer knows the correct procedure and needs consistent execution.

1 matching prompt: https://mlad.ai/prompts?type=Task&activity=Create&constraint=Scripted

Examples:
- Create Spring Boot Java Project Skeleton

### Bounded Repair Tasks

Bug fixes and corrections operating under hard constraints. The AI must fix what is broken without violating specified rules: no breaking changes, preserve API contracts, maintain backward compatibility. Repair within guardrails.

28 matching prompts: https://mlad.ai/prompts?type=Task&activity=Fix&constraint=Bounded

Examples:
- To make this change we need to modify `mathweb/flask/app.py` to
- Linus Torvalds Coding Philosophy Prompt
- To make this change we need to modify `mathweb/flask/app.py`
- To make this change we need to modify `main.py` and create a new file `hello.py`

### Open-Ended Analysis

Explanation and analysis tasks where the AI chooses its own approach. Code review, architecture assessment, dependency audit, performance profiling. The developer asks a question; the AI determines how to investigate and what to report.

28 matching prompts: https://mlad.ai/prompts?type=Task&activity=Understand&constraint=Open

Examples:
- Hive Mind Resume
- Business Trends
- [Music] and they've just gone crazy with them there was a lot of big news um...
- Film & TV Industry Trends
- Real Time View

### Always-Available Documentation

Documentation, cheatsheets, and setup guides that persist across all sessions within a project. API references, style guides, deployment runbooks. The AI can consult these at any time without the developer re-providing them.

490 matching prompts: https://mlad.ai/prompts?type=Reference&activation=Persistent&scope=Project

Examples:
- Documentation Access
- Automatically update README.md and documentation files when application code changes require documentation updates
- Instructions for upgrading .NET MAUI applications from version 9 to version 10,...
- Dataverse SDK for Python — Performance & Optimization Guide
- Comprehensive guide for working with Power Apps Canvas Apps YAML structure based...

### Prompting Conventions

Prompts about prompting itself: conventions for structuring instructions, managing context, formatting output, and coordinating multi-turn interactions. Meta-level patterns that improve all other prompt categories.

0 matching prompts: https://mlad.ai/prompts?type=Meta&constraint=Open&scope=Global

### Lifecycle Hooks

Skills that fire on agent lifecycle events within a session: pre-commit checks, post-generation validation, error recovery, guardrail enforcement. The AI responds to its own actions, creating feedback loops that catch mistakes before they reach the developer.

1 matching prompt: https://mlad.ai/prompts?type=Skill&activation=Reactive&scope=Session

Examples:
- Suggest Compact
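The five classification axes behind every category on this page can be sketched as a small data model. This is a hypothetical illustration only: the `Prompt` class, field names, and `query_string` helper are assumptions for the sketch, not MLAD.ai's published schema or API.

```python
from dataclasses import dataclass
from enum import Enum

class Type(Enum):
    SYSTEM = "System"        # behavioral rules defining AI identity and persona
    TASK = "Task"            # immediate work request to complete
    SKILL = "Skill"          # capability with an explicit trigger pattern
    REFERENCE = "Reference"  # documentation, cheatsheets, setup guides
    META = "Meta"            # prompts about prompting conventions

class Activation(Enum):
    MANUAL = "Manual"        # manually placed / persistent
    INVOKED = "Invoked"      # called by name: slash commands, named tools
    TRIGGERED = "Triggered"  # activates on context match
    REACTIVE = "Reactive"    # fires on agent lifecycle events

class Activity(Enum):
    CREATE = "Create"
    FIX = "Fix"
    UNDERSTAND = "Understand"

class Constraint(Enum):
    OPEN = "Open"
    GUIDED = "Guided"
    BOUNDED = "Bounded"
    SCRIPTED = "Scripted"

class Scope(Enum):
    GLOBAL = "Global"
    PROJECT = "Project"
    SESSION = "Session"
    ATOMIC = "Atomic"

@dataclass
class Prompt:
    """One library entry, classified on all five axes (hypothetical model)."""
    title: str
    type: Type
    activation: Activation
    activity: Activity
    constraint: Constraint
    scope: Scope

    def query_string(self) -> str:
        # Builds a filter string shaped like the ones in the links above.
        return (f"type={self.type.value}&activation={self.activation.value}"
                f"&activity={self.activity.value}"
                f"&constraint={self.constraint.value}&scope={self.scope.value}")
```

A category such as "Bounded Repair Tasks" is then just a filter over two of the five axes (`activity=Fix`, `constraint=Bounded`), which is why each category links to a query URL rather than a fixed folder.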