# The MLAD AI Skills Schema

> A practitioner-derived skill taxonomy for AI-assisted software development.

69 practitioner skills for AI-assisted development, organised into 11 competency clusters across 6 themes, with 4 proficiency bands: practice (35), foundation (18), mastery (15), frontier (1). Derived from documented expert workflows in production AI coding sessions. Skills are evidence, not claims; proficiency levels reflect accumulated engagement evidence, computed at read time.

https://mlad.ai/ledger

---

### ai-self-verification (6 skills)

- Design AI Self-Verification Loops [practice]: Design feedback mechanisms that allow the AI to check its own output and self-correct
- Provision Verification Commands [foundation]: Make build, test, and lint commands available to the AI in the session context
- Evaluate AI Self-Verification Capability [practice]: Assess whether the AI can correctly interpret and act on its own verification output
- Design Convergent Iteration Loops [practice]: Structure AI verification-and-fix cycles that narrow toward correctness rather than oscillate
- Recognise Limits of Automated Verification [practice]: Identify when AI self-verification is insufficient and human judgement must re-enter the loop
- Choose Interpretable Verification Mechanisms [practice]: Select verification tools whose output the AI can reliably interpret and act on

### verification-tooling (6 skills)

- Specify Correctness Upfront [practice]: Define what correct looks like before delegation — tests, scripts, typed interfaces, or acceptance criteria that make verification automatic
- Evaluate Visual Output [foundation]: Assess whether AI-generated visual artifacts accurately represent the intended information
- Verify Output Against Intent [foundation]: Compare what the AI produced against what was requested, catching drift between specification and delivery
- Calibrate Verification Depth [practice]: Match verification effort to risk — quick scan for routine output, deep review for high-stakes changes
- Design Human Review Protocols [practice]: Create systematic approaches for evaluating AI output — structured checklists, review workflows, acceptance criteria
- Build Verification Harnesses [mastery]: Construct reusable verification infrastructure — test environments, CI pipelines, deployment checks — that make verification repeatable

### model-fluency (5 skills)

- Read Model Reasoning [foundation]: Interpret thinking blocks and confidence signals to assess whether the model is reasoning or confabulating
- Recognise Model Failure Signatures [practice]: Identify characteristic failure patterns specific to different AI models
- Calibrate Task to Model [practice]: Match task complexity and type to model capabilities, switching when the work demands it
- Adapt Context to Model [practice]: Adjust context loading, window management, and CLAUDE.md structure for different models' behaviours
- Predict Model Behaviour Under Constraints [mastery]: Anticipate how a specific model will approach a task given the current context and constraints

### context-engineering (6 skills)

- Design CLAUDE.md Structure [foundation]: Author and maintain the persistent context file that shapes every AI session
- Maintain Context Freshness [practice]: Recognise when context has gone stale and clear or refresh it before output quality degrades
- Lazy-Load Context on Demand [practice]: Structure information so context is loaded when needed, not preloaded at session start
- Audit Context State [foundation]: Monitor what the AI currently knows, what it has lost, and what is consuming the context window
- Manage Context Window Lifecycle [practice]: Understand and deliberately manage the phases of a context window, from initial load through degradation to renewal
- Structure Hierarchical Context [mastery]: Design multi-level context that provides the right information at every scope without duplication

### session-convention (6 skills)

- Establish Project Conventions Early [foundation]: Set up the working environment and structural norms before any AI session begins
- Define Critical Rules and Priorities [foundation]: Identify the highest-cost rules and encode them with appropriate priority in the AI's context
- Encode Team Learnings in Shared Config [practice]: Evolve personal AI configuration into team-maintained shared convention that improves through collective use
- Design Workflow Triggers [practice]: Encode conditional behaviours in the AI's context so it responds to situations according to a designed protocol
- Version Control the AI Configuration [foundation]: Treat AI configuration files as versioned infrastructure with review, commit discipline, and rollback capability
- Configure Permission Boundaries [practice]: Design the trust contract between practitioner and AI: what's allowed, what requires approval, what's forbidden

### interruption-and-control (7 skills)

- Recognise When to Interrupt [foundation]: Sense that the AI's current trajectory will not produce the desired result before significant divergence occurs
- Distinguish Error Types [practice]: Classify interrupt triggers as direction errors, quality errors, or context errors to choose the right intervention
- Interrupt with Precision [practice]: Provide specific, actionable corrections that address the root cause of the AI's wrong direction
- Use Escape and Rewind as Workflow Tools [foundation]: Use interrupt and rewind mechanics fluidly as part of the standard workflow, not just as emergency stops
- Resist the Slot Machine Impulse [practice]: Override the compulsion to re-prompt after failure and instead stop, diagnose, and choose a different intervention
- Calibrate Interrupt Timing [mastery]: Adjust how long to let the AI run before intervening based on task risk, model behaviour, and observed trajectory
- Design Permission Boundaries as Interrupt Proxies [mastery]: Use the permission system to create automatic interrupt points for high-risk operations

### parallel-orchestration (8 skills)

- Run Multiple Instances Effectively [practice]: Manage multiple simultaneous AI sessions, each on a different task, keeping all productive
- Design Workspace Layout for Parallel Work [practice]: Arrange the physical workspace to support efficient attention management across multiple AI sessions
- Isolate Parallel Work with Git Worktrees [practice]: Use git worktrees to give each parallel session its own filesystem, preventing conflicts between concurrent work
- Manage Attention Across Parallel Streams [mastery]: Maintain cognitive awareness of multiple concurrent sessions without losing track of each session's state and needs
- Enable Asynchronous Awareness [foundation]: Configure notifications and signals so sessions can run autonomously and alert when they need attention
- Design Work Distribution Across Instances [mastery]: Assign tasks to parallel sessions based on relationships, context requirements, and dependency minimisation
- Initiate Sessions Asynchronously [frontier]: Start AI sessions designed to produce useful results without immediate supervision
- Scope Research Separately [practice]: Use external AI sessions to prepare materials complementary to the current session, keeping research context separate from build context

### context-persistence-and-handoff (6 skills)

- Persist Learnings Before Ending Sessions [foundation]: Capture what was learned, decided, and discovered during a session before closing it
- Resume Sessions Effectively [foundation]: Start new sessions with sufficient context from previous sessions to continue without losing momentum
- Use Diffs to Synchronise Fresh Sessions [practice]: Use git diffs or file comparisons to efficiently bring a new session up to speed on what changed since the last one
- Encode Session Learnings into CLAUDE.md [practice]: Promote session discoveries into permanent convention that every future session inherits
- Design Session Boundaries for Effective Handoff [mastery]: Choose when to end and start sessions based on the cost of reconstructing context versus the cost of accumulated noise
- Build Cumulative Project Knowledge [mastery]: Compound session learnings over time into a project-level AI configuration that carries institutional memory

### composability-and-automation (7 skills)

- Identify Recurring Workflow Patterns [foundation]: Recognise repeated workflows that are worth encoding as reusable primitives
- Encode Workflows as Slash Commands [practice]: Capture multi-step workflows as single invocable commands
- Design Effective Subagents [mastery]: Create isolated AI agents for specific tasks with deliberate context boundaries and structured outputs
- Configure MCP Integrations [practice]: Connect external services through MCP, weighing their value against their context cost
- Design Pre/Post Tool-Use Hooks [mastery]: Automate quality checks and safety gates through event-driven hooks that fire before or after AI tool use
- Manage Instruction Load [practice]: Maintain only the skills, commands, and integrations that add value, retiring the rest to prevent instruction overload
- Build from Composability Primitives [mastery]: Choose and combine the right primitives (skills, commands, MCPs, subagents) for each workflow need

### recovery-and-git-discipline (6 skills)

- Use Git as a Safety Net [foundation]: Commit at meaningful checkpoints so recovery always has a clean state to return to
- Commit at Meaningful Checkpoints [foundation]: Choose commit points based on logical completeness, creating a chain of recoverable states
- Diagnose Root Cause of AI Divergence [practice]: Trace AI failures to their origin point to choose the right recovery intervention
- Choose Recovery Strategy [practice]: Select between steer, rollback, restart, and take-over based on the diagnosed failure and estimated recovery cost
- Recognise Sunk-Cost Attachment [practice]: Override the impulse to salvage bad AI output and choose the fastest path to correct output regardless of time already invested
- Design Rollback Points into the Workflow [mastery]: Structure the workflow so that recoverable states exist at predictable intervals, not just when remembered

### planning-and-decomposition (6 skills)

- Start in Plan Mode [foundation]: Begin every non-trivial task by asking the AI to plan before executing
- Challenge and Argue with the AI's Approach [practice]: Treat the AI's plan as a proposal to be stress-tested, not a directive to follow
- Decompose Tasks for AI Delegation [practice]: Break complex tasks into bounded units the AI can execute independently with clear context and success criteria
- Author Annotated Plan Documents [mastery]: Write plan documents with enough detail and reasoning that implementation becomes mechanical
- Identify AI-Appropriate vs Human-Appropriate Work [practice]: Sort tasks by delegation suitability based on ambiguity, judgement requirements, and AI capability
- Design Task Boundaries That Minimise Context [mastery]: Decompose tasks so each unit requires minimal, focused context for the AI to execute well
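The worktree-isolation skill listed under parallel-orchestration can be sketched in a few commands. This is a minimal illustration using a throwaway scratch repository (all paths and branch names here are hypothetical); it assumes git ≥ 2.28 for `git init -b`:

```shell
#!/bin/sh
# Sketch: one git worktree per parallel AI session, each on its own branch.
set -e

repo=$(mktemp -d)                     # hypothetical scratch repository
cd "$repo"
git init -q -b main
git config user.email demo@example.com
git config user.name demo
git commit -q --allow-empty -m "baseline"

# Give each concurrent session its own filesystem and branch, so parallel
# edits cannot clobber each other's working tree.
git worktree add -q -b session-a "$repo-session-a"
git worktree add -q -b session-b "$repo-session-b"

git worktree list                     # main checkout plus one line per session
```

Each session then runs in its own directory, and its work flows back to `main` through ordinary branch merges and review.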
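The recovery-and-git-discipline cluster reduces to a similarly small mechanical core: checkpoint before delegating, roll back rather than salvage. A hedged sketch (again in a scratch repository, with `app.txt` standing in for real project files):

```shell
#!/bin/sh
# Sketch: commit a checkpoint before delegation; roll back on divergence.
set -e

repo=$(mktemp -d)                     # hypothetical scratch repository
cd "$repo"
git init -q -b main
git config user.email demo@example.com
git config user.name demo

# Checkpoint: a known-good state that any recovery strategy can return to.
echo "stable" > app.txt               # app.txt stands in for real project files
git add app.txt
git commit -q -m "checkpoint: stable baseline"

# Simulate AI output that diverged from intent.
echo "broken" > app.txt

# Rollback: the fastest path back to correct output, regardless of time
# already invested in the divergent changes.
git checkout -q -- app.txt            # app.txt now reads "stable" again
```

The same commands generalise: `git reset --hard <checkpoint>` discards a whole run, while committing at logical completion points keeps a chain of such recoverable states available.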