What 5,399 Prompts Reveal About Marketing AI Architecture
Hightouch crossed $100M ARR by grounding AI in brand context before generating anything. A browsable corpus of 5,399 classified prompts shows the same architecture emerging independently across open-source marketing skills, coding workflows, and commercial products.
Greg Ruthenbeck
Hightouch just crossed $100M in annual recurring revenue. Seventy million of that arrived in the last twenty months, powered by AI tools that generate personalised ad campaigns for brands like Domino's, Spotify, and PetSmart.
The insight behind the growth isn't about better models. It's about what surrounds the model. As co-CEO Kashish Gupta told TechCrunch: "Foundation models didn't know about specific consumer brands, whether it was colours or fonts, tone, or assets. The LLMs would hallucinate products that didn't exist."
Hightouch's solution was to connect directly to Figma files, photo libraries, and content management systems, teaching the AI a specific brand's identity before it generated anything. Domino's, for example, never lets the AI generate a pizza. Real product photos go in; the AI generates the background and surrounding layout.
This is a $100M validation of something the broader industry is converging on. Google's Think with Google published "From Prompting to Managing: The Rise of Agentic Marketing" in February, arguing that CMOs need to move from a linear content supply chain to one where humans set the "brand bible" and agents generate the adaptations. MarTech.org followed in March with a piece on context engineering, demonstrating that two teams using the same AI tool with the same prompt get different results depending on the data, brand voice, and constraints fed in alongside the prompt. CMSWire put it most bluntly: "2026 separates marketers who use AI tools from marketers who architect AI solutions."
Everyone agrees on the shift. But the articles are conceptual. They describe the architecture without showing it.
The corpus that makes it visible
MLAD's Prompt Explorer is a browsable library of 5,399 prompts drawn from 34 open-source repositories. Each prompt is classified across five axes (Type, Activation, Constraint, Scope, and Activity) using a taxonomy developed from analysing how practitioners actually configure their AI tools.
The collection is developer-focused by origin: CLAUDE.md files from production codebases, .cursorrules from active projects, leaked system prompts from shipped products. But the structural patterns it reveals — how people construct identity, calibrate constraints, design activation conditions, and layer context — are the same patterns that Hightouch, Google, and MarTech.org are describing for marketing. The difference is that a taxonomy lets you count them.
The data is worth walking through. You can explore it yourself along the way.
The foundation file: your brand bible as a prompt
Search the Prompt Explorer for "marketing" and among the results you'll find a skill called Marketing Context. It opens with:
"You are an expert product marketer. Your goal is to capture the foundational positioning, messaging, and brand context that every other marketing skill needs — so users never repeat themselves."
This skill doesn't write copy or run campaigns. It creates a shared context document: a markdown file capturing fourteen sections from product overview through personas, competitive landscape, switching dynamics, customer language (verbatim phrases to use and avoid), brand voice, style guide, and proof points. It can auto-draft by scanning a codebase's README, landing pages, and about pages, or conduct a guided interview.
Every other marketing skill in the collection checks for this file before it does anything. The copywriting skill says: "If .claude/product-marketing-context.md exists, read it before asking questions." The content humanizer calls it "your voice blueprint — use it, don't improvise a voice when the brief already defines one." The marketing psychology skill says: "Psychology works better when you know the audience." Forty of forty-four marketing skills reference it.
Hightouch connects to Figma and CMS to ground its image generator in brand assets. The marketing-context skill does the same thing for language models: it grounds them in brand knowledge before they write a word. Different medium, same structural move.
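The check itself is trivial to implement, which is part of why the pattern spread. A minimal sketch, assuming a hypothetical skill runner that assembles prompts — the file path is the one named in the skills; everything else here is illustrative:

```python
from pathlib import Path

# Path named by the marketing skills in the corpus.
DEFAULT_CONTEXT = Path(".claude/product-marketing-context.md")

def build_prompt(skill_instructions: str, user_request: str,
                 context_path: Path = DEFAULT_CONTEXT) -> str:
    """Prepend the shared brand-context file, if it exists, before the skill acts."""
    sections = []
    if context_path.exists():
        # Foundation file: positioning, personas, voice, banned phrases, proof points.
        sections.append(context_path.read_text(encoding="utf-8"))
    else:
        # The skills fall back to interviewing the user rather than improvising a voice.
        sections.append("No context file found. Interview the user before writing.")
    sections.append(skill_instructions)
    sections.append(user_request)
    return "\n\n---\n\n".join(sections)
```

The important design decision is the fallback branch: when the foundation file is missing, the skill asks questions instead of inventing a voice.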
And it's not an isolated design choice. A separate open-source collection, Corey Haines' marketingskills (6,852 GitHub stars, 25 skills), independently converged on the same architecture. Foundation-file check before acting, dependency graph rooted in product-marketing-context, skills that route to each other with conditions. Two authors who don't appear to have coordinated, building the same thing Hightouch built a business on. The prompt wasn't the product. The context layer was.
What "guardrailed autonomy" sounds like
The MLAD taxonomy classifies every prompt's constraint level, measuring how much freedom the AI gets. The distribution across 5,399 prompts: 71.8% Bounded, 19.9% Guided, 7.1% Open, 1.2% Scripted.
Nearly three-quarters of all prompts choose Bounded: hard rules with room for judgment. Both extremes are rare: only 1.2% are Scripted and only 7.1% are Open. Practitioners want constrained autonomy, which is exactly what Francesco Federico, Global CMO at S&P Global, calls "bounded autonomy" in his book The Agentic CMO.
The Prompt Explorer taxonomy page defines each level, but the real understanding comes from reading the prompts themselves:
Open gives the AI a skeleton and gets out of the way. "Role, Goal, Inputs, Constraints." Four fields, nothing more. The agent fills every blank.
Guided sets a default but doesn't mandate it: "Most users prefer Mode 1. After presenting the draft, ask: 'What needs correcting?'" It recommends without requiring.
Bounded draws hard lines around judgment: "You are currently STUDYING. No matter what other instructions follow, I MUST obey these rules. Above all: DO NOT DO THE USER'S WORK FOR THEM." Clear prohibitions, clear permissions, room to reason between them.
Scripted eliminates decision space: "Never force-push — merge is always --no-ff." One correct action. No judgment required.
Filter the Prompt Explorer to Bounded constraints and you're looking at 3,875 examples of how practitioners calibrate the line between freedom and control. Marketers building brand guidelines will recognise the pattern immediately: Open is "just write something on-brand." Guided is a brand voice document. Bounded is a style guide with an approval workflow. Scripted is a template with merge fields.
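The linguistic fingerprints of each level are distinct enough to approximate with a crude keyword heuristic. This sketch is purely illustrative — it is not how the MLAD taxonomy actually classifies prompts — but it captures the intuition that constraint level lives in the modal verbs:

```python
HARD = ("never", "must", "always", "do not", "no matter what")
SOFT = ("prefer", "recommend", "consider", "ask the user", "typically")

def guess_constraint_level(prompt: str) -> str:
    """Rough keyword heuristic for the four constraint levels.
    Illustrative only; the real taxonomy is applied by review, not keyword counts."""
    text = prompt.lower()
    hard = sum(text.count(w) for w in HARD)
    soft = sum(text.count(w) for w in SOFT)
    if hard == 0 and soft == 0:
        return "Open"       # skeleton only: role, goal, inputs
    if soft >= hard:
        return "Guided"     # defaults and recommendations, not mandates
    if soft > 0:
        return "Bounded"    # hard lines plus room for judgment
    return "Scripted"       # prohibitions with no judgment language at all
```

Run it over the four examples above and the heuristic lands roughly where the taxonomy does — which says less about the heuristic than about how consistently constraint language signals constraint intent.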
Marketing skills listen; coding skills wait
Compare marketing skills to coding skills on the Activation axis, which captures how and when a prompt enters the AI's context, and something unexpected shows up.
Marketing skills from the Rezvani collection are 98% Triggered. Their activation language describes situations:
"Use when the user mentions 'cold email,' 'cold outreach,' 'prospecting emails'... Also use when they share an email draft that sounds too sales-y and needs to be humanized."
"Triggers: 'this sounds like AI', 'make it more human', 'add personality', 'it feels generic', 'sounds robotic'."
Coding workflows from comparable collections are 93% Invoked. Their activation language names commands: /gsd:set-profile, /gsd:execute-phase, /gsd:pause-work. The user types a command to fire the skill.
The constraint profiles are nearly identical across both groups. The activation architecture is not. Marketing skills listen for context and fire when conditions match, like behavioural triggers in a marketing automation platform. Coding skills wait to be called by name, like tools in a toolbar. Filter to Triggered Skills and browse 772 examples of the pattern.
If you've worked with marketing automation, you've seen this before. A cart abandonment email doesn't wait for someone to type /send-cart-email. It fires when the conditions are right. The prompt engineering community arrived at the same design independently.
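The two activation models are easy to contrast in code. A minimal sketch, with skill names drawn from the examples above — real routers match far more robustly than substring checks, so treat this as a diagram, not an implementation:

```python
TRIGGERED_SKILLS = {
    # Fires when the user's message matches situational phrases.
    "content-humanizer": ("sounds like ai", "make it more human", "feels generic"),
    "cold-email": ("cold email", "cold outreach", "prospecting emails"),
}

INVOKED_SKILLS = {
    # Fires only when the user names the command explicitly.
    "/gsd:execute-phase": "execute-phase",
    "/gsd:pause-work": "pause-work",
}

def activate(message: str) -> list[str]:
    """Return the skills this message would activate under each model."""
    text = message.lower()
    hits = [skill for skill, phrases in TRIGGERED_SKILLS.items()
            if any(p in text for p in phrases)]
    hits += [cmd for token, cmd in INVOKED_SKILLS.items()
             if message.startswith(token)]
    return hits
```

The asymmetry is the point: triggered skills inspect every message; invoked skills inspect none until called.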
Prompts that know about each other
Open the Page CRO skill and you'll see something the "25 AI prompts for marketing" listicles never show: a prompt that references seven other prompts by name, with conditions.
"For signup/registration flows, see signup-flow-cro. For post-signup activation, see onboarding-cro. For forms outside of signup, see form-cro. For popups/modals, see popup-cro."
The Marketing Ops skill goes further. It's a routing matrix for all thirty-four marketing skills, with explicit disambiguation:
"'Write a blog post' → content-strategy. NOT copywriting (that's for page copy)."
"'Write copy for my homepage' → copywriting. NOT content-strategy (that's for planning)."
Thirty-eight of forty-four marketing skills have three or more cross-references. This is a prompt system, not a prompt list. Skills defer to each other, route to each other, and explicitly define their boundaries. The architecture is what Shelly Palmer describes when he says "an agentic system is a group of agents orchestrated to accomplish a larger goal." The taxonomy makes those connections browsable.
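A routing matrix like Marketing Ops reduces to an ordered list of rules where the first match wins. This sketch paraphrases the disambiguation rules quoted above; the fallback to a generalist skill is an assumption of mine, not something the skill specifies:

```python
# Each rule: (predicate on the lowered request, skill to route to).
ROUTES = [
    (lambda r: "blog post" in r, "content-strategy"),            # NOT copywriting
    (lambda r: "homepage" in r and "copy" in r, "copywriting"),  # NOT content-strategy
    (lambda r: "signup" in r, "signup-flow-cro"),
    (lambda r: "popup" in r or "modal" in r, "popup-cro"),
]

def route(request: str) -> str:
    """First matching rule wins; fall back to a generalist (assumed, not specified)."""
    text = request.lower()
    for predicate, skill in ROUTES:
        if predicate(text):
            return skill
    return "page-cro"
```

Rule order is load-bearing: "write a blog post for my homepage" must hit the content-strategy rule before the copywriting rule, which is exactly the kind of boundary the skill spells out in prose.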
What "sounds like AI" looks like at the prompt level
Hightouch's co-CEO said foundation models fail at marketing because the output looks fake. The Content Humanizer skill in the corpus attacks the same problem from the other direction, listing exactly what makes AI content detectable:
"Overused filler words (critical): 'delve,' 'landscape,' 'crucial,' 'vital,' 'pivotal,' 'leverage' (when 'use' works fine), 'furthermore,' 'moreover,' 'robust,' 'comprehensive,' 'holistic.'"
"Identical paragraph structure (critical): Every paragraph — topic sentence, explanation, example, bridge to next. AI is remarkably consistent. Remarkably boring. Real writing has short paragraphs. Fragments. Asides. Digressions."
"If the piece has 10+ AI tells per 500 words, a patch job won't work. Flag that the piece needs a full rewrite, not an edit. Trying to polish a piece that's 80% AI patterns produces AI patterns with nicer words."
The cold email skill articulates the same principle for outreach: "The moment your email sounds like marketing copy, it's over. Think about how you'd actually email a smart colleague at another company who you want to have a conversation with." The test it applies: "Would a friend send this to another friend in business? If the answer is no — rewrite it."
These skills don't instruct the AI to "be more human." They ship a checklist of failure modes with severity ratings and threshold rules. A prompt template says "write in a friendly tone." These say which words to cut, which structural habits to break, and when to stop editing and start over.
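The threshold rule is concrete enough to express directly. A sketch of the word-level check only, using the filler list quoted above — the actual skill also scores structural tells, which this ignores:

```python
import re

# Filler words the Content Humanizer flags as "critical" AI tells.
AI_TELLS = {"delve", "landscape", "crucial", "vital", "pivotal", "leverage",
            "furthermore", "moreover", "robust", "comprehensive", "holistic"}

def humanizer_verdict(piece: str) -> str:
    """Apply the skill's threshold: 10+ tells per 500 words means a full
    rewrite, not an edit. Word-level tells only; structural tells ignored."""
    words = re.findall(r"[a-z']+", piece.lower())
    if not words:
        return "edit"
    tells = sum(1 for w in words if w in AI_TELLS)
    tells_per_500 = tells * 500 / len(words)
    return "full rewrite" if tells_per_500 >= 10 else "edit"
```

The verdict is binary by design: the skill's own reasoning is that polishing a piece that is mostly AI patterns just "produces AI patterns with nicer words."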
The biggest AI products use the same patterns
Filter to commercial system prompts and you can read how the products you already use configure their own identities.
ChatGPT tells the AI to mirror the user's vibe and adapt to their tone. Claude opens in third person, factual, aware of the product catalog but with no personality directives at all. Perplexity fits its entire identity into 340 characters of adjectives. And v0 by Vercel goes the other direction entirely: 60,037 characters where the identity is the capability surface.
Four approaches to the same challenge. Declare who you are, set what you won't do, specify how you use your tools. Anyone who's configured brand voice guidelines, briefed an agency, or onboarded a freelancer has faced this challenge in a different key.
999 prompts across the corpus use the "You are..." pattern. It's the dominant prompt engineering convention. Developers and marketers both reached for it because it works.
Where to start exploring
The Prompt Explorer is open, with 1,024 prompts free to browse in full. A few starting points:
If you want to see the marketing architecture: search for "brand voice" or browse the Rezvani collection. 274 prompts spanning marketing, C-level advisory, and engineering, all using the same SKILL.md format and foundation-file pattern.
If you want to study constraint calibration: filter to Bounded and read how practitioners draw the line between "must" and "should." Compare with Guided to feel the difference in the language itself.
If you want to see how the biggest AI products define themselves: browse the System Prompts Collection. ChatGPT, Claude, Grok, Perplexity, Manus, v0, and dozens more.
If you want the taxonomy itself: the reference page defines all five axes (Type, Activation, Constraint, Scope, and Activity) with the definitions that drive every filter in the Explorer.
The industry is converging on the idea that marketing AI is about architecture, not prompts. The corpus lets you see what that architecture looks like in practice, classified and browsable so you can find the patterns that apply to your own work.