hendry.ai — engine status
hendry engine status --all
foundation context applied — define · understand · position
10 engines · 110 entries · 6 active · 1 beta · 3 planned

Engines

Raw documentation of building AI marketing engines by Hendry Soong. What changed, what broke, what was learned.

These logs capture the real work of operating the AI Marketing Framework. Not polished thought leadership. Timestamped entries showing iterations, failures, and extracted principles from running production systems. The Foundation entries (v0.1 to v0.3) show context engineering in action: designing the brand voice, ICP, and positioning that shape every AI output.

Engine | Purpose | Entries | Status
Create-Articles | Content generation with 3-tier validation | 46 | Production (v8.0.1)
Create-Images | SVG diagrams and hero images with 10 perception rules and 9-check exit gate | 13 | Production (v4.1.0)
Create-Compiler | Field validation with 22 checks, review agent, and closed-loop feedback | 6 | Production (v2.0.1)
Listen-Competitors | Competitive intelligence with synthesis | 9 | Production (v3.3)
Create-Social | LinkedIn carousel generation | 1 | Production (v1.0.2)
Create-Articles-Replicate | Portable content engine tested on 3 brands | 8 | Production
Listen-Competitors-Replicate | Portable competitive intel for other brands | 1 | Validated

The Pipeline: Articles flow through three engines. Create-Articles generates structured JSON with visual insertion points (zero inline SVGs). Create-Images generates SVG diagrams and hero images through a 9-check exit gate. Create-Compiler validates fields with 22 checks plus a 7-question review agent, classifies issues through a 4-tier router, and sends reverse manifests back to upstream engines. Output publishes to a headless CMS (Neon + Payload + Vercel) via a publishing SDK, closing the feedback loop. Each engine has its own context window, validation system, and version history.
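The closed loop can be sketched in a few lines of Python. Every name here is illustrative; the real engines are LLM workflows in their own repos, not Python callables:

```python
# Minimal sketch of the three-engine pipeline with closed-loop feedback.
# All function names and data shapes are illustrative, not real interfaces.

def run_pipeline(brief, generate, render_images, compile_fields, publish):
    """Run brief -> article -> images -> compiled output -> CMS."""
    article = generate(brief)                  # Create-Articles: JSON + visual insertion points
    visuals = render_images(article)           # Create-Images: SVGs through the exit gate
    result = compile_fields(article, visuals)  # Create-Compiler: field validation

    # Reverse manifest: issues routed back to the engine that owns them.
    for issue in result.get("reverse_manifest", []):
        print(f"route tier {issue['tier']} -> {issue['target_engine']}")

    if not result.get("reverse_manifest"):
        publish(result)                        # publishing SDK -> headless CMS
    return result
```

A clean run publishes; any manifest entry short-circuits publication and routes upstream instead.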

Deep Dives

Full retrospectives on major learnings. Each deep dive expands on log entries with complete methodology, code examples, and extracted principles. 6 published, 9 in progress.

01. Build Log #1: Content System
The full narrative field report. 3 content engines, headless CMS, substrate architecture. A $366K traditional function, rebuilt as a $162K AI-native function run by 1 operator plus AI agents. 30 to 35 percent realistic savings; 56 percent is the solo ceiling.
An asset deploys once and redeploys across a portfolio. Bespoke projects restart with every client.
System
02. Why Your AI Marketing System Should Be Model-Agnostic
Platform dependency always ends the same way. Five category-level truths about LLMs that hold regardless of which model you use, and five practices to build portable AI marketing systems.
System
03. The Missing Layer Between Your AI Systems and Your Website
Eight CMS platforms evaluated for agentic marketing: structured content, full API access, and data ownership. Why most fall short when AI systems need to read, write, and assemble content programmatically.
Create-Articles
04. The Engine Split: Context Window Survival at 84K Tokens
How a monolithic 84K-token engine was split into three modular engines (Create-Articles, Create-Images, Create-Compiler) connected by 6 integration contracts and a closed-loop feedback system. Token budget analysis, integration contract design, and 5 extracted principles.
Create-Articles
05. LLMs Lie About Validation: How I Rebuilt Content Quality Checks
Discovered that LLMs pattern-match validation instructions to expected outputs without doing the work. Rebuilt the entire quality system with a 3-tier architecture: pattern checks, structural evidence, and semantic advisory.
Create-Articles
06. 23 Iterations in 32 Days: How I Built a Production AI Content System
The complete journey from folder structure to production engine. Gen 1 got to stable output. Gen 2 rebuilt validation after discovering LLMs lie about quality checks.
Create-Articles
07. Video: Building a context-aware content system for marketing teams
23-minute walkthrough of the Gen 1 content engine. File structure, validation workflow, CMS integration, and the lessons from 25 versions before the Gen 2 rebuild.
Create-Articles

System Architecture

Orchestration decisions that affect all engines. How the system holds together across sessions, tools, and agents.

01. Agent Website Building: Three Sites on Neon + Payload + Vercel
Built three websites using AI agents on the same headless stack: Neon Postgres, Payload CMS, Vercel. hendry.ai took 80+ sessions — 20+ article migrations, structured data models, publishing SDK, i18n, search, analytics. growthsetting took 28 sessions — consulting site from scaffold to production. A client site took 24 sessions — built in the same headless stack, delivered as HubSpot HUBL templates. Same stack, different content complexity, 3x difference in time-to-live.
Time-to-live depends on content complexity, not stack complexity.
System
02. Listen Engine: Signal-to-Brief Pipeline Validated
First engine built on the substrate. Scaffolded discuss/build repo pair, then iterated from architecture redesign to live pipeline in 6 days across 20 commits. Signal-to-brief pipeline tested on real targets — 5 thought leaders yielded 83 ICP matches. Tested 8 source types: podcast transcripts scored highest signal density (70%), job scrapers failed for Swiss market coverage. Three-workflow architecture locked. 65-field marketing capabilities schema designed.
The first engine built on new infrastructure validates the infrastructure more than itself.
System
03. Substrate Architecture: Six Repos Replace Three Chat Windows
Replaced three isolated AI chat windows with a six-repo substrate — versioned markdown in git, not conversation history. Three design principles: isolation prevents synthesis errors, discovery requires freedom from preconception, slots are earned not assigned. Three-state model for tracking drift: assumed state (framework spec), spec state (engine design), actual state (production code). Seven spec versions in two days.
Conversation history is not version control. When three chat windows carry the system state, every session starts with a manual sync that drifts silently.
System
04. First Agentic Article Pipeline: Engine to CMS in One Session
Built a publishing SDK — engine generates article as JSON, SDK pushes to headless CMS via API, creates draft, preview available immediately. First full run: an article went from content brief through 6-phase pipeline, 3 revision passes, 5 SVGs, to CMS draft in one session. Two bugs discovered and fixed: (1) handoff template pointed to HTML instead of JSON, (2) hybrid content profiles needed a canonical profile rule. Added an update flag for re-publishing edited articles. 4 articles produced in total (Apr 6-9).
The first automated pipeline run reveals every assumption the manual process hid. Ship the pipeline, then fix what it exposes.
System
05. Headless Native: Legacy CMS to Headless CMS
All three production engines migrated from legacy CMS HTML fragments to a headless CMS with rich text AST. Create-Articles outputs JSON payloads via publishing SDK. Create-Images reads from the CMS visuals array and writes via API. Create-Compiler changed from HTML page assembler to field validator — 22 checks (8 field, 6 cross-engine, 8 content quality) replacing the old CV/RA system. The CMS itself was built from scratch over 80+ sessions (Mar 5 to Apr 7): design system tokens, nested routing, pillar templates, article templates with TOC sidebar, structured log entry blocks for operator logs, content migration (20+ articles), search page, i18n with machine translation, analytics, performance optimization, and the publishing SDK.
Moving to headless didn't just change the output format. It changed what each engine is responsible for. The compiler stopped assembling HTML and started validating fields.
System
06. Pipeline Orchestration Spec
Documented the full two-repo pipeline flow. Covers: repo layout, pipeline sequence (Create-Articles → Create-Images → Create-Compiler → Publish), handoff formats between engines, publishing SDK transformations, environment requirements, failure modes, and cross-repo commit rules. Both repos now reference this spec.
Document the pipeline before automating it. Cross-repo flows need a single spec both sides can reference.
System
07. Security Review: What an AI Agent Finds vs What Matters
Ran an automated security review against the CMS codebase. The agent flagged preview auth, missing security headers, SVG injection surface, and absence of rate limiting. Most findings were legitimate — preview endpoints had no auth, SVG uploads were unsanitized, and no rate limiting existed on any route. Fixed all four in one session. The deeper learning: the agent found real vulnerabilities but couldn't assess business risk. It flagged everything equally — a missing Content-Security-Policy header and an unauthenticated preview route that could leak draft content got the same severity. The operator's job is triage: which findings matter given how the system is actually used. Also added password protection to an internal dashboard page after realising it exposed system architecture details publicly.
Automated security review finds real issues but can't assess business risk. The operator's job is triage — not every finding deserves the same fix urgency.
System
08. Site Launch: 60+ Issues in 3 Days
Full launch preparation across 3 days. Performance optimization: self-hosted fonts, contrast ratios fixed, semantic landmarks added, mobile LCP improved by deferring analytics scripts. Analytics stack: product analytics (EU region), web vitals monitoring, speed insights, standalone tracking support. Mobile responsiveness: container padding audit, all touch targets to 44px+, 480px breakpoint added. AI-SEO audit: schema accuracy fixes, sitemap, RSS feed, breadcrumbs, machine-readable site description. Broken link audit: 17 internal links fixed sitewide. Unified post-sharing system with relevance ranking replaced manual explore sections. Domain migration: canonical URL normalization. Product analytics had to be immediately tuned after launch — default configuration was killing mobile performance by loading surveys, session recording, and autocapture. Disabled everything except core event tracking.
Launch preparation is a compression event — it surfaces every issue the development phase deferred. Plan 3 days minimum. Also: always audit analytics SDK defaults. Out-of-the-box configuration assumes desktop bandwidth.
System
09. Voice Centralisation + Cross-Engine Path Audit
Voice rules moved from per-engine directories to a shared context directory as single source of truth. Per-engine copies replaced with redirect stubs. Comprehensive path audit across all 5 engines + system files: 50+ files checked, zero broken references remaining.
Shared context files need one canonical location. Per-engine copies drift with every version bump.
System
10. Content Migration: Every Article is a Schema Stress Test
Migrated 20+ articles from legacy CMS to headless CMS. 4 deep dive articles, 6 pillar pages, 8+ standard articles, 4 definition pages. Built per-article seed scripts for reproducible migration. Each migration surfaced schema gaps the initial design missed: proficiency level field added after migrating a technical measurement article, software application entity type after migrating a roles article, video embed block after migrating an article with inline video content, topic taxonomy after migrating pillar pages that needed hierarchical categorization. Definition articles needed their own index page and styling. Pillar pages needed feed filters by topic. Every content shape that existed in the legacy CMS but not in the new schema forced a field addition or block creation.
Migrate real content early. Speculative schema design misses every edge case that actual articles expose. Each migration is a stress test the schema either passes or adapts to.
System
11. CMS Architecture: Built From Scratch in 30 Sessions
Built the entire headless CMS frontend from zero. Homepage redesign with design system tokens and dual-theme support (light/dark). Article template iterated 3 times: v1 single-column, v2 added two-column layout with TOC sidebar, v3 added meta line, dynamic explore feed, and hero positioning. Pillar page template iterated 3 times: v1 basic, v2 added sidebar and feed filter, v3 added numbered topics grid and tag system. Hero visual system supporting three modes: uploaded image, engine-generated SVG, and visual insertion point placeholder. Nested routing with catch-all slug resolution and parent hierarchy. CMS-managed redirects. On-demand revalidation so edits reflect immediately. Structured log entry data model for operator logs with colored coordination boxes and entry counters. Topic taxonomy for content categorization. Seed infrastructure for reproducible content loading.
Article templates needed 3 iterations because each was driven by migrating real content, not by speculative design. Build the simplest version, migrate one article, learn what's missing, iterate.
System
12. Context Centralisation + ORCHESTRATOR v1.3
Context files (ICP, messaging, offerings, company) moved from per-engine directories to shared context. Eliminates drift between Create-Articles and Create-Social copies. ICP enriched with social platform persona data (v2.3). Context manifest added for version tracking. Orchestrator bumped to v1.3 with shared resources section and multi-tool adapter support.
Centralise context once. When the same file lives in 3 engines, it diverges in 3 directions.
System
13. Cross-Engine Version Sync: 30+ Stale References
Meta-documentation version sync across 14 files. Articles VERSION.md header/lineage/footer stuck at v7.9.34. Compiler BACKLOG header at v1.3.3. articles-to-images contract parties at v2.0.24+. 21 cross-engine version refs updated. 'actually' added to the Tier 1 banned words in both Articles and Social voice.md. Root cause: version bumps focus on the changed engine; cross-references in other engines get missed every time.
Version bumps are cross-engine operations. Grep all repos after every bump, not just the changed engine.
System
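The grep the takeaway calls for can be scripted. A minimal sketch, assuming engine references appear as "Engine vX.Y.Z" in markdown files; the real repos, file types, and version formats may differ:

```python
import re
from pathlib import Path

def find_stale_refs(repo_dirs, engine, current_version):
    """Scan every .md file in every repo for mentions of `engine` pinned
    to a version other than current_version. Returns (path, version) pairs."""
    pattern = re.compile(rf"{re.escape(engine)}\s+v(\d+\.\d+\.\d+)")
    stale = []
    for repo in repo_dirs:
        for path in Path(repo).rglob("*.md"):
            for match in pattern.finditer(path.read_text(errors="ignore")):
                if match.group(1) != current_version:
                    stale.append((str(path), match.group(1)))
    return stale
```

Run it across all three engine repos after any bump, not just the repo that changed.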
14. Cross-Boundary Audit: 31 Discrepancies Across 3 Engine Boundaries
Audited all three engine boundaries: Articles to Images, Images to Compiler, Compiler to Articles. Found 31+ discrepancies across contracts, examples, validation rules, and version references. Made 6 design decisions: source attribution in HTML everywhere, VIP ID required (was optional), compiler gate types aligned (BLOCKING for Tier 0/1, ROUTING for Tier 2+), deprecated post-compile-checks.md (zero unique checks remaining), preserved VIP ID in final compiled output, backlogged verification manifest emission. Root cause of most drift: version propagation across engines. Each version bump updates its own engine’s files but misses cross-references in the other two engines’ READMEs, pipeline diagrams, and dependency tables. 13 commits, approximately 65 files changed, zero logic changes. Pure documentation and contract alignment.
Version propagation across engines is the most common drift vector. Bump one engine, grep all three.
System
15. Cross-Engine Audit: 48 Checks, 43 PASS
First full cross-engine consistency audit across Create-Articles v7.9.32, Create-Images v2.0.22, and Create-Compiler v1.3.0. 48 checks covering version alignment, integration contracts, shared conventions, and documentation sync. 43 passed immediately. 4 fixed during audit (README stale references, VERSION.md lineage gaps, BACKLOG.md missing audit items). 1 informational (stale version strings in system files, deferred). Documentation sync committed: 7 files, 116 insertions, 64 deletions. The audit revealed that documentation debt compounds silently across engines. README files were 12+ versions behind despite CHANGELOGs being current.
Audit across engines, not within them. Cross-boundary drift is invisible from inside.
System
16. Agent Teams Validate the Framework
Anthropic shipped Agent Teams on 5 February 2026. The architectural principles extracted from 6 weeks of engine building mapped directly: context window is finite (each teammate gets its own), agents improvise unless forbidden (teammates load CLAUDE.md), systems should be self-contained (file ownership prevents conflicts), evidence-based validation (more agents means more hallucination surface area). Working practitioners discover platform patterns before platforms ship them.
The problems come first, then the principles, then the product.
System
17. Source-of-Truth Audit
Discovered 5 phantom versions in CHANGELOGs (documented but no corresponding git commit), stale cross-references in 6 files, and 2 engines missing VERSION.md. Reconciled: annotated phantom versions with actual commit SHAs, updated all pipeline references to current versions, added VERSION.md to Create-Social and Listen-Compete.
Audit the audit trail. Documentation debt compounds silently.
System
18. GitHub as Single Source of Truth
Changelogs and backlogs existed in three places: Claude Projects (conversation memory), Claude Code (local files), and zipped engine packages. No single version was authoritative. Built a GitHub-based orchestration layer: CLAUDE.md as agent entry point with task routing, .ai/ORCHESTRATOR.md with 17 extracted principles, .ai/SYSTEM-STATUS.md for current engine versions, and per-engine CHANGELOG.md and BACKLOG.md files. Every agent reads CLAUDE.md on startup, follows the routing chain, and commits updates alongside code changes.
The agent is disposable. The orchestration layer is permanent.
System
19. Three-Tool Workflow Split
Formalised the separation between strategy and execution. Claude Projects handles strategy, planning, decisions, and reviews. Claude Code handles execution: build, validate, commit. GitHub is the single source of truth for all engine state. Nothing gets built or versioned in Claude Projects. Decisions made there become backlog items committed through Claude Code.
Separate strategy from execution. Different tools for different thinking modes.
System

Create-Articles Logs

40+ versions across three generations. From folder structure to a production content engine with cross-article memory, messaging framework, and closed-loop compiler feedback.

The Dependency Chain: Create-Articles does not operate in a vacuum. It consumes the outputs of three Foundation engines. Without these pre-flight files, the engine defaults to generic LLM output. The Foundation entries (v0.1–v0.3) show context engineering in practice: voice rules, ICP definitions, and positioning that gate every piece of content.

01. Headless Native: Legacy CMS to Headless CMS JSON
Major version upgrade. Output changed from legacy CMS HTML fragments to headless CMS JSON payloads with rich text AST. Engine now publishes via SDK. Deleted 4 legacy output files. Created 3 new reference files for the headless contract. Validation rewritten from HTML grep to JSON/AST traversal. Templates updated for rich text output. Reference files replaced with .json golden examples. All v7.9.x content quality rules (voice, structure, semantic, AI-SEO) preserved — only the output format changed. Workflow split into slim orchestrator (~1,200 tokens) + 8 phase files to fix the AI agent's 10K token read limit that truncated everything past Phase 2 during agentic runs.
Changing the output format tests whether your validation rules are coupled to structure or to quality. Ours survived because they check content, not markup.
Create-Articles
02. PAT-019: Experience-Counting Pattern
New Tier 3 flag-for-review check. Detects LLMs counting personal experiences to manufacture authority ('five times,' 'after the third collapse'). Q18 added to QUICK-CHECK. voice.md bumped to v7.0.9. Triggered by model-agnostic article generated under v7.9.35 containing 'five times' / 'five collapses' in 6+ locations. The examples established the pattern without counting. Exception: counting artifacts/data ('80+ versions') is legitimate.
LLMs quantify experience to signal authority. The count itself is the tell — real operators describe what happened, not how many times.
Create-Articles
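The actual PAT-019 grep isn't published here; a plausible Tier 3 sketch, with the number-word and noun lists as assumptions. Counting artifacts in digits ("80+ versions") deliberately stays out of scope, per the exception:

```python
import re

# Hypothetical PAT-019-style check: flag spelled-out counts attached to
# experience nouns ("five times", "third collapse"). Digit counts of
# artifacts ("80+ versions") describe output, not experience, and pass.
NUMBER_WORDS = r"(?:two|three|four|five|six|seven|eight|nine|ten|first|second|third|fourth|fifth)"
EXPERIENCE_COUNT = re.compile(
    rf"\b{NUMBER_WORDS}\s+(?:times?|attempts?|failures?|collapses?|rebuilds?)\b",
    re.IGNORECASE,
)

def flag_experience_counting(text):
    """Return Tier 3 advisory flags: flag for review, never block."""
    return [m.group(0) for m in EXPERIENCE_COUNT.finditer(text)]
```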
03. Domain Frequency + Voice Sync + Rule Health Check
Three versions closing gaps found during production compiles. STR-023 Domain Frequency (v7.9.33): BLOCKING structural check — no external domain appears more than twice per article. Triggered by a single domain appearing 4x. v7.9.34 Voice-Validation Sync: PAT-014 Check 7 (inversion variant) added, Q17 PAT-018 anchor bug fix (was silent no-op on HTML), voice shape list expanded. Triggered by edit pass catching 4 voice failures that passed structural validation. v7.9.35 Rule Health Check: cross-engine documentation sync. STR-012 marked superseded by STR-023. Q3 grep aligned to max-2-per-domain. Image-handoff inline SVG section removed (contradicted STR-022). Cheatsheet synced.
Production compiles are the real test suite for upstream engines. Every compile run reveals gaps the generator missed.
Create-Articles
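STR-023 as described reduces to a per-domain link count. A sketch, assuming href attributes in HTML output; the production check may run on the JSON/AST instead:

```python
import re
from collections import Counter
from urllib.parse import urlparse

def check_domain_frequency(html, max_per_domain=2, own_domain="hendry.ai"):
    """STR-023-style sketch: no external domain cited more than twice per article.
    Returns (domain, count) pairs that violate the cap."""
    hrefs = re.findall(r'href="(https?://[^"]+)"', html)
    counts = Counter(urlparse(h).netloc for h in hrefs)
    return [(d, n) for d, n in counts.items()
            if n > max_per_domain and own_domain not in d]
```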
04. PAT-018: The Ghost Rule
Article audit found 9 instances of dramatic thesis sentences (“The gap is architectural,” “The differentiator is system design”) in FAQ schema. The pattern existed in voice.md examples but had no validation rule, no grep pattern, no quick-check question. A ghost rule: the system taught the pattern through examples but never enforced it. PAT-018 defined with grep pattern and added to QUICK-CHECK as Q17 (Tier 3 advisory, flag not gate). Absorption Gate (Phase 1.3) now extracts PAT-014 + PAT-018 examples from voice.md to prove comprehension.
If your system files demonstrate a pattern, your validation must check for it. Undocumented patterns become invisible violations.
Create-Articles
05. Absorption Gate + SVG Boundary Check
Two gates added to prevent silent failures. The Absorption Gate (Phase 1.3, v7.9.30): preflight listed messaging.md as “loaded” but the agent never read its contents. Key context (pillars, persona hooks, proof points) was absent from the generated article. New gate requires extracting and outputting specific elements from both rules files and context files after loading. Proves comprehension, not just file existence. The SVG Boundary Check (STR-022, v7.9.31): 7 inline SVGs bypassed Create-Images entirely during Phase 3 generation. Off-brand colours, emoji icons, sub-minimum fonts reached output. STR-022 is now a BLOCKING gate: article output must contain 0 <svg> elements. All visuals as VIP blocks for downstream processing.
“Loaded” is not “read.” Gates must prove comprehension, not just access.
Create-Articles
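STR-022 reduces to a zero-count assertion. A sketch of the blocking gate; the status labels and return shape are assumptions:

```python
def svg_boundary_check(article_html):
    """STR-022 sketch: BLOCKING gate. Article output must contain zero inline
    <svg> elements; all visuals travel as VIP blocks for Create-Images."""
    count = article_html.lower().count("<svg")
    return {"check": "STR-022",
            "status": "PASS" if count == 0 else "BLOCK",
            "inline_svgs": count}
```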
06. Compiler Feedback + Profile-Aware Density
Create-Articles can now receive structured feedback from Create-Compiler. Phase 5 (Compiler Feedback) processes reverse manifests: Tier 0 entries add new STR/PAT validation rules, Tier 2 entries trigger Section-Patch Mode (regenerates a single H2 section without full pipeline re-run). Overflow guard: more than 3 entries triggers a STOP. Separately, Phase 2.7 density formulas are now profile-aware (v7.9.29): visual weight divisors match content profile (thought-leadership: 400, operator-log: 600). Previously all profiles used the same formula.
Downstream quality gates are only useful if they can talk back. Close the loop.
Create-Articles
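The Phase 5 routing can be sketched as a dispatch. The tier semantics and overflow guard come from this entry; field names and the return shape are assumptions:

```python
def process_reverse_manifest(entries, max_entries=3):
    """Sketch of Phase 5: Tier 0 -> propose a new validation rule,
    Tier 2 -> Section-Patch Mode on one H2 section,
    more than max_entries entries -> STOP for operator review."""
    if len(entries) > max_entries:
        return {"action": "STOP",
                "reason": f"{len(entries)} entries exceeds overflow guard"}
    tasks = []
    for e in entries:
        if e["tier"] == 0:
            tasks.append(("add_rule", e["issue"]))
        elif e["tier"] == 2:
            tasks.append(("section_patch", e["section"]))
    return {"action": "ROUTE", "tasks": tasks}
```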
07. Spec Verification: 7 Hallucinated Claims Caught After Publication
The WebMCP article shipped with 7 hallucinated claims about a “Declarative API” with HTML attributes (toolname, tooldescription) that do not exist in the actual spec. The spec is purely JavaScript-based. Also had 3 unverifiable “postMessage” claims. No rule existed to prevent this class of error. Added Part 9 (Emerging Standards and Spec Verification) to AI-SEO rules: when writing about browser APIs or web standards, the official spec must be cited as Strong-tier. Blocks at Phase 2.3 if missing. Retroactive fix: Section 2 rewritten, 2 SVGs rebuilt, FAQ/schema/table/AI-summary all corrected.
When writing about specs, the spec is the source of truth. Not secondary coverage, not demos, not blog posts.
Create-Articles
08. Messaging Framework + Brand Alignment
Three coordinated changes. Messaging Framework (v7.9.26): messaging.md added as first-class context file. Centralises value proposition, 3 messaging pillars, persona-by-buyer-stage tables, proof points, and hooks by content type. SEM-007 Messaging Pillar Alignment added to validation. ISO 8601 Dates (v7.9.25): date format upgraded from YYYY-MM-DD to full ISO 8601 with time and timezone. Google requires time and timezone for Article and ProfilePage schema. Brand Entity Sync (v7.9.27): Pillar 1 renamed “System Design Over Tools” (was “Architecture Over Tools”). 32 targeted terminology changes across 15 files in 2 engines.
Centralise messaging. When pillars live in 4 files, they drift in 4 directions.
Create-Articles
09. Date Architecture: From Em Dashes to Semantic HTML
Three versions refining how dates work in articles. Em Dash Generation Gate (v7.9.22): article retrofit produced 5 em dashes despite PAT-001 existing. Root cause: no generation-time prevention rule. Added a generation-time warning matching the opposite-line pattern approach. Also purged em dashes from system file prose (system files are training data). Date Placement Flip (v7.9.23): aligned engine output with live site pattern. “Last updated” before H1, “Published” in footer. Semantic Dates (v7.9.24): moved “Last updated” to after H1 (Google’s recommended byline position). Added <time datetime> tags on both dates. Triple-layer date consistency: JSON-LD + <time> attribute + visible text.
System files are training data. They must follow the same rules they enforce.
Create-Articles
10. Source Registry: Cross-Article Memory
Each article session started fresh with no memory of previous research. Same sources re-discovered, same failed searches repeated. Built shared/source-registry/registry.md (tracks every external link across articles) and queries.md (tracks search queries with success/failure). Phase 2.1 reads the registry before researching. Phase 4.0.1 appends new data after output. Append-only: future sessions inherit what worked and what didn’t.
Cross-session memory turns isolated agents into a learning system.
Create-Articles
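A minimal sketch of the read/append cycle. A flat URL-per-line format is assumed here; the real registry.md tracks more metadata per source:

```python
from pathlib import Path

def read_registry(path):
    """Phase 2.1 sketch: read the append-only registry before researching."""
    p = Path(path)
    return p.read_text().splitlines() if p.exists() else []

def append_entries(path, new_urls):
    """Phase 4.0.1 sketch: append newly used sources. Append-only by design;
    history is never rewritten, so future sessions inherit it all."""
    known = set(read_registry(path))
    fresh = [u for u in new_urls if u not in known]
    with open(path, "a") as f:
        for u in fresh:
            f.write(u + "\n")
    return fresh
```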
11. Tier-First Sources: Classify During Collection
SEO evolution article shipped with 25% Strong sources. The tier classification happened after collection (Phase 2.6), so the shortfall was discovered after prose was already written. Required 4 source swaps post-generation. Moved tier classification into collection (Phase 2.3). Strong <50% is now a STOP condition before any writing begins.
Classify quality during collection, not after. Left-shift the gate.
Create-Articles
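The left-shifted gate reduces to a share check at collection time. A sketch; the tier labels are the only detail taken from this entry, the rest is illustrative:

```python
def tier_gate(sources):
    """Phase 2.3 sketch: classify tiers during collection and STOP before
    any writing if Strong sources are under 50%."""
    strong = sum(1 for s in sources if s["tier"] == "Strong")
    share = strong / len(sources) if sources else 0.0
    return {"strong_share": share,
            "verdict": "PROCEED" if share >= 0.5 else "STOP"}
```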
12. Sitemap Auto-Sync
approved-urls.md was manually synced (last: 5 Feb). New articles published after that date triggered false validation flags. Phase 1.0 now auto-syncs from live sitemaps before preflight. Sitemap is source of truth. The whitelist file is a cache, not a maintained document.
Automate from the source of truth. Manual sync processes decay.
Create-Articles
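The auto-sync can be sketched in a few lines. Regex parsing and a flat sitemap are simplifying assumptions; real sitemaps are often nested indexes and deserve a proper XML parser:

```python
import re
import urllib.request

def parse_sitemap(xml):
    """Extract <loc> URLs from sitemap XML (regex parse is a sketch)."""
    return re.findall(r"<loc>\s*(.*?)\s*</loc>", xml)

def sync_approved_urls(sitemap_url):
    """Phase 1.0 sketch: the live sitemap is the source of truth;
    the whitelist file is only a cache of this result."""
    with urllib.request.urlopen(sitemap_url) as resp:
        return parse_sitemap(resp.read().decode("utf-8"))
```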
13. Opposite-Line Hard Stop: Generation-Time Prevention
Test article contained a long-form opposite-line pattern that slipped validation. Root cause: voice.md examples covered only short-form patterns, Q5 quick-check was out of sync with full PAT-014, and no generation-time self-check existed. Fix: expanded voice.md with long-form examples, added a self-check warning at Phase 3.2 so the agent checks during writing (not just validation), and promoted Q5 to a hard stop. Validation should confirm, not discover.
Generation-time self-checks prevent problems before validation catches them.
Create-Articles
14. Published Date + Dual-Date Schema
Readers and LLMs need to distinguish “when was this first published” from “when was it last updated.” Added two-line footer with immutable published date. Schema contract: datePublished matches footer, dateModified matches .post-meta. First publication date never changes.
Create-Articles
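The dual-date contract, sketched as the JSON-LD payload it produces. The field set is illustrative; the real schema carries far more properties:

```python
def article_schema(date_published, date_modified, headline):
    """Sketch of the contract: datePublished is immutable and matches the
    footer; dateModified matches the visible .post-meta line. Both are full
    ISO 8601 with time and timezone, per the v7.9.25 change."""
    return {
        "@context": "https://schema.org",
        "@type": "Article",
        "headline": headline,
        "datePublished": date_published,  # first publication; never changes
        "dateModified": date_modified,    # bumped on every edit
    }
```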
15. Contrast Pattern Gap
During production article build, two issues passed validation that should have been caught: an opposite-line pattern slipped through PAT-014, and a broken URL wasn’t caught because link liveness checks weren’t enforced as blocking. Expanded PAT-014 with two new checks, promoted STR-020 from advisory to blocking gate, and added curl-based link liveness to quick-check.
Production articles are the real test suite.
Create-Articles
16. Workflow Enforcement: Agent Skipped Everything
Asked to write an article, the agent loaded voice.md and a template but bypassed the entire 7-phase workflow: no preflight checklist, no source verification, no validation, no image handoff. Root cause: CLAUDE.md routed to a directory (“Load create-articles/system-files/”) which the agent treated as a buffet, cherry-picking files instead of following the prescribed sequence. Fix: CLAUDE.md now names exact workflow files. Mandatory first output is the preflight checklist. Mandatory second output is the source verification table.
Route to files, not directories. Agents treat options as a buffet.
Create-Articles
17. Seven Versions in One Day: Content Profiles to CSS Dividers
Seven incremental versions in a single day, each addressing a specific issue found during production runs. Content profiles (v7.9.7) gave validation context by content type. Video embeds (v7.9.8) and visual source class (v7.9.9) were cross-engine changes requiring Create-Images coordination. Explore section redesign (v7.9.10–12) separated editorial emphasis from navigation. CSS dividers (v7.9.13) handled spacing entirely in WordPress CSS. Small, focused changes with immediate field testing.
Ship small, test in production, iterate same-day. Seven focused versions beat one ambitious release.
Create-Articles
18. The Engine Split: Context Window Survival
v7.9.1 loaded at 84,312 tokens. Context window compacted before the agent could finish writing. Separated into two engines: Create-Articles at 68,994 tokens (articles, validation, workflow) and Create-Images at 37,399 tokens (14 SVG templates, image validation, tool routing). 18% reduction. Each engine fits in its own context window. This decision, made 4 days before Anthropic shipped Agent Teams, maps directly to their architecture: each teammate gets its own context window, owns separate files, and integrates through defined contracts.
Context window is a finite resource. Separate what from how.
Create-Articles
19. Visual Rhythm System
Articles were text walls with 1 to 2 visuals across 8+ sections. Built a visual weight system: SVG/VIP = 1.0, table/callout = 0.5, pro-tip = 0.25. Minimum formulas by profile (thought-leadership = prose / 400). Mandatory visual positions: after hero, after intro, before conclusion. Phase 2.5 “Visual Plan” added to workflow. 14 lab-tested SVG templates (A through N).
Rules without examples get ignored. Templates without rules sit unused. You need both.
Create-Articles
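The weight system and minimum formula from this entry, as a checkable function. The weights and divisors are the ones stated above; the mandatory-position rules are omitted for brevity:

```python
import math

WEIGHTS = {"svg": 1.0, "vip": 1.0, "table": 0.5, "callout": 0.5, "pro_tip": 0.25}
DIVISORS = {"thought-leadership": 400, "operator-log": 600}

def visual_plan_check(prose_words, elements, profile="thought-leadership"):
    """Phase 2.5 sketch: total visual weight must meet the profile minimum
    (prose word count divided by the profile's divisor, rounded up)."""
    required = math.ceil(prose_words / DIVISORS[profile])
    actual = sum(WEIGHTS[e] for e in elements)
    return {"required": required, "actual": actual, "pass": actual >= required}
```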
20. Context Engineering as Core Operator Skill
Updated Operator Function and Pile of Parts definitions to position context engineering as a core Operator skill. The Pile of Parts Problem is really context fragmentation: tools fail because they lack shared context (brand voice, ICP, business rules). Context engineering is the discipline that fixes it. Created new definition page with AI Summary block for LLM citation.
Context engineering is not prompt engineering. It’s designing the information layer all AI systems share.
Create-Articles
21. AI-SEO Schema Rebuild for LLM Authority
Feedback from Grok and Gemini revealed schema gaps for LLM citation optimization. Added TechArticle type, keywords array, proficiencyLevel, DefinedTerm schema for key concepts, AI Summary blocks (hidden structured data for RAG systems), semantic HTML wrappers (article/section tags), and dfn tags for citation-ready definitions. Schema is not just for Google. It’s for LLM training data.
Schema serves two masters now. Google for rich results, LLMs for citation authority.
Create-Articles
22. Voice Drift from Skill File
Output tone inconsistent. Some articles sounded like me, others didn’t. Root cause: hendry-voice skill file was updated but voice.md in Create-Articles had older rules. Two sources of truth = drift. Aligned and added reference to skill as canonical source.
Single source of truth for voice. Reference, don’t duplicate.
Create-Articles
23. Renamed Content System to Create-Articles
Framework alignment. The AI Marketing Framework has a CREATE engine. This system is a specific implementation. Renamed from “Content System” to “Create-Articles” across all files. Clearer mental model.
Naming should encode hierarchy.
Create-Articles
24. TOC Missing Despite Rule Existing
Posts with 7 H2 sections generated without a table of contents. The TOC requirement existed in components.md but wasn’t enforced in validation. Tier 3 “advisory” meant the LLM skipped it. Elevated to structural validation with auto-fail.
Advisory rules get skipped. Make requirements structural.
Create-Articles
25. Schema Mentions Must Match Content
Schema had mentions (tools, companies) that weren’t in the article body. Orphaned schema entities. During research, noted tools to potentially mention, added to schema, but they didn’t make final draft. Solution: Every entity in schema mentions must appear in visible content.
Schema is a contract with search engines. Don’t promise what you don’t deliver.
Create-Articles
26. Pre-Output Checklist
Articles passing validation but still having issues. Individual checks passed, cross-checks weren’t happening. TOC didn’t match H2 IDs. Solution: Pre-output checklist runs AFTER validation passes. Structural gates must pass (TOC present, schema wordCount, mentions match content). Quality gates flag for review.
Validation checks parts. Pre-output checks the whole. Both are necessary.
Create-Articles
27. Schema Timing Bug
Schema wordCount validation failed on every run. Schema said 1650, actual content was 1366. Spent 4 hours debugging. Root cause: Schema populated from content brief (target: 1650) BEFORE content generated. Validating against a goal, not a measurement. Solution: Separate schema properties by timing. Static values during generate, measured values post-generate from actual output.
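The fix can be sketched as a two-pass split. Field names here are illustrative; the point is that measured properties never exist until a post-generate pass computes them from real output.

```python
import re

STATIC_FIELDS = {"headline", "author", "datePublished"}  # known before generation
# Measured fields (e.g. wordCount) are deliberately absent at generate time.

def build_schema(brief: dict) -> dict:
    """Generate-time pass: copy static values only. Targets like the brief's
    wordCount goal are never written into the schema."""
    return {k: v for k, v in brief.items() if k in STATIC_FIELDS}

def enrich_schema(schema: dict, content: str) -> dict:
    """Post-generate pass: measure from the actual output, never from the brief."""
    schema = dict(schema)
    schema["wordCount"] = len(re.findall(r"\S+", content))
    return schema
```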
Measured values must come from output, not input. Never validate predictions against themselves.
Create-Articles
28. Tables Hard to Scan
Feedback that tables looked flat, hard to read with many rows. Added zebra striping to design tokens. Centered footer alignment.
Visual rhythm aids scanning.
Create-Articles
29. Rebuilt Validation as 3-Tier System
Previous validation was binary. Some checks are deterministic, some require judgment. Treating them the same caused problems. Tried single pass with confidence scores (too complex), all advisory (too many false positives), all auto-fix (broke things needing judgment). Solution: Tier 1 (Pattern) = 100% reliable, auto-fix. Tier 2 (Structural) = 95% reliable, evidence-based. Tier 3 (Semantic) = 70 to 80% reliable, advisory only.
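The tier-to-automation mapping, sketched with a hypothetical check record (IDs and field names are illustrative):

```python
from dataclasses import dataclass

@dataclass
class Check:
    check_id: str
    tier: int        # 1 = pattern (100%), 2 = structural (95%), 3 = semantic (70-80%)
    passed: bool

def route(check: Check) -> str:
    """Match automation level to confidence level:
    Tier 1 failures are auto-fixed, Tier 2 block, Tier 3 only advise."""
    if check.passed:
        return "pass"
    return {1: "auto-fix", 2: "block", 3: "advise"}[check.tier]
```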
Match automation level to confidence level.
Create-Articles
30. LLMs Lie About Validation
Asked Claude to validate the article had 10+ external links. Response: “PASS – Article contains sufficient external links.” Actual count: 4. The LLM pattern-matched the validation instruction to the expected output without doing the work. It generated a plausible validation result without checking.
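What "show me what you found" looks like in code: extract the evidence deterministically and count it, rather than asking the model for a verdict. A sketch; the regex and domain are illustrative assumptions.

```python
import re

def external_links(html: str, own_domain: str = "hendry.ai") -> list[str]:
    """Return every external href found: the evidence, not a verdict."""
    hrefs = re.findall(r'href="(https?://[^"]+)"', html)
    return [h for h in hrefs if own_domain not in h]

def validate_links(html: str, minimum: int = 10) -> dict:
    """Report the found links alongside the pass/fail so the result is auditable."""
    found = external_links(html)
    return {"found": found, "count": len(found), "passed": len(found) >= minimum}
```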
Evidence-based validation defeats hallucination. Don’t ask “did this pass?” Ask “show me what you found.”
Create-Articles
31. Gen 1 Complete. Production Stable
Cumulative fixes from v6.9.9.3 through v6.9.9.7. CMS theme typography aligned. Footer format updated. FAQ spacing reduced. Body font mismatch fixed. First fully production-stable version. Gen 1 complete.
Create-Articles
32. Font Mismatch
Mobile text looked “tight” compared to native CMS pages. CSS variable for body font was set to ‘Barlow Semi Condensed’ but CMS theme used regular ‘Barlow’. I had assumed the font from looking at headings. Audited against actual theme settings to fix.
Match the environment. Extract values from the destination system, don’t assume.
Create-Articles
33. Regression Check
New version broke things that previously worked. Missing reference files. Added regression testing against previous stable version before release.
Test against the last known good state.
Create-Articles
34. Header/Footer Drift
First generated post had correct header and footer. Second post drifted. Logo changed from “Hendry” to “Hendry.ai”. URL changed from “/builders-log/” to “/blog/”. Social icons switched from LinkedIn+Email to LinkedIn+Twitter. The LLM improvised on elements that should have been verbatim. Created a “Verbatim Elements” section with exact HTML that cannot be modified.
Agents improvise unless explicitly forbidden. Lock down what shouldn’t change.
Create-Articles
35. FAQ Count Insufficient
Posts generating 3 to 5 FAQs. Need minimum 6 for schema richness. Added explicit minimum, updated validation checklist, added to common mistakes table.
Make requirements explicit and enforceable.
Create-Articles
36. Mobile Padding Iteration
Generated pages had tighter spacing than existing website. Tested values iteratively: v6.8 at 3.2vw was too tight. v6.9 at max(5vw, 16px) was still tight. v6.9.2 at max(10vw, 40px) for 768px finally matched. Three attempts with screenshot comparisons.
Test against production. Screenshots don’t lie.
Create-Articles
37. CMS Fragment Mode
Generated complete HTML pages with DOCTYPE, html, head, body tags. CMS theme already provides page structure. Result: duplicate headers, duplicate navigation, style conflicts. Switched to fragment mode: remove page wrapper, wrap content in namespaced div, scope CSS to that div.
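Fragment mode can be sketched roughly as follows. The namespace class and the naive CSS scoping here are illustrative assumptions, not the engine's real implementation (real scoping would need a CSS parser).

```python
def to_fragment(body_html: str, namespace: str = "ha-article") -> str:
    """No DOCTYPE/html/head/body: just the content, wrapped in a namespaced div."""
    return f'<div class="{namespace}">\n{body_html}\n</div>'

def scope_css(css: str, namespace: str = "ha-article") -> str:
    """Naively prefix each rule's selector with the namespace class so styles
    cannot leak into the host CMS theme. Handles only simple 'sel { ... }' rules."""
    out = []
    for rule in css.split("}"):
        if "{" not in rule:
            continue
        selector, body = rule.split("{", 1)
        out.append(f".{namespace} {selector.strip()} {{{body}}}")
    return "\n".join(out)
```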
Design for integration, not standalone. Output format depends on destination.
Create-Articles
38. GitHub Versioning + Golden Reference Files
Set up version control. Created golden reference files for each format type. Also fixed CSS variable naming mismatch. Reference files used --color-primary but canonical CSS used --c-primary. Generated content was missing rust/orange underlines.
One source of truth. Name files so the canonical source is obvious.
Create-Articles
39. Rolled Back to Simplicity
v3.8 broke GPT compatibility. Rolled back to v2.2 structure. Added one thing: a completed HTML example showing exactly what good output looks like. GPT followed workflow correctly again. Stability restored.
Show don’t tell. When instructions fail, examples succeed.
Create-Articles
40. The 89-Line Disaster
Added 89 lines of validation checklists with MANDATORY and CRITICAL warnings everywhere. Renamed folders. Added 8 separate validation sections. GPT stopped following the workflow entirely. Output quality dropped. Workflow grew from 205 to 294 lines. The LLM got confused by competing priorities.
One example beats 89 lines of instructions. LLMs pattern-match examples better than they parse rules.
Create-Articles
41. Restructured to Core/Create Architecture
Separated core (brand, design) from creation (templates, workflow) in an attempt to add organization. The restructure would later break things.
Create-Articles
42. First Content Generation Workflow
Added content generation workflow with templates for different formats. First functional version that could produce output.
Create-Articles
43. Initial Folder Structure
Created basic folder structure with placeholders for brand files and templates. The skeleton that would hold Foundation outputs. Starting point for everything that followed.
Create-Articles
44. Engine Output: POSITION (Competitive Differentiation)
Created positioning.md with the “Pile of Parts” narrative and differentiation framework. Trigger: Early content drafts sounded like every other AI marketing take. Generic “AI is transforming marketing” noise. Fix: Documented the specific counter-position: most teams have tools without architecture. Framework-first thinking, not tool-first. This positioning now gates every piece of content. If it doesn’t reinforce the narrative, it doesn’t ship.
Positioning isn’t a tagline. It’s a filter that kills generic content before it’s written.
Create-Articles
45. Engine Output: UNDERSTAND (ICP Development)
Created icp.md with Strategic Sarah persona and pain point mapping. Trigger: Content was too broad. Trying to speak to everyone from junior marketers to CEOs. Fix: Hard-coded the ICP: marketing leaders who control budget, understand the gap between AI promise and delivery, and are asking “how do I prove ROI?” Every content brief now requires ICP alignment check before generation starts.
Generic content comes from undefined audience. ICP precision forces content precision.
Create-Articles
46. Engine Output: DEFINE (Voice & Brand)
Developed voice.md and brand.md. This is the DEFINE engine in action. Trigger: Test outputs sounded like generic AI-written content. Polished but empty. Fix: Extracted the “Practitioner, not Pundit” heuristics. Concrete rules: no hedge words, show the work, forward focus not retrospective. Voice rules now run as Tier 1 validation. Pattern-matchable, auto-fixable.
Voice isn’t vibe. It’s enforceable rules that kill generic output.
Create-Articles

Create-Images Logs

25 versions. Born from the engine split. SVG diagram generation, hero image production, perception rules that bridge LLM coordinates to human vision, and a 9-check exit gate with a feedback receiving protocol.

01. 760px-Proof Hero: Primitive Library + Render-Width Tuning
Hero spec reworked for 760px article body render width. ViewBox stays 1200x675 (OG-image ready) but all font, stroke, and clearance minimums tuned for 0.633x downscale. Articles render hero SVGs at 760px wide — previous minimums (28/20/18px) were unreadable at actual display size. New minimums: titles 38px+, headings 26px+, body 22px+, absolute floor 18px. Primitive library added (P1-P8): Content Box, Schema Card, Document Icon, Flow Arrow, Browser Frame, Database Cylinder, Stat Callout, Dashed Connection. Golden example: hero-zone-architecture.svg. Sub-18px text banned. Hardcoded hex banned — all colors via design tokens.
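The downscale arithmetic behind the new minimums, as a sketch (role names are illustrative):

```python
RENDER_WIDTH = 760     # article body render width
VIEWBOX_WIDTH = 1200   # OG-image-ready viewBox

MINIMUMS = {"title": 38, "heading": 26, "body": 22}  # viewBox units
ABSOLUTE_FLOOR = 18    # sub-18px text banned

def rendered_px(vb_size: float) -> float:
    """What the reader actually sees: a 1200-unit viewBox shown at 760px
    is a 760/1200 = 0.633x downscale."""
    return vb_size * RENDER_WIDTH / VIEWBOX_WIDTH

def passes(role: str, vb_size: float) -> bool:
    """Font size must clear both the role minimum and the absolute floor."""
    return vb_size >= max(MINIMUMS[role], ABSOLUTE_FLOOR)
```

At the old 28px title minimum the rendered size was about 17.7px, below the readability floor the new 38px minimum restores.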
Design for the render width, not the viewBox. 1200px SVGs that render at 760px need their minimums tuned for 0.633x.
Create-Images
02. SVG-Only: Two Visual Systems + Programmatic Generator
Major version upgrade. Now SVG-only and model-agnostic — removed all multi-model tool-routing code. Two distinct visual systems: hero SVGs (freeform compositions, solid fills, inverted text) vs body SVGs (programmatic generation via a dedicated generator script, tinted surfaces, dark text). Programmatic generator provides 6 inline templates (flow, comparison, timeline, stat, barChart, grid) with JSON schema input — agent describes data, generator produces consistent SVGs. All SVG templates migrated to CSS custom properties for theme adaptability. Hero spec rebuilt: 5 composition patterns (A-E), codified visual tokens, consolidated from 4 files into one. Exit gate simplified to 8 checks. Deprecated all model-specific guideline files and the tool selection matrix.
Removing tool routing didn't remove capability — it removed decision fatigue. One tool, two modes, zero routing logic.
Create-Images
03. Headless Native: VIP Blocks to CMS Fields
Major version upgrade. Input changed from HTML VIP div blocks to headless CMS article visuals array and hero fields. Output via CMS API instead of HTML figure replacement. Exit gate updated: '0 VIP entries in fields' replaces '0 VIP divs in HTML'. All SVG generation rules preserved — templates A-N, geometry, brand colors, font rules, perception rules, 9 exit gate checks all unchanged. Only the input/output contract changed.
When the I/O format changes but the generation rules don't, you know the rules were well-abstracted.
Create-Images
04. Arithmetic-Proof Generation: Math Before Placement
Three structural fixes for recurring SVG alignment errors. (1) Proportional text spacing formula replaces fixed lookup table — clamp(16, round(box_height/(num_lines+2)), 28) scales with box height instead of using hardcoded values per line count. (2) EG-009 arrow breathing arithmetic — every arrow must show math in an SVG comment before placement. 9 exit gates (was 8). (3) Mandatory arithmetic comments for all boxes and arrows — forces the agent to compute coordinates before writing them. Visual self-review loop added as Step 4.5: agent renders SVGs in preview server and inspects screenshots before presenting to user. Arrows with arithmetic comments were correct on first attempt — most effective alignment rule so far.
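The spacing formula written out, a direct transcription of clamp(16, round(box_height / (num_lines + 2)), 28):

```python
def line_spacing(box_height: float, num_lines: int) -> int:
    """Proportional text spacing: scales with box height instead of a
    hardcoded lookup per line count, clamped to the 16..28 readable band."""
    return max(16, min(round(box_height / (num_lines + 2)), 28))
```

A 120px box with 3 lines gets 24px spacing; a cramped 60px box clamps up to 16; a tall 400px box clamps down to 28.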
Force the agent to show its math. Arithmetic comments before placement catch errors that visual inspection misses.
Create-Images
05. Prompt Fidelity Gate + Handoff Alignment
Three versions tightening the boundary between Create-Images and its neighbours. VIP Prompt Fidelity (v2.0.23, EG-008): the exit gate now extracts named visual elements from VIP prompt text (arrows, arcs, labels, nodes) and verifies corresponding SVG elements exist. Triggered by a missing return arrow that the prompt explicitly requested. Exit gate at 8 checks. Rule Health Check (v2.0.24): cross-engine pipeline version sync with Create-Articles v7.9.35 and Create-Compiler v1.3.3. PAT-017 ID collision documented (same ID, different checks in two engines). Handoff Alignment (v2.0.25): source attribution standardised to HTML everywhere per contract v1.2. VIP data-vip-id required in all integration flows.
Contracts between engines need explicit, machine-enforceable rules. Implicit assumptions drift with every version bump.
Create-Images
06. Feedback Receiving Protocol
Create-Images can now receive structured feedback from Create-Compiler. The Feedback Receiving Protocol processes Tier 0 reverse manifest entries at session start (before Step 1). Each entry adds or strengthens a validation rule in the image engine. Misroute detection rejects Tier 1 and Tier 2 entries (those belong to Create-Articles). This completes the feedback loop: Compiler detects an SVG issue, classifies it as Tier 0 (preventable), writes a reverse manifest, and Create-Images absorbs the new rule on its next run.
Every engine that receives feedback becomes self-improving. Close the loop for all producers, not just one.
Create-Images
07. Exit Gate: Centering Enforcement at 7 Checks
Two versions hardening SVG output quality. Rule 11 (Inline Centering, v2.0.20): every SVG must include inline centering styles (display: block; margin: 0 auto). The recurring left-alignment bug appeared across multiple templates and hero images. Solved permanently by making centering a structural requirement with an exit gate check. Exit Gate added: 6 checks (EG-001 to EG-006) that must pass before any SVG is emitted. Text Centering Enforcement (v2.0.21): mandatory centering formula with arithmetic self-check.
Exit gates catch what generation-time rules miss. No SVG leaves the engine without passing structural checks.
Create-Images
08. Hero Skip Prevention: Image Scan Summary Gate
When an article had existing inline SVGs from a previous draft, the agent skipped hero SVG generation entirely. Existing inline SVGs do not satisfy the hero requirement. Added a mandatory Image Scan Summary at Step 1 that forces explicit enumeration: hero VIP found or not, inline VIPs counted, hero status explicitly tracked as NEEDS GENERATION or ALREADY SVG.
Explicit status checks prevent implicit assumptions. Make the agent state what it found.
Create-Images
09. Hero Production Hardening: From 12 Iterations to 10 Rules
First-pass hero SVGs consistently failed on layout and centering. After 12+ iteration cycles across 3 hero images in one session, codified the fixes as rules. Rule 8 (Fill the Frame): content must span 85%+ width and 75%+ height. Rule 9 (Center Everything): global and local centering with tight tolerance. Rule 10 (Edge-Based Arrows): calculate from illustration edges, not stage centers.
Codify the fix patterns, not just the rules. Rules describe success; patterns describe how to get there.
Create-Images
10. Font Scale Rule + Source Attribution Journey
v2.0.8 moved source attribution out of SVGs into HTML. v2.0.14 moved it back for left-aligned templates (D, F, H, J, L) where HTML alignment broke. Meanwhile, Template A was enlarged to 900×267 and needed a font scale rule: wider viewBox means visually smaller text unless you compensate. Formula: scale_factor = viewBox_width / 540.
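The compensation formula as code, using 540 as the baseline viewBox width per the rule above:

```python
BASE_VIEWBOX_WIDTH = 540  # width the original font sizes were tuned for

def scaled_font(base_size: float, viewbox_width: float) -> float:
    """A wider viewBox renders text visually smaller at the same display width;
    compensate with scale_factor = viewBox_width / 540."""
    return base_size * viewbox_width / BASE_VIEWBOX_WIDTH
```

So when Template A grew to 900 units wide, a 14px label needed to become roughly 23.3px to look the same size on the page.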
Test architectural decisions across all variants. What works for centered templates fails for left-aligned ones.
Create-Images
11. Visual Perception Rules: The LLM Generates Numbers, the Human Sees Shapes
LLMs place SVG elements at mathematically correct coordinates, but the visual output looks wrong. Opacity 0.1 on cream backgrounds is invisible. An element at y=100 with 14px font has a visual footprint extending to y=92. Arrows using gap/2 math visually crowd source shapes. Built 7 core perception rules: visibility floor (minimum opacity by element type), clearance principle (visual footprint extends beyond coordinates), arrow breathing room, connection integrity (lines must touch shape edges), grid-text separation, proportional weight hierarchy, and context-aware contrast modes for inline vs hero SVGs.
The LLM generates numbers. The human sees shapes. Bridge the gap with explicit perception rules.
Create-Images
12. Template Production Hardening
Six versions in one day bringing 14 SVG templates from lab-tested to production-ready. ViewBox heights standardised (12px padding rule). Arrow spacing formula locked (space_unit = gap / 4). VIP ID passthrough for Create-Compiler matching. Source lines moved to HTML (later reversed for some templates). Timeline enlarged twice (600 to 700 to 900px). Template I–L added for featured timelines.
Create-Images
13. Engine Born from the Split
Create-Images exists because Create-Articles hit 84K tokens. Absorbed 14 SVG templates, PAT-010/015/016 validation, and tool routing from Create-Articles. Tool stack validated: Claude SVG for diagrams (exact brand colours, deterministic), Gemini for conceptual illustrations (best creative output), Grok Aurora for human figures (best prompt adherence). DALL-E dropped after testing. Result: 37,399 tokens standalone.
Focused engines that do one thing well. Context pressure forces good architecture.
Create-Images

Create-Compiler Logs

7+ versions. Started as the downstream assembler that merged articles with images; now a field validator running 22 checks and a review agent, classifying issues, and sending feedback back to upstream engines.

01. Validator, Not Assembler: 22 Field-Based Checks
Major version upgrade. Role changed from HTML page assembler to field validator and enricher. Input via headless CMS API instead of HTML file reads. Old CV-xxx (CSS checks) and RA-xxx (render checks) replaced with FC-xxx (8 field completeness), CE-xxx (6 cross-engine consistency), CQ-xxx (8 content quality) = 22 total checks. Auto-fix via API for correctable issues. New enrichment step: wordCount and proficiencyLevel computed and written back to article. VIP matching deleted (handled by CMS field relationships). Post-compile-checks deleted (all checks now in compile validator). Reverse manifest system preserved for upstream feedback.
Moving from assembler to validator simplified the engine by removing an entire responsibility. The CMS handles assembly now — the compiler just checks the result.
Create-Compiler
02. Rule Health Check + Boundary Alignment
v1.3.3 Rule Health Check: cross-engine pipeline version sync. README corrected (CV count to 14, was reporting 11). Folder structure updated to v1.3.3. Paired engine versions updated. 06-MEMORY/README.md CV count corrected (was 9). Intentional redundancy documented: CV-012/STR-023 (domain frequency at compile vs generation), CV-013/SEM-002 (citation quality), CV-014/PAT checks (voice at compile vs generation) — same concern checked at different pipeline stages, by design. v1.3.4 Boundary Alignment: 25 discrepancies found and fixed across Images-to-Compiler boundary. 10 cross-boundary example fixes (viewBox 600→700, data-vip-id added, dead inline SVG refs removed). Post-compile-checks.md deprecated (zero unique checks remaining). Verification manifest synced.
Intentional redundancy is a feature: checking the same rule at generation and compile is a safety net, not duplication.
Create-Compiler
03. Episodic Memory + Voice Pattern Gate
Two versions expanding the Compiler from structural validation into voice awareness. Validator Expansion (v1.3.1): CV-012 (domain frequency, no domain more than twice) and CV-013 (citation specificity, link text must contain year, report name, or named document). Triggered by martinfowler.com appearing 4 times in the first production compile. Compile Validator at 13 checks. Voice Pattern Gate (v1.3.2, CV-014): deterministic PAT checks against all visible prose in compiled HTML. Runs PAT-001 (em dashes), PAT-002 (en dash ranges), PAT-004 (banned words), PAT-013 (filler words), and PAT-014 (opposite-line patterns, 7 sub-checks). The Compiler is the only gate where all visible text from all engines exists in one file. Compile Validator at 14 checks.
Memory is infrastructure. Without structured review logs and retrieval conventions, the system cannot learn from its own history.
Create-Compiler
04. The Feedback Loop: From Issue Detection to Closed-Loop Remediation
Two versions transforming the Compiler from a passive assembler into an active quality system. Issue Router (v1.2.0): 4-tier classification for all Compiler-discovered issues. Tier 0 (preventable: add rule to prevent recurrence), Tier 1 (auto-patch: deterministic fix to source and compiled), Tier 2 (scoped re-gen: fix request routed upstream), Tier 3 (structural: escalate to operator). Fix-forward-only officially banned as an anti-pattern. Reverse Manifest (v1.3.0): closed-loop feedback to upstream engines. 5-step Commit-Back Cycle generates structured manifests. Entry lifecycle: OPEN to RESOLVED to VERIFIED to CLOSED.
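A sketch of the router and the manifest lifecycle. The action strings are illustrative; tiers and lifecycle states are from the log.

```python
TIER_ACTIONS = {
    0: "add prevention rule upstream",    # preventable
    1: "auto-patch source and compiled",  # deterministic fix, applied to both
    2: "scoped re-generation upstream",   # fix request routed to the source engine
    3: "escalate to operator",            # structural
}

LIFECYCLE = ["OPEN", "RESOLVED", "VERIFIED", "CLOSED"]

def route_issue(tier: int) -> str:
    """4-tier classification. Note there is no 'fix compiled output only'
    action: fix-forward-only is banned as an anti-pattern."""
    return TIER_ACTIONS[tier]

def advance(status: str) -> str:
    """Move a reverse-manifest entry one step through its lifecycle."""
    i = LIFECYCLE.index(status)
    if i == len(LIFECYCLE) - 1:
        raise ValueError("entry already CLOSED")
    return LIFECYCLE[i + 1]
```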
Fix-forward-only creates drift between source and compiled. Every fix must flow back to the source.
Create-Compiler
05. Quality Gate: Compile Validator + Review Agent + Episodic Memory
When three engines assemble one article, cross-boundary errors emerge that no single engine can catch. Built a two-layer quality gate. The Compile Validator runs 9 deterministic checks (schema wordCount post-assembly, TOC anchor resolution, FAQ schema-to-content match, CSS class integrity, date consistency). All blocking. The Review Agent asks 7 fresh-eyes questions (opening quality, citation specificity, voice compliance, content citability) with context isolation: it receives only the compiled HTML, brief, and voice rules. No generation history. Advisory only. Every compile generates a review report saved as episodic memory.
Validation-as-data. Review logs let the system prune its own rules with evidence, not guesswork.
Create-Compiler
06. Article Assembly Pipeline
The engine split created a handoff problem: Create-Articles outputs HTML with VIP markers, Create-Images outputs SVG figures. Something needs to merge them. Create-Compiler matches VIP blocks to generated figures by data-vip-id attribute (deterministic), runs post-compile validation (PAT-015 SVG quality, PAT-016 viewBox, structural integrity), and outputs a compile report documenting what was merged and validated.
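The deterministic matching step can be sketched as follows (data shapes are illustrative assumptions):

```python
def match_vips(vip_ids: list[str], figures: dict[str, str]) -> dict:
    """Match article VIP blocks to generated SVG figures by data-vip-id.
    Deterministic: a string key join, no LLM judgment involved."""
    matched = {vid: figures[vid] for vid in vip_ids if vid in figures}
    return {
        "matched": matched,
        "missing_figures": [v for v in vip_ids if v not in figures],  # article asked, images never delivered
        "orphan_figures": [f for f in figures if f not in vip_ids],   # images delivered, article never asked
    }
```

The two mismatch lists are exactly the cross-boundary errors neither engine can see alone.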
Multi-engine pipelines need an explicit assembly step. Don’t leave integration to chance.
Create-Compiler

Create-Social Logs

LinkedIn carousel generation. Stable at v1.0.2 since January 2026.

01. LinkedIn Carousel Generation
Built carousel generation for LinkedIn from article content. Three versions refined slide count, visual consistency, and CTA placement. Stable since initial build. No significant iterations required. The engine consumes article HTML and produces slide decks formatted for LinkedIn’s carousel spec.
Create-Social

Listen-Competitors Logs

7 versions from basic monitoring to synthesis-driven competitive intelligence.

01. Two-Layer Output
Generated 5 files and 3,000 words of competitive intelligence. The actionable insight was one sentence buried on page 4. Comprehensive collection but unusable output. Added synthesis phase with two-layer output: Brief (200 words, covering verdict, threat level, top 3 insights, one action) and Full Report (2,000+ words with all evidence).
Comprehensive collection serves different purpose than decisive output. Separate them.
Listen-Competitors
02. Every Claim Must Have Source URL
Found myself writing intelligence briefs with claims I couldn’t trace back to source. Signal said “competitor claims 40% improvement” but couldn’t find where that came from. Added citation requirement: every claim must have a source URL. If you can’t cite it, don’t include it.
Uncited claims are hallucination risks. Forced citation eliminates fabricated intelligence.
Listen-Competitors
03. Authoritative Source Queries
LLMs are trained on and heavily cite certain sources. Missing these meant missing how the market talks about a competitor. Added required queries for every target: Reddit (practitioner discussion), Wikipedia (notability signal), YouTube (long-form content), LinkedIn articles, Medium posts. Search where LLMs train.
Search where LLMs train. These sources shape how AI represents the market.
Listen-Competitors
04. Counter-Position Scoring
Had 15 signals with no actionability indicator. Some were well-defended positions I shouldn’t challenge. Others were weak claims I could counter. Added counter-position score (1 to 5): 5 = they’re wrong (create counter-content), 3 = complementary (create complement), 1 = well-defended (learn from them).
Score before prioritizing. Not all signals deserve response.
Listen-Competitors
05. Weakness Probe
Intelligence reports read like competitor marketing pages. All positioning, no criticism. No failures. No gaps. Not useful for counter-positioning. Added dedicated weakness probe phase with explicit queries for problems, limitations, and debates. Found criticisms of a well-known framework that would never appear in the author’s own content.
Weaknesses are strategic gold. Actively search for problems, not just positions.
Listen-Competitors
06. LinkedIn Manual Paste
LinkedIn posts can’t be automatically scraped. Attempted workarounds failed or were fragile. Accepted the constraint. System prompts user to paste relevant posts manually. The manual step became a feature: human selection of “most relevant posts” is better than automated scraping of “all posts.”
Design around constraints. Sometimes manual steps improve outcomes.
Listen-Competitors
07. Early Synthesis Creates Confirmation Bias
Formed verdicts while still collecting signals. Subsequent searches confirmed initial impressions. Counter-evidence got downweighted. The “intelligence” was just my initial hunch dressed up with selective evidence. Fixed with strict phase separation: collection (Phases 1 to 4) must complete before synthesis (Phase 5).
Early synthesis creates confirmation bias. Collect comprehensively, then conclude. Not the reverse.
Listen-Competitors
08. Two-Phase Search
Searched “[competitor name] AI marketing” and got results for other people with the same name. Searched “AI marketing framework” and got generic content. Problem: searching in MY vocabulary, not THEIRS. Thought leaders brand their ideas with unique terms. Added Phase 1 (learn their vocabulary) before Phase 2 (search in their language).
Search in their language, not yours. Generic queries return generic results.
Listen-Competitors
09. HUNT/MONITOR Dual-Mode Architecture
Built Listen-Competitors as dual-mode signal detection: HUNT (outbound intelligence covering ICP watering holes, pain point mining, competitor content) and MONITOR (inbound signals covering brand mentions, AIO citations, intent data). Weight ratio shifts as brand matures: Cold Start 80/20, Growth 50/50, Scale 20/80.
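The maturity-to-weight mapping as a sketch (stage keys are illustrative):

```python
# HUNT/MONITOR weight ratio shifts as the brand matures.
STAGE_WEIGHTS = {"cold_start": (80, 20), "growth": (50, 50), "scale": (20, 80)}

def mode_weights(stage: str) -> dict:
    """Return the HUNT vs MONITOR effort split for a brand maturity stage."""
    hunt, monitor = STAGE_WEIGHTS[stage]
    return {"HUNT": hunt, "MONITOR": monitor}
```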
Detection and Generation are fundamentally different systems. Listen-Competitors uses monitors and scoring thresholds, not templates and validation rules.
Listen-Competitors

Replicate Logs

Testing if engines transfer to new brands. 48+ versions across 3 brand implementations proved the architecture is portable, not just Hendry-specific.

The Replication Journey: Started with the Hendry Content System (25 versions), then tested on three external brands. Each test compressed: Brand A took 2 weeks and 15+ versions. Brand B took 3 days and 5 versions. Brand C took 2 days and 5 versions. The pattern became formulaic.

01. Agents Interpret "Copy" as "Recreate Similar"
Workflow said “copy template to output folder.” Agent rebuilt the template from memory instead of copying the file. Output had wrong CSS, missing elements, HTML comments rendered as visible text. Changed instruction to explicit bash command: cp template.html output.html followed by str_replace for content slots only.
Force literal file operations. “Copy” means “recreate similar” to an LLM.
Replicate
02. Slot-Based JSON Architecture (97% Output Reduction)
Full HTML output was ~150KB. Every generation risked CSS corruption. Switched to slot-based architecture: agent outputs only JSON content (~5KB), assembly script merges with locked template. 97% reduction in agent output. Zero CSS drift since implementation.
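A minimal sketch of slot-based assembly, assuming {{name}} placeholders; the actual slot syntax and assembly script are not specified in the log.

```python
import json
import re

def assemble(template_html: str, slots_json: str) -> str:
    """Merge agent-produced JSON content into a locked template.
    The agent writes only the ~5KB JSON; markup and CSS never pass
    through the model, so they cannot drift."""
    slots = json.loads(slots_json)

    def fill(match: re.Match) -> str:
        return str(slots[match.group(1)])

    return re.sub(r"\{\{(\w+)\}\}", fill, template_html)
```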
Slot-based architectures eliminate CSS drift. Agents write content, not markup.
Replicate
03. Self-Contained Templates with Embedded Assets
Demo failed because external font CDN was blocked. Embedded Inter font as base64 directly in template (~1.3MB but works offline). Also embedded founder photos. Template is now completely self-contained with zero external dependencies.
Self-contained templates eliminate external dependencies. Demos must work offline.
Replicate
04. Shell Compliance Validation
Agent changed hyperlink color from brand red to default blue. Footer SVG stroke-width changed from 1.5 to 2. Small deviations that made output look “off.” Added explicit shell compliance checks: “Footer VERBATIM” and “Hyperlinks [COLOR]” in validation checklist.
Agents deviate from templates. Add explicit compliance checks for visual elements.
Replicate
05. Color Assumption Error
Built system using teal (#00A9A5) as primary color based on memory. Production website uses purple (#6f2791). Founder immediately spotted the error. Extract colors from production CSS, never assume from memory or screenshots.
Don’t assume colors. Extract from actual site CSS.
Replicate
06. Sites Use Multiple Font Families
Assumed one font family. Production site used four: Nunito (body), Cabin (nav), Roboto (labels), Montserrat (buttons). Each serves a different purpose. Typography extraction now includes font-by-purpose mapping, not just “the font.”
Verify fonts against production CSS. Sites use multiple families for different purposes.
Replicate
07. Brand-Agnostic Base System
After three brand tests, extracted the portable core. Universal system = 80% (workflows, validation, templates, CRITICAL-RULES.md). Brand-specific = 20% (voice, design tokens, ICP, shell HTML). New brand setup now takes ~40 minutes: 15 min brand research, 10 min CSS extraction, 10 min shell creation, 5 min assembly.
Build universal systems with brand-specific configuration. Architecture should transfer; only context changes.
Replicate
08. CSS Extraction is the Human Bottleneck
Attempted to automate design token extraction. Claude Cowork can’t use browser dev tools. Automated CSS parsing misses computed styles, pseudo-selectors, and platform-specific patterns (Framer loads CSS dynamically, HubSpot uses module CSS). Human extraction remains at ~10 to 15 minutes per brand. This is the irreducible manual step.
CSS extraction is the human bottleneck. Some steps cannot be automated.
Replicate
09. Configurable Comparison Entity
Original Listen-Competitors had comparison logic hardcoded. But the system could compare any target to any brand. Created Listen-Competitors-Replicate as separate fork with configurable comparison entity. User fills /01-brand/ folder (company, positioning, ICP, competitors). Phase 0 loads brand context before running. Same engine, different lens.
Build the specific version first, then generalize. Don’t over-architect early.
Replicate
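
The configurable-entity pattern amounts to loading the comparison lens before any analysis runs. A sketch of the Phase 0 idea; the file names inside the brand folder are illustrative, not the engine's actual layout:

```python
from pathlib import Path

# Phase 0: load brand context before any competitive analysis runs.
# The comparison entity is configuration, not code.
REQUIRED = ("company.md", "positioning.md", "icp.md", "competitors.md")

def load_brand_context(brand_dir: str) -> dict[str, str]:
    """Read the brand folder, failing fast if the lens is incomplete."""
    folder = Path(brand_dir)
    missing = [f for f in REQUIRED if not (folder / f).is_file()]
    if missing:
        raise FileNotFoundError(f"Phase 0 incomplete, missing: {missing}")
    return {f: (folder / f).read_text() for f in REQUIRED}
```

Same engine, different lens: swapping the folder swaps the brand every downstream phase compares against, with no code change.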

Key Principles

Patterns that keep showing up. Extracted from 105+ curated iterations across all engines. 75 principles and growing.

Top 10:

  1. The agent is disposable. The orchestration layer is permanent.

  2. The LLM generates numbers. The human sees shapes.

  3. Evidence-based validation defeats hallucination.

  4. Agents improvise unless explicitly forbidden.

  5. Context engineering is not prompt engineering. It's designing the information layer.

  6. One example beats 89 lines of instructions.

  7. Context window is a finite resource. Separate what from how.

  8. The problems come first, then the principles, then the product.

  9. Cross-session memory turns isolated agents into a learning system.

  10. Memory is infrastructure. Without it, the system cannot learn from its own history.

75 Principles (principle · source)

  1. One example beats 89 lines of instructions · Create-Articles v3.8
  2. Evidence-based validation defeats hallucination · Create-Articles v7.0
  3. Match automation level to confidence level · Create-Articles v7.0
  4. Measured values must come from output, not input · Create-Articles v7.0.2
  5. Validation checks parts. Pre-output checks the whole. · Create-Articles v7.0.2
  6. Advisory rules get skipped. Make requirements structural. · Create-Articles v7.0.2
  7. Agents improvise unless explicitly forbidden · Create-Articles v6.9.5
  8. Design for integration, not standalone · Create-Articles v6.8
  9. Match the environment. Extract values from destination. · Create-Articles v6.9.9.7
  10. One source of truth. Name files so canonical source is obvious. · Create-Articles v6.5
  11. Context window is a finite resource. Separate what from how. · Create-Articles v7.9.2
  12. Rules without examples get ignored. Templates without rules sit unused. · Create-Articles v7.9.1
  13. Route to files, not directories. Agents treat options as a buffet. · Create-Articles v7.9.14
  14. Generation-time self-checks prevent problems before validation catches them. · Create-Articles v7.9.17
  15. Classify quality during collection, not after. Left-shift the gate. · Create-Articles v7.9.19
  16. Cross-session memory turns isolated agents into a learning system. · Create-Articles v7.9.20
  17. System files are training data. They must follow the same rules they enforce. · Create-Articles v7.9.22
  18. Centralise messaging. When pillars live in 4 files, they drift in 4 directions. · Create-Articles v7.9.26
  19. When writing about specs, the spec is the source of truth. · AI-SEO v7.2
  20. "Loaded" is not "read." Gates must prove comprehension, not just access. · Create-Articles v7.9.30
  21. Downstream quality gates are only useful if they can talk back. · Create-Articles v7.9.28
  22. Undocumented patterns become invisible violations. · Create-Articles v7.9.32
  23. The LLM generates numbers. The human sees shapes. · Create-Images v2.0.11
  24. Focused engines that do one thing well. · Create-Images v2.0.0
  25. When you iterate 12 times on the same type, extract the pattern. · Create-Images v2.0.17
  26. Exit gates catch what generation-time rules miss. · Create-Images v2.0.20
  27. Explicit status checks prevent implicit assumptions. · Create-Images v2.0.19
  28. Every engine that receives feedback becomes self-improving. · Create-Images v2.0.22
  29. Fix-forward-only creates drift. Every fix must flow back to source. · Create-Compiler v1.2.0
  30. The agent is disposable. The orchestration layer is permanent. · System Architecture
  31. The problems come first, then the principles, then the product. · System Architecture
  32. Audit the audit trail. Documentation debt compounds silently. · System Architecture
  33. Separate strategy from execution. Different tools for different thinking modes. · System Architecture
  34. Audit across engines, not within them. Cross-boundary drift is invisible from inside. · System Architecture
  35. Voice isn't vibe. It's enforceable rules. · Create-Articles v0.1 (DEFINE)
  36. ICP precision forces content precision. · Create-Articles v0.2 (UNDERSTAND)
  37. Positioning is a filter that kills generic content. · Create-Articles v0.3 (POSITION)
  38. Search in their language, not yours · Listen-Competitors v3.1
  39. Uncited claims are hallucination risks · Listen-Competitors v3.3
  40. Search where LLMs train · Listen-Competitors v3.3
  41. Early synthesis creates confirmation bias · Listen-Competitors v3.1
  42. Weaknesses are strategic gold · Listen-Competitors v3.2
  43. Design around constraints · Listen-Competitors v3.1
  44. Agents interpret "copy" as "recreate similar" · Replicate Brand A v2.16
  45. Slot-based architectures eliminate CSS drift · Replicate Brand A v3.0.1
  46. Self-contained templates eliminate external dependencies · Replicate Brand B v3.0
  47. Don't assume colors. Extract from actual site. · Replicate Brand C v1.1
  48. CSS extraction is the human bottleneck · Replicate v3.0
  49. Schema serves two masters: Google and LLMs · AI-SEO v7.1
  50. Context engineering is not prompt engineering. It's designing the information layer. · AI-SEO v7.1.1
  51. Contracts between engines need explicit, machine-enforceable rules. · Create-Images v2.0.25
  52. Memory is infrastructure. Without it, the system cannot learn from its own history. · Create-Compiler v1.3.2
  53. Version propagation across engines is the most common drift vector. · System Architecture
  54. Production compiles are the real test suite for upstream engines. Every compile run reveals gaps the generator missed. · Create-Articles v7.9.33
  55. LLMs quantify experience to signal authority. The count itself is the tell: real operators describe what happened, not how many times. · Create-Articles v7.9.36
  56. Changing the output format tests whether your validation rules are coupled to structure or to quality. Ours survived because they check content, not markup. · Create-Articles v8.0.1
  57. Force the agent to show its math. Arithmetic comments before placement catch errors that visual inspection misses. · Create-Images v2.0.26
  58. When the I/O format changes but the generation rules don't, you know the rules were well-abstracted. · Create-Images v3.0.1
  59. Removing tool routing didn't remove capability; it removed decision fatigue. One tool, two modes, zero routing logic. · Create-Images v4.0.0
  60. Design for the render width, not the viewBox. 1200px SVGs that render at 760px need their minimums tuned for 0.633x. · Create-Images v4.1.0
  61. Intentional redundancy is a feature: checking the same rule at generation and compile is a safety net, not duplication. · Create-Compiler v1.3.3
  62. Moving from assembler to validator simplified the engine by removing an entire responsibility. The CMS handles assembly now; the compiler just checks the result. · Create-Compiler v2.0.1
  63. Version bumps are cross-engine operations. Grep all repos after every bump, not just the changed engine. · System Architecture
  64. Shared context files need one canonical location. Per-engine copies drift with every version bump. · System Architecture
  65. Centralise context once. When the same file lives in 3 engines, it diverges in 3 directions. · System Architecture
  66. Migrate real content early. Speculative schema design misses every edge case that actual articles expose. · System Architecture
  67. Article templates needed 3 iterations because each was driven by migrating real content, not by speculative design. · System Architecture
  68. Launch preparation is a compression event: it surfaces every issue the development phase deferred. · System Architecture
  69. Automated security review finds real issues but can't assess business risk. The operator's job is triage. · System Architecture
  70. Moving to headless didn't just change the output format. It changed what each engine is responsible for. · System Architecture
  71. Document the pipeline before automating it. Cross-repo flows need a single spec both sides can reference. · System Architecture
  72. The first automated pipeline run reveals every assumption the manual process hid. Ship the pipeline, then fix what it exposes. · System Architecture
  73. Time-to-live depends on content complexity, not stack complexity. · System Architecture
  74. Conversation history is not version control. When three chat windows carry the system state, every session starts with a manual sync that drifts silently. · System Architecture
  75. The first engine built on new infrastructure validates the infrastructure more than itself. · System Architecture

Frequently Asked Questions
What are AI Marketing Operator Logs?
AI Marketing Operator Logs are Hendry Soong's public documentation of building AI marketing engines. Published at hendry.ai, each entry captures what changed, what broke, and what was learned. It's proof of iteration, not theory.
What engines are documented?
Seven engines are documented: Create-Articles (content generation, 46 entries), Create-Images (SVG diagram and hero image generation, 13 entries), Create-Compiler (field validation with 22 checks, review agent, and closed-loop feedback, 6 entries), Listen-Competitors (competitive intelligence, 9 entries), Create-Social (LinkedIn carousels, 1 entry), Create-Articles-Replicate (portable content engine tested on 3 brands, 8 entries), and Listen-Competitors-Replicate (portable competitive intel, 1 entry). Six are production engines; Listen-Competitors-Replicate is validated.
What is the multi-engine pipeline?
Articles flow through a three-engine pipeline: Create-Articles generates structured JSON with visual insertion points (0 SVGs), Create-Images generates SVG diagrams and hero images through a 9-check exit gate, and Create-Compiler validates fields with 22 checks, a 7-question review agent, 4-tier issue router, and closed-loop reverse manifests. Output publishes to a headless CMS via publishing SDK. Each engine has its own context window, own validation, and own versioning.
Why do the Create-Articles logs start at v0.1?
Because content generation doesn't start with templates. It starts with Foundation: brand voice (DEFINE), audience profiles (UNDERSTAND), and positioning (POSITION). The v0.x entries document this pre-flight work that makes the content engine useful.
How often are logs updated?
Bi-weekly. New entries are added as development continues. This page maintains the complete history.
Can I use these systems?
The logs share logic and principles, not full proprietary systems. For implementation support, contact Hendry directly.
What is the AI Marketing Framework?
The AI Marketing Framework is an 11-engine system for AI-powered marketing operations covering listening, creating, positioning, and measuring. The AI Marketing Operator Logs document practical implementation across 105+ curated iterations.
Why document failures publicly?
Failures contain the most valuable learnings. Documenting what broke and how it was fixed demonstrates real operational experience.
What does Create-Articles-Replicate prove?
Create-Articles-Replicate proves the content engine architecture is portable, not Hendry-specific. Tested across 3 brand implementations, the pattern became formulaic: 80% universal system, 20% brand-specific configuration. New brand setup now takes approximately 40 minutes.