Document Type: Protocol
Status: Active
Version: v1.3
Authority: MWMS HeadOffice
Applies To: Ads Brain campaign review, post-launch evaluation, and structured decision preparation
Parent: Ads Brain Canon
Last Reviewed: 2026-03-19
Purpose
The Ads Brain – Campaign Review Protocol defines how advertising campaigns are evaluated after launch.
Campaign performance must be assessed using structured criteria rather than emotional judgement, impatience, or optimism.
The purpose of this protocol is to:
• evaluate campaign health
• identify early performance signals
• detect failure conditions
• separate creative, audience, platform, and funnel issues
• determine whether campaigns should continue, iterate, pause, scale, retire, or escalate
Campaign decisions must always remain data-driven.
This protocol exists to turn post-launch observation into governed review.
Scope
This protocol applies to:
• Ads Brain campaign reviews
• post-launch campaign evaluation
• early signal interpretation
• creative and audience review
• scaling-readiness checks
• pause, iterate, continue, scale, retire, and escalate decisions
• structured review of delivery, engagement, conversion, and durability signals
This document governs how Ads Brain reviews campaigns after launch.
It does not govern:
• capital approval
• final survivability authority
• cross-brain financial override
• campaign creation procedures
• creative production procedures
• experiment validity by itself
• governance override authority
Those remain governed by:
• Finance Brain
• Experimentation Brain
• SIT Brain
• HeadOffice
• related Ads Brain systems
Definition / Rules
Linked Systems
• Ads Brain – Campaign Runbook
• Ads Brain – Creative Performance Scorecard
• Ads Brain – Creative Signal Interpretation Framework
• Ads Brain – Decision Engine
• Ads Brain – Scaling Intelligence
• Ads Brain – Experiment Registry
Core Principle
A live campaign is not self-explanatory.
Metrics do not interpret themselves.
Ads Brain must evaluate campaign performance using structured review logic so that weak signal, unstable signal, false optimism, and avoidable spend are not mistaken for meaningful progress.
Campaign review exists to answer:
What is actually happening in this campaign, and what action is structurally justified next?
Campaign Lifecycle Context
Campaigns move through several operational stages:
• Launch
• Learning
• Evaluation
• Iteration
• Scaling Review
• Pause
• Retirement
The Campaign Review Protocol applies primarily during:
• Learning
• Evaluation
• Iteration
• Scaling Review
This protocol is not a substitute for the launch checklist.
It begins after the campaign is already live.
Review Frequency
Campaigns should be reviewed at structured intervals.
Initial Review
Timing:
• 24-48 hours after campaign launch
Purpose:
• confirm campaign delivery
• confirm early engagement signals
• confirm no setup failure is distorting early learning
At this stage, the goal is not confident judgement.
The goal is structural confirmation.
Early Performance Review
Timing:
• after sufficient traffic has accumulated
Purpose:
• identify early indicators of creative or targeting performance
• detect severe weakness
• determine whether signal is maturing or merely noisy
Stability Review
Timing:
• once campaign metrics begin stabilising
Purpose:
• determine whether the campaign should continue, iterate, pause, or move toward scale review
• assess whether observed performance is durable rather than temporary
Scaling Review
Timing:
• before any significant budget increase or expansion proposal
Purpose:
• confirm scaling readiness
• confirm campaign durability
• confirm no structural warning exists beneath encouraging surface metrics
Primary Campaign Signals
Campaign reviews must consider several key performance signals.
1. Traffic Delivery
Verify that the platform is delivering traffic appropriately.
Indicators include:
• impressions
• reach
• ad delivery stability
• spend pacing
• learning-phase behaviour
• delivery anomalies
If delivery is extremely low or unstable, the cause may be targeting, platform restrictions, compliance issues, or configuration problems.
If delivery is unstable, interpretation must remain cautious.
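The delivery check above can be sketched as a simple guard. The thresholds (`min_impressions`, `spend_variation_limit`) and the pacing heuristic are illustrative assumptions for this sketch, not governed Ads Brain values.

```python
# Minimal sketch of a delivery guard. Threshold values are illustrative
# assumptions, not canonical Ads Brain limits.

def delivery_is_interpretable(impressions: int,
                              daily_spend: list,
                              min_impressions: int = 1000,
                              spend_variation_limit: float = 0.5) -> bool:
    """Return True only when delivery volume and spend pacing look
    stable enough to justify deeper interpretation."""
    if impressions < min_impressions:
        return False  # too little traffic to read anything
    if len(daily_spend) >= 2:
        mean = sum(daily_spend) / len(daily_spend)
        if mean == 0:
            return False  # campaign is not spending at all
        # flag unstable pacing when any day deviates too far from the mean
        if any(abs(s - mean) / mean > spend_variation_limit for s in daily_spend):
            return False
    return True
```

If this guard returns False, Step 1 of the review sequence applies: stop deeper interpretation and resolve the delivery issue first.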
2. Engagement Signals
Engagement signals indicate whether the creative captures attention.
Common signals include:
• click-through rate
• video watch behaviour
• retention behaviour
• engagement actions
• hold rate
• early scroll-stop behaviour where relevant
Weak engagement suggests creative or hook weakness.
Strong engagement without downstream performance does not automatically mean campaign success.
3. Conversion Behaviour
Conversion behaviour determines campaign viability.
Signals include:
• conversion rate
• cost per acquisition
• revenue per click
• landing-page response
• downstream action quality
Conversion signals must be read separately from engagement signals.
A campaign can attract attention and still fail commercially.
4. Audience Behaviour
Campaign review must observe how audiences respond.
Signals include:
• audience overlap
• audience expansion behaviour
• demographic response patterns
• frequency exposure
• saturation risk
High frequency may indicate audience saturation.
Audience mismatch may make a good creative look weak.
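The frequency and saturation signals above can be combined into a simple heuristic. The frequency ceilings and overlap thresholds here are assumed placeholder values for illustration, not governed limits.

```python
# Illustrative audience-saturation heuristic. The numeric thresholds
# are placeholder assumptions, not canonical Ads Brain values.

def saturation_risk(avg_frequency: float, audience_overlap_pct: float) -> str:
    """Classify saturation risk from average frequency and audience overlap."""
    if avg_frequency >= 3.5 or audience_overlap_pct >= 40.0:
        return "high"      # high frequency may indicate audience saturation
    if avg_frequency >= 2.0 or audience_overlap_pct >= 20.0:
        return "moderate"  # watch for early fatigue signals
    return "low"
```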
5. Creative Performance
Creative performance must be evaluated separately from audience performance.
Signals include:
• retention behaviour
• CTR variation between creatives
• fatigue signals
• angle-level differences
• hook-level differences
• clarity signals
Creative fatigue may require new variations.
Weak creative performance must not automatically be blamed on the offer.
6. Platform Behaviour
Campaign review must also observe platform-specific behaviour.
Signals include:
• delivery consistency
• learning stability
• optimisation instability
• sudden spend changes
• abnormal metric fluctuation
Platform behaviour can distort or delay interpretation.
This must be recognised rather than ignored.
7. Governance Signals
Campaign review must confirm that the campaign remains inside governance boundaries.
Signals include:
• experiment integrity
• capital exposure discipline
• statistical maturity concerns
• structural compliance
• escalation conditions
If governance signals are violated, optimism is irrelevant.
Governance wins.
Review Sequence
Every campaign review should follow the same sequence.
Step 1 — Delivery Check
Ask:
• Is the campaign delivering normally?
• Is there enough traffic to interpret anything?
• Is the platform behaving stably?
If not, stop deeper interpretation and resolve the delivery issue first.
Step 2 — Engagement Check
Ask:
• Is the creative earning attention?
• Are hooks producing click or hold behaviour?
• Is audience interaction meaningfully above noise?
This helps distinguish creative weakness from downstream friction.
Step 3 — Conversion Check
Ask:
• Is attention converting into meaningful action?
• Is the landing-page path aligned?
• Is CPA or revenue behaviour viable enough to continue?
This prevents CTR optimism from becoming false confidence.
Step 4 — Audience Check
Ask:
• Is this audience the right audience?
• Are some segments responding differently?
• Is frequency already creating fatigue or saturation?
This helps avoid misdiagnosing targeting issues as creative issues.
Step 5 — Creative Durability Check
Ask:
• Is performance stable over time?
• Are signs of fatigue appearing?
• Is this campaign improving, flattening, or decaying?
Durability matters before scaling is even considered.
Step 6 — Governance Check
Ask:
• Is this experiment still structurally valid?
• Is the campaign operating within exposure limits?
• Is escalation required before action?
This step protects system integrity.
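The six-step sequence above can be sketched as a short-circuiting pipeline: each step must pass before the next is interpreted. The check names mirror the steps; a real review would of course gather richer context than a boolean per step.

```python
# Sketch of the six-step review sequence as a short-circuiting pipeline.
# Each check is a callable returning True (pass) or False (fail).

from typing import Callable, Dict, Optional, Tuple

REVIEW_SEQUENCE = ["delivery", "engagement", "conversion",
                   "audience", "creative_durability", "governance"]

def run_review(checks: Dict[str, Callable[[], bool]]) -> Tuple[str, Optional[str]]:
    """Run checks in canonical order and stop at the first failure.

    Returns ("passed", None) when all steps pass, or
    ("stopped", failed_step) at the first failing step.
    """
    for step in REVIEW_SEQUENCE:
        if not checks[step]():
            return ("stopped", step)
    return ("passed", None)
```

Stopping at the first failure enforces the separation of failure types: a conversion problem is never diagnosed before delivery and engagement have been confirmed.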
Decision Outcomes
Every campaign review must result in a clear decision.
Possible outputs include:
• Continue
• Iterate
• Pause
• Scale
• Retire
• Escalate
No campaign should remain in vague interpretive limbo.
Decision Meaning
Continue
Campaign performance is acceptable and more data should be collected.
Iterate
One or more weak variables are identifiable and controlled refinement is justified.
Pause
Campaign performance is weak or structurally unclear, and further spend is not justified right now.
Scale
Campaign performance appears strong enough to enter scaling evaluation, subject to governance and Finance review.
Retire
The campaign path should be permanently discontinued because the probability of recovery is too low.
Escalate
Issue exceeds Ads Brain authority and requires Finance, Experimentation, SIT, Affiliate, or HeadOffice review.
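The six outcomes above can be represented as a closed set, which keeps reviews out of vague interpretive limbo. The `requires_external_review` helper is an illustrative addition reflecting that Scale and Escalate both leave Ads Brain's sole authority.

```python
# The six review decision outcomes as a closed enumeration.
# requires_external_review is an illustrative helper, not canon.

from enum import Enum

class ReviewDecision(Enum):
    CONTINUE = "continue"
    ITERATE = "iterate"
    PAUSE = "pause"
    SCALE = "scale"        # enters scaling evaluation, subject to Finance review
    RETIRE = "retire"
    ESCALATE = "escalate"  # exceeds Ads Brain authority

def requires_external_review(decision: ReviewDecision) -> bool:
    """Scale and Escalate both require review beyond Ads Brain."""
    return decision in (ReviewDecision.SCALE, ReviewDecision.ESCALATE)
```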
Campaign Review Rules
Rule 1 — No Emotional Judgement
Campaign reviews must not be driven by:
• excitement
• frustration
• impatience
• attachment to a creative
• fear of “missing a winner”
Signal wins over emotion.
Rule 2 — No Premature Scaling
A campaign must not be treated as scale-ready simply because early metrics look good.
Surface performance is not enough.
Durability and governance must both be respected.
Rule 3 — No Premature Abandonment
Weak early signal is not automatically permanent failure.
If data is immature, the correct output may be Continue rather than Retire.
Rule 4 — Separate Failure Types
Ads Brain must separate:
• creative weakness
• audience mismatch
• landing-page mismatch
• platform instability
• tracking distortion
• governance or experiment contamination
Without this separation, campaign learning becomes unreliable.
Rule 5 — Structured Review Before Action
No campaign action should be taken before review has been completed.
This applies especially to:
• budget changes
• creative swaps
• audience changes
• pause decisions
• retirement decisions
Relationship to Scaling Intelligence
Scaling Intelligence determines when campaign expansion should occur.
The Campaign Review Protocol determines whether the campaign is stable enough to enter scaling consideration.
Campaign Review asks:
Is this campaign strong enough to be considered for scaling?
Scaling Intelligence asks:
How should that expansion be interpreted and governed?
Relationship to Creative Signal Interpretation Framework
Creative Signal Interpretation Framework helps diagnose what creative metrics mean.
Campaign Review Protocol uses those insights in the broader campaign context.
One interprets creative behaviour.
The other interprets campaign state.
Relationship to Creative Performance Scorecard
The Creative Performance Scorecard provides creative-level evaluation.
The Campaign Review Protocol uses that evaluation in combination with platform, audience, and conversion behaviour.
Relationship to Decision Engine
The Campaign Review Protocol prepares the inputs.
The Decision Engine converts those reviewed signals into one valid action recommendation.
Relationship to Finance Brain
Finance Brain governs capital exposure.
Campaign reviews provide the performance signals required for Finance Brain to evaluate risk, survivability, and approval conditions.
Ads Brain may recommend.
Finance Brain decides.
Relationship to Experimentation Brain
Experimentation Brain protects experiment integrity and statistical interpretation.
If the campaign lacks enough signal maturity or suffers structural contamination, the review must favour caution and possible escalation.
Recording Requirement
Each campaign review should record:
• campaign name
• review date
• lifecycle stage
• primary signals observed
• key weakness or strength detected
• decision output
• reason for decision
• next recommended action
• escalation note where applicable
This creates cumulative campaign intelligence over time.
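The record fields listed above map directly onto a simple data structure. The field names transcribe the recording requirement; the `to_row` helper is an illustrative addition for exporting records to a log or registry.

```python
# Minimal record structure mirroring the recording requirement.
# to_row is an illustrative export helper, not part of the protocol.

from dataclasses import dataclass, field, asdict
from typing import List, Optional

@dataclass
class CampaignReviewRecord:
    campaign_name: str
    review_date: str                 # ISO date, e.g. "2026-03-19"
    lifecycle_stage: str             # e.g. "Learning", "Evaluation"
    primary_signals: List[str]       # primary signals observed
    key_finding: str                 # key weakness or strength detected
    decision: str                    # Continue / Iterate / Pause / Scale / Retire / Escalate
    reason: str                      # reason for decision
    next_action: str                 # next recommended action
    escalation_note: Optional[str] = None  # where applicable

    def to_row(self) -> dict:
        """Flatten the record for a log or registry export."""
        return asdict(self)
```

Accumulating these records is what turns individual reviews into cumulative campaign intelligence.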
Future Expansion
The Campaign Review Protocol may eventually integrate:
• automated performance dashboards
• anomaly detection systems
• fatigue monitoring tools
• cross-platform campaign comparison
• review-priority scoring
• AI-assisted review suggestions
These must remain subordinate to the protocol rather than replacing it.
Final Rule
Campaign decisions must remain disciplined.
Scaling too early, iterating too frequently, or abandoning campaigns prematurely creates unreliable testing environments and weakens system intelligence.
Ads Brain must prioritise structured evaluation over reactive decision-making.
Drift Protection
The system must prevent:
• emotional campaign judgement
• premature scaling based on weak data
• excessive iteration before signal stabilisation
• campaign abandonment without structured review
• audience issues being confused with creative issues
• creative issues being confused with platform-delivery issues
• platform instability being mistaken for market rejection
• operators skipping governance checks because performance looks exciting
Campaign review must remain structured, signal-based, and operationally disciplined.
Architectural Intent
Ads Brain – Campaign Review Protocol exists to ensure that campaign evaluation inside MWMS is systematic, comparable, and resistant to reactive decision-making.
Its role is to turn post-launch campaign management into a governed review process so Ads Brain can distinguish delivery issues, engagement weakness, audience friction, creative fatigue, conversion misalignment, and scaling readiness with greater reliability over time.
Change Log
Version: v1.3
Date: 2026-03-19
Author: MWMS HeadOffice
Change: Rebuilt Ads Brain – Campaign Review Protocol to align with the new Ads Brain root-page and canon separation. Preserved the original review cadence, signal categories, decision outcomes, and relationship logic while expanding structured review sequence, failure-type separation, decision meaning, governance signals, and recording requirements. Normalised the page as the core post-launch review protocol beneath Ads Brain Canon.
Version: v1.2
Date: 2026-03-17
Author: MWMS HeadOffice
Change: Standardised page metadata and naming to align with the locked MWMS standards pack. Normalised Status from “Protocol” to “Active”, standardised title usage to “Ads Brain – Campaign Review Protocol”, preserved the original campaign review logic, review cadence, signal categories, decision outcomes, and relationship structure, and retained the document as a Protocol under the locked MWMS document taxonomy.
Version: v1.1
Date: 2026-03-14
Author: MWMS HeadOffice / Ads Brain
Change: Rebuilt page to align with MWMS document standards. Added standardised document header, introduced Purpose / Scope / Definition / Rules structure, normalised lifecycle, cadence, signal, and decision sections, and preserved the original campaign review logic, review frequency, signal categories, and decision outcomes.
Version: v1.0
Date: 2026-03-13
Author: Ads Brain / MWMS HeadOffice
Change: Initial creation of Ads Brain – Campaign Review Protocol defining campaign lifecycle context, review cadence, primary campaign signals, decision outcomes, and related system relationships.
CHANGE IMPACT
Pages Created: None
Pages Updated: Ads Brain – Campaign Review Protocol
Pages Deprecated: None
Registries Requiring Update:
• MWMS Architecture Registry
• MWMS Brain Registry
• MWMS Brain Interaction Map
• MWMS Canon Hierarchy Map
Canon Version Update Required: No
Change Log Entry Required: Yes
END – ADS BRAIN – CAMPAIGN REVIEW PROTOCOL v1.3