Raphael's The School of Athens
Pillar 2

The Brain - Logic & Intelligence

Micro-segmentation, campaign ideation, and offer optimization, turning data into decisions

Pillar 1 solved seeing. The platform now captures behavioral signals across the full customer journey: every search query, every collection browse, every return-policy check, every hesitation pattern.

But data without intelligence is just storage cost.

The challenge facing every ESP in 2026 is the gap between knowing what a customer did and knowing what to do about it. Omnisend's segmentation engine currently supports standard filters: purchase history, email engagement, browse events. Agencies create 5–10 segments per client. Those segments describe demographics and transaction history. They do not describe intent.

Pillar 2 closes that gap. It takes the behavioral signals from Pillar 1 and converts them into three outputs: who to target (micro-segmentation), what to offer (promotions engine), and what to say (campaign ideation). The combined output is a complete campaign brief (segment, offer, angle, timing) ready for execution in Pillar 3.

These are not independent features. They are an interlocking system where each component makes the others more effective. And surrounding them are two layers that make the system accessible and self-improving: MCP integration as the interface through which specialists interact with all of it, and a Content Hub where accumulated intelligence compounds over time.

Component 01

Campaign Ideation Engine: Stop Losing Strategic Thinking to ChatGPT

Here is something every agency owner already knows: the brainstorming process for email campaigns starts inside Omnisend. Open rates, click rates, revenue per recipient, placed-order data, segment performance, automation metrics: roughly 70% of the raw material that goes into planning next week's campaigns already lives inside the platform.

But it does not stay there, and this is a huge problem.

A specialist pulls campaign performance data out of Omnisend, pastes it into Claude or ChatGPT along with brand guidelines and product launch calendars, iterates on angles, evaluates which segments to target, weighs offer structures against margin, settles on three campaigns, then comes back to Omnisend to execute them. The platform receives the final campaigns. It never sees the reasoning that produced them. It does not know which angles were considered and rejected, which segments were debated, why Mother's Day messaging was aimed at repeat buyers while the collection launch was aimed at new subscribers.

All of that strategic intelligence, the most valuable signal in the entire email marketing workflow, escapes the platform every single day. It lives in ChatGPT conversations that expire, Claude projects that get archived, Google Docs that are never revisited. Omnisend receives the output; it never sees the thinking.

This is an information leak. And when the specialist leaves the agency, the institutional knowledge leaves too.

The Intelligence Leak
Where Strategic Reasoning Goes to Die
Marketing intelligence generated daily:
  • ChatGPT / Claude: 35% · reasoning expires, never captured
  • Google Docs / Notion: 25% · siloed, never revisited
  • Specialist memory: 20% · leaves when they leave
  • Slack / email: 15% · buried, unfindable
  • Platform (Omnisend): 5% · only the final campaign

Agencies feel this problem in a concrete way. They struggle to fill content calendars with non-promotional campaigns that actually perform. They default to "20% OFF" and "New Arrival" because generating engagement-driven narrative content is hard, and the historical analysis that would reveal which narratives actually worked (the "trail running gear guide" that generated 40% higher placed-order rates, the "new year, new gear" angle that drove repeat purchases without a discount) happens manually, once a quarter at best, if it happens at all.

What to build: An AI marketing agent that lives inside Omnisend, has access to all campaign data, captures strategic reasoning from planning sessions, and generates data-backed campaign calendars with complete draft briefs ready for specialist review.

What the Engine Does

Captures strategic reasoning. When users plan campaigns, inside Omnisend directly or through MCP-connected AI assistants, the platform records not just the final campaign, but the thinking behind it. The angles considered, the segments weighed, the objections anticipated. Over time, this builds a proprietary dataset of how e-commerce marketers actually reason about campaigns.

Surfaces what historically works. The system ingests 12 months of campaign data, filters promotional noise, and identifies content themes that drove outsized engagement purely on narrative merit. Not "this Black Friday email had high revenue," that is obvious. Rather: "your educational content about fabric sustainability consistently outperforms promotional by 30–40% in revenue per recipient among repeat buyers."
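As a sketch of what "filtering promotional noise" could look like mechanically, here is a minimal revenue-per-recipient rollup by content theme. The record fields and theme labels are invented for illustration; they are not Omnisend's actual data model.

```python
from collections import defaultdict

# Hypothetical campaign records; field names are illustrative.
campaigns = [
    {"theme": "sustainability-education", "promotional": False, "revenue": 8400, "recipients": 12000},
    {"theme": "sustainability-education", "promotional": False, "revenue": 9100, "recipients": 11500},
    {"theme": "discount-blast",           "promotional": True,  "revenue": 15000, "recipients": 30000},
    {"theme": "new-arrivals",             "promotional": True,  "revenue": 6200,  "recipients": 14000},
]

def theme_rpr(campaigns):
    """Revenue per recipient by theme, with promotional sends filtered out."""
    totals = defaultdict(lambda: [0.0, 0])
    for c in campaigns:
        if c["promotional"]:          # ignore discount-driven noise
            continue
        totals[c["theme"]][0] += c["revenue"]
        totals[c["theme"]][1] += c["recipients"]
    return {t: rev / n for t, (rev, n) in totals.items()}

print(theme_rpr(campaigns))   # non-promotional themes ranked by revenue per recipient
```

In practice the same rollup would be segmented (e.g. repeat buyers vs. new subscribers) before claiming a theme "consistently outperforms."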

Generates forward-looking campaign calendars. Based on proven themes, seasonality, segment behavior, and the brand's content strategy, the system suggests what to send, to whom, and when, with draft briefs attached.

People are already using Claude for everything. Imagine how well it would work integrated with the collective knowledge inside Omnisend.

A specialist who currently spends 4–6 hours per week per client on planning opens Omnisend Monday morning and the system has already done the analysis: "Your 'behind the scenes' series drove 2.3x higher click rates among first-time buyers. Recommendation: schedule a 'How We Source Our Leather' campaign targeting the quality-conscious micro-segment." Planning drops to 45 minutes per client.

Across a 15-client portfolio, that is 48–78 recovered hours per week, enough to onboard 5–8 additional clients without hiring.

Technical Blueprint: Building an AI Marketing Intern That Sits Where the Action Is

The right way to think about this is not as a dashboard feature or a recommendation engine. It is an agent, a co-marketing intern that lives inside Omnisend, has access to everything a human specialist would have access to, and can both analyze and act.

The positioning matters. This is not an AI that replaces the specialist. It is an always-on junior team member that does the grunt work (reviews performance, identifies patterns, drafts campaigns, writes emails) and presents its work for the specialist to accept, reject, or build on. The specialist becomes the editor and strategist. The agent does the production.

What the agent has access to: It can see everything inside the platform that a human user can see. Campaign performance across all metrics: opens, clicks, revenue, placed orders, unsubscribes. It can read email replies and understand the sentiment and patterns in how customers respond. It can look at segment composition and how segments are shifting over time. It can access automation flow performance, A/B test results, and historical trends across months or years of data. It sits where the action is, not in a separate analytics layer, but inside the same environment where campaigns are created and sent.

Potentially, the agent also has web access. It can research competitor campaigns, seasonal trends, industry benchmarks, and trending topics relevant to the brand's vertical. This is a design decision that needs careful evaluation; the value is significant, but the scope and guardrails need to be well-defined.

How it is built. The core is an agentic LLM system with tool-calling capabilities. The agent is built through extensive prompt engineering, defining its persona, its analytical frameworks, its decision-making heuristics, and the boundaries of what it can and cannot do autonomously. It interacts with Omnisend's internal APIs through structured tool calls: read campaign data, query segment performance, pull product catalog information, access the content hub, and crucially, create drafts.
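A minimal sketch of what that tool-calling layer might look like. The tool names, parameter schemas, and handler below are invented for illustration; they are not Omnisend's actual internal API.

```python
import json

# Hypothetical tool registry exposed to the agent.
TOOLS = {
    "get_campaign_stats": {
        "description": "Read performance metrics for a campaign.",
        "parameters": {"campaign_id": "string", "metrics": "list[string]"},
    },
    "query_segment": {
        "description": "Return size and composition of a segment.",
        "parameters": {"segment_id": "string"},
    },
    "create_campaign_draft": {
        "description": "Create a draft campaign flagged as agent-generated.",
        "parameters": {"segment_id": "string", "subject": "string", "body": "string"},
    },
}

def dispatch(tool_call, handlers):
    """Route a model-emitted tool call to its handler after validating the tool name."""
    name, args = tool_call["name"], tool_call["arguments"]
    if name not in TOOLS:
        raise ValueError(f"unknown tool: {name}")
    return handlers[name](**args)

# A fake handler standing in for an internal API call.
handlers = {"query_segment": lambda segment_id: {"segment_id": segment_id, "size": 284}}
result = dispatch({"name": "query_segment", "arguments": {"segment_id": "return-policy-researchers"}}, handlers)
print(json.dumps(result))
```

The key design property is that every read and write goes through a declared, validated tool, which is also where autonomy boundaries (draft-only, "agent-generated" flagging) would be enforced.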

The agent can create full campaigns on the platform. It selects or generates a target segment, writes the email copy, structures the layout, attaches the offer logic, sets the send time, and marks the entire campaign as "agent-generated" so it is clearly distinguishable from human-created work. The specialist receives a notification, reviews the draft, and either approves it, modifies it, or rejects it with feedback that the agent learns from.

Once the agent is operational and has access to the full data layer, a set of capabilities emerges naturally:

  • Performance pattern recognition: the agent continuously monitors campaign data and surfaces non-obvious trends, like which non-promotional themes consistently outperform in specific segments or seasons.
  • Theme clustering: it identifies recurring content patterns across top-performing campaigns and groups them into reusable strategic angles.
  • Forward-looking campaign calendars: based on historical patterns, current segment composition, and upcoming events, the agent drafts a weekly or monthly campaign plan with specific recommendations.
  • Full campaign drafts: not just suggestions, but complete campaigns ready for review: segment selected, copy written, layout structured, offer attached, send time set.
  • Institutional memory: because the agent captures specialist feedback (approvals, rejections, edits), it accumulates an understanding of how each brand's team thinks and operates, making its suggestions increasingly tailored over time.

Strategic Wedge: Three Years of Accumulated Intelligence

Every competitor is building AI that writes copy. Subject line generators, email body drafters, flow builders. These are commodities; every platform has them, users treat them as rough drafts at best.

No competitor is building AI that decides what to write about. That is the first-order advantage: Omnisend becomes the first platform that tells the merchant "based on 14 months of data, here is the campaign that will generate the most revenue this week, and here is why." But the deeper play unfolds over years, not months.

The Compounding Effect
Three Years of Accumulated Intelligence
The wedge is not the feature. The wedge is the compounding intelligence the feature generates. And every month that passes without building it is a month of reasoning data permanently lost.
Year One
Reasoning Capture
  • Captures reasoning data from thousands of specialists across thousands of merchants
  • Learns which content themes work in which verticals
  • Maps seasonal patterns that hold across DTC
  • Identifies non-promotional campaigns that drive the highest LTV
Year Two
Proprietary Models
  • Dataset large enough to train proprietary models on e-commerce marketing reasoning
  • Not generic copywriting, but strategic decision-making specific to email marketing in retail
  • No competitor has this dataset. It does not exist anywhere else
  • ChatGPT has general knowledge. Omnisend would have specific, performance-validated intelligence from real campaigns
Year Three
Market Intelligence
  • Engine understands how the market itself is evolving
  • Detects which content themes are gaining traction, which angles are saturating
  • Identifies untapped narrative opportunities across the ecosystem
  • Intelligence surfaced to merchants, published as industry reports, used to inform product decisions
  • Platform becomes the authoritative source on what works in e-commerce email marketing

Feasibility

Criteria Score Notes
Impact ⭐⭐⭐⭐⭐ Reduces agency labor, improves campaign quality, captured reasoning becomes proprietary asset that compounds.
Technical Feasibility ⭐⭐⭐⭐ Achievable with current LLM capabilities and existing campaign analytics data.
Resources Required Medium 2–3 engineers, 3–4 months for v1.
Long-term Sustainability ⭐⭐⭐⭐⭐ Captured reasoning compounds over time. Each campaign adds to the intelligence base, creating appreciating switching costs.
Fit with Agency ICP ⭐⭐⭐⭐⭐ Directly reduces the highest-cost agency activity: strategic campaign planning. Agencies gain leverage.

Component 02

Micro-Segmentation Engine: 2,000 Cart Abandoners Are Not the Same Person

Every email marketing specialist has had this experience. They open their abandoned cart segment in Omnisend, 2,000 contacts. And they know, intuitively, that these are not 2,000 versions of the same person. There's the person who abandoned at the shipping cost screen. There's the person who checked the return policy three times and left. There's the comparison shopper who viewed eight similar products over four sessions. There's the impulse browser who added something at midnight and forgot about it by morning.

These are fundamentally different people with fundamentally different hesitations. The specialist knows this. They've known it for years.

But the platform gives them 3–5 filter dropdowns and calls it segmentation. "Purchased in last 90 days." "Opened email in last 30 days." "Located in US." So the specialist sends the same abandoned cart email to all 2,000 people ("Hey, you left something behind! Here's 10% off!") and watches the 2% conversion rate and wonders why it isn't higher.

Omnisend's current segment builder, limited to basic filter dropdowns (though notably the better of the two).

Klaviyo's segment builder: roughly the same basic filters, slightly worse.

It isn't higher because 2,000 different hesitations received one generic response.

Generic abandoned cart email - Hey there, you forgot to check out

The generic abandoned cart email every brand sends: same message, 2,000 people, zero differentiation.

The problem is not that specialists lack segmentation instincts. The problem is that the platform cannot express what the specialist already knows. Micro-segmentation closes that gap; it gives the platform the same resolution the human already has.

Segments today are also static. A customer enters when they meet criteria and stays until they don't. There is no understanding of trajectory: why they entered, how their behavior is shifting, whether they're warming or cooling. The segment is a snapshot, not a story.

The answer is not more filters. The answer is a fundamentally different model.

What to build: A behavioral clustering system that automatically discovers 50–200+ micro-segments per brand from intent signals (search queries, comparison patterns, hesitation behaviors), not demographic checkboxes. Each segment maps to a specific messaging angle and incentive type.

What Even Is Micro-Segmentation?

Broad segments (5–10 per brand) leave substantial revenue on the table because every campaign is a compromise. True 1:1 personalization sounds ideal but is operationally impossible, as no agency can create thousands of unique campaigns, no content pipeline can produce them, and statistical sample sizes become meaningless.

Micro-segmentation operates between these extremes: 50–200+ segments per brand, defined by behavioral signal clusters rather than demographic checkboxes.

Working micro-segmentation engine - live behavioral clustering from Shopify data

Our working micro-segmentation engine: behavioral clustering from live Shopify data. Explore it live at microsegments.ai →

A micro-segment is not "women aged 25–34 who purchased recently." A micro-segment is "customers who viewed 3+ eco-friendly products, checked the return policy at least once, arrived from a sustainability-focused ad, and have not yet purchased."

That segment contains 47 people. The platform reveals exactly what objection they have (risk/returns), what they care about (sustainability), and where they are in the decision process (deep research, no commitment).

The campaign for those 47 people writes itself: sustainability credentials of the specific products they viewed, free returns emphasis, social proof from other eco-conscious buyers. That campaign will dramatically outperform "Hey, you left something in your cart. Here's 10% off."

Instead of the specialist manually building segments through Omnisend's filter builder, spending 2–3 hours per client per month maintaining and updating them, the system surfaces micro-segments automatically: "New segment detected: 'Return-Policy Researchers', 284 contacts who viewed products, checked return policy 2+ times, but did not purchase. Average cart value: $127. Recommended approach: objection-removal campaign emphasizing satisfaction guarantee." The specialist reviews, approves, and the campaign ideation engine immediately suggests an angle.

The intelligence moves from the specialist's head into the platform.

The Numbers

Segmented campaigns generate substantially more revenue than non-segmented sends; some industry benchmarks claim up to a 760% lift. And that benchmark is based on current broad segmentation: 20–25 segments with basic filters.

Micro-segmentation pushes this further. Conservative estimates based on comparable personalization studies: 20–30% improvement in click-through rates and 15–25% improvement in conversion rates on top of the existing segmentation lift.

  • For a brand generating $1 million annually through email: $150,000–$250,000 in additional revenue.
  • For an agency managing 20 such clients: $3–5 million in incremental revenue directly attributable to the platform.
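Those figures follow from applying the conversion-lift range to email-attributed revenue. A quick sanity check (the helper is illustrative, not a forecasting model):

```python
def incremental_revenue(base_revenue, lift_low, lift_high):
    """Added-revenue range from a relative conversion lift on email-attributed revenue."""
    return base_revenue * lift_low, base_revenue * lift_high

low, high = incremental_revenue(1_000_000, 0.15, 0.25)
print(low, high)               # ≈ 150,000 – 250,000 per brand
print(20 * low, 20 * high)     # ≈ 3M – 5M across a 20-client portfolio
```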

That is the number that goes in the agency's client report. That is the proof that solves the ROI problem.

Technical Blueprint: How We Build This

The engineering approach starts with what already exists. Pillar 1's enriched behavioral data gives us the raw material: every product view, search query, cart action, return policy view, checkout step, and DOM interaction mapped to individual contact profiles. That's the foundation. No new data collection required.

From there, we build signal interpretation rules. This is largely prompt engineering and domain expertise, not novel ML. Raw events get translated into behavioral indicators using contextual logic. A product_removed_from_cart is not just a removal; combined with checkout_shipping_info_submitted it indicates price sensitivity at the shipping cost stage. The same removal combined with return_policy_viewed indicates risk aversion instead. A repeated collection_viewed for the same category paired with search_submitted for specific product attributes indicates a customer who knows what they want but hasn't found the right match. Same events, different meaning depending on context.
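The contextual rules described above could be sketched as follows. The event names match the examples in the text; the rule logic is deliberately simplified (a real system would weight recency, frequency, and category context):

```python
def interpret(events):
    """Map raw event combinations to behavioral indicators (illustrative rules only)."""
    kinds = {e["type"] for e in events}
    signals = []
    if "product_removed_from_cart" in kinds:
        # Same removal event, different meaning depending on co-occurring events.
        if "checkout_shipping_info_submitted" in kinds:
            signals.append("price_sensitive_at_shipping")
        if "return_policy_viewed" in kinds:
            signals.append("risk_averse")
    if "collection_viewed" in kinds and "search_submitted" in kinds:
        signals.append("knows_what_they_want")
    return signals

session = [
    {"type": "product_removed_from_cart"},
    {"type": "return_policy_viewed"},
]
print(interpret(session))   # ['risk_averse']
```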

Building these interpretation rules is where our domain expertise in e-commerce behavioral analysis is most critical, and where most AI implementations fail, because they treat events as flat signals rather than contextual indicators.

We then apply clustering algorithms to group contacts exhibiting similar behavioral patterns. These are well-proven techniques from recommendation systems: the algorithmic foundation has existed for over a decade. The innovation is not in the clustering. It is in the signal interpretation layer above it, and in applying the output to email marketing specifically.
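For illustration, here is a toy k-means over behavioral feature vectors using only the standard library. A production system would use an established implementation; the feature dimensions are invented.

```python
import math
import random

def kmeans(points, k, iters=20, seed=0):
    """Minimal k-means over feature vectors (a teaching sketch, not production code)."""
    rng = random.Random(seed)
    centroids = rng.sample(points, k)           # initialize from actual points
    clusters = [[] for _ in range(k)]
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:                        # assign each point to nearest centroid
            i = min(range(k), key=lambda c: math.dist(p, centroids[c]))
            clusters[i].append(p)
        centroids = [                           # recompute centroids per cluster
            tuple(sum(dim) / len(cl) for dim in zip(*cl)) if cl else centroids[i]
            for i, cl in enumerate(clusters)
        ]
    return centroids, clusters

# Feature vectors: (policy_views, review_dwell, discount_clicks) — illustrative.
contacts = [(3, 0.1, 0), (2, 0.2, 0), (0, 0.9, 1), (0, 0.8, 2)]
centroids, clusters = kmeans(contacts, k=2)
print([len(c) for c in clusters])   # two clusters: risk-averse vs. price-sensitive
```

The point of the sketch: the clustering itself is routine; the value is in how the feature dimensions are derived from the interpretation layer above.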

This is not theoretical. We have a working micro-segmentation engine producing behavioral clusters from live Shopify data. It identifies intent-based groupings that standard ESP segmentation cannot. The POC exists. The implementation for Omnisend integrates with Pillar 1's data layer, scales across the merchant base, and connects directly with the campaign ideation and promotions engines.

Strategic Wedge: Prediction vs. Intent, and Why the Gap Widens

Klaviyo's strongest competitive asset is predictive analytics: CLV prediction, churn risk scores, predicted next order date. These capabilities are genuinely best-in-class.

But prediction and intent are fundamentally different things.

Prediction looks backward. It analyzes historical purchase patterns across millions of customers and says: "This customer will probably buy again in 14 days." It tells the marketer when to send.

Intent looks at the present. It analyzes what this specific customer is doing right now and says: "This customer checked the return policy twice, compared four yoga mats, arrived from a sustainability ad. She's hesitating because of a specific objection." It tells the marketer what to say.

Klaviyo tells the marketer a customer will churn. Micro-segmentation tells the marketer why they're about to churn and what message will prevent it. The first is a forecast. The second is an intervention.

The moat is not the algorithm. The moat is the data the algorithm generates over time.

Omnisend cannot out-predict Klaviyo; Klaviyo has years of data advantage. But Omnisend can out-understand Klaviyo by capturing behavioral signals Klaviyo's architecture was not designed to ingest. If Omnisend starts now, in 18 months it will have 18 months of intent data that Klaviyo cannot replicate backward. In 36 months, the system will have observed multiple full purchase cycles for most customers, and it can predict intent shifts before they manifest in behavior. That dataset does not exist anywhere else.

Here is what this looks like in practice. A single customer session generates raw behavioral events. The micro-segmentation engine extracts intent signals from those events, not just what happened, but why. Those signals map directly to marketing vectors: the specific message, angle, and offer that addresses this customer’s actual hesitation.

Signal Extraction
From Raw Logs to Marketing Vectors

Sarah M. (Customer #48,291 · alpine-gear.co · active session)

Raw event stream (shopify_web_pixel · live · 47 events, abridged):
  • 14:23:05 session_start · google/cpc · mobile → R1
  • 14:23:08 page /collections/yoga-gear · ref=google
  • 14:23:41 scroll_depth=85% · 33s dwell → R1
  • 14:23:58 click filter_toggle · "Material: Cork"
  • 14:24:05 click sort_by · "Price: Low to High"
  • 14:24:12 click product_card · "Blue Harmony" · pos=3
  • 14:24:14 page /products/blue-harmony-yoga-mat · $45
  • 14:25:50 click tab_switch → "Reviews" · 96s on PDP → R2
  • 14:27:44 click review_helpful · #8291 · ★★★★★ → R2
  • 14:29:33 scroll reviews · 12 read · 18s avg dwell → R2
  • 14:30:01 nav /policies/return-policy · footer → R4
  • 14:31:15 nav /policies/shipping · 74s on returns → R4
  • 14:32:44 search "yoga mat vs pilates mat" · 6 results → R3
  • 14:33:02 click search_result · "Ultimate Mat Guide"
  • 14:33:55 nav back → /products/blue-harmony
  • 14:34:08 click size_guide_toggle · dimensions
  • 14:34:12 scroll pdp_bottom · viewed "You may also like"
  • 14:34:15 click variant_select · color=Indigo · 6mm
  • 14:34:17 page /cart · items=1 · subtotal=$45.00
  • 14:34:18 scroll cart_page · shipping_estimate viewed
  • 14:34:19 click promo_code_field · focused · no entry
  • 14:34:20 add_to_cart · Blue Harmony · $45 → R5
  • 14:34:22 exit_intent · cart=$45 · no checkout → R5
  • 14:34:23 session_end · 11m18s · 7 pages

Reasoning engine:
  • R1 · Pattern Detection: high dwell (85% scroll) + policy views + abandon → hesitant buyer (from 14:23:05, 14:23:41)
  • R2 · Behavioral Inference: 12 reviews read, marked helpful, viewed photos → deep evaluation (from 14:25:50, 14:27:44, 14:29:33)
  • R3 · Classification: search "yoga vs pilates" → category research, not brand loyal (from 14:32:44)
  • R4 · Derived Metric: 74s on returns + shipping page → cost sensitivity 0.87 (from 14:30:01, 14:31:15)
  • R5 · Risk Assessment: $45 cart abandoned → 62% churn risk within 48h (from 14:34:20, 14:34:22)
  • R6 · Routing: 3 segments, 6 vectors, sequence scheduled (all signals aggregated)

Extracted signals (confidence):
  • Purchase Intent · high-research, hesitant buyer (return policy 74s, cart abandoned) · 0.91
  • Engagement Depth · deep product evaluation (12 reviews, 96s on PDP, gallery) · 0.94
  • Session Timing · afternoon research window (11m active, search-to-cart) · 0.78
  • Category Affinity · yoga & pilates gear (collection + search + product) · 0.96
  • Risk Factor · shipping cost sensitivity (shipping policy pre-checkout) · 0.87
  • Acquisition Channel · paid search, intent match (Google CPC, mobile) · 0.99
  • Social Proof Need · review-driven decision maker (marked helpful, photos) · 0.89
  • Price Behavior · value-conscious, not cheapest (sorted low to high, picked $45) · 0.82

Marketing vectors (automated email sequence):
  • Free shipping + guarantee: "Your mat ships free, returns on us" · 2h post-abandon · free-ship badge + 30-day guarantee · +34% recovery
  • Cart reminder + social proof: "12 yogis love this mat, here's why" · next day, 2pm · includes review #8291 · +28% conversion
  • Comparison guide (yoga vs pilates): "Yoga vs pilates mat: what matters" · day 3 nurture · Blue Harmony as the answer · +41% engagement
  • Low-stock urgency on her variant: "Only 4 left in Indigo, your pick" · day 5 if no purchase · exact variant reference · +22% urgency lift
  • Cross-sell, yoga starter bundle: "Complete your practice: mat + blocks" · day 7 post-purchase or day 10 · bundle 15% off · +18% AOV lift
  • Win-back, calibrated incentive: "Sarah, 12% off, just for you, today" · day 14 last resort · percentage derived from sensitivity score (0.87) · +15% win-back

This is the gap Klaviyo cannot close by copying features. The intelligence is not in the algorithm. It is in the accumulated behavioral understanding that only exists because Omnisend started capturing these signals first.

Feasibility

Criteria Score Notes
Impact ⭐⭐⭐⭐⭐ Cornerstone. This is the foundation of the entire intelligence layer. Every other Pillar 2 component depends on it.
Technical Feasibility ⭐⭐⭐⭐ Core clustering algorithms are well-established. Signal interpretation is domain expertise, not research. Pillar 1 data integration is the primary dependency.
Resources Required High 3–4 senior engineers, 4–6 months for production v1.
Long-term Sustainability ⭐⭐⭐⭐⭐ Behavioral understanding compounds as data accumulates. Cross-merchant patterns create intelligence no single brand could develop.
Fit with Agency ICP ⭐⭐⭐⭐⭐ Agencies need differentiated segmentation to justify their value. Micro-segments unlock personalization at scale.

Component 03

Promotions & Offer Engine: Stop Giving Away Margin to People Who'd Buy Anyway

"20% OFF EVERYTHING" is the most expensive sentence in email marketing.

Here is what actually happens when a brand sends that email to their entire list. Within that audience: 15–20% would have purchased at full price within the next week anyway, so giving them 20% off is pure margin destruction. Another 30% are comparison shoppers who might convert with social proof or a satisfaction guarantee, not a discount, and the money spent on their discount bought nothing. Another 20% are price-sensitive first-time visitors where a targeted $10-off-first-purchase would have worked at a fraction of the blanket cost.

Agencies know this. For big brands, they bolt on dedicated loyalty and promotion platforms to automate it.

We are not suggesting building the entire loyalty/promotion engine internally. We are suggesting building enough of it that mid-market brands see a meaningful reason to stay on Omnisend for the long term, and face far more friction if they ever consider switching.

Every agency owner has looked at a post-campaign report and thought: "we just gave away 20% to a thousand people who would have bought regardless." But they had no alternative. Omnisend's current promotional tools apply the same offer to everyone in a segment. There is no mechanism to match the incentive to the reason someone is hesitating.

The question isn't "should we discount?" The question is: "why did this specific person hesitate, and what is the cheapest intervention that addresses their specific hesitation?"

That question is worth $500K in recovered margin for a $10M brand. And no ESP is asking it.

What to build: A decision layer within the agent that maps each micro-segment's behavioral signals to the cheapest effective incentive (guarantee, social proof, free shipping, or discount), protecting margin instead of giving it away.

The promotions engine is what happens when the agent (from Campaign Ideation) has access to micro-segments and can see why someone is hesitating. From the behavioral signals that define each micro-segment, the right incentive type follows almost logically:

  • Return-policy researchers → satisfaction guarantee messaging, not a discount
  • Shipping-stage abandoners → free shipping offer
  • Multi-session comparison shoppers → social proof ("847 customers purchased this month. Rated 4.8/5") or urgency
  • Genuinely price-sensitive abandoners → targeted discount at 10%, not blanket 20%
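The mapping above can be sketched as a small rules table. The incentive names, relative costs, and the constraint check are illustrative assumptions, not the proposed production design:

```python
# Illustrative mapping from dominant hesitation signal to the cheapest effective
# incentive; the second element is relative margin cost, not a real figure.
INCENTIVES = {
    "risk_averse":        ("satisfaction_guarantee", 0.00),
    "shipping_abandoner": ("free_shipping",          0.05),
    "comparison_shopper": ("social_proof",           0.00),
    "price_sensitive":    ("discount_10pct",         0.10),
}

def pick_incentive(dominant_signal, max_discount=0.15):
    """Return (incentive, margin_cost), respecting a brand's discount cap."""
    incentive, cost = INCENTIVES.get(dominant_signal, ("no_offer", 0.0))
    if cost > max_discount:          # brand constraint: never exceed the cap
        return "no_offer", 0.0
    return incentive, cost

print(pick_incentive("risk_averse"))   # ('satisfaction_guarantee', 0.0)
```

Year one would look roughly like this table; the later, data-informed versions replace the static costs and mappings with learned effectiveness estimates.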

Offer Strategy
The Anatomy of a Blanket Discount

Blanket: "20% OFF EVERYTHING" → 1,000 recipients, one identical treatment.
Optimized: smart interventions → 1,000 recipients, five distinct treatments:

  • Would Buy Anyway (18% of audience): no offer needed · $0 discount cost · full margin preserved
  • Comparison Shoppers (30%): social proof + reviews ("847 customers purchased this month") · $0 discount cost
  • Price-Sensitive New (20%): targeted $10 off first order · ~$2,000 cost · a fraction of the blanket cost
  • Shipping Abandoners (12%): free shipping offer · ~$1,200 cost · addresses the actual hesitation
  • Genuinely Price-Sensitive (20%): targeted 10% discount · ~$5,000 cost · half the blanket rate, proven need

  • Annual discount spend: $1.5M (on a $10M brand at a 15% average discount rate)
  • Effective discounting: ~35% (the rest wasted on wrong interventions)
  • Optimized discount spend: $600–900K (a 40–60% reduction through smart matching)
  • Recovered margin: $450–600K (pure profit recovery, not revenue growth)

The system maintains a library of incentive types: percentage discounts, fixed-amount offers, free shipping, free returns, early access, bundle deals, loyalty rewards, satisfaction guarantees, social proof packages. For each micro-segment, it recommends the incentive most likely to convert at the lowest margin cost.

In practice: instead of "abandoned cart gets 10% off after 24 hours, 15% after 48" applied to every abandoner, the agent identifies three distinct micro-segments within the abandonment audience. Shipping-cost abandoners get free shipping. Risk-averse abandoners get guarantee messaging. Comparison shoppers get social proof. Only the genuinely price-sensitive, roughly 25% of abandoners, receive a discount, and it's targeted at 10%, not 20%. Conversion holds or improves. Overall discount cost drops 40–60%.

$450K–600K in Recovered Margin for a $10M Brand

For a brand doing $10 million annually with a 15% average discount rate: ~$1.5 million in margin given away every year.

If the engine reduces unnecessary discounting by 30–40% through better-matched incentives: $450,000–$600,000 in recovered annual margin.

This is not revenue growth. This is pure profit recovery. For a brand at 20–30% net margin, recovering $500K in margin is equivalent to generating $1.6–2.5 million in additional top-line revenue. That changes the conversation with the CFO.
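The margin arithmetic above, as a quick check (the helper is illustrative):

```python
def recovered_margin(annual_revenue, avg_discount_rate, reduction_low, reduction_high):
    """Margin given away annually, and the range recovered by better-matched incentives."""
    given_away = annual_revenue * avg_discount_rate
    return given_away, given_away * reduction_low, given_away * reduction_high

given, low, high = recovered_margin(10_000_000, 0.15, 0.30, 0.40)
print(given, low, high)   # ≈ 1.5M given away; 450K–600K recovered

# Top-line revenue equivalent of $500K recovered margin at 20–30% net margin:
print(500_000 / 0.30, 500_000 / 0.20)   # ≈ 1.67M – 2.5M
```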

For agencies: the hardest client question is "why are we giving away margin to people who would have bought anyway?" With this engine, the answer becomes provable: segment-level incentive data showing each offer type, its cost, and its conversion contribution.

How to Build This: Classical ML + New Age AI

The promotions engine is not a separate system. It is a decision layer within the agent: it starts with classical techniques like survival analysis and ends with LLMs acting as the reasoning engine on top of the raw numbers and analytics.

When the agent creates a campaign for a micro-segment, it doesn't just pick a message, it picks an incentive. It maps the segment's dominant behavioral signals to incentive type affinity. Then it optimizes: which incentive achieves the conversion at the lowest margin cost? It respects brand constraints: maximum discount caps, free shipping thresholds, offer frequency limits.

Over time, campaign performance data feeds back. The system learns which incentive types actually convert which behavioral patterns across this specific brand, and across the broader merchant base. Year one: rules-based mapping (return-policy viewers → guarantees). Year two: data-informed optimization (for this brand's audience, 15% off converts comparison shoppers better than social proof, but for that brand, social proof wins). Year three: the system has the largest dataset of incentive-to-behavioral-pattern effectiveness in e-commerce email marketing.
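The year-two, data-informed stage could be sketched as a feedback loop: track observed conversions per (signal, incentive) pair and pick the incentive with the best conversion per unit of margin cost, subject to a brand discount cap. All names, scoring rules, and figures below are illustrative assumptions.

```python
# Sketch of a data-informed incentive optimizer. The scoring rule and all
# margin-cost figures are illustrative assumptions, not a production design.
from collections import defaultdict

class IncentiveOptimizer:
    def __init__(self, margin_cost: dict[str, float], max_margin_cost: float = 0.15):
        self.margin_cost = margin_cost          # incentive -> margin cost per order
        self.max_margin_cost = max_margin_cost  # brand constraint: discount cap
        self.sent = defaultdict(int)            # (signal, incentive) -> emails sent
        self.converted = defaultdict(int)       # (signal, incentive) -> orders placed

    def record(self, signal: str, incentive: str, converted: bool) -> None:
        """Feed campaign performance back into the loop."""
        self.sent[(signal, incentive)] += 1
        self.converted[(signal, incentive)] += int(converted)

    def best_incentive(self, signal: str) -> str:
        """Highest observed conversion rate per unit margin cost, within the cap."""
        def score(incentive: str) -> float:
            n = self.sent[(signal, incentive)]
            rate = self.converted[(signal, incentive)] / n if n else 0.0
            return rate / (self.margin_cost[incentive] + 0.01)  # avoid div-by-zero
        candidates = [i for i, cost in self.margin_cost.items()
                      if cost <= self.max_margin_cost]
        return max(candidates, key=score)
```

The year-one rules become the cold-start prior; as send-and-convert data accumulates per brand, the observed rates take over, which is the "data-informed optimization" stage described above.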

No ESP Connects Behavioral Segments to Incentive Optimization

Every ESP offers promotional automation: "if cart abandoned, send discount." That's a blunt instrument that treats all hesitation as a price problem.

No ESP currently connects behavioral micro-segmentation to incentive optimization. The gap between "everyone gets escalating discounts" and "each segment gets the intervention that addresses their specific hesitation" is the gap between spending margin and investing margin. Omnisend would be the first platform where the system understands not just that a customer abandoned, but why, and matches accordingly.

Feasibility

Criteria Score Notes
Impact ⭐⭐⭐⭐ Direct financial impact through margin protection. Dependent on micro-segmentation quality.
Technical Feasibility ⭐⭐⭐⭐ Logic layer on top of micro-segmentation output. Decision framework within the agent architecture.
Resources Required Medium 2 engineers, 2–3 months. Builds directly on micro-segmentation infrastructure.
Long-term Sustainability ⭐⭐⭐⭐ Offer effectiveness data compounds. System learns which interventions work for which hesitation patterns over time.
Fit with Agency ICP ⭐⭐⭐⭐ Agencies can demonstrate measurable margin savings to clients. Shifts conversation from cost to ROI.

Component 05

MCP Integration: The Platform That Learns From How Specialists Actually Think

There is a pattern that every SaaS company needs to internalize: users have started interacting with their tools through AI assistants rather than the tool's own dashboard. Notion through Claude. Shopify through ChatGPT. Slack through Claude Code. GitHub through Cursor. MCP adoption has been rapid, and the protocol is mature, well-documented, and integrated by dozens of major platforms.

Users who work this way do not go back. The cognitive load of context-switching disappears. The friction of navigating dashboards is replaced by natural language.

When a specialist using this workflow switches to Omnisend, they are forced into manual mode: separate dashboard, click-through menus, manual filter configuration. The most intelligent part of their stack becomes the most friction-heavy. This is not a future problem. It is happening right now, and the gap widens every month as more platforms integrate.

What to build: An MCP server that lets specialists query Omnisend data, create campaigns, and execute through Claude or ChatGPT, while the platform captures the strategic reasoning behind every decision.

What This Looks Like in Practice

A specialist is in Claude planning next week's campaigns. They ask: "Pull up last month's performance for Client A's eco-conscious segment." Omnisend returns the data, inside Claude. The specialist sees open rates, revenue, placed orders. They ask: "How did guarantee-based offers compare to discounts for this segment?" The data comes back. They decide on an approach. They say: "Create a campaign for the eco-conscious segment. Sustainable sourcing angle. Satisfaction guarantee offer. Tuesday 10am EST." The agent builds the campaign, applies the segment, sets the schedule, confirming each step. The specialist approves without ever opening the Omnisend dashboard.

That covers the cognition side (querying data through the assistant) and the action side (executing campaigns through it). But there's a third function that changes everything.

Reasoning Capture. Every time a specialist plans a campaign through Claude connected to Omnisend's MCP, the platform doesn't just execute the request. It captures the reasoning chain. Which segments were considered, which angles were debated, what past performance was referenced, why one approach was chosen over another.

That reasoning is the fuel for the Campaign Ideation Engine. The more people interact through MCP, the smarter the platform gets. The more it learns about how real marketers think, the better its suggestions become. This is not a side effect. This is the strategic purpose of MCP integration.

How We Build It

MCP is an open protocol. The engineering lift is moderate: primarily exposing Omnisend's internal APIs as MCP-compatible tools and handling authentication and permissions. The protocol itself is mature, well-documented, and widely adopted.

The real expertise is in knowing which Omnisend operations to expose for maximum specialist value:

  • Read operations: Query segment composition, pull campaign performance, access A/B test results, retrieve contact behavioral profiles, compare metrics across time periods and segments.
  • Write operations: Create campaigns, generate segments from descriptions, schedule sends, apply offer logic, draft email content, set up automation triggers.
  • Reasoning capture: Log the conversation context surrounding every action: what the specialist asked before creating a campaign, what data they reviewed, what alternatives they considered. This becomes structured input to the Campaign Ideation Engine's learning loop.
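A hedged sketch of what that tool surface might look like as a registry. Tool names, parameter schemas, and the read/write split here are hypothetical, chosen to mirror the three categories above; they are not Omnisend's actual API.

```python
# Hypothetical registry of MCP tools mirroring the read/write/capture split.
# All tool names and parameter schemas are illustrative assumptions.

OMNISEND_MCP_TOOLS = {
    # Read operations: the assistant queries platform data.
    "get_segment_performance": {
        "kind": "read",
        "params": {"segment_id": "string", "date_range": "string"},
        "description": "Pull open, click, and revenue metrics for a segment.",
    },
    "compare_offer_performance": {
        "kind": "read",
        "params": {"segment_id": "string", "offer_types": "list[string]"},
        "description": "Compare incentive types (e.g. guarantee vs. discount).",
    },
    # Write operations: the assistant executes campaigns.
    "create_campaign": {
        "kind": "write",
        "params": {"segment_id": "string", "angle": "string",
                   "offer": "string", "send_at": "datetime"},
        "description": "Build a campaign, apply a segment, set the schedule.",
    },
    # Reasoning capture happens around every call: the conversation context
    # preceding the call is logged as structured input for the ideation engine.
}

def tools_by_kind(kind: str) -> list[str]:
    """List registered tool names for one category ('read' or 'write')."""
    return [name for name, spec in OMNISEND_MCP_TOOLS.items()
            if spec["kind"] == kind]
```

Getting these definitions right is the hard part the text refers to: the specialist's natural language ("how did guarantee offers compare to discounts?") has to map cleanly onto one of these tools and its parameters.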

The domain knowledge matters more than the engineering. We've worked extensively with MCP, and we understand the protocol's capabilities, its auth model, and where implementation typically breaks down. The hard part is getting the tool definitions right so that the specialist's natural language maps cleanly to Omnisend's operations.

Klaviyo's MCP Is Read-Only. Omnisend's Can Read, Write, and Learn.

Klaviyo already has an MCP server. But Klaviyo's implementation is a read layer: AI assistants can pull data from Klaviyo. Query segments, retrieve campaign results, access contact information. Read-only.

Klaviyo's MCP server announcement shows they're already moving on this. Read the announcement →

We're proposing bidirectional, read and write, with reasoning capture. Those are fundamentally different products. Klaviyo built MCP to keep pace with the ecosystem. Omnisend can build MCP to capture value from the ecosystem.

Feasibility

Criteria Score Notes
Impact ⭐⭐⭐⭐ Future-proofs the platform, meets emerging user expectations, and creates the data pipeline that feeds the Campaign Ideation Engine's intelligence.
Technical Feasibility ⭐⭐⭐⭐⭐ MCP is mature and well-documented. Implementation is API exposure and auth. Can ship independently of other Pillar 2 components.
Resources Required Low-Medium 1–2 engineers, 2–3 months.
Long-term Sustainability ⭐⭐⭐⭐⭐ MCP adoption is accelerating. Being an early, full-featured integration builds user habits that persist.
Fit with Agency ICP ⭐⭐⭐⭐⭐ Power users and agencies are the first to adopt AI-native workflows. MCP becomes their primary interface.

Component 06

Content Hub: The Switching Cost That Appreciates Every Month

The most valuable thing Omnisend can own is not data, not features, not even the AI. It's the accumulated marketing intelligence that builds up inside the platform over months of use. It can't be exported as a CSV. It can't be migrated to another platform. It stays.

Every component in Pillar 2 produces outputs: campaign suggestions, segment insights, performance analyses, generated emails. Users will take those outputs and refine them. They'll adjust campaign angles for brand voice. They'll add context about an upcoming product launch. They'll note that a specific segment responds better to long-form storytelling than punchy promo copy. They'll build on the system's suggestions with their own expertise.

Where does that refinement live?

Right now: Google Docs. Notion. Slack threads. The specialist's memory. Outside the platform. Lost to Omnisend. Another information leak, the same one we identified in Campaign Ideation, but for accumulated knowledge rather than strategic reasoning.

What to build: A brand intelligence repository that stores voice guidelines, campaign performance history, specialist refinements, and accumulated marketing knowledge, feeding every other Pillar 2 component and creating switching costs that grow monthly.

Content Hub is not a day-one feature. It emerges naturally as the other Pillar 2 components are used, the place where accumulated marketing intelligence collects. An internal workspace holding everything inside Omnisend:

  • Content Strategy: a living document maintaining positioning, voice guidelines, seasonal planning, audience notes. Updated as the brand evolves.
  • Campaign Idea Bank: structured database of campaign concepts with themes, target segments, performance data, and sources (system-suggested vs. specialist-created). Operates like a Notion database, not a spreadsheet.
  • Story Bank: narrative assets the brand draws from (customer stories, product origin narratives, educational themes). Each tagged by tone, audience fit, seasonal relevance, and past performance when used.
  • Performance Learnings: interpreted insights, not raw stats. Not "Campaign X had 4.2% click rate" but "Educational content about product care outperforms promotional by 25–35% among repeat buyers in Q1. Hypothesis: post-holiday buyers are in maintenance mode, not acquisition mode."

Every other Pillar 2 component becomes dramatically more effective when it has access to this context. Without Content Hub, the AI suggestions are generic, drawn from aggregate patterns. With it, they incorporate the brand's specific voice, proven angles, and accumulated learnings. The difference between "send an educational email" and "send a 'How It's Made' story using your ceramic workshop narrative, which drove 3.2x engagement among design enthusiasts last February."

Traditional switching costs fade over time: teams adjust, workflows rebuild, and the pain of migration is forgotten within six months.

Content Hub switching costs appreciate. Every month adds intelligence that makes the platform more valuable and departure more costly. At month 1, losing the Hub is inconvenient. At month 12, it's painful. At month 24, it's devastating. The brand would be abandoning every campaign angle tested, every segment insight discovered, every performance pattern identified. That's not workflow disruption. That's institutional memory loss.

Feasibility

Criteria Score Notes
Impact ⭐⭐⭐⭐ Transforms platform stickiness and dramatically improves AI suggestion quality. Value compounds over time rather than being immediate.
Technical Feasibility ⭐⭐⭐⭐⭐ Structured content management. Rich text editor, database tables, tagging. No novel engineering required.
Resources Required Low-Medium 2–3 engineers, 2–3 months for v1. Can ship as a basic version early and expand based on usage patterns.
Long-term Sustainability ⭐⭐⭐⭐⭐ Every month adds intelligence that makes the platform more valuable. Switching costs appreciate rather than fade.
Fit with Agency ICP ⭐⭐⭐⭐⭐ Agencies manage multiple brands. A centralized intelligence hub per client is operationally transformative.

The Full Picture

The Compounding System: Five Components, One Flywheel

What the System Looks Like When It's Running

Before analyzing the business mechanics, here is what changes when all of Pillar 2 is operational.

Today's workflow: A specialist opens Omnisend. They see contacts and basic segments. They manually decide who to email, what to say, what offer to include. They build the email in the template editor. They send it. They pull a report. They paste the data into ChatGPT to figure out what worked. They repeat this for every client, every week.

Pillar 2 workflow: The specialist opens Omnisend (or opens Claude connected to Omnisend through MCP). The platform has already surfaced: "3 new micro-segments detected. Campaign Ideation recommends a 'How We Source Our Materials' angle for eco-conscious researchers, as this theme outperformed promotional campaigns by 40% last quarter. Promotions Engine suggests guarantee messaging, not a discount, based on this segment's return-policy browsing behavior. Draft email generated and ready for review." The specialist reviews, adjusts the tone, approves, and sends, in 30 minutes instead of 6 hours. And the system captures why they made the adjustments they made, so next time it gets closer.

Why the Components Cannot Be Separated

Each component solves a specific problem. But the reason this works as a strategy, not just a feature set, is that each component creates the conditions for the others to deliver more value.

  • Without micro-segmentation → the ideation engine has no one specific to target. It can suggest "send an educational email," but it can't say "send it to the 284 return-policy researchers with this specific angle addressing their specific hesitation."
  • Without campaign ideation → micro-segments exist, but agencies are left staring at 200 segments wondering what to send each one. The analytical power is there. The strategic direction is missing. The specialist is overwhelmed, not empowered.
  • Without the promotions engine → campaigns go out with the same blanket discount to every segment. Margin gets destroyed. The granularity of micro-segmentation is wasted because the incentive is still one-size-fits-all.
  • Without AI email generation → micro-segmentation creates 200x more production work. Agencies can't keep up. The insight exists but the execution bottleneck prevents it from reaching the inbox.
  • Without MCP → everything requires manual dashboard interaction. The specialist who uses Claude for every other platform has to context-switch into Omnisend's traditional UI. More critically, the strategic reasoning that happens during campaign planning is never captured by the platform. The intelligence leak continues.

The system is not five features. It is one flywheel with five components.

The Flywheel Mechanics: Three Dimensions of Compounding

The compounding happens across three dimensions that operate on different timescales and create different types of competitive advantage.

Dimension 01

Individual Merchant Intelligence (compounds from month 1)

Every campaign sent through the system generates performance data that feeds back into every component. Micro-segments get refined, and contacts move between segments, as new behavioral data flows in. The ideation engine learns which themes resonate with which segments for this specific brand. The promotions engine learns which incentive types convert which behavioral patterns for this specific audience. The email generator improves its understanding of what "on-brand" looks like for this specific merchant.

At month 1, the system's suggestions are based on general patterns. At month 6, they incorporate the brand's specific history. At month 12, the system knows this brand's audience better than a new specialist would after weeks of onboarding. That accumulated intelligence is what makes leaving the platform increasingly expensive, not because of contracts or migration pain, but because the intelligence is genuinely valuable and non-transferable.

Dimension 02

Cross-Merchant Intelligence (compounds from month 6+)

This is where the network effect begins. As hundreds, then thousands of merchants use the system, patterns emerge across the ecosystem. The ideation engine doesn't just know what works for one brand. It sees which content themes perform across verticals. "Educational behind-the-scenes content outperforms promotional by 30–40% across DTC brands in Q1." "Guarantee messaging converts return-policy researchers at 2x the rate of discount offers, regardless of vertical." "Story-driven campaigns targeting repeat buyers have 60% higher LTV impact than product-focused campaigns."

This is aggregate intelligence that no individual agency or brand could generate on their own. It is derived from the combined experience of thousands of merchants sending millions of campaigns through the system. And it is proprietary to Omnisend. It doesn't exist in ChatGPT, in Klaviyo's datasets, or anywhere else.

Dimension 03

Market Intelligence (compounds from year 2+)

At sufficient scale, the system sees how the market itself is evolving. Which content themes are gaining traction across the ecosystem. Which angles are saturating and losing effectiveness. Where the next untapped narrative opportunities are. What seasonal patterns are shifting year-over-year.

This is intelligence Omnisend can surface to merchants ("your competitors' audiences are responding strongly to sustainability messaging this quarter"), publish as industry reports (establishing thought leadership and authority), and use internally to inform product decisions. The platform evolves from a tool that sends emails to the authoritative source on what works in e-commerce email marketing.

How This Maps to Omnisend's Customer Journey

Every SaaS platform has a user journey with specific drop-off points. Pillar 2 addresses the most critical ones.

Stage 01

Evaluation: "Why Omnisend over Klaviyo?"

Today, the honest answer is: similar features, slightly cheaper, better support. That is a weak position. With Pillar 2, the answer becomes: "Omnisend is the only platform that identifies customer intent from behavioral signals, suggests what campaigns to run, optimizes offers per segment, and generates the emails for you. Klaviyo predicts when to send. Omnisend tells you what to send, to whom, why, and produces the campaign."

That is a differentiation story the sales team, the partnership team, and agencies can all articulate. It is specific enough to be testable, "connect your Shopify store and see what micro-segments the system discovers in your data," and bold enough to shift the perception from "the Klaviyo alternative" to "the platform that actually works for you."

Stage 02

Onboarding: "I signed up. Now what?"

The biggest early churn driver in any ESP is the blank canvas problem. A new user connects their Shopify store, imports contacts, and stares at an empty dashboard wondering what to do.

With Pillar 2, the moment a merchant connects their Shopify data, the micro-segmentation engine begins analyzing behavioral signals. Within hours, the system surfaces: "We've identified 14 behavioral segments in your customer base. Here are the top 3 by potential revenue impact, with recommended campaign approaches for each." The user sees immediate, personalized value before they have done any manual work. That is a fundamentally different onboarding experience, one that demonstrates the platform's intelligence from the first interaction.

Stage 03

Daily Usage: "How do I fill my content calendar this week?"

This is where agencies spend the most time and where Pillar 2 delivers the most operational value. The campaign ideation engine replaces the weekly "what should we send?" cycle with system-generated recommendations backed by data. The promotions engine replaces "should we discount?" with segment-specific incentive logic. The email generator replaces hours of template customization with production-ready drafts.

The cumulative effect: a specialist who currently manages 5–8 clients can manage 12–15 with the same effort. That is not a marginal improvement. That is a structural change to the agency's unit economics.

Stage 04

Value Realization: "Is this platform actually working?"

The proof problem is Omnisend's most critical retention challenge. Agencies need to demonstrate ROI to clients. Brands need to justify the subscription to their CFO.

With Pillar 2, the proof becomes granular and specific. Instead of "attributed revenue" (which everyone knows is inflated), the report says: "We identified 847 return-policy researchers. We targeted them with guarantee messaging instead of a discount. Conversion was 34% above baseline. Margin saved: $12,400 this month."

That is a story a brand CEO believes. It is specific, falsifiable, and describes an action-to-outcome chain they can follow.

The promotions engine adds a dimension no competitor can report on: margin recovery. "By matching incentives to behavioral intent, we reduced blanket discounting by 40%. $47,000 in annual margin recovered." That number speaks to the CFO directly, in their language, on their terms.

Stage 05

Retention: "Should we switch to Klaviyo?"

Every agency and brand periodically evaluates alternatives.

Without Pillar 2, the evaluation is about features and price. Klaviyo has more features. Someone else is cheaper. Omnisend loses on both axes.

With Pillar 2, the evaluation has to account for accumulated intelligence. Switching means losing months of learned micro-segments, proven campaign angles, optimized incentive mappings, and the system's accumulated understanding of this specific brand's audience. That is not a spreadsheet comparison. That is institutional knowledge loss. The longer the brand has been on the platform, the more painful the switch becomes, not because of lock-in tricks, but because the intelligence is genuinely valuable and non-transferable.

Competitive Positioning: What Nobody Else Has Built

The ESP market is crowded. But when mapping what each competitor is actually building, a clear gap emerges.

  • Klaviyo: strongest data layer and best predictive analytics (CLV, churn risk, next-order-date). AI features are execution-layer: subject line generators, flow builders, basic segmentation. They predict when to send. They do not tell the marketer what to send or why. MCP integration is read-only. No behavioral micro-segmentation, no campaign ideation, no incentive optimization, no email generation. They have the foundation but have not built the intelligence layer.
  • Sendlane: strong anonymous visitor tracking (Beacon). AI features are basic. No micro-segmentation, no campaign intelligence. Smaller team, narrower resources. Competing on support and mid-market positioning, not AI capability.
  • Mailchimp: part of Intuit. Moving slowly. AI features are generic. Enterprise bureaucracy limits innovation speed. Iterating on the existing paradigm, not building toward a new one.
  • ActiveCampaign: CRM-first, email-second. AI features focused on deal management and CRM automation. Different strategic trajectory entirely.
  • New entrants (Hyros, LTV.ai, Alvas): point solutions. Each solves one piece (attribution, LTV prediction, specific analytics) but lacks the sending infrastructure, template system, automation engine, and scale. Features, not platforms.

The gap: No competitor is building an integrated intelligence system. Some have better data. Some have isolated AI features. None have connected behavioral data → intent-based segmentation → campaign intelligence → incentive optimization → email generation → execution into a single compounding flywheel.

That integration is the moat, not any individual component.

The Impact: Omnisend, Agencies, Brands, Consumers

The business impact cascades through every layer.

  • For Omnisend: Premium pricing justification, as intelligence features command higher ARPU than commodity email sending. Structural churn reduction, as accumulated intelligence creates appreciating switching costs that get stronger each month. Product-led growth hooks: free email generator, free micro-segment discovery as top-of-funnel acquisition tools. Competitive narrative that can be articulated in a single sentence: "We don't just send. We think."
  • For agencies: More clients per specialist (5–8 → 12–15 with same headcount). Higher-value retainers, because provable ROI justifies premium pricing to brands. Defensible positioning against both other agencies AND against brands bringing email in-house: "We deliver intelligence-driven campaigns, not just email execution." The agency becomes the strategist who interprets the intelligence, not the executor who pushes buttons.
  • For brands: Higher email revenue (20–30% click-through improvement, 15–25% conversion lift from micro-segmentation alone). Lower discount costs ($450K–$600K recovered annual margin for a $10M brand). Better customer experience with relevant, intent-matched communications instead of blast-and-pray. Reduced dependency on paid acquisition as the owned email channel becomes more productive.
  • For end consumers: Fewer irrelevant emails. Recommendations that address their actual hesitations. The customer worried about returns gets guarantee information instead of a pressured discount. Better purchase decisions, higher post-purchase satisfaction, fewer returns, stronger brand relationships.

Every layer benefits. And every layer's benefit reinforces the one above it. Satisfied consumers improve brand metrics, which improves agency reports, which improves Omnisend retention. The value flows down and the proof flows up.

Why This Starts Now

Every month Omnisend runs this system is a month of compounding intelligence: behavioral patterns learned, campaign performance accumulated, incentive effectiveness mapped, cross-merchant insights generated. That intelligence cannot be replicated backward.

Every month the system doesn't run is a month of strategic reasoning permanently lost to ChatGPT conversations and Google Docs. A month of behavioral data captured but not interpreted. A month where competitors could be building toward the same goal.

The compounding starts on day one. So does the cost of waiting.

The Intelligence Layer Is Ready
See the micro-segmentation engine in action

We built a working prototype. Explore it at microsegments.ai, or book a call to discuss building this for Omnisend.

Continue to The Hands → Book a Call