Agents are finally bringing analytics and activation together

Soumyadeb Mitra
Founder and CEO of RudderStack
6 min read
May 5, 2026

The analytics-activation split was always a bad arrangement
For as long as I can remember, the customer data stack has been split down the middle. On one side: analytics. You open Amplitude or Mixpanel to run a funnel, study retention, find the stage where users drop off. If your data lives in the warehouse, you do it in Hex or Mode instead.
On the other side: activation. You jump over to your CDP (RudderStack, in our case), or straight into a CEP like Braze or MoEngage to build the audience and push it into a campaign. Two halves of the same job, sitting in two different tools, with two different UIs, and two different copies of your data.
This was never a good arrangement. It was just the one we had.
Two problems nobody fully solved
The split caused two distinct headaches, and they compounded each other:
The interface problem
Suppose you run a funnel in Amplitude and identify users who dropped off at checkout. Now you want to send them a re-engagement email through Braze. How? The Braze segment builder doesn't have the same primitives as Amplitude's funnel analysis. Some properties aren't there at all. You're not porting a segment across tools. Instead, you're rebuilding it from scratch, hoping you got it right.
The data consistency problem
Even if you rebuild the segment perfectly, the audience sizes won't match. Amplitude and Braze are each ingesting events through their own SDK. Two pipes, two slightly different definitions of the same event, two different versions of the truth.
You run the same funnel in both places and get different numbers, and nobody can tell you which one is right. We built RudderStack specifically to solve the consistency problem: one event stream, fanned out to every downstream tool, so the data is the same everywhere. It works. But it only works for the companies who use it. (I wish that were everyone. It's not.)
Warehouse-native was a step forward — not the last one
The warehouse-native CDP wave was the next leap.
The idea is clean: Your warehouse is the single source of truth. You run your funnel analytics on top of it. You activate from the same data via reverse ETL into your CEP and ad platforms. Data consistency, solved. There is exactly one definition of "user dropped off at checkout," and both the analytics tool and the activation tool are reading from it.
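To make that concrete, here is a minimal sketch of what "one definition, two readers" looks like in practice. The table and column names (`events`, `user_id`, `event_name`) are hypothetical; the point is that both the analytics query and the activation query are composed from the same warehouse-side definition, so their counts can't drift.

```python
# One shared definition of "user dropped off at checkout".
# Table and column names here are illustrative, not a real schema.
DROPPED_AT_CHECKOUT = """
SELECT user_id
FROM events
WHERE event_name = 'checkout_started'
  AND user_id NOT IN (
    SELECT user_id FROM events WHERE event_name = 'checkout_completed'
  )
"""

def analytics_query() -> str:
    # The funnel tool counts the cohort...
    return f"SELECT COUNT(*) FROM ({DROPPED_AT_CHECKOUT}) AS dropped"

def activation_query() -> str:
    # ...and the activation tool pulls the exact same cohort to sync out.
    return f"SELECT user_id FROM ({DROPPED_AT_CHECKOUT}) AS dropped"
```

Because both strings wrap the same subquery, a change to the definition propagates to analytics and activation at once.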
But the interface problem? Still there. You still had to translate between tools. The funnel you built in Hex didn't carry over to your CDP's audience builder. SQL was the universal language between them, but SQL is not a universal skill.
The moment a non-technical operator needed to slice a segment a slightly different way, you were back to either teaching them SQL or rebuilding the segment by hand in the CDP UI. We had traded one problem for a smaller version of the same problem.
Agents are the interface layer we were missing
This is what changes with agents-as-UI: Connect an agent to your warehouse (or BI tool) on one side and to your CDP/CEP on the other, both over MCP. Now the agent is the interface—not the BI dashboard, not the segment builder. A marketer types, "Find users who started checkout in the last 7 days, didn't complete it, and have opened at least two emails this month. Send them the abandoned cart sequence in Braze." The agent translates that into a query against the warehouse, pulls the audience, and pushes the segment through the CDP into Braze.
One sentence. No SQL. No tool-switching. No segment rebuild. The interface problem isn't solved by picking a winner between the BI tool's UI and the CDP's UI. It's solved by removing the UI as the bottleneck altogether.
Natural language is the interface that ports across every tool, because it doesn't depend on any single tool's primitives.
But this only works if the plumbing underneath is right. The agent needs a consistent, warehouse-native source of truth to query, and activation infrastructure it can invoke programmatically: an MCP-exposed CDP. Without both, you're back to either inconsistent data or manual porting. The agent doesn't fix a broken stack; it amplifies a coherent one.
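The flow above can be sketched in a few lines. Everything here is a hypothetical stand-in: `query_warehouse` and `push_segment` represent MCP-exposed tools, and the prompt-to-SQL translation is hard-coded where a real agent would use an LLM. It shows the shape of the handoff, not a real implementation.

```python
def query_warehouse(sql: str) -> list[str]:
    # Stand-in for an MCP tool that runs SQL against the warehouse
    # and returns matching user IDs. Result is faked for the sketch.
    return ["user_1", "user_2"]

def push_segment(name: str, user_ids: list[str], destination: str) -> dict:
    # Stand-in for an MCP tool that creates a segment in the CDP
    # and syncs it to a downstream destination (e.g. Braze).
    return {"segment": name, "size": len(user_ids), "destination": destination}

def handle_prompt(prompt: str) -> dict:
    # A real agent would translate the marketer's sentence into SQL
    # and tool calls; here the translation is hard-coded.
    sql = """
    SELECT user_id FROM events
    WHERE event_name = 'checkout_started'
      AND event_time > CURRENT_DATE - 7
      AND user_id NOT IN (
        SELECT user_id FROM events WHERE event_name = 'checkout_completed'
      )
    """
    audience = query_warehouse(sql)
    return push_segment("abandoned_cart", audience, destination="Braze")

handle_prompt("Send checkout drop-offs the abandoned cart sequence in Braze")
# → {"segment": "abandoned_cart", "size": 2, "destination": "Braze"}
```

The marketer never sees the SQL or the segment builder; the agent owns the translation at both ends.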
The next step: Agents that don't wait to be asked
The version above, where a marketer types a prompt and an agent acts, is the obvious near-term win. But it's still operator-driven. A human notices a problem, frames a query, the agent executes. The next step is the agent doing the noticing. Watching the funnel for anomalies. Detecting that a cohort is regressing. Proposing a segment and a treatment. Running the activation. Closing the loop by measuring whether it worked, and feeding that back into the next decision.
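The observe-detect-act loop could be sketched as follows. Every function is a hypothetical stand-in; a real system would back these with warehouse queries and MCP tool calls, and would close the loop by measuring lift.

```python
def observe_funnel() -> dict:
    # Stand-in: pull current stage-by-stage conversion rates
    # from the warehouse. Values are faked for the sketch.
    return {"checkout": 0.42, "payment": 0.18}

def detect_anomaly(rates: dict, baseline: dict, threshold: float = 0.2) -> list[str]:
    # Flag stages whose conversion dropped more than `threshold`
    # (relative) versus the baseline.
    return [stage for stage, rate in rates.items()
            if baseline[stage] - rate > threshold * baseline[stage]]

def act(stage: str) -> str:
    # Stand-in: propose a segment and treatment, run the activation.
    return f"launched re-engagement campaign for {stage} drop-off"

def closed_loop(baseline: dict) -> list[str]:
    rates = observe_funnel()
    actions = [act(stage) for stage in detect_anomaly(rates, baseline)]
    # A real loop would then measure whether each action worked
    # and feed the result into the next pass.
    return actions

# With a baseline of {"checkout": 0.45, "payment": 0.30}, only "payment"
# has regressed more than 20% relative, so only it triggers an action.
```

The human's role shifts from framing queries to setting the baseline and the threshold, i.e. deciding what the system should do on their behalf.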
That's the holy grail: a closed-loop decision system where the analytics-to-activation cycle compresses from days to minutes, and most of it happens without anyone touching a UI at all. The constraint that's held this back has always been the handoff between the “observe” step (analytics) and the “act” step (activation).
Different tools. Different data. Different people. Agents collapse that handoff. And once it collapses, the question stops being "which dashboard do I look at" and starts being "what should the system be doing on my behalf."
That's the conversation the next few years of customer data infrastructure are going to be about. Not which analytics tool or CDP wins, but whether the stack underneath is coherent enough for agents to operate on it.