Google is expanding how advertisers can test performance across campaign types.
With Campaign Mix Experiments, Google Ads is introducing a new framework that lets advertisers test combinations of campaign types (Search, Performance Max, Shopping, Demand Gen, Video, and App) inside a single, unified experiment.
That may sound incremental, but the implication is bigger: Google is explicitly acknowledging that performance is not driven by one campaign type in isolation. It’s driven by the mix, which should convince advertisers once and for all to stop over-weighting campaigns like Search, especially brand Search.
At a high level, this beta lets advertisers test different configurations of campaign types, budgets, and settings against one another, using a structured experimental framework.
Advertisers can compare different combinations of campaign types, budget allocations, and settings head to head.
The testing use cases are broad, from introducing a new campaign type to shifting budget between existing campaigns.
This builds on the visibility Google started introducing with Performance Max channel reporting and expands it into a more formal testing model.
The most important shift here isn’t the mechanics of the experiment. It’s the framing.
Google is effectively saying: performance isn’t about choosing the “best” campaign type; it’s about finding the right mix for your business.
That’s an important acknowledgment, especially as more accounts rely on combinations of Search, PMax, Demand Gen, and YouTube to drive results.
The primary benefit of Campaign Mix Experiments is clarity around impact.
Specifically, this framework can help advertisers isolate how much each campaign type actually contributes to overall results.
Google has gradually given advertisers more visibility into cross-campaign dynamics, and this is a logical next step. Instead of guessing whether Demand Gen is feeding PMax performance, or whether Search is being over-weighted (it probably is), you can test those questions directly.
We’re excited to use this in real accounts, especially where there’s uncertainty around campaign mix decisions.
One clear example: accounts running Performance Max that are seeing quality issues, such as spam leads. Campaign Mix Experiments create an opportunity to test whether introducing, or shifting budget toward, Demand Gen materially changes outcomes, using a controlled setup rather than a gut call. (This is also where I note that you absolutely need to give Performance Max guardrails to improve lead quality and conversion value, whether or not you test this beta.)
That’s where this tool has real value: helping make structural decisions with data instead of opinion.
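To make “controlled setup” concrete, here’s a minimal, hypothetical sketch of how you might judge whether the treatment arm of an experiment actually improved lead quality once it finishes. The valid-lead framing, the helper function, and every number are assumptions for illustration; nothing here comes from Google’s tooling or reporting.

```python
# Hypothetical sketch: compare valid-lead rates between an experiment's
# control arm (existing PMax-heavy mix) and treatment arm (budget shifted
# toward Demand Gen) using a two-proportion z-test. All figures are made up.
from math import sqrt, erf

def two_proportion_z_test(x1: int, n1: int, x2: int, n2: int):
    """Return (z, two-sided p-value) for H0: the two rates are equal."""
    p1, p2 = x1 / n1, x2 / n2
    p_pool = (x1 + x2) / (n1 + n2)                         # pooled rate under H0
    se = sqrt(p_pool * (1 - p_pool) * (1 / n1 + 1 / n2))   # pooled standard error
    z = (p2 - p1) / se
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))  # two-sided normal tail
    return z, p_value

# Hypothetical counts: valid (non-spam) leads out of total leads per arm.
control_valid, control_total = 312, 1000      # control: current mix
treatment_valid, treatment_total = 371, 1000  # treatment: + Demand Gen budget

z, p = two_proportion_z_test(control_valid, control_total,
                             treatment_valid, treatment_total)
print(f"z = {z:.2f}, p = {p:.4f}")  # a small p suggests a real quality lift
```

The point isn’t this specific test; it’s that an experiment arm gives you two comparable samples, so “did lead quality improve?” becomes a question you can answer statistically rather than by gut feel.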
It’s important to be clear about the scope.
Campaign Mix Experiments only evaluate campaign types within Google Ads. They do not account for any external channels (e.g. Meta) you’re also running.
That doesn’t limit the tool’s usefulness for Google-specific decisions, but it does mean it’s only one piece of the puzzle. If you want to understand total cross-channel impact at a granular level, you still need something like media mix modeling (MMM).
This tool helps optimize within Google, not across your entire media mix.
Could this change how advertisers value their channels? Potentially, yes, within the Google ecosystem.
While it’s more accurate to say “campaign types” than “channels,” this framework could help advertisers reassess the relative value of lower-funnel and upper-funnel efforts on Google.
For example, an advertiser may find that Demand Gen contributes more incremental value than expected compared to Performance Max, or vice versa. That kind of insight is difficult to isolate today without disruptive testing.
While the framework supports multiple test types, the most compelling insight for us is still budget allocation.
Specifically, we plan to use the tool to understand where shifting budget between campaign types genuinely improves results.
Those are the decisions that materially affect accounts, and they’re where this beta could have the biggest payoff.
There are a few important realities to frame your expectations. Remember: these experiments need meaningful runtime to produce reliable results, and their scope is limited to campaign types within Google Ads.
Because of that runtime and scope, this isn’t a tool to use casually. Tests need to be intentional, well-framed, and aligned with meaningful business questions.
Campaign Mix Experiments are a promising step forward.
They won’t replace deeper cross-channel analysis, nor will they answer every attribution question. Still, they do offer a more structured way to evaluate how different Google campaign types work together.
Used thoughtfully, this beta can help advertisers move away from “which campaign is best?” and toward the more useful question of what mix of Google campaigns actually drives the best business outcomes.