
Most marketers believe they should be experimenting more, learning faster and moving quicker. But in reality, few teams have built the conditions that make this possible.

It’s not because teams lack ideas, tools or ambition. They are surrounded by advanced platforms with sophisticated automation capabilities and a wealth of data. But experimentation requires confidence. Confidence in data. Confidence in leadership support. Confidence that failure will be treated as a learning opportunity, not penalized.

For many teams, that confidence is still missing. So, they remain stuck in a state of “progress without transformation.”

Understanding the real barriers that block experimentation, from cultural fear and organizational design to broken measurement and data mistrust, makes it clear why so many teams struggle. It also reveals what high-performing teams do differently to create the systems, culture and measurement foundations where experimentation can thrive.

Why experimentation doesn’t happen inside most organizations

The gap between wanting to experiment and actually doing it exists because most companies treat experimentation like a plugin rather than the OS. The real blockers run deeper than tactics or tooling.

Experimentation is a culture problem, not a tactics problem

There has never been more technology available to marketers. Data is everywhere, and AI tools and automation make it possible to access and action that data faster than ever before. On paper, it’s the golden age for testing, but organizations are still playing by the same rules.

According to our 2026 Marketing Intelligence Report, organizations continue to invest in new platforms and technology, but aren’t changing the way they work. Most marketers sense this disconnect, with 82% giving their performance a B-.

Something in the operating model isn’t working, and it isn’t technical; it’s structural. Even when leadership asks for innovation, only 13% of marketers believe experimentation is built into their day-to-day. The underlying structures of the company are designed to prevent it.

Most teams work inside invisible guardrails that prioritize safety over innovation:

  • The budget trap: Budget plans are usually set during annual planning. If a marketer wants to test a new platform mid-quarter, they have to steal from a winning campaign to fund it. That’s a career risk most aren't willing to take.
  • The approval loop: Even small tests, like a new CTA on a landing page, often get bogged down in approval chains. When it takes three weeks to approve a 48-hour test, the data might not even be relevant anymore.
  • The performance review: If your yearly bonus is tied to hitting a static ROAS target, you will never run an experiment that might fail, even if that failure would have taught you how to double your revenue next year.

Organizations are asking marketers to be brave while working in systems that only reward compliance. It’s challenging to create and sustain a marketing experimentation culture when 56% of marketers don’t feel empowered to experiment.

The deeper blocker is fear driven by data mistrust

Organizational structures explain part of the problem, but what ultimately stops innovation is fear.

Our 2026 report shows that there’s a growing cultural fear and risk-aversion in the industry, particularly among Gen Z, who experience it four times as acutely. They’re entering a workforce where every click is tracked, but can they trust the metrics they’re being judged on?

Our report identified this mistrust in the data as the biggest barrier to experimentation. As the volume and complexity of data increase, it becomes harder for teams to find one source of truth. Messy data makes it impossible to isolate the variables that actually drive growth.

So, it’s no surprise that 41% of marketers aren’t comfortable challenging existing strategies. If you can’t confidently defend the why behind a failed test, it looks like you wasted budget. It’s safer to stick to a plan that produces mediocre results but is defensible.

Without confidence in data and outcomes, even the most innovative teams will default to self-preservation.

The psychological barriers holding teams back

Experimentation comes down to how people feel at work. Teams don’t experiment when they don’t feel safe, supported or informed enough to do so.

Teams struggle to ask “why”

The modern marketing organization is rich in data and poor in insight. According to our report, 72% of in-house marketers and 55% of agency marketers say they have plenty of data, but turning it into useful insight is the challenge.


There’s a surface-level understanding of which channels are performing, but very little confidence in what actually drives performance. Dashboards explain what happened, but rarely why it happened. Even 41% of in-house marketers admit that they don’t analyze the why or identify actions to take when they report results.

And without why, experimentation stalls.

Tom Roach, the VP of Brand Strategy at Jellyfish, captures this dilemma perfectly: “Data analysts are very good at reporting on what happened. But to interrogate why something happened requires additional skills… that’s less about data analysis and more about detective work. It requires strategy, curiosity and storytelling… a rare combination of skills.”

Roach is describing a core capability gap. Organizations have invested heavily in data collection and reporting tools, but far less in the skills required to interpret, connect and reason with that information.

Organizations resist change even when they know they need it

When there’s uncertainty in the market or pressure on performance, organizations often cling to what feels familiar. Risk tolerance shrinks, and teams double down on established processes and strategies. We’re seeing this happen even in the era of agentic AI, which is designed to handle the heavy lifting of data and free up teams to innovate.

As Paula Gomez, Global Data and Measurement Director at Johnson & Johnson, observes, “AI opens a window where we can start to do new things. The problem is that sometimes clients aren’t ready to change.”

This hesitation is emotional. Even when organizations recognize the need to evolve, they stay anchored to what feels safe.

In this state, experimentation is threatening because it forces leadership to confront uncertainty they aren’t prepared to manage. What if testing new creatives in our biggest market backfires? How do we know pulling budget from the ad channels we’ve always used will pay off? What if we get this wrong?

Organizations demand innovation but fear uncertainty. Over time, this tension trains people to protect what exists rather than pursue what’s possible.

Bad data makes even AI confidently wrong

AI is only as smart as the information it’s given. If you train AI with messy data, it amplifies the confusion.

Henry Arkell, Commercial Director at Funnel, warns that marketers are often too quick to trust AI: “Marketers feed AI platforms datasets for analysis, and get very self-assured answers back. But often, if you truly examine the data, it tells a very different story.”

AI systems are incredibly capable because they’re built to find patterns in massive amounts of information, processing millions of data points in seconds. Even when you feed them fragmented, inconsistent data, they can still generate outputs that appear authoritative.

Arkell adds, “AIs are fantastic, PhD-level minds, but… they require significant training and ongoing refinement to be truly useful.”

Let’s say you have two campaigns: one named "Winter_Sale" in your CRM and the other "WS_2026" in your ad platform. A marketer knows they’re the same campaign, but an AI looking for a pattern might not.

If the WS_2026 campaign has high costs in the ad platform, but all the revenue is being attributed to Winter_Sale in the CRM, AI sees one campaign that only spends money and another that makes money. It’ll then give you a confident recommendation to cut spending on WS_2026 because it thinks it’s failing.
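
To see why, here’s a minimal sketch in Python with made-up numbers. Rolled up by raw campaign name, the spend and the revenue never meet; only a name mapping reunites them:

    # Made-up numbers: the same campaign lives under two names, so a naive
    # per-name rollup splits its cost and revenue into separate "campaigns."
    ad_platform = {"WS_2026": {"cost": 5000, "revenue": 0}}   # spend lands here
    crm = {"Winter_Sale": {"cost": 0, "revenue": 12000}}      # revenue lands here

    # Without a mapping, each name looks like its own campaign:
    for name, stats in {**ad_platform, **crm}.items():
        print(name, stats)  # WS_2026 only spends; Winter_Sale only earns

    # A simple name map collapses the two records into the true picture:
    name_map = {"WS_2026": "winter_sale", "Winter_Sale": "winter_sale"}
    merged = {}
    for name, stats in {**ad_platform, **crm}.items():
        key = name_map[name]
        merged.setdefault(key, {"cost": 0, "revenue": 0})
        merged[key]["cost"] += stats["cost"]
        merged[key]["revenue"] += stats["revenue"]
    print(merged)  # {'winter_sale': {'cost': 5000, 'revenue': 12000}}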

Without clean, unified data foundations, AI can’t produce reliable outputs, and flawed inputs are magnified across downstream decisions.

The organizational blockers that kill marketing experimentation culture

Even when teams are motivated to test and learn, they often fail to do so because the organization is wired against it.

Leadership rewards safety over innovation

Marketers in our study have named the root problem that kills experimentation: “leaders who aren’t open to new ideas and company cultures that actively discourage risk-taking.”

Leadership behavior directly shapes whether experimentation can exist in an organization. What leaders reward or penalize will become the real operating model.


In many organizations, success is defined by hitting a predictable target rather than finding a new opportunity. When a leader spends an entire meeting scrutinizing a small dip in a test campaign rather than asking what was learned, they’re penalizing curiosity.

There’s a hidden cost for trying something new. If an experiment underperforms, it’s perceived as a personal failure. Teams internalize this logic and learn not to overstep boundaries.

As Tom Roach puts it, fear of failure pushes teams into safe, incremental behavior instead of learning-driven growth.

Siloed teams prevent shared accountability

Experimentation is a team sport, but most organizations are structured to play it alone. Building an experimentation culture requires cross-functional trust between marketing and finance, yet only 13% of marketers say they communicate well with finance.

Without strong marketing-finance alignment, it’s difficult for ideas to move past the testing phase. Marketing teams that operate in silos struggle to defend their results in a language that finance understands.

Say marketing tests a new high-intent channel and sees a 10% lift in lead quality. But because finance cares more about lead volume against a quarterly target, the experiment is deemed a failure due to higher costs. The two teams aren’t aligned on what success looks like.

Siloed structures make experimentation too politically risky, and no one wants to take sole accountability. If the risk of an experiment falls on one team while the reward is questioned by another, it’s just not safe enough for experimentation to happen.

Reporting cultures vs. actionable insight cultures

Many organizations confuse reporting with learning. Thijs Bongertman, Chief Data Officer at SPAIK, captures the problem precisely: “A lot of companies have a reporting culture instead of an actionable insight culture… what’s often missing is business acumen.”


Teams that have a reporting culture rely on dashboards that summarize what happened but rarely clarify why it happened, and they seldom define the next step.

There’s also often a skill level issue. Bongertman says, “The marketing team does not understand numbers, statistics or basic split testing, but remains adamant that they are in the driver’s seat.”

The lack of analytical literacy keeps teams fixated on surface-level numbers because they struggle to interpret results, challenge assumptions or design meaningful tests. Experimentation loses its purpose if insights are not acted upon.

Broken feedback loops and attribution blind spots

Privacy regulations, platform changes and the loss of third-party data have made it harder to observe and understand marketing performance. In the past, you could track a user from first click to purchase, but now privacy blocks mean platforms have to guess or model results for users they can no longer track.

The problem is that each platform uses its own proprietary logic to fill in these blanks, often leaving you with conflicting data sources. This fragmentation creates attribution blind spots.

Imagine a customer sees your ad on Instagram, then searches for the product on Google and finally buys. Meta will claim credit for that $100 sale because they saw the view, and Google will claim credit for the same $100 sale because they saw the click. Your reports show $200 in revenue, but in reality, you only made $100.
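
The arithmetic is easy to sketch in Python (hypothetical numbers): summing what each platform claims double-counts the one sale, while the store’s own order log holds the real figure:

    # Both platforms claim full credit for the same order.
    platform_claims = [
        {"platform": "Meta", "order_id": "A1", "claimed_revenue": 100},    # saw the view
        {"platform": "Google", "order_id": "A1", "claimed_revenue": 100},  # saw the click
    ]
    orders = {"A1": 100}  # what your store actually recorded

    reported = sum(c["claimed_revenue"] for c in platform_claims)
    actual = sum(orders.values())
    print(reported, actual)  # 200 100: platform reports double-count one sale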

What used to be a stable launch-measure-learn-adjust cycle is now fragmented across tools and platforms. You might see your total sales go up, but feel unsure about which campaigns caused it or whether customers would have bought from you anyway.

Without a reliable feedback loop, experimentation feels a lot like gambling. When you can't prove your marketing actually caused the result, the rational response is to stop taking risks.

The data and measurement crisis beneath it all

Data and measurement can either power experimentation or prevent it. Teams often fail when they can’t see, explain or trust what’s happening.

Weak data foundations make experimentation impossible

Marketing generates enormous volumes of data, yet most teams lack the visibility required to learn from it. Our 2026 report reveals that 68% of teams lack up-to-date cross-channel visibility, and 86% don’t have a clear signal through all the noise.

In other words, teams are flying blind. As one marketer in the report puts it, “We’re drowning in information, but what’s the insight? What’s the action that needs to be taken?”

Without visibility, it’s nearly impossible to observe outcomes consistently or determine each channel’s impact on overall performance. So many teams fall into the trap of producing “impressive” reports that look the part, but lack the depth to identify next steps.

The root problem is a data maturity gap: data foundations are not being built. Only 33% of in-house marketers invest in structured data and metadata — the infrastructure that makes measurement reliable.

Teams constantly run into these dilemmas:

  • Inconsistently defined metrics across tools: Teams don’t know which metrics to trust, or each team trusts a different metric.
  • Unverifiable reporting logic: Results are calculated in a black box or a messy spreadsheet that nobody can explain or audit.
  • Unreliable historical comparisons: You can’t tell if you’re actually doing better than last year or if the tracking just changed.

This lack of clarity shows up most when you attempt something new.

For example, your team might test a new incremental bidding strategy on YouTube. But because your data foundation can’t reconcile top-of-funnel views with bottom-of-funnel search spikes, the test shows a 0% ROI on paper. Without the infrastructure to see the assist, your team kills a winning strategy.

Weak foundations don’t support long-term learning or experimentation because you’re working without a single source of truth.

Data mistrust turns into fear of testing

For experimentation to work, failure has to feel safe and low-cost. But unstable measurement raises the stakes of every decision.

Inconsistent reporting and unclear attribution compound this fear because marketers lose the ability to explain why something happened. If you can’t justify your results with evidence, it’s natural to want to avoid any test that might lead to an unexplainable outcome.

In low data maturity environments, experimentation is just too risky. The organization enters a self-reinforcing loop: Data mistrust → fear of being wrong → fear of testing.

This dynamic only gets worse as marketing tools become more complex. The pressure to perform keeps rising, but teams are rarely given the time or the stable systems they need to keep up. The real cost of this fear is becoming average. It’s easy to default to best practices from LinkedIn or whatever the ad platform’s automated recommendations suggest. But if everyone is doing the same thing, you lose your competitive edge.

Until leadership invests in data maturity that lowers the emotional and operational cost of being wrong, a marketing experimentation culture won’t survive.


What high-performing teams do differently

Many teams talk about experimentation, but high-performing teams are built for it.

They invest in the structural foundations that support innovation and create a psychological environment that feels safe to experiment.

This is how they do it:

Use advanced analytics to replace guesswork

High-performing teams don’t experiment based on instincts; they experiment based on analysis-ready data. Their data is cleaned, standardized and trusted before any decisions are made.

Rather than betting everything on a single measurement model, these teams use a layered measurement framework to get the full picture. They triangulate their measurement so no single source is treated as the whole truth:

  • Attribution: Tracks the paths and touchpoints of the customer journey
  • Incrementality: Identifies causal lift by isolating the impact of your marketing efforts
  • Marketing mix modeling (MMM): Reveals long-term and cross-channel impact

Figure: How MMM, attribution and incrementality testing help marketers experiment with confidence.

But advanced measurement is only as strong as its data. Even the strongest models will produce faulty results if they’re fed fragmented or inconsistent data.

High-performing teams solve this problem at the foundation by fixing the data layer before they optimize the analytics.

Funnel’s marketing intelligence platform uses 600+ connectors to automatically pull data from your marketing sources into one data hub. It handles the heavy lifting, like normalizing metrics across platforms, applying transformations and managing schema changes, so your data is analysis-ready when you need it.

Attribution, incrementality and MMM all operate on the same reliable dataset. There are fewer disputes over which numbers to trust, so teams can move faster and make more confident business decisions.

Build strong alignment with finance and leadership

Experimentation is only successful when organizations agree on three things: how success is measured, how risk is evaluated and how learning is valued.

Vanity metrics like clicks and impressions may look great on a report, but they’re mostly just fluff. Seasoned teams focus on value metrics, like LTV, CAC and contribution margin, which are directly tied to growth, profitability and long-term value.

For example, instead of celebrating higher click-through rates on a campaign, smart teams evaluate whether the test improved lifetime value or reduced acquisition costs. This keeps experimentation grounded in business reality.

But successful collaboration requires shared visibility. Teams need to see the same data and trust the same numbers to stay aligned. Funnel makes this possible by unifying marketing data into a single source of truth that both marketing and finance can trust.

With built-in reporting tools, teams can visualize their data directly in Funnel Dashboards and share insights with leadership. Reports are automatically updated as new data comes in, so everyone is working with the latest numbers.

Treat dashboards as starting points for discovery, not reporting endpoints

For high-performing teams, dashboards are for discovery, not just documentation. The best teams don’t stop at the numbers. They use dashboards to:

  • Investigate anomalies that don’t immediately make sense
  • Spot trends and patterns that signal opportunity or risk
  • Identify performance gaps that require new tests

These habits create a discovery culture, where curiosity is part of the job and insights turn into action. But for discovery to be sustainable, teams need to be able to back every marketing decision with solid evidence.

Funnel supports a marketing experimentation culture by providing a transparent data lineage for every metric. Because Funnel preserves the connection between the raw data and the transformed output, teams can zoom in on any data point and trace it back to its original source. They can verify exactly which accounts and campaigns contributed to a specific number.

For instance, if your CPA suddenly spikes, you can drill down to see whether a single campaign in a specific region is skewing the average or whether a currency conversion error is inflating the costs. Having this granular visibility empowers teams to ask why without having to second-guess the math.

Run fast experimentation cycles because the cost of being wrong is low

When it takes longer to clean the data than it does to actually run the test, it’s too expensive to experiment. High-performing teams move quickly because they remove this friction and build a continuous learning loop: insight → hypothesis → test → evaluate → adjust → repeat.

This loop relies on a trustworthy data foundation. Data is organized into a clean taxonomy before it reaches the team, so reports make sense immediately, and analysis is repeatable.

Through standardized mapping, Funnel identifies common metrics across hundreds of platforms and then maps them to built-in fields. So your "Cost" from Facebook and "Spend" from Google are automatically combined into one "Total Cost" metric. Your KPIs are calculated using the same logic across every experiment.
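
Conceptually, that mapping works something like the Python sketch below (the field names and rows are invented for illustration, not Funnel’s actual schema):

    # Invented field names: map each platform's spend field onto one shared
    # metric so every KPI is computed with the same logic.
    FIELD_MAP = {"facebook": "Cost", "google": "Spend"}

    rows = [
        {"platform": "facebook", "Cost": 1200.0},
        {"platform": "google", "Spend": 800.0},
    ]

    def total_cost(rows):
        # Read each row's spend through the map, whatever the source calls it.
        return sum(row[FIELD_MAP[row["platform"]]] for row in rows)

    print(total_cost(rows))  # 2000.0, one "Total Cost" across both sources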

Teams can also create their own labels that apply across every channel. For example, if you want to see how retargeting performs globally, you can create a custom dimension that automatically groups campaigns from LinkedIn, TikTok and Meta under one "Retargeting" label.
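
Under the hood, a label like that behaves like a single rule applied across every channel, as in this hypothetical sketch (the naming convention is made up):

    # Made-up naming convention: one rule labels campaigns across channels.
    campaigns = [
        {"channel": "LinkedIn", "name": "Q2_rtg_decision_makers"},
        {"channel": "TikTok", "name": "retargeting_spring"},
        {"channel": "Meta", "name": "prospecting_lookalike"},
    ]

    def label(campaign):
        name = campaign["name"].lower()
        return "Retargeting" if "rtg" in name or "retargeting" in name else "Other"

    for c in campaigns:
        print(c["channel"], c["name"], "->", label(c))
    # LinkedIn and TikTok group under "Retargeting"; the Meta campaign does not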

Being wrong is cheaper when your data is always ready for analysis. You’re able to adjust quickly without cleaning up after every experiment.

Invest in structured data, metadata and automation

One of the biggest differences between teams that struggle to experiment and those that scale it is how they treat their data over time. High-performing teams don’t just think about what their data looks like today. They design it so it still makes sense months from now.

Their data is structured, meaning metrics are defined consistently across platforms, and every data point follows the same rules, no matter where it comes from. Metadata, like campaign type, region, funnel stage or product category, adds the context needed to interpret results.

Instead of spending hours adding up costs from ten different social media platforms to see if a spring sale campaign is working, these teams work with data that’s already standardized. They can switch from a global overview to a specific "US/Sneakers" view in seconds because the labels are already built in.

So when a spike happens, these teams see exactly which category or region drove the change instead of just viewing a higher total number.
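
As a rough illustration of why that slicing is instant (hypothetical rows and fields), once region and product live on every row as metadata, any view is just a filter:

    # Hypothetical rows: region and product are metadata on every data point,
    # so any slice is a filter rather than a manual spreadsheet exercise.
    rows = [
        {"region": "US", "product": "Sneakers", "cost": 700, "revenue": 2100},
        {"region": "US", "product": "Apparel", "cost": 300, "revenue": 500},
        {"region": "DE", "product": "Sneakers", "cost": 400, "revenue": 1200},
    ]

    def view(rows, **filters):
        match = [r for r in rows if all(r[k] == v for k, v in filters.items())]
        return {"cost": sum(r["cost"] for r in match),
                "revenue": sum(r["revenue"] for r in match)}

    print(view(rows))                                   # global overview
    print(view(rows, region="US", product="Sneakers"))  # the US/Sneakers slice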

They invest in automation to keep all of this together. By setting rules, like how to calculate profit or convert different currencies, they ensure historical data stays comparable over time, even when APIs change, platforms rename fields and schemas drift.
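
A currency rule, for instance, might look like this sketch (rates, fields and numbers are invented): one function applied identically to every row, past and present, so profit is always computed the same way:

    # Invented rates and fields: one rule converts currencies and computes
    # profit the same way for every row, keeping history comparable.
    FX_TO_USD = {"EUR": 1.08, "SEK": 0.095, "USD": 1.0}

    def normalize(row):
        rate = FX_TO_USD[row["currency"]]
        cost, revenue = row["cost"] * rate, row["revenue"] * rate
        return {**row, "cost_usd": round(cost, 2),
                "revenue_usd": round(revenue, 2),
                "profit_usd": round(revenue - cost, 2)}

    rows = [
        {"campaign": "spring_sale", "currency": "EUR", "cost": 500, "revenue": 1400},
        {"campaign": "spring_sale", "currency": "SEK", "cost": 3000, "revenue": 9000},
    ]
    print([normalize(r) for r in rows])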

If they’re comparing this quarter’s Meta retargeting performance to last year’s, structured data and metadata ensure they’re not accidentally comparing two different campaign definitions or cost calculations. They can trust that changes reflect performance.

With Funnel, marketers maintain historical comparability because transformations, mappings and definitions are applied consistently across time. Your year-over-year reports remain accurate, even if ad platforms have completely changed their data structures.

Teams can compare performance, train models and evaluate experiments without wondering whether the data underneath them changed.

Experimentation follows confidence

Experimentation doesn’t fail because marketers lack courage. It fails because the systems around them make curiosity expensive.

When data is fragmented, when measurement can’t be explained and when failure feels personal, the safest move is to repeat what worked yesterday — not because it’s right but because it’s defensible.

High-performing teams flip that equation. They lower the cost of being wrong. They invest in data foundations that make learning fast, outcomes explainable and decisions credible across marketing, finance and leadership.

When confidence replaces fear, experimentation stops being a risk and becomes the default. That’s what marketing intelligence is really about: creating the conditions where teams can move faster, learn continuously and make braver decisions because the data, the systems and the culture support them.

If your organization is serious about experimentation, start by fixing the foundations that make it safe to try.

Want to work smarter with your marketing data? Discover Funnel