Digital has become the heartbeat of modern marketing, and the number of data sources and platforms we use is growing at a considerable rate. One of the biggest challenges is that the data we use to measure and improve our marketing performance is siloed across numerous channels and tools.
In order to make the right decisions and ensure that we’re not exceeding our budgets, we need to centralize and clean this data.
This process is usually manual, time-consuming and prone to human error, but there are now solutions on the market that aim to solve certain parts of the problem.
We’ve ended up with modular analytics stacks where you can pair any modern cloud-based data warehouse with pretty much any visualization tool you can think of. It’s never been easier to do business intelligence well.
However, we still have some of the same challenges that we’ve faced in the past.
Challenge #1: Data is still siloed
Powerful data warehouses and better visualization tools did not solve this fundamental problem. You still need a way to collect data from all of your different platforms and consolidate it in one place. That isn’t easy to begin with, and the number of data sources, and their complexity, keeps growing.
In an attempt to solve this problem, a new generation of cloud-based ETL tools has appeared: data pipelines built for the cloud. They move raw data from point A to point B on an automated schedule, handling API limits and sometimes performing basic data cleaning.
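At its core, what these tools automate can be sketched in a few lines. The example below is a hypothetical, heavily simplified extract–clean–load loop: `fetch_page` stands in for a real platform API (stubbed here with made-up rows), and the scheduling, retries and rate-limit handling that real tools provide are omitted.

```python
def fetch_page(page):
    """Stub for a paginated platform API call; a real pipeline would
    hit a REST endpoint and handle auth, retries and rate limits."""
    data = {1: [{"clicks": "10", "cost": "2.50"}],
            2: [{"clicks": "4", "cost": "1.00"}]}
    return data.get(page, [])

def clean(row):
    # Basic cleaning: cast the string fields an API typically returns
    # into proper numeric types before loading.
    return {"clicks": int(row["clicks"]), "cost": float(row["cost"])}

def run_pipeline(destination):
    """Extract every page from the source, clean each row, and append
    the result to the destination (standing in for a warehouse table)."""
    page = 1
    while True:
        rows = fetch_page(page)
        if not rows:
            break
        destination.extend(clean(r) for r in rows)
        page += 1
    return destination

warehouse = run_pipeline([])
```

Even this toy version makes the point of the next challenge: the pipeline delivers typed raw rows, but nothing about them is business-ready yet.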
If you manage to get all your data into one place using these tools, you’re still stuck with the tremendous job of preparing it for analysis: making the data business-ready so that you can actually start extracting value from it.
Challenge #2: Preparing data for analysis
Making data business-ready requires technical resources and a deep understanding of each platform. You pretty much need a dedicated team equipped with SQL to clean, normalize, combine and aggregate all this data into a model that supports all of your sources and your unique business logic. Even data within the same category requires a lot of work to make it ready for analysis.
Let’s take a simple example from the digital marketing category. If you want to compare how much you’re spending across your marketing channels, you would need to normalize it into a single Cost metric. This isn’t as straightforward as you would think, since in Facebook that metric is called “Amount Spent”, in Google Ads it’s called “Cost”, in Twitter it’s “Spend”, and so on.
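A minimal sketch of that normalization step, using the field names mentioned above (the report rows themselves are made-up example values):

```python
# Map each platform's own name for spend onto one unified Cost metric.
SPEND_FIELDS = {
    "facebook": "Amount Spent",
    "google_ads": "Cost",
    "twitter": "Spend",
}

def normalize_cost(platform, row):
    """Return a row with a single 'cost' field, whatever the source calls it."""
    return {"platform": platform, "cost": float(row[SPEND_FIELDS[platform]])}

reports = [
    ("facebook", {"Amount Spent": "120.50"}),
    ("google_ads", {"Cost": "80.00"}),
    ("twitter", {"Spend": "45.25"}),
]
unified = [normalize_cost(p, r) for p, r in reports]
total = sum(r["cost"] for r in unified)  # 245.75
```

This looks trivial for three platforms, but the mapping has to be built and maintained for every source, every metric, and every time a platform renames or restructures its fields.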
Provided you have a fairly consistent naming convention for your advertising campaigns, you might also want to create new dimensions which would allow you to dig deeper into your metrics. For example, if you include the target market in your campaign names, you could extract that and map it with your tracking data to create this segmentation.
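As an illustration, assume a hypothetical naming convention like `brand_market_channel_objective` (e.g. `acme_uk_search_leads`, a made-up example, not from the article). The market token sits in a fixed position, so it can be split out into a new dimension:

```python
def extract_market(campaign_name, position=1, sep="_"):
    """Pull the market token out of a delimited campaign name.
    Returns None when the name doesn't follow the convention."""
    parts = campaign_name.split(sep)
    return parts[position] if len(parts) > position else None

campaigns = ["acme_uk_search_leads", "acme_de_social_awareness", "untagged"]
markets = [extract_market(c) for c in campaigns]  # ["uk", "de", None]
```

The `None` case is the real-world catch: the moment one team stops following the convention, the derived dimension silently degrades, which leads straight into the next challenge.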
Even if you manage to do all of this, maintaining the data model can be an even bigger challenge.
Challenge #3: Keeping up with changes
Let’s take a look at a common situation where many data teams end up.
On the left you have the source teams, where data is generated: the marketing team setting up new campaigns in an advertising platform, the customer success team entering data into a CRM system, or the sales team working on deals. Data formats change continuously and new sources are added to solve new use cases. These teams are fairly happy with their systems as long as they get the job done, but they have little incentive to make sure changes are reflected well in the data platform.
The data consumers are stuck waiting. They want to use data to better inform their decision making and strategy. They will always have new requirements, new questions and a need for fast feedback, but they will grow frustrated having to rely on a busy data engineering team.
The people responsible sit in the middle, spending most of their time cleaning and preparing the data while handling changes and new needs from both sides. There is very little time left over for more valuable work, like partnering closely with the business or performing deep-dive analytics.
Challenge #4: Different data consumer needs
A marketer might want to analyse aggregated data in a spreadsheet or a visualization tool, a data analyst might want to have an SQL interface and a data scientist might want the full granular data available in a parquet file.
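The point can be made concrete with standard-library Python: the same cleaned dataset served to two of the consumers above, an aggregated CSV for the spreadsheet user and a SQL interface over the granular rows for the analyst. (A Parquet export for the data scientist would work the same way via a library such as `pyarrow`; it is omitted here to keep the sketch dependency-free. All rows are made-up example values.)

```python
import csv
import io
import sqlite3

# One cleaned dataset: (platform, cost) rows.
rows = [("facebook", 120.50), ("google_ads", 80.00), ("facebook", 30.00)]

# Destination 1: aggregated CSV for a spreadsheet or visualization tool.
totals = {}
for platform, cost in rows:
    totals[platform] = totals.get(platform, 0.0) + cost
buf = io.StringIO()
writer = csv.writer(buf)
writer.writerow(["platform", "total_cost"])
for platform, total in sorted(totals.items()):
    writer.writerow([platform, total])

# Destination 2: granular rows behind a SQL interface for an analyst.
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE spend (platform TEXT, cost REAL)")
db.executemany("INSERT INTO spend VALUES (?, ?)", rows)
fb_total = db.execute(
    "SELECT SUM(cost) FROM spend WHERE platform = 'facebook'"
).fetchone()[0]  # 150.5
```

Note that both destinations are fed from the same `rows` list; the moment each consumer gets its own bespoke extraction logic instead, you have multiplied the maintenance burden described above.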
Exposing the data to consumers in their preferred formats and tools can end up being a massive undertaking. Some of the tools on the market today support only one use case or destination, which makes the whole solution rigid and difficult to change. This is exacerbated if your company decides to switch to another data warehouse solution altogether.
In this case you would end up building a new set of logic and having to repeat the process over and over again to support new destinations, on top of the previous three challenges outlined above. Switching out just one part of the stack is usually very expensive and puts strain on internal resources.
How can these challenges be overcome?
What you need to do is invest in a solution which takes siloed data from all of your marketing, advertising and sales platforms, and feeds it automatically to the locations of your choice.
The solution needs to be able to map and harmonize data in real time whilst preserving the full granularity of the raw data. Data pipelines struggle in this area, as you’re still left doing a lot of cumbersome cleaning and mapping in SQL, manually in spreadsheets, or with simple prebuilt templates.
This would free up your technical resources from tedious data collection and manipulation tasks to focus on higher-value activities whilst also cutting down on platform maintenance and the number of support tickets.
Whether you build your own solution or invest in something to help, make sure that cleaning and mapping are taken into consideration, since this is by far the most complex and time-consuming piece of the puzzle.