There is a better way to measure incrementality in market mix, writes Mutinex CEO and co-founder Henry Innis.

When it comes to efficiency in advertising, marketers have placed their faith in “lift testing” for a long time. And for good reason. In isolation, lift testing has certainly been the most reliable way to find out how much incremental impact media spend delivers on a channel-by-channel basis, helping marketers optimise their channel spend and uncover insights about creative. Lift testing (when conducted well) is a great tool in the marketing tool belt.

But lift testing usually means holding back 10-20% of a media budget from active campaign spend. And it’s here that marketers face a trade-off: the valuable data points generated by excluding specific markets come at the cost of a portion of bottom-line campaign results, plus a certain level of disruption.

We help customers tread this line all the time. And we’ve come to understand something valuable about how marketing mix and lift testing can work together. Now we want to help marketers make the most of lift testing by using the results to make their models work harder and smarter. It all comes down to how a model processes information.

The issue of multicollinearity

Econometric modelling is a great partner for lift testing. Why? All econometric models face a problem called multicollinearity. Market mix models ingest a lot of data, and much of it moves together – TV, social and search spend tend to rise and fall with the same campaign calendar.

When inputs move in lockstep, the model can’t tell what’s vital – which sets of data are driving incremental growth and which are just kicking around. So we need to train the model by telling it what’s important. One way to do that is by conducting regular lift tests. Lift tests are great because they create variation in the baseline data. Models thrive on variation because it allows them to learn how each investment decision is likely to impact results.
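To make that concrete, here is a minimal sketch in Python (numpy only, with made-up weekly spend data – the channel names, numbers and eight-week dark period are all hypothetical) of how two channels that always flight together leave a regression unable to separate their effects, and how a go-dark period restores that separation.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 104  # two years of weekly observations

def channel_std_errors(x1, x2, true_betas=(2.0, 1.5), noise=5.0):
    """Fit OLS sales ~ tv + social and return the coefficient standard errors."""
    X = np.column_stack([np.ones(n), x1, x2])
    y = X @ np.array([10.0, *true_betas]) + rng.normal(0, noise, n)
    beta_hat, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta_hat
    sigma2 = resid @ resid / (n - X.shape[1])
    cov = sigma2 * np.linalg.inv(X.T @ X)
    return np.sqrt(np.diag(cov))[1:]  # standard errors for the two channels

# Case 1: TV and social always flight together, so they are nearly collinear.
base = rng.uniform(50, 100, n)
tv = base + rng.normal(0, 2, n)
social = base + rng.normal(0, 2, n)
print("collinear channels:", channel_std_errors(tv, social))

# Case 2: same channels, but social goes dark for eight weeks (a lift-test
# style holdout). The induced variation lets the model separate the effects.
social_dark = social.copy()
social_dark[40:48] = 0.0
print("with go-dark variation:", channel_std_errors(tv, social_dark))
```

Run it and the coefficient standard errors shrink sharply once the go-dark variation is introduced – the model suddenly has the information it needs to tell the two channels apart.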

But there’s more than one way to give a model information.

The first way to present this information to the model is to simply tell it the results of the lift test. Once the model has the results, it can incorporate them into the modelling going forward. “Aha!” says the model. “Now I know what happens if you turn your social media spend off!” This is called forcing a ‘prior’ (a prior, in this context, is just something we already know) and is the most common way that market mix models have traditionally incorporated lift testing.
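For illustration, here is a minimal sketch (numpy, hypothetical numbers throughout) of what forcing a prior can look like in a simple Bayesian regression: a lift-test estimate for social becomes a tight Gaussian prior on that channel’s coefficient before the observational data is fitted. This is a generic textbook conjugate update, not any particular vendor’s implementation.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 104
# Hypothetical weekly data where tv and social flight together (collinear),
# so the data alone cannot cleanly separate their effects.
base = rng.uniform(50, 100, n)
tv = base + rng.normal(0, 2, n)
social = base + rng.normal(0, 2, n)
X = np.column_stack([np.ones(n), tv, social])
y = X @ np.array([10.0, 2.0, 1.5]) + rng.normal(0, 5.0, n)
sigma2 = 25.0  # treat noise variance as known, keeping the update closed-form

def posterior(prior_mean, prior_var):
    """Conjugate Gaussian update: posterior precision = prior precision + X'X/sigma2."""
    S0_inv = np.diag(1.0 / prior_var)
    Sn = np.linalg.inv(S0_inv + X.T @ X / sigma2)
    mn = Sn @ (S0_inv @ prior_mean + X.T @ y / sigma2)
    return mn, np.sqrt(np.diag(Sn))

# Vague priors everywhere: the model is on its own with collinear data.
m_vague, sd_vague = posterior(np.zeros(3), np.full(3, 1e6))
# A lift test said social returns ~1.8 per dollar (+/- 0.3): force it as a prior.
m_lift, sd_lift = posterior(np.array([0.0, 0.0, 1.8]),
                            np.array([1e6, 1e6, 0.3**2]))
print("vague priors:", m_vague.round(2), "+/-", sd_vague.round(2))
print("lift prior  :", m_lift.round(2), "+/-", sd_lift.round(2))
```

With collinear inputs the vague-prior model can’t pin either coefficient down; the lift prior pins social, and the tv estimate sharpens as a side effect. The catch: the model now believes the lift-test number whether or not the rest of the data agrees.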

But models that use lift tests as ‘priors’ are not trained to recognise incrementality naturally in the data. They either aren’t granular enough (for example, they can’t see media go dark or go light) or aren’t smart enough (that is, able to understand and automatically recognise an experiment, and use it to inform the rest of the main model). In short, using lift tests as hard priors exposes a model that isn’t pulling its weight in a game of information. Your model isn’t looking for the answers, because it’s waiting for you to tell it the answers.

There is a better way to measure incrementality in market mix, and it’s something we’re working hard to unlock. The key is how the model works. If a market goes dark, the model should pick that up on its own. And it should see the result and use it as information. Treated this way, lift tests become just another signal in the data, and we unlock models driven by information rather than by heavily configured priors everywhere.
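As a minimal sketch of one building block (hypothetical data; the function name, threshold and window length are illustrative assumptions, not a description of any production system): scan each channel’s weekly spend for sustained near-zero runs and flag them as implicit experiments the model can learn from.

```python
import numpy as np

def find_dark_periods(spend, min_weeks=4, threshold=1.0):
    """Return (start, end) index pairs where spend stays below `threshold`
    for at least `min_weeks` consecutive weeks."""
    dark = spend < threshold
    periods, start = [], None
    for i, is_dark in enumerate(dark):
        if is_dark and start is None:
            start = i
        elif not is_dark and start is not None:
            if i - start >= min_weeks:
                periods.append((start, i))
            start = None
    if start is not None and len(spend) - start >= min_weeks:
        periods.append((start, len(spend)))
    return periods

# Example: social spend goes dark for weeks 40-47.
rng = np.random.default_rng(2)
social = rng.uniform(50, 100, 104)
social[40:48] = 0.0
print(find_dark_periods(social))  # -> [(40, 48)]
```

A model that knows where these windows sit can treat them as natural experiments, comparing outcomes inside and outside each window instead of waiting for a hand-fed prior.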

What does this mean for marketers?

If we look at the average marketing program in detail, there is actually already quite a bit of variation in marketing spend over the course of any given year – most companies are not consistently running campaigns everywhere, on every channel.

If your model is able to learn from these more subtle cues, your reliance on lift testing during major campaigns may be significantly reduced over time. That’s why we firmly believe the solution to this problem lies in the information a model can process by itself. In this scenario, your model is actively learning about the impacts of spend variation (including tests), rather than waiting for you to tell it the results.

There will likely always be a place for lift tests in the marketing tool belt. When conducted well, they provide a simple illustration of what’s effective, and they deliver results quickly. But reducing reliance on lift tests converted into hard priors means tests won’t need to run as often, and marketers can reap the rewards of less disruption. After all, who wants a lazy model?