
Blogs

Innovating with Robyn

At Aryma Labs, we constantly try to push the frontier of what’s possible in Marketing Measurement and Attribution. In this regard, we have been working on a very interesting problem for a long time. What is this problem? In short: your MMM model should not just be good at prediction; it should also have good goodness of fit (inference), or vice versa. The question then arises: can we have a model that is good at both? We therefore decided to choose models based on three metrics – R-squared (goodness of fit), Decomp RSSD (business goodness of fit – see link in resources), and finally MAPE
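The three selection metrics can be sketched in a few lines. The Decomp RSSD below follows Robyn's published definition (distance between each channel's spend share and effect share); all variable names are illustrative, not from the post.

```python
import numpy as np

def r_squared(y, y_hat):
    # Goodness of fit: share of the variance in y explained by the model.
    ss_res = np.sum((y - y_hat) ** 2)
    ss_tot = np.sum((y - np.mean(y)) ** 2)
    return 1.0 - ss_res / ss_tot

def mape(y, y_hat):
    # Mean absolute percentage error: predictive accuracy.
    return np.mean(np.abs((y - y_hat) / y))

def decomp_rssd(spend_share, effect_share):
    # Robyn's "business goodness of fit": distance between each channel's
    # share of spend and its share of the modeled effect.
    s, e = np.asarray(spend_share), np.asarray(effect_share)
    return float(np.sqrt(np.sum((s - e) ** 2)))
```

A candidate model would be kept only if it scores acceptably on all three at once.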

Read More »
Why Difference in Difference (DID) Experimentation is the ideal way to validate your MMM model.

In my last few posts, I touched upon the following points (links to all in comments): why you should not use experiments to calibrate your MMM model; why you can’t use Geo experiments to fix the priors of your MMM model; and why you should instead use experiments only to validate your MMM model. All this begs the question: how should one ideally validate MMM models? 📌 What about a hold-out sample test? Some would opine that a simple hold-out sample test could also help you validate the MMM model. Not really. At best, a hold-out sample test may only prove the predictive ability of
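A minimal sketch of the DiD estimator the post refers to, assuming simple pre/post averages for a treated geo and a matched control (the function and all numbers are illustrative):

```python
import numpy as np

def did_estimate(treat_pre, treat_post, ctrl_pre, ctrl_post):
    # DiD lift = (treated post - treated pre) - (control post - control pre);
    # the control difference nets out the common trend shared by both geos.
    return (np.mean(treat_post) - np.mean(treat_pre)) - \
           (np.mean(ctrl_post) - np.mean(ctrl_pre))

# Made-up weekly sales: a geo where TV was switched on vs a matched control.
lift = did_estimate([100, 102], [120, 118], [98, 100], [104, 102])  # 14.0
```

The estimated lift would then be compared against the contribution the MMM attributes to the same channel over the same window.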

Read More »
Which technique provides for greater manipulation in MMM – Bayesian or Frequentist?

Hands down, Bayesian. How? Through priors and the posterior distribution. It is a well-known fact that one of the Achilles’ heels of the Bayesian technique is its subjective priors. In a small-data (relatively speaking) problem like MMM, one never has enough data for the evidence in the data to overwhelm the priors. This is a serious flaw in the technique. But this flaw can also be used in a nefarious way. I have seen Bayesian MMM analysts confidently specify the location and scale parameters, for example: ‘the contribution from YouTube should be 7%’ or ‘the ROI of Instagram ad spend should be 3.4’. Remember above that I mentioned
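The claim that a small MMM dataset cannot overwhelm a tight prior can be illustrated with a conjugate normal-normal update – a deliberate simplification of a real Bayesian MMM, with made-up numbers:

```python
import numpy as np

def posterior_mean(prior_mu, prior_sd, data, noise_sd):
    # Conjugate normal-normal update: the posterior mean is a
    # precision-weighted average of the prior mean and the sample mean.
    prior_prec = 1.0 / prior_sd ** 2
    data_prec = len(data) / noise_sd ** 2
    return (prior_prec * prior_mu + data_prec * np.mean(data)) / (prior_prec + data_prec)

weekly_roi = [1.0] * 8  # the data say the channel's ROI is about 1
confident_analyst = posterior_mean(3.0, 0.1, weekly_roi, 1.0)    # tight prior at 3
honest_analyst = posterior_mean(3.0, 100.0, weekly_roi, 1.0)     # near-flat prior
```

With only eight observations, the tight prior pins the answer near 3 regardless of what the data say, while the near-flat prior lets the posterior settle at the data's value of about 1.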

Read More »
Use Experimentation to validate your MMM models, not calibrate them.

I come across a lot of literature and talks on the internet claiming that one should, or can, calibrate their MMM models through Experimentation. I disagree. Why? Because calibration and validation are entirely different things in statistics. ICYMI, we wrote a detailed article on this subject (link in resources). But the TL;DR version is: calibration is a process where you try to improve the model fit by tweaking various knobs and levers. There are metrics that tell you how well you have calibrated your model. The primary goal of calibration, eventually, is to reach a ‘Final Model’. Validation, on the other hand, is a way to test your finalized model. Simple
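The distinction can be sketched as code: calibration repeatedly tweaks a knob (here a geometric adstock decay, standing in for MMM hyperparameters) against the training window, while validation scores the finalized model once on held-out weeks. All names and numbers are illustrative:

```python
import numpy as np

def adstock(spend, decay):
    # Geometric adstock: each week's spend effect carries over at rate `decay`.
    out, carry = np.empty(len(spend)), 0.0
    for i, s in enumerate(spend):
        carry = s + decay * carry
        out[i] = carry
    return out

rng = np.random.default_rng(3)
spend = rng.uniform(0, 10, 104)                                   # two years, weekly
sales = 5 + 0.8 * adstock(spend, 0.5) + rng.normal(0, 0.5, 104)   # true decay = 0.5
train, hold = slice(0, 80), slice(80, None)

def fit_mse(decay, window):
    x = adstock(spend, decay)
    X = np.column_stack([np.ones_like(x), x])
    beta, *_ = np.linalg.lstsq(X[train], sales[train], rcond=None)
    return np.mean((X[window] @ beta - sales[window]) ** 2)

# Calibration: sweep the decay knob, keep whatever fits the training window best.
best_decay = min(np.round(np.linspace(0.1, 0.9, 9), 1), key=lambda d: fit_mse(d, train))

# Validation: one final scoring pass of the finalized model on untouched weeks.
holdout_mse = fit_mse(best_decay, hold)
```

Calibration is an iterative search; validation is a single, final verdict on the model the search produced.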

Read More »

Why Experimentation is not a substitute for Marketing Mix Modeling (MMM)

So a lot of myths have been propagated by digital marketers of late, such as that MMMs should be adopted only if: companies have revenue of $50M+ yearly; they spend a lot on non-click media (TV, radio, etc.); or they have 5 or more channels. I have been studying and practicing statistics for nearly 17 years now; take it from me that MMMs have no statistical limitations for any of the above. So what gives? Where do these numbers and limitations come from? Answer – the reasons are not statistical but commercial. 📌 The myth of $50M+ revenue Digital marketers came up with this number

Read More »
Why a β-hat outlook is more beneficial than Y-hat in Marketing Mix Modeling (MMM)

I know some of you must be wondering what β-hat and Y-hat are, so let’s start this post with a few explainers. 📌 β-hat: β-hat problems are inference focused. We care about what variables go into the model, what the parameter values are, and how much each of the independent variables affects the dependent variable. Here the goal is not just prediction but how that prediction was made. Linear Regression’s main goal is retrodiction, not prediction (see my related article in resources). 📌 Y-hat: Y-hat problems are purely prediction focused. One does not care about what variables go into the model. One does not care about
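A tiny simulation can make the contrast concrete: the same OLS fit yields both β-hat (which channel drove sales, and by how much) and Y-hat (the predictions themselves). Data and coefficients below are made up for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200
tv = rng.uniform(0, 100, n)
search = rng.uniform(0, 50, n)
sales = 10 + 0.5 * tv + 1.2 * search + rng.normal(0, 1, n)  # known ground truth

X = np.column_stack([np.ones(n), tv, search])
beta_hat, *_ = np.linalg.lstsq(X, sales, rcond=None)  # β-hat: who drove sales, by how much
y_hat = X @ beta_hat                                  # Y-hat: the predictions themselves
```

For MMM, the β-hat view is what matters: the recovered coefficients (close to the true 0.5 for TV and 1.2 for search here) are the attribution, while Y-hat alone tells you nothing about where to move budget.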

Read More »
Why you shouldn’t use Geo tests to fix priors in Bayesian MMM

Are you using Geo tests to fix priors in Bayesian MMM? You might want to rethink that. Priors are subjective in nature, making it difficult to get a consensus on what to set, especially in a domain like MMM where you have multiple stakeholders. While some MMM vendors try to add objectivity by using Geo tests, A/B tests, or incrementality tests, this method has severe limitations and statistical problems. Let me explain both: 📌 Statistical problems: 1. Geo tests don’t give you attribution coefficients. In MMM, the goal is attribution: how much of the change in the KPI can be attributed to the marketing/media variables. Geo tests, on the other hand

Read More »
Want performance guarantees? Choose Frequentist MMM

One of the hallmarks of the Frequentist philosophy is the adoption of Type 1 error rate control. Type 1 error is about false positives. In MMM, one has to be more wary of Type 1 errors than Type 2 errors. Why? 📌 Because the implication of falsely attributing the change in KPI to a media/marketing channel is that real dollars get invested in media that, in reality, were not instrumental in driving the KPI! You lose your hard-earned dollars through wrong attribution. John Wanamaker’s saying, “Half the money I spend on advertising is wasted; the trouble is I don’t know which half”, becomes ironically true. It is
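As a sketch of what Type 1 error control looks like in practice, one can keep only channels whose OLS t-statistic clears the roughly 5% two-sided critical value. This is a simplified frequentist screen; the helper, data, and threshold are illustrative:

```python
import numpy as np

def significant_channels(X, y, names, t_crit=1.96):
    # OLS fit, then flag only coefficients whose |t| clears the ~5% two-sided
    # critical value: the frequentist guard against false positives.
    n, k = X.shape
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    sigma2 = resid @ resid / (n - k)
    se = np.sqrt(np.diag(sigma2 * np.linalg.inv(X.T @ X)))
    return [nm for nm, b, s in zip(names, beta, se) if abs(b / s) > t_crit]

rng = np.random.default_rng(4)
n = 300
tv = rng.uniform(0, 100, n)        # a channel with a real effect
display = rng.uniform(0, 100, n)   # spend that does nothing
kpi = 5 + 0.6 * tv + rng.normal(0, 2, n)
X = np.column_stack([np.ones(n), tv, display])
kept = significant_channels(X, kpi, ["baseline", "tv", "display"])
```

Dollars then follow only the channels that survive the screen; note the screen itself, by construction, still carries a 5% false-positive rate.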

Read More »
Bayesian uncertainty ≠ Frequentist uncertainty

The word uncertainty means different things in Bayesian MMM vs Frequentist MMM. From a Frequentist perspective, the uncertainty is due to sampling variability. That is, there is a true, fixed parameter of the population, but given the sample at hand, you may or may not capture this true parameter. In MMM parlance, one could think of the true ROI of a medium as such a fixed parameter. Because we build our models on samples, chances are that our models may not capture this true ROI accurately. Hence a more representative and adequate sample is always warranted. From a Bayesian perspective, the uncertainty is not due to sampling variability, but
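The two notions can be contrasted on a toy ROI series: the frequentist interval treats the true ROI as fixed and the interval as the random object, while the Bayesian credible interval treats the ROI itself as a random variable. All numbers are made up, and the flat prior is an assumption chosen so the two intervals happen to coincide numerically:

```python
import numpy as np

roi_obs = np.array([2.1, 1.8, 2.4, 2.0, 1.9, 2.2, 2.3, 1.7])  # made-up ROI readings
m = roi_obs.mean()
se = roi_obs.std(ddof=1) / np.sqrt(len(roi_obs))

# Frequentist: the true ROI is a fixed number; the *interval* is random, and
# 95% of such intervals would cover the truth under repeated sampling.
ci = (m - 1.96 * se, m + 1.96 * se)

# Bayesian: the ROI itself is random; with a flat prior the posterior is
# centred on the sample mean, and the credible interval is a probability
# statement about the parameter given this one sample.
draws = np.random.default_rng(1).normal(m, se, 10_000)
cred = tuple(np.percentile(draws, [2.5, 97.5]))
```

Numerically similar intervals, philosophically different claims: the frequentist one is about the procedure, the Bayesian one about the parameter.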

Read More »
Adopting MMM for the first time? Use Frequentist MMM

So Google just deprecated third-party cookies for 1% of users worldwide. In nearly 270 days, third-party cookies will be totally deprecated. The domino effect is that Marketing Attribution will get tougher. But Marketing Mix Modeling (MMM) is a good solution to fill the void, and we are now seeing a lot of companies trying their hand at MMM. One fundamental question that many first-time adopters of MMM have is: should we try Bayesian MMM or Frequentist MMM? We would suggest going with Frequentist MMM, and here is why: 📌 Prior Elicitation: In Bayesian MMM, the vendors make the client work. The vendors

Read More »