Samir Balwani 0:03
Hi, I'm Samir Balwani, host of Chief Advertiser and founder of QRY. Join me as I talk to industry leaders about their strategies, challenges, and successes in managing their advertising and marketing. On our episode today, I'm joined by Michael Kaminsky, the co-founder at Recast. He's super smart, and really just always blows my mind whenever I get a chance to talk to him. I'm so excited to have you here. Thanks for joining us, man.
Michael Kaminsky 0:32
Thanks for having me. I'm excited for this conversation.
Samir Balwani 0:34
Let's start from the beginning. So tell us more about who you are, what you do, how you got to where you are now.
Michael Kaminsky 0:39
Yeah. So I'm Michael. My background actually is in econometrics. I studied economics and econometrics in school, and did a bunch of academic-type research understanding price elasticity in terms of responses to changes in the price of water in desert cities. I then moved into health care, where I did a lot of what's called health care outcomes research, trying to understand the comparative effectiveness of different, you know, health treatment interventions, right? Does treatment A or treatment B lead to better or worse outcomes over time? I did a lot of work using observational data, so looking at records of insurance claims to try to understand, okay, in these different populations, how do different interventions yield different types of outcomes? I did a bunch of work on clinical trials. So all of my background is, you know, doing econometrics. And really, I think about it as doing science, right? Trying to understand what are the causal mechanisms actually at work in the world, and how can we use that to help people? And over the last 10 or 15 years, basically what I've done is I've tried to take all of those methods that I worked on in healthcare and in academia and apply them to marketing science, right? How can we do that same work in the world of marketing to understand what really works and what doesn't? How can we help brands grow?
Samir Balwani 1:58
It's so funny. So when I went to school, I have an econ degree as well, and my favorite class was econometrics, because the teacher comes into the class on the first day and goes, this is gonna be one of your hardest classes, but at the end of this, you'll be able to tell the future. And I was like, all right, I'm on board with this.
Michael Kaminsky 2:16
What a great start to a class. That is amazing.
Samir Balwani 2:22
And he's not wrong, right? Like, that is the beauty of the work that we do. And it's interesting, because digital marketing, AI, all of this have just created a whole additional set of data that we can work with and that we can use. And I love that that's kind of where Recast is. That's what you guys do. So I'm going to ask you a lot of questions about measurement, because you are the measurement guy. So let's start with: how should a consumer brand be measuring paid media? That's always the number one question.
Michael Kaminsky 3:02
Yeah, I mean, it's a good question. And there's sort of the easy answer, which isn't very helpful, and then there's maybe a more detailed, slightly more helpful answer. But I'll start with the easy answer, which is, you should be thinking about incrementality, which is to say, every brand should be focused on: if I spend an additional $1,000 on media in some channel, how much additional revenue is that getting us beyond what would have happened anyway? And it's the "beyond what would have happened anyway" that's actually the tricky part that a lot of brands miss, because it is very easy in the world of marketing measurement to confuse correlation and causation, or make the post hoc fallacy, which is to say, to think that just because something came after something else means it was caused by that thing. And so incrementality is the thing. Causality is the thing. How much additional revenue are you generating? That's the important thing to be focused on. Now, how you actually achieve that is going to vary depending on where you are in the life cycle of your business. For newer, small brands, really, touch-based attribution works, right? So if you're like a D2C company, direct to consumer, and no one's ever heard of you before, your touch-based attribution, last touch attribution, even what shows up in the, you know, Meta ads reporting platform, is actually going to be very close to incrementality, right? No one else has heard of you. Your purchases without advertising are going to be zero because no one's heard of you, and you're very small. And so that gets you very close to incrementality. But as brands get more complex and get bigger, that gets harder. You actually start to have a brand. You have more customers that have heard of you, maybe that have heard of you from other customers. You are advertising in more channels. And so just because someone saw an ad on Facebook doesn't mean that that ad caused that purchase, because they might have also seen you on TikTok. And so now you have a lot more complexity that you have to start managing as you are thinking about, okay, how can we optimally allocate our marketing budget? And so I think the thing, if you're a marketer or you're a CEO of one of these companies, what you want to be always thinking about is how close or how far away are the metrics that I'm looking at from true incrementality, true causality. That is the thing to keep your eye on, so that you don't end up getting confused, because this thing that used to work when you were small doesn't really work anymore, now that you've gotten bigger and more complex.
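To make that arithmetic concrete, here is a minimal sketch in Python with made-up numbers; the point is that incremental ROAS is the lift over the baseline (the "would have happened anyway" number), not total revenue divided by spend.

# Hypothetical numbers for illustration only.
spend = 1_000.0                  # additional spend in a channel
revenue_with_ads = 12_000.0      # revenue observed while the campaign ran
baseline_revenue = 9_500.0       # what would have happened anyway (the counterfactual)

incremental_revenue = revenue_with_ads - baseline_revenue
incremental_roas = incremental_revenue / spend          # 2.5x
naive_roas = revenue_with_ads / spend                   # 12x -- overstates the effect

print(f"Incremental ROAS: {incremental_roas:.1f}x vs naive ROAS: {naive_roas:.1f}x")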
Samir Balwani 5:19
Can we help people just frame what small is? I guess, in your mind, when you're talking about small, media spend, revenue, size, what are the metrics you're looking at to define, you know, small, medium, large?
Michael Kaminsky 5:31
Oh, it's so hard. I mean, it varies by geography, but like, in the US, right, when I'm thinking about small, I'm thinking, you know, one or two million dollars of paid media spend per year I would still consider small, right? At that amount of media spend, you're probably really only on one or two marketing channels. Generally, that's going to be paid search, effectively Google, and Meta. And if you are thinking about expanding to more channels when you're still at that level of spend, really, actually, you need to be focusing on either your creative or your product or your pricing or something that's going to allow you to get to more scale just on those two channels. Once you're in the four, five, ten million of paid media spend, that's when things start to get more complex, and you need to add more complexity into your marketing measurement stack to really make sure that you're optimizing against true incrementality.
Samir Balwani 6:15
Yeah, that's awesome. And so help me understand incrementality versus MMM. Where do they fit in? How do they work together? What's the difference?
Michael Kaminsky 6:25
So just to be super clear, when I say incrementality, I just mean causality. And there's no, like, incrementality versus MMM. All marketing measurement approaches should be judged by the extent to which they capture true incrementality or not. And the important thing to know is that there is no measurement method that can fully capture true incrementality. True incrementality is a thing: there is a true relationship between the $1,000 that you invest in Meta and how much additional revenue, or additional profit, that drives for your business. Yeah, but we humans do not have access to that number. That true incrementality is unknown and unknowable. There's nowhere that we can look it up. We do not have access to it. All that we can see are sort of, you know, the shadows that that true incrementality casts on the wall in front of us. And so there are times, again, as I mentioned, when digital tracking, last touch attribution, might be very close to true incrementality. There might be times when it's far away. Experimentation is a thing a lot of people are talking about, controlled experiments, lift tests, and they just use the word incrementality for that, but that actually gets things confused, and so I think it's important to be very careful here. In certain situations, controlled experiments can help us approach measuring incrementality, but not always. And it's also important to understand that experiments are often noisy measures of true incrementality. Media mix modeling is another way of trying to measure true incrementality. Those models are often wrong, and so often they don't measure true incrementality. But the idea is that digital tracking, touch-based attribution, experiments, and MMM should all be helping us try to measure true incrementality. And as we are thinking about when to use which measurement method, or how to evaluate how well it's working, what we should be thinking about is: how well is this thing capturing true incrementality, the thing that we actually care about?
Samir Balwani 8:17
It's fascinating, because I love that you and I are sitting here talking math, when to everyone else the sense is that you should have a really clear understanding of: I have an ad, it did this, and this was my output, right? And that is just inherently not how advertising works. And you know, in economics, you learn that you are trying to measure human behavior, and so what works today will not necessarily work tomorrow. And so I love that people are starting to get comfortable with the uncomfortable again, which is what we used to do when we first started in marketing, and then performance media made everyone have this false sense of security, and now we're back into, oh, wait, actually, I don't think we're measuring things right, and we need to rethink all of this. And so now we're back into this world of triangulation, right, where you are not trying to know for certain what happened, but at least have a sense of where to go.
Michael Kaminsky 9:21
Yeah. I mean, that's exactly right. And I don't know to what extent people are actually getting uncomfortable with uncertainty, but it's a thing that I think needs to happen, really sort of embracing this idea that, look, we can't know everything. There are certain things that we cannot know for certain, and we just have to think about what is the best that we can do, and what is the return on getting additional precision or learning additional things? And those are much harder questions. They're much more sophisticated ways of thinking about, you know, different types of analyses that we might do. But that's actually really critically important, especially for brands that want to get to that next scale. Right? If your only dream is, like, look, we're going to be a small brand, we're going to stay on Facebook and Google, and we're going to, you know, capture the profit that we can from there, that's totally reasonable. And it's, like, great, honestly, I think, for a lot of brands. But for those brands that want to make it to that next level, to get into the tens and hundreds of millions of dollars of revenue, you have to get more sophisticated in how you think about operating the business.
Samir Balwani 10:21
Yeah, you know, we have a thesis, even for some of our smaller brands that are spending two or three million dollars in media, of not only being on Google and Meta but diversifying early on, just because of the risk, right? So even if there is opportunity on Meta to continue to scale, we'll try and find additional channels first and try and scale into them. Because if your Meta account gets paused for a weekend, what, you're out? Like, Google got paused for a handful of clients over a weekend, and they were out thousands and thousands of dollars. And so adding some operational risk for some diversity is worthwhile, and I'd recommend that to a lot of people. I am curious, do you have a framework by which you recommend people think about it? You know, how do you put in MTA? When do you do MTA? When do you start doing incrementality lift tests, and how often are you doing that? And then how do you ultimately come to a Recast for MMM?
Michael Kaminsky 11:27
Yeah. So MTA, I think, is really interesting. There are a lot of very smart people that disagree with me on this, so if you're thinking about this, definitely feel free to get a second opinion. But my belief is that if you're going to do a digital-tracking-based approach, you get most of the value by doing the following: have a report that shows you last touch attribution, have a report that shows you first touch attribution, and have a report that shows the ROI or the CPA implied by some post-checkout survey. If you have those three things, and you can sort of look at them all in one place, regularly updating, that gets you, like, 90% of the value that any touch-based attribution is possibly going to get you. Multi-touch attribution, of, like, oh, we're going to weight the middle touch and the second-to-last touch in different ways, I just don't think adds that much beyond what you get from being able to look at first touch, last touch, and post-checkout survey. So my recommendation in general is that once you're spending really any serious amount of money on advertising, you should get something like that set up so that you can keep an eye on your first touch, last touch, and post-checkout survey. None of those is, like, correct, right? They will be different numbers. But to the extent that they are different, that should raise interesting questions for you as a business and give you ideas about how your marketing is actually working. Maybe you'll see that some channels look better at first touch, other channels look better at last touch. That might sort of imply things about, like, oh, look, they're operating at different parts of the funnel, and depending on what our strategy is, we might want to adjust. And those sorts of questions and discussions that that report provokes, I think, are really healthy and very, very good for your business. And again, you'll have to sort of manage the business by feel. But as long as you have that report updating frequently, you'll be in a very good spot.
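As a rough illustration of the report Michael describes, here is a small Python sketch with a hypothetical touchpoint log and hypothetical survey responses; the column names and data are invented for the example, not a prescribed schema.

import pandas as pd

# Hypothetical touchpoint log: one row per ad touch, tied to an order.
touches = pd.DataFrame({
    "order_id":  [1, 1, 1, 2, 2, 3],
    "channel":   ["meta", "tiktok", "google", "google", "meta", "meta"],
    "timestamp": pd.to_datetime([
        "2024-05-01", "2024-05-03", "2024-05-05",
        "2024-05-02", "2024-05-04", "2024-05-06",
    ]),
})

touches = touches.sort_values("timestamp")
first_touch = touches.groupby("order_id")["channel"].first().value_counts()
last_touch = touches.groupby("order_id")["channel"].last().value_counts()

# Hypothetical "how did you hear about us?" post-checkout survey responses.
survey = pd.Series(["meta", "friend", "google"]).value_counts()

report = pd.DataFrame({"first_touch": first_touch,
                       "last_touch": last_touch,
                       "post_checkout_survey": survey}).fillna(0).astype(int)
print(report)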
Samir Balwani 13:15
Same idea, with only one column I'm going to add to that: platform-attributed.
Michael Kaminsky 13:23
Yeah, and again, none of them are right, right? Again, like, I look at these four columns, which one's the right one? And I'm like, they're all right and they're all wrong. Whichever one you want for that day is the right one. But I mean, really, the idea again is this idea of triangulation, right? You need all of those different data points. They're measuring slightly different things. But that goes a really, really long way, and is a really good starting place to base your business on. After that, the thing that I start to recommend is starting to build the experimentation muscle. And so this is a place where what you want to do is start to pressure-test how close or far away those different metrics are from something like true incrementality. And so experiment with running conversion lift studies via the platforms themselves; both Meta and Google have conversion lift studies that you can run in the platform. I think it's limited based on how much you spend, so you might need to talk to your rep to find out if you qualify. But run that, look at the numbers, put them in a fifth column, right, and start to compare, and start to think about how close or how far away they are from the other things. Start to build that muscle so that your business starts to understand incrementality. For people who haven't done this before, what I often recommend as a first experiment is to run an incrementality experiment with something like branded search, right? So you can do a geo-lift test where you turn off branded search in certain geographies, and then see: how well do the results of that experiment line up with what you are measuring? And probably what will happen is that your last touch attribution will be way over-crediting branded search versus what is truly incremental. Maybe not, right? Again, this varies for different brands, but this is often a thing that happens, and that helps, again, to spur really interesting and helpful conversations internally, right? It can really break that mindset; you know, sometimes the CEO or the finance team can get obsessed with last touch attribution and think that that's actually a causal relationship when it's really not. And so running that first experiment and being able to show people, look, we turned off branded search in these states, and we saw that revenue didn't change at all, that, you know, is a light bulb moment for a lot of people who are not really deep into marketing analytics and marketing measurement, and that can be a really helpful way to start having these more productive conversations about the thing we care about, incrementality and causality, and how that's different from any of these potential reports that we're looking at. So build the experimentation muscle. Try to focus on the most high-impact experiments that you can run. Where do we have doubts? Where are the biggest differences between first and last touch attribution, or the biggest differences with the post-checkout survey? Can we run an experiment that helps us narrow in on which one of those might actually be closer to true incrementality? Get the organization used to thinking that way, and that then starts to build a really nice base of understanding in the organization, the right way of thinking about marketing. And then from there, you can start adding on more complex methods, like doing an MMM when you have a more complex media mix.
When you have forecasting problems that you need to solve, that's where you start to, like, pull all of these different disparate threads together.
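A very simplified sketch of the branded-search geo test described above, assuming you can split revenue into holdout and control geographies; the numbers and the simple difference-in-differences math are illustrative only, not a full geo-lift methodology.

# Hypothetical average weekly revenue by geo group, before and during the holdout.
# "holdout" geos had branded search turned off; "control" geos kept it on.
pre = {"holdout": 100_000.0, "control": 150_000.0}
post = {"holdout": 96_000.0, "control": 148_000.0}

# Difference-in-differences: change in holdout geos minus change in control geos.
did = (post["holdout"] - pre["holdout"]) - (post["control"] - pre["control"])

branded_search_weekly_spend = 5_000.0   # hypothetical spend that was switched off
implied_incremental_roas = -did / branded_search_weekly_spend

print(f"Diff-in-differences revenue change: {did:,.0f}")
print(f"Implied incremental ROAS of branded search: {implied_incremental_roas:.2f}x")
# Here turning branded search off cost ~$2,000/week against $5,000 of spend,
# i.e. ~0.4x incremental -- far below what last touch typically credits it with.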
Samir Balwani 16:28
Yeah, I would tell people: one, we've run branded search holdout groups pretty often. It ends up being incremental in a lot of unique ways: if you're in a highly competitive market, if you've got Amazon or a wholesaler, if you're running in retail. In all those cases, you don't realize how much you lose without branded search. So we do recommend running the incrementality test so you have an understanding. The issue with an incrementality test is it doesn't account for time. So an incrementality test when you're running a sale versus when you're not running a sale will have very different outputs. And so definitely, you know, keep that in mind as you're running incrementality tests. The other area...
Michael Kaminsky 17:15
And again, the other thing that I'd note is, it's not only whether it's on sale or not, but even just seasonality, right? Again, that can change. You know, if you're selling sunscreen, the test that you run in the winter is probably going to yield different results than the test that you run in the summer. And so you need to keep that in mind. So there are timing effects of how long it takes for people to purchase, there are promotional effects, there's also seasonality. And so there are lots of good reasons why you can't just say, oh, we ran one experiment one time, and now that's the answer forever, unfortunately.
Samir Balwani 17:44
Oh man. And if you've run MMM before and you see your seasonality is very high, run incrementality tests throughout those periods to really understand: okay, while seasonality is high, what about my media spend? So if I keep this variable flat, what happens? And so, yeah, I think that's really valuable. And then just in terms of conversations with CFOs, who absolutely, for whatever reason, double down on last click, always, right? If I were to think about the columns, I would prioritize channels that have high first touch or high platform-attributed and low last click, and just prove out the value of those. Michael, you've got my brain turning on that. I'd love to just have a report that indexes all of it; even if you just give everything even weight and index the whole thing, it gives you a better understanding of, okay, how are my channels and campaigns performing against one another? But yeah, I think having a good framework for measurement is really important, and this concept of getting people to build their muscle on incrementality lift tests, so, you know, brand lift tests, revenue lift tests, both of those are really important. But Michael, I've run tests, I've got some baseline data. When and why do I start to invest in MMM?
Michael Kaminsky 19:05
Yeah, so it's a really good question. So I think there's been a lot of hype around media mix modeling over the last couple of years, and I think we're starting to come down off the hype. Because there was a bunch of hype, and people talked about it, and everyone said, this is the solution to measurement, we don't need anything else. And then a bunch of people tried it and were like, oh, wait, this does not live up to my expectations. Which is good. I think that that's healthy, right? That's a good part of the cycle. But basically, media mix modeling, what is it, right? It's a top-down statistical model. And so what you're doing is you're looking at aggregate data, and you have some statistical, econometric, or machine learning algorithm that's trying to find the patterns in that historical aggregate data. So nothing user-level is being looked at. It's just looking at, hey, look, we have this pattern of marketing spend over the last couple of years, we have a pattern of sales over the last couple of years, try to find the relationship between those variables. That's what we're doing. These models are very complex, and what that means practically is that there are lots of assumptions. There are lots of assumptions about, okay, how are the time shifts going to work when we spend money? How does that get spread out over time? What does the saturation curve, or the diminishing marginal returns curve, look like? Lots and lots and lots of different assumptions. And the problem is that any of the assumptions could be wrong, and to the extent that any assumption is wrong, it could potentially invalidate all of the results coming out of the model. And so that amount of complexity makes it a tool that is very easy to get wrong, and it therefore requires a fair amount of investment in order to get it right and make sure that it's right for your business. And so I don't recommend that you just, like, jump in and say, hey, yeah, we're just gonna do this, because it's hard, right? And so you need to embrace that there's gonna be some work to really get it working and get it working correctly. Why would you use a media mix model? So I think media mix models start to become valuable as you have more and more different marketing channels, and that's partly because it's just difficult to run as many experiments as you would like. If you're in three marketing channels, it's pretty easy to build an experimentation schedule. It's like, okay, we're gonna test all three of these, you know, every third or fourth month; we'll be able to test them throughout the year. We're gonna feel really good about our ability to run enough experiments to keep track of what's going on in those different channels. When you have 20 different marketing channels, that all of a sudden is a lot more complicated, right? Your experimentation schedule is a lot more complicated. For some of those channels, you cannot construct a controlled experiment, something like podcasts, right? That's not easily addressable. Linear TV, for a bunch of different reasons, is very difficult to actually experiment with. And so those problems start to add up, and they make it so, like, ooh, actually, we're only able to test every channel once a year, and only our big ones, and so some of the other channels just never get experimented with. And we have a bunch of problems where we need to be able to forecast the business, right?
We need to be able to, you know, say we're a public company, we need to be able to make a forecast: with this marketing budget, what do we think we're going to be able to return? And we need to be able to do that accurately.
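For intuition about the assumptions Michael is listing, here is a toy Python sketch of the two transforms at the heart of most MMMs: a geometric adstock (the time shift) and a saturating response curve (diminishing returns). The decay and half-saturation values are arbitrary assumptions, which is exactly the point about how easy these models are to get wrong.

import numpy as np

def geometric_adstock(spend, decay=0.5):
    """Spread each period's spend forward in time: each period carries
    `decay` of the previous period's accumulated effect."""
    out = np.zeros_like(spend, dtype=float)
    carry = 0.0
    for t, s in enumerate(spend):
        carry = s + decay * carry
        out[t] = carry
    return out

def saturation(adstocked, half_saturation=50.0):
    """Hill-style diminishing returns: response flattens as spend grows."""
    return adstocked / (adstocked + half_saturation)

# Hypothetical weekly spend in one channel (in $000s).
spend = np.array([10, 40, 80, 80, 20, 0, 0], dtype=float)

effect = saturation(geometric_adstock(spend, decay=0.5), half_saturation=50.0)
print(np.round(effect, 2))
# Note how spend keeps "working" after it stops (adstock) and how the modeled
# response flattens as adstocked spend grows (saturation).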
Samir Balwani 22:06
I think that is the reason why last click still exists, and why CFOs like last click: because it makes it really easy to forecast.
Michael Kaminsky 22:15
Well, so here's the deal. It makes it really easy to build a forecast by dragging things across, but for making an accurate forecast, it actually makes it worse, and that is the problem, yes, fair. And so CFOs like it, because they can just drag the numbers across. Unfortunately, once you actually start doing that and comparing the actual results to your forecasted results, they end up diverging a lot, especially, again, once you're a larger company with a more complex media mix. So that's a core problem. And so the businesses for whom a media mix model makes sense tend to be more complex. They have these operational problems around forecasting, doing budget optimization, doing scenario analysis, and that's where the value of a media mix model really starts to shine, above and beyond just trying to run as many experiments as you can.
Samir Balwani 23:03
That's really interesting. Yeah. I mean, to your point at the beginning of this around media mix modeling, when people hear it's expensive to build and manage, it's not only that there are tech fees and tech costs against it; in the grand scheme of things, the tech costs are not the problem. It's the resourcing. It's the historical data. It's going back in time and being like, all right, we're gonna launch MMM for the first time, let's gather three years of data, and figuring out what exists and, you know, how long did we actually run things.
Michael Kaminsky 23:41
Yeah, we have to educate the CFO. We have to explain why these numbers are different from the last touch attribution. The CFO is gonna be skeptical. We have to have a plan for validating, to be able to prove that. All of that work is just a lot of headache. And again, it can be worth it. Like, you know, I run a media mix modeling company, we do this. But again, I would say, make sure that the juice is going to be worth the squeeze. You have to be at a stage of business where you're going to get a lot of return in order for that investment to be worth it. Because otherwise it's going to be like, oh man, we've got to go hire two people to think about it, to analyze it, and it just starts to add up to be a lot of internal effort. And the question should be, well, should we do that, or should we just hire someone who's really good at making video creative so that we can start to launch experiments on YouTube? And for a lot of companies, hiring the person that's really good at YouTube creative is gonna be a better ROI than trying to really fine-tune your measurement on Facebook, when you could actually just run more conversion lift studies, and that'll get you 90% of the way there.
Samir Balwani 24:42
Yeah. I mean, when it comes to measurement, I always say: just because you can measure it doesn't mean you should measure it, and that's a really important piece. And you know, for us, when we look at marketing mix modeling, when we tell our clients it's time for an MMM tool, it's because we're really trying to understand less about the media and more about the outside forces. So how much impact does seasonality have? What is Black Friday/Cyber Monday's true incremental lift for us? When we run a promotion, what should that promotion be? What is the impact of one channel on another? Because it allows us strategically to say, hey, I know you want to give us that extra million dollars, but you should go do a catalog with it, because we actually see better impact on our performance channels when you run a catalog. So it's when you're starting to answer those questions. I think this is the part people have kind of lost the thread on: measurement is not there to prove the value of your media, it's to build a strategic perspective on your media and your marketing. And so when you have that point of view, it shifts the question around when you should invest in this. It's not to prove to my CFO that this advertising is worth it; advertising drives, like, 50% of the total revenue for a business most of the time, so it's obviously worth it. The question is, what are you doing to optimize that resource, and where are you getting the best bang for your buck? And from my perspective, that's when people should start to look at this: it's when your tools start to stop working. I mean, to give an analogy, when do you buy a hammer? When your screwdriver isn't heavy enough. And that's pretty much the same perspective here, right?
Michael Kaminsky 26:17
Yeah. And I think that's exactly right, like, helping think through these larger strategic decisions is exactly right. And, yeah, it's just a tool. It's not magic beans, it's not a magic bullet. You have to keep that in mind; it's going to have its own problems. It's not the magic solution to incrementality that I think some people have pitched it as. And so it's really, really important that you make sure that you're thinking hard about, okay, what are we actually going to use it for? What are the drawbacks going to be? How are we going to overcome those? Is it going to be worth it? And then at that point, it's like, okay, yes, let's evaluate if this actually makes sense for us as a business.
Samir Balwani 26:55
Yeah, that's awesome. So I know, Michael, we're coming up on time, but there is one question I have been, like, itching to ask you, and it's this thing that people have started talking about, and something that's been top of mind for me too: this concept of an inferred MMM versus a causal MMM. Because we talked causation versus correlation, and the same question comes up in MMM: is it just correlation, where we're just looking at what happened? So can you help educate me on what the difference is and when and how I should be thinking about that as we move forward?
Michael Kaminsky 27:28
So yeah, interesting topic. Again, I'm gonna have a slightly spicy take. My belief is that every media mix model should be causal. If it is not causal, I would say, what are you even doing? That seems like a waste of time. The reason why? Well, a couple of reasons why. So, first of all, again, what I said at the beginning is that every marketing measurement strategy should be judged based on how well it approaches incrementality, where incrementality is just another word for causality. So incrementality, causality, that's the whole point of everything that we're doing. Any measurement methodology that we don't think is helping us get closer to incrementality or causality, I think, you know, should be thrown out, right? It's not helping us make the decision that we care about as a business. Because what do we care about as a business? We care about being able to optimize our marketing budget, which means we have to understand the relationship between the levers that we have, investment in different marketing channels, and the outcome that we care about: profit, revenue, new customers, whatever that means. Understanding causality and understanding incrementality are the same thing. So the whole point of doing an MMM should be to understand incrementality, to understand causality. If you're not doing that, I think you're wasting your time. So that's the overall view. So why are people talking about this? I think there's a couple of things going on here. Unfortunately, I think a lot of media mix modeling vendors historically have tried to obfuscate what they're actually doing. Again, media mix modeling is very hard for lots of different reasons, very hard to get a good result. But people have been selling media mix models for a very long time, and unfortunately, have been selling bad media mix models for a very long time, yeah. And so in order to avoid being judged the right way, on how well does this model actually predict the future, on how well does this line up with outside experiments, some vendors or modelers have said, oh no, no, you can't judge it this way, this is only looking backwards, so anything that happens in the future doesn't apply and we can't look at it. And again, I would say that that's a cop-out. That's you as a modeler saying you have a bad model, because it cannot be judged outside of the modeling framework itself. If you're talking to anyone who makes a claim like that, I think you should just be super skeptical. And so people are talking about causal MMM. We talk about it a lot, but the point should be that every MMM should be causal, and you should be able to use the tools of causal inference, experimentation, controlled experiments, quasi-experiments, to evaluate how well your MMM actually is approaching causality, approaching incrementality. And so I do think, you know, that every MMM should work this way. Anyone who has an MMM that says, oh, don't judge it based on causality or based on incrementality, I think, is trying to obfuscate something. They don't want their model to be judged the right way, probably because it falls short.
Samir Balwani 30:25
Yeah, it's interesting, because it goes back to the question around, am I doing this to prove out value or to build, you know, future expectations? And so, like, having an MMM that allows you to be confident in how we're actually going to be investing in the future is all that actually matters to me, right? Like, I don't need to prove out what already happened; I can just look at the P&L. So, yeah, I hear you on that. How does somebody determine if their MMM is causal or just backwards-looking?
Michael Kaminsky 30:57
So this is, I think, the thing that is tricky, and that I encourage everyone to spend a lot more time thinking about. So, and I alluded to this earlier, the problem with media mix models is that they're very complex, which means that, you know, there are a lot of assumptions. What it also means is that for any given media mix model, you generally have different combinations of parameters that all fit the data equally well, right? And so if you've run Robyn, you've probably seen this. Robyn is an open source media mix modeling package. What Robyn does is it just produces, like, 50 different models, and it says all of these fit the data equally well, you choose, which I think, you know, sort of defeats the purpose, but that's how that framework works. And it's also true of many other modeling approaches: you can make very small, minor decisions that will totally change the output of the model, its estimate of causality, and they all fit the data equally well. And so when it comes to media mix modeling, the challenge is not just training the model, right? That's very easy. The challenge is, how do we validate it? Which of those 50 is actually right? Like, model number one and model number 50 are totally inconsistent. They say totally different things about how our marketing program is working. They fit the data equally well. How do we know which one is actually more correct or less correct? And this is the place where you need to use the tools of causal inference in order to validate the model. And so there are two main tools that I recommend people look at using. So one is corroborating the model, or the results of the model, with outside experiments. And those experiments could be controlled trials, so like a conversion lift study on Meta or a structured geo-lift experiment, or even a quasi-experiment, right? So you can say, hey, look, the model says that branded search has an incremental ROI of 2x. Great. Let's turn off branded search for two weeks and see how much revenue drops. And we should be able to say it drops in line with what the model predicted, or it doesn't drop in line with what the model predicted. Similarly, the model says that TV has a 5x ROI. Okay, great. Next quarter we're going to spend up on TV, and do we get more revenue in line with what the model predicts? And these aren't going to be perfect measures, right? There's going to be uncertainty, but you should be able to say, in general, does this line up with what the model is predicting? And so, again, any media mix model should be tested this way, but this is how you evaluate to what extent the model is actually producing causal estimates that match the real world. And it's the most important step. And it's not even a step; you need to be doing this in an ongoing way as you are using a media mix model. It's not just, we do it one time and the check passes and we're done. We need to keep doing this. And so the thing that we talk about here is actually that you shouldn't think about a media mix model in isolation. You should think about building an incrementality system where you are running your media mix model, you are designing experiments, you're validating the media mix model, you're using that to optimize your budget, and then you're repeating this process over and over and over again. It's a thing that's never done.
It's a constant way of using the media mix model and running experiments and optimizing your budget all at the same time in this virtuous cycle to do continuous improvement.
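One minimal way to operationalize that corroboration step, sketched in Python with hypothetical model outputs and experiment readouts: compare the model-implied ROI for a channel to the experiment's estimate, scaled by the experiment's uncertainty.

# Hypothetical MMM outputs and lift-test readouts for two channels.
model_roi = {"branded_search": 2.0, "tv": 5.0}

# Lift-test estimates with rough standard errors (also hypothetical).
experiment = {
    "branded_search": {"roi": 0.4, "se": 0.3},
    "tv":             {"roi": 4.1, "se": 1.5},
}

for channel, exp in experiment.items():
    gap = model_roi[channel] - exp["roi"]
    z = gap / exp["se"]          # how many experiment standard errors apart?
    verdict = "roughly consistent" if abs(z) < 2 else "inconsistent -- revisit the model"
    print(f"{channel}: model {model_roi[channel]:.1f}x vs test {exp['roi']:.1f}x "
          f"(z = {z:.1f}) -> {verdict}")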
Samir Balwani 34:10
It's really interesting, because in my mind, the decisions I make on my incrementality testing change if I have an MMM. Because if I have an MMM tool, and I've got 50 models, and all 50 models say Facebook is a 1.8x return, but Google's anywhere from a 1.2x to a 3x, all right, well, then I want to validate the Google one. I don't need to incrementality test the Facebook one. And now it makes sense why incrementality tests are the first step before an MMM: because that data set gets added into your MMM tool as previous benchmarks and helps the tool actually identify the right model in the output.
Michael Kaminsky 34:52
That is exactly correct in theory. But again, the details matter; not all of the open source packages, for example, allow you to include multiple different lift tests in the package. And so it's a thing that sometimes you have to be thoughtful about in terms of, how are we going to use these different pieces of evidence that we have in conjunction? But those end up being the hardest questions, right? Like, I ran the model, it gives this result, I have two other, you know, experiments that we ran in the last two years, they're different, what does that mean? And that's the question that you have to be able to answer as an organization as you start going down this path.
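Since not every package lets you feed lift tests in directly, one workable pattern, sketched here in Python with hypothetical numbers, is to score each candidate model by how far its implied channel ROIs sit from your experiment results and prefer the closest ones.

# Hypothetical: each candidate model's implied ROI per channel.
candidates = {
    "model_01": {"meta": 1.8, "google": 1.2},
    "model_27": {"meta": 1.8, "google": 2.1},
    "model_50": {"meta": 1.8, "google": 3.0},
}

# Hypothetical lift-test results (with standard errors) to calibrate against.
lift_tests = {"google": {"roi": 2.0, "se": 0.4}}

def distance_from_experiments(model_rois):
    """Sum of squared z-scores between model ROIs and experiment ROIs."""
    return sum(((model_rois[ch] - t["roi"]) / t["se"]) ** 2
               for ch, t in lift_tests.items())

ranked = sorted(candidates, key=lambda name: distance_from_experiments(candidates[name]))
for name in ranked:
    print(name, round(distance_from_experiments(candidates[name]), 2))
# model_27 sits closest to the Google geo test, so it gets priority.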
Samir Balwani 35:27
I love that. Michael, I could literally talk to you about this stuff forever, and I'm always fascinated by your perspective on it. But before we go, I'd love for you to just tell everybody: what is Recast? Where can they learn more about you, all of those kinds of things?
Michael Kaminsky 35:45
Yeah. So Recast, we're building the world's most rigorous incrementality platform. It helps companies build this incrementality system that I'm talking about; it includes a tool for doing media mix modeling, and it also includes a tool for doing geo lift. Check out our website, getrecast.com, and then if you are interested in this type of content, follow me on LinkedIn. That's where I publish, you know, deep dives, thoughtful analyses about marketing science and marketing analytics. I don't make it a sales pitch. So if you're interested in this type of discussion, please follow me there.
Samir Balwani 36:13
I highly recommend it. I learned so much from following him. So thank you, Michael, so much for coming and joining us today. No, thanks so much for having me. This was a fun conversation. Thank you for tuning in to this episode of Chief Advertiser. If you enjoyed today's conversation, please subscribe at chiefadvertiser.com, share the episode with others who might find it valuable, and consider leaving us a review. Your support helps us bring more insights with each episode. See you next time.