
Attribution is Broken: Understanding MTAs, MMMs, and Incrementality

Tom Leonard
May 8, 2025
SUBSCRIBE: iTunes | YouTube

This one is a bit nerdy. But it’s soooooooooo important to the success of your marketing efforts and the future of your business.

For this episode I sat down with Tom Leonard, a fractional marketing leader who specializes in operationalizing Media Mix Modeling and incrementality testing. 

We dive deep into the often confusing world of marketing measurement. We debunk myths about attribution and we reveal what truly drives customer acquisition. 

For ecommerce brands struggling to understand where their marketing dollars are actually working, this conversation offers practical insights on how to move beyond misleading platform metrics.

Key Takeaways

  • Attribution is broken: Learn why Multi-Touch Attribution (MTA) tools often fail to deliver accurate insights, especially in today's privacy-first digital landscape, and why they can lead to poor budget allocation decisions.
  • The power of incrementality testing: Discover how properly designed tests can reveal the true causal impact of your marketing efforts by comparing control groups against test groups, allowing you to see beyond your .com sales to impacts on marketplace and retail channels.
  • Media Mix Modeling demystified: Understand how MMM correlates marketing activities with business outcomes over time, providing insights into diminishing returns and helping answer the crucial question: "Where should I put my next marketing dollar?"
  • Avoid the high ROAS trap: Find out why channels showing the highest ROAS in platform metrics (like branded search) might actually be the least incremental to your business, and why cutting them first during budget constraints could be your smartest move.

-

Tom Leonard:

How much is media contributing relative to customer base is a really nice place to start. And the benefit of running incrementality and media mix modeling is informing the model with some of that causal data.

Brett Curry:

Well, hello and welcome to another edition of the E-Commerce Evolution podcast. I'm your host, Brett Curry, CEO of OMG Commerce. And today we have got a doozy of an episode. We're talking about the three horsemen of measuring your marketing effectiveness. We're talking MTAs, multi-touch attribution. We're talking MMMs, media mix modeling. We're talking incrementality. It's going to be nerdy, but I also promise you it's going to be practical and it will make you more money. And we'll hopefully make it fun as well. My guest today is Tom Leonard. We were LinkedIn friends first. I saw Tom on LinkedIn posting about incrementality, talking about MMM, throwing shade on certain tools and stuff like that, and I'm like, this is my type of guy. So I reached out, we had a call, and then we're like, hey, we've got to record a podcast. Let's create some insights for people on the pod. Tom is a fractional marketing leader who's operationalizing MMM and incrementality testing, and I'm delighted that he's my guest today. So Tom, with that intro, how's it going? And welcome to the show.

Tom Leonard:

Good. Thanks for having me, Brett. Excited to be here. And yeah, some of my favorite things to talk through, so excited to do it. Good stuff.

Brett Curry:

It's good stuff, man. So briefly, before we dive into the meat of the content here, what's your background and how did you become a guy who's operationalizing MMMs and incrementality?

Tom Leonard:

Yeah. And what does that even mean?

Brett Curry:

That's a good point,

Tom Leonard:

For sure. Yeah, totally. So I spent most of my career thus far on the agency side at performance agencies. And I'd say the crux of how I got to where I am now, or I've been reflecting back a little bit more on why I have such a passion for measurement: I was at a pretty hardcore DR agency, right around when TrueView for Action came out and YouTube was starting to invest in DR. I moved into a new role we had created, a centralized group of people who had different areas of subject matter expertise and a few analysts that ran tests across a pretty large client base. I was our YouTube SME and worked with a couple analysts to run a bunch of tests. And really it was to evangelize how to use YouTube, and is YouTube a platform to drive growth?

And it was really interesting, because I started spending a lot of time on YouTube and then also connected TV and broader programmatic video. And for me, the biggest learning was less about how to make YouTube as effective as possible, and more about how to help brands think about demand creation as opposed to just demand capture. And, frankly, the difficulty of getting brands to leverage YouTube relative to connected TV, because YouTube sat so close to Google Ads and therefore last-click attribution, whereas CTV you couldn't click on, and it was sexier in a deck. It was this recognition of the irrational kind of human behavior you see in any industry, or anything in life. But it helped frame up this idea that you really have to do more than just represent logic or rational arguments.

You really have to also bring the easy-to-understand, clear data. And that's, I think, what draws me to incrementality testing specifically, and why that's sort of the backbone of a lot of what I do now. I used the words operationalizing MMM and incrementality testing, and really what I mean by that is: a lot of people will run media mix models or run incrementality tests, but oftentimes they'll sit in a slide or a report to be shown once and never looked at again. So what I'm really trying to do with brands now is, how do you build a framework and a repeatable methodology to get insights from tests, and not just leave them as insights, but take action? Because the only way you create value from any of these testing and measurement methodologies is by acting on the insights. And so that's what I mean by my funky little headline of those words.

Brett Curry:

Yeah, it's so good, man. And it's one of those things where data really doesn't matter if you don't take the right actions from it. And what's so interesting, and our paths are similar in that I got my start actually in TV and radio, doing traditional media, and then I got into SEO and paid search. But I loved video. Video was my thing, though I love paid search as well. And then when TrueView and TrueView for Action came out, I was like, whoa, these are all my worlds colliding.

Tom Leonard:

This is

Brett Curry:

Video, and there's some search components, at least some search intent involved there. And it's direct response. I've always been a direct response guy. I believe that marketing should drive an outcome, right? Advertising should drive a measurable outcome, and that should be measured in terms of new customers and profitable new customer acquisition. And what's really interesting, Tom, and I think this kind of feeds into the conversation we're having today: I grew up reading some of the classics. So David Ogilvy of course, but also John Caples' Tested Advertising Methods, Claude Hopkins' Scientific Advertising. And they would do things like run an ad in a newspaper or magazine, and people would clip a coupon and bring it in, or they would call a certain number, and they would track it, and they would have codes and stuff. And I remember thinking once I got into e-commerce, I was like, oh man, we've got so many tools.

The world is so clear now, we have every piece of data at our disposal. And now, the more I've gotten into it and the more I've matured, I'm like, we've got more data, but I don't know that we've got more insights, and I don't know that we've got any more clarity. In fact, there's maybe more confusion. And I think it goes back to what you said a minute ago, this idea of demand generation versus demand capture. We're really good at measuring channels and campaigns that are demand capture, meaning they're capturing demand that's already out there. It's harder to measure the demand generation, which is usually where the magic happens. So, super excited to dive in here. I think what might be useful is to talk about those three horsemen I laid out: MTAs, MMMs, and incrementality. So let's start with MTAs first. Multi-touch attribution tools: what are they, and what is your take on them?

Tom Leonard:

Yeah, big question. Great question. I mean, MTA has been around for a while, in different flavors and ways of trying to make it work, especially as so much has changed in privacy and the tech and tracking landscape. But ultimately the goal is to try to give fractional credit to all the touchpoints along a customer journey, with a recognition that the last click or last impression is ultimately not what drove that person to purchase. That may be the last or the only thing you might see in something like Google Analytics or your analytics suite, but there's this general recognition that it is not what drove the purchase. So the promise of MTA, which I ultimately think is a failed promise, is to weigh all the different touchpoints: how can you value those differently? So maybe you use first touch, maybe you use even distribution.

The idea of data-driven attribution was the holy grail, or the promise, many years ago, and I guess still is to a degree for some: how do you know this channel was more additive or more necessary, and therefore should get more credit than that channel? Which I think makes a ton of sense in promise. In reality I think it's really hard, and I would argue impossible, to do, especially as the ability to track users at a one-to-one level degrades. Generally, my perspective is very bearish on MTA, so that'll probably come through pretty strongly. I don't think the toothpaste is going back in the tube in terms of the ability to track a customer across all these different touchpoints, especially as the ability to track view-through or impression-based touchpoints erodes. And then you get really reliant on clicks, which leads to a lot of the issues that last click in general has. So I think it's really hard to make a compelling case for MTA. I've seen too many brands, especially ones trying to build MTA tools internally, just have it be a huge time and resource suck. And then when you ask to compare the multi-touch view versus last click, it's like, I don't know, 80 or 90% only had one touchpoint anyway, at least that the MTA model could see. So is it really that much more useful than last click?

Brett Curry:

It's sort of multi-touch when that can be measured, but usually it can't be.

Tom Leonard:

Yeah, and

It never really answers the causality question either, which we'll get to when we talk about incrementality. I think the short story of why MTA isn't really viable anymore is all the tracking and privacy changes. But the slightly longer story is the recognition that just because an ad was shown or a click occurred doesn't mean that medium was needed, or that channel was needed. It doesn't answer the causal question: what would've happened without this ad running? Did somebody just happen to use multiple touchpoints as navigation, or was it more convenient to click on one of these ads that happened to be served? If you're not comparing to some sort of control group, it's really hard to assign causality to the fact that there was a touchpoint.

Brett Curry:

Yeah, it is so good. And it's one of those things where I remember, again, early on, you would look inside of Google Ads, or you'd look inside of Meta, or this was back when it was Facebook only, and you were like, the data's here. I see ROAS and I see clicks and I see performance and all that. Then you realize, well, wait a minute, this isn't fully accurate. If I add the two together, that's double my total revenue, so I can't just rely on what's in the platform. And that got worse as iOS 14 was introduced and other privacy changes were made. But then MTA came along and it's like, oh, finally we're going to get to see the full picture. It's going to decipher, decode the shopping journey, and we're going to finally see, with a keen eye and in perfection, exactly what caused this purchase to happen.

And then we finally realized MTA is maybe just a third option. It's like, okay, Google's data is imperfect, Meta's data is imperfect, and then MTA, it's just imperfect too. So now we've got three imperfect things to look at and make decisions from. And in some ways it leads to more confusion than it leads to clarity. Now, I don't want to wholesale discard MTAs, because I do believe there are some helpful insights that can be gained there, but it's incomplete at best. And one of the best analogies I've heard, and this actually comes from Ben Ter, who's also a LinkedIn friend, but I've met him in person as well: he talks about this analogy of, hey, if we're trying to measure what caused people to watch this movie at our movie theater, and we look at all these results, and 30% say they saw a billboard for our movie, 20% say they saw a TV ad, but you know what?

A hundred percent say they saw the poster on the door. So we're like, let's just cut everything else. Let's just do the poster at the door, and that's it. And you're like, well, wait a minute. Everybody saw it. Everybody was walking in the door. But the movie poster is not what caused someone to purchase. It was the billboard and the TV and word of mouth and other things that caused them to come in. And so this idea of causality is super, super valuable. So that really leads us to incrementality. Talk about incrementality. What is it, and why are you on a quest to operationalize it?

Tom Leonard:

Yeah, it's really the best way, if not the only way, to establish that causal portion we've been talking about. It has a distinct control group, so it has a counterfactual: what would've happened without this intervention, whatever that intervention is. And there's a handful of ways to derive that counterfactual, that control. The most common would be geographic-based, like a matched-market test. I've got this market over here that historically has behaved similarly to this market over here. I can see that in an AA test, the lines sort of move similar to one another, and if they're influenced by outside factors, they're influenced equally.

Brett Curry:

And what's an AA test, for those who don't know?

Tom Leonard:

Before an intervention happens, just over time, are those lines essentially moving together? Are external factors or stimuli equally impacting both sides of that test, so that you can feel confident that when you do intervene and it becomes comparing A to B, the delta is the result of that intervention? So oftentimes it's, say, Atlanta and, I don't know, Memphis, maybe some other midsize city that you've done this market matching for. Historically they both look alike on a line. All of a sudden you turn off ads on Facebook in Atlanta: what happens to your top line? That delta is what should be attributed to advertising in Atlanta. Whereas attribution would say basically anything that was touched could be credited. Really, it should just be the gap between a world where that ad does not exist and a world where that ad does exist.

We can't take credit for everything. We can only take credit for as much above and beyond what would've happened anyway. And so that's the basis of incrementality testing. There are other ways to do it. If you use a Facebook or Google conversion lift study, because they own that auction, or anybody that owns an auction, they can do that holdout for you at a user level. They can track all of those users regardless of whether you serve an ad. Good examples are maybe easier to describe in a first-party data capacity. If you're running email, you may blast all of your customers and say, hey, I sent an email to all my customers, and this many purchased, went back to the website, or clicked it. But if you instead said, hey, I'm going to send just to odd-numbered customer IDs and not to even-numbered customer IDs, I can then just compare. Forget about who clicked on ads, who did anything.

I'm just going to look at my backend. I know I exposed these users but not these users, a 50/50 split, and they've historically done the same thing. All I did was odd and even, and I'm just measuring the difference between those two groups. So really, any way that you can establish a true control that passes that AA test. Before you intervene, do they continue to look similar? Are they influenced at the same rate? So that you can feel confident that when you do intervene, with new media, retracting media, some new sort of test, you are confidently comparing to what would've happened in a world without that intervention.
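To put rough numbers on the matched-market math Tom walks through here, below is a minimal sketch in Python. All sales figures and market names are invented; it checks the AA-style pre-period agreement, then reads the lift off the delta against a scaled control.

```python
# Minimal sketch of the matched-market ("geo lift") math described above.
# All numbers and market names are made up for illustration.
import numpy as np

# Daily sales for a matched pair of markets, pre- and post-intervention.
pre_test  = np.array([100, 104,  98, 102, 101])   # e.g. Atlanta, before the change
pre_ctrl  = np.array([ 99, 103,  97, 101, 100])   # e.g. Memphis, before the change
post_test = np.array([118, 122, 115, 120, 119])   # Atlanta, after turning ads on
post_ctrl = np.array([101, 105,  99, 103, 102])   # Memphis, untouched

# "AA test": before intervening, the two series should move together.
aa_corr = np.corrcoef(pre_test, pre_ctrl)[0, 1]
print(f"Pre-period correlation (AA check): {aa_corr:.3f}")

# Scale the control by the pre-period ratio, then read lift off the delta.
baseline_ratio = pre_test.sum() / pre_ctrl.sum()
counterfactual = post_ctrl.sum() * baseline_ratio   # what the test market "would have done"
incremental    = post_test.sum() - counterfactual
lift_pct       = incremental / counterfactual * 100
print(f"Incremental sales: {incremental:.0f} ({lift_pct:.1f}% lift)")
```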

Brett Curry:

Yeah, yeah. It's applying the scientific method with some rigor behind: what happens when I turn this channel on, or what happens when I turn this channel off? What is the actual impact of this channel? And what's interesting is, I remember back in my early days in the advertising world, this was when online stuff was just getting warmed up, I was talking to this furniture store owner, and I'm like, hey, what do you do? Do you invest in radio ads? Do you do TV? Do you do newspaper? And as I went through them, I'm like, hey, do you do radio ads? And he's like, yeah, I mean, I sort of do. Newspaper? Yeah, if there's a big sale, something will happen. I'm like, well, what about TV? And he said yes, and his eyes lit up, and he's like, when I run TV ads, I feel it. People walk in the door. It happens. And I remember early in my online career thinking, man, that was so unsophisticated. Did that guy really know what's going on? But now, looking back, I'm like, yeah, that's maybe all that matters. That is incrementality in a real loose, easy, observe-it-with-your-own-eyes way, because he could feel it.

Tom Leonard:

Which I think people take for granted. Yeah,

Brett Curry:

They do.

Tom Leonard:

Yeah,

Brett Curry:

That's not exciting. That's not like, where's all your data? It's in my cash register. That's where all the data

Tom Leonard:

Is. Especially for smaller brands, when you have the ability to feel if something's working or not working. If you double spend on something that you think is working really well, because attribution says it's working really well, and all of a sudden your CAC just doubles even though your attributed numbers scale linearly, something has to give, right? And what has to give is: it wasn't really causing any additional top-line growth. It was just really good at getting the attributed credit. So I think feeling it in the P&L is definitely overlooked.

Brett Curry:

It's valid, and it is overlooked, though. You're a hundred percent right, especially now that we have so many tools at our disposal. And I think another way to look at this, and look, I'm a Google guy. YouTube and Google is kind of where I really got my start in online

Tom Leonard:

Marketing.

Brett Curry:

But listen, branded search is a perfect example here. What happens, and we see this all the time, what happens if you turn branded search completely off? Now, I believe, and this could be a whole other podcast, there are strategic ways to use branded search, and there are ways to run it and not waste money. But a lot of people could shut it off and nothing happens. Nothing. Maybe sales dip a little bit. But you take Meta, when Meta's really working, and you shut it off, you feel it. Sales go down. That's incrementality. Same is true for YouTube, if you're doing YouTube the right way. And so, yeah, I really like this. One kind of anecdote here to share: we just did a test with Arctic, Arctic coolers, the Yeti competitor, my favorite cooler, my favorite drinkware as well. They wanted to see, hey, can YouTube drive an incremental lift at Walmart?

So they had just gotten into most Walmart stores, coast to coast. So we did exactly what you laid out there. We had 19 test markets and 19 matched control markets, similar markets, so think like a Denver and a Kansas City, or the example you used, Atlanta and whatever else is kind of comparable. And hey, let's run YouTube in one and not in the other, and let's measure the growth in Walmart sales and do a comparison between the two. And it was remarkable. It was about an eight-week test. We had three test regions, so 19 markets, but three test regions. Test region one, we saw an average 12% lift in Walmart sales. Test region two was like a 15% lift. And then our final test region was a 25% lift. And there were some standouts: Oklahoma City was up 40%, and Salt Lake City was up 48%.

But it was one of those things where, okay, now we look at that and we can say, okay, YouTube had a big impact. And what's also interesting, Tom, is we just ran the YouTube portion at OMG. They also did a connected TV test in other markets, not related, and didn't see a lift, didn't see a measurable lift. And it could be lots of things. That's not to throw shade on CTV. I like CTV. So maybe they just did it wrong, or had the wrong creatives, or who knows what. But it's one of those things where, if you do this the right way, you should see an impact.

Tom Leonard:

And touching on the piece that I didn't mention: the other beauty or value of incrementality testing relative to attribution or MTA is the ability to see beyond your .com, to see what's happening on third parties like Amazon, what's happening in store, if you get that data from owned-and-operated stores or through wholesale data. It really simplifies things. There's so much complexity, and I think that's, again, one of the rubs I have with MTA: all of the data you have to wrangle together to try to patchwork this story together. Whereas with incrementality testing, it's pretty straightforward. It's: what did I spend and how did I run that spend, by market, by day or by week, and what were my sales, what were my new customers, or whatever metric I want to look at, with that same granularity and the same dimensions. And that's really it, because you're just trying to understand the causal relationship between spend and outcomes. Skipping all that muddy middle of trying to get it at the user level, which, again, is not going back into the tube, really simplifies things.

Brett Curry:

Yeah, it does. And another thing that was kind of interesting that came to light doing this test for Arctic: all of the ads were tagged with available at Walmart, shop at Walmart, find it on the shelves at Walmart, whatever. We measured everything, though, in those markets. So you could look at Walmart sales and online sales, so the .com and Amazon. And what's interesting is the push to Walmart really worked. It's a reminder that what you ask someone to do in an ad is what they're going to lean towards, because in some of the markets we didn't see that much of an online lift. We saw some clicks and stuff like that, but the lift was at Walmart. But we also saw a pretty strong lift at Amazon as well. I think that just speaks to: there are some people that are going to buy everything from Amazon, even if you tell 'em to go somewhere else. The value proposition, is it on Amazon? Yeah, yeah,

Tom Leonard:

Yeah. Here in a day or two, it's hard

Brett Curry:

To beat, dude. It's hard to beat same price, in a couple days, and I don't have to leave my house. But yeah, really, really interesting. We'll circle back to that, of course, but let's talk about MMM, or media mix modeling. What is that? How are you using it? And how does it relate to incrementality testing? Because, again, going back to your tagline, Tom, you did not say operationalizing MTAs. You said operationalizing MMMs and incrementality. So what is MMM, and how does that pair with incrementality?

Tom Leonard:

Yeah, it's basically a big correlation exercise, trying to suss out, without a true holdout group, what is the impact and contribution of each media channel, and also what would happen without media. So it's trying to answer a lot of the same questions as incrementality, but using correlation as opposed to a true holdout group. And I'm sure all the hardcore MMM people and data scientists will thumbs-down this, or whatever you can do to a podcast, but: hey, in this period of time, sales went up, and nothing could really explain that other than the fact that TikTok spend went up. And essentially you're doing that at mass scale over longer periods of time, trying to take into account anything else that could explain it. So you'll always flag things like, these are the promotions that happened. And because you're going to give a model at least two years' worth of data, it'll bring in seasonality and try to understand those sorts of trends.

So it's trying to pull out: if not seasonality, if not promotions, if not these other things we're flagging, if not price reductions, what was happening in media that could explain that change? That's ultimately what MMM is doing. It's a big correlation exercise figuring out roughly what each channel contributes to top-line revenue or orders. And what's really important, I think the nicest part or the best first step with MMM, is getting an understanding of the base, which is what it's going to be called, or the intercept: without the presence of ads, what does this model think my sales would be? So that I can then calculate not a total CAC, just total new customers divided by cost, but a CAC incremental to media: remove base from that equation, and how many conversions were contributed because of media, as this model sees it. No model is going to be perfect, no measurement method is going to be perfect, but it's a really nice place to start, to say: I knew I couldn't credit all new customers to advertising, but what's a good number to use or to start with?

Well, it looks like, and this will depend on the maturity of the brand, but a really mature brand, I mean a super mature brand like the big CPGs, might be something like 99% base; a smaller brand might be something like 50%, because you've got this word-of-mouth flywheel, you've got product-market fit. But trying to get an understanding of how much media is contributing relative to your customer base is a really nice place to start. And the benefit of running incrementality and media mix modeling together is informing the model with some of that causal data. There's a really powerful feature of media mix modeling which is saying: hey, yes, it's a correlation exercise, it can't pull everything out, but let me inform the model, or at least restrict the priors or the coefficients, whatever you want to call them, that it's searching over to find a fit, and say, well, I did a holdout test. I know you don't have the causal data, but we ran this in this channel and that channel. Restricting the model and giving it data that it can't have without that human intervention can be a really powerful flywheel.
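As a rough illustration of the base-versus-media arithmetic Tom describes, here's a back-of-envelope sketch. The customer counts, spend, and the 50% base share are all hypothetical.

```python
# Sketch of the "base vs. media-driven" CAC arithmetic described above.
# All figures are hypothetical.
monthly_new_customers = 10_000
monthly_media_spend   = 500_000.0
base_share            = 0.50   # fraction of new customers the MMM assigns to "base"
                               # (word of mouth, brand equity), not paid media

blended_cac     = monthly_media_spend / monthly_new_customers
media_driven    = monthly_new_customers * (1 - base_share)
incremental_cac = monthly_media_spend / media_driven

print(f"Blended CAC:           ${blended_cac:,.2f}")      # $50.00
print(f"Media-incremental CAC: ${incremental_cac:,.2f}")  # $100.00 -- the honest number
```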

Brett Curry:

So using your incrementality test data, feeding that back into your MMM model to make it more accurate and more causal and make that correlation

Tom Leonard:

Stronger. Because the two things that you're really trying to get, but don't get, with multi-touch attribution or attribution in general, and do get with the combination of media mix modeling and incrementality testing, are the incremental impact, the causal impact of what would've happened without the presence of ads, as well as the diminishing-returns curve, which we know can be really powerful and important too: what happens over time as I spend more. That's a big feature of media mix modeling, understanding where you are on a diminishing-returns curve. If I keep spending more, I know it's not going to scale linearly, but are there channels that diminish faster? Is there more headroom in other channels? It really becomes this true optimization game of, where do I put the next dollar? Ultimately, the question that every marketer, every finance team is trying to answer is: hey, if I find $20,000 in the couch cushions, where do I put it? And if I need to give back $20,000, where do I pull from?
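One way to picture the next-dollar question is with toy diminishing-returns curves. The curve shape and every parameter below are invented; a real MMM would estimate these from data.

```python
# Toy illustration of "where does the next $20k go?" using diminishing-returns
# curves. Parameters are invented; an MMM would fit them from historical data.
def revenue(spend: float, scale: float, half_sat: float) -> float:
    """Simple saturating response: approaches `scale` as spend grows."""
    return scale * spend / (spend + half_sat)

channels = {
    # name: (scale, half-saturation spend, current spend) -- all hypothetical
    "meta":    (1_500_000, 400_000, 600_000),
    "youtube": (  700_000, 500_000, 150_000),
}

step = 20_000
for name, (scale, half_sat, spend) in channels.items():
    avg_roas = revenue(spend, scale, half_sat) / spend
    marginal = revenue(spend + step, scale, half_sat) - revenue(spend, scale, half_sat)
    print(f"{name:8s} avg ROAS {avg_roas:.2f} | next ${step:,} returns ~${marginal:,.0f}")
# YouTube is earlier on its curve, so the next dollar does more work there,
# even though Meta's *average* ROAS looks better (1.5 vs. ~1.1).
```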

Brett Curry:

I want to hang out at your house and look in your couch cushions and find 20 grand. That's

Tom Leonard:

Great. Yeah, it's easy to give it back, but right, we're trying to figure out what is going to be the least impactful if I have to give the money back and cut budgets, and where it's going to be the most impactful if I have another $20,000. Because the answer is not going to be found in what has the highest or the lowest ROAS in an attributed view. In fact, that can have the complete opposite impact from what you want.

Brett Curry:

Yeah, yeah, it's really great. So I want to actually talk about that point in a minute: what to do if you've got to cut budgets. Which, hey, listen, there's been some uncertainty even as we record this, tariffs up, tariffs down, markets up, markets down, consumer sentiment all over the place. So if things get a little bit tight, what are we going to do? We can't slash marketing, we can't slash growth. I think that sends you into a death spiral. But we might have to pull back and get more efficient. So let's talk about that for a little bit. Where can you be led astray? I think you just made a post on LinkedIn about this, right? Where you start looking at performance, which feels like the smart thing to do, looking at ROAS and whatnot, and you're like, well, great, let's just cut the lowest-ROAS campaigns and channels, we'll be fine. How does that lead you astray? And if you want to talk about your specific example to help illustrate these points, that'd be great.

Tom Leonard:

Yeah, totally. The one you're referring to is, I think, branded search, which we were talking about earlier. And I love using it, both because, if a brand is spending a lot of money there, it can be a really great place to go find those savings without impacting top line, but also, frankly, because it's really easy to understand. Most people understand it up and down the organizational chart, across departments. Everybody sort of understands the idea of: hey, if somebody's already searching for my brand, do I need to pay to get that click and that conversion? And I've found that the fact that it's easy to understand can be a really good gateway to incrementality testing, because it's easy to get buy-in. Everybody understands that idea, whereas it may be more challenging to express in other types of campaigns.

But branded search is a good example, and the example you're referring to: a midsize brand I was working with went through that exact exercise, had to cut budgets. They looked up and down the campaigns they were running, and it was like, hey, we've just got to make the best decision we can with the best available data. They were basically running PMax, non-branded search, and branded search, and PMax and branded search had the best attributed ROAS and best CPA. Non-brand was really hard to justify in a lower-budget environment based off the attribution data, so they cut that and leaned a little more into branded search as a percentage of their budget. And over the next couple months, new customers and total revenue were declining, despite the attributed ROAS and CPA looking better than ever. And that's when I was brought in: looked at all these things, saw the loose correlation between non-brand and new customer acquisition and top line, plus the general skepticism many have around branded search, especially in a low-competition environment, which they were in.

There weren't many competitors in the auction that we could see in Auction Insights. So yeah, we ran a very blunt-instrument matched-market test, which, at a brand of that size and for branded search, I don't think is ever a bad idea. And yeah, no impact from pulling branded search. It was about 20% of their budget, which was substantial. Then you can either make the decision: I'm going to put that 20% back in my pocket, save it for a rainy day, give it to some other place in the org, or say, hey, I'm going to redistribute this to something I see in the correlation data that might help drive top line back up. Let's reinvest that in non-brand as opposed to keeping it in branded. Again, the complete opposite of what attribution would say. And you see that a lot. Frankly, branded search is an easy one to pick on.

Same with retargeting, but really anything. It's especially challenging with the black-box solutions that blend, and I'm sure we could do a whole show on PMax and Advantage+, things that bundle together tactics with historically radically different levels of incrementality. That can be a real challenge when you're measuring on attribution. But yeah, that's a ranty way of saying yes: finding areas to cut by following the attribution data can often lead to really negative business outcomes, because the attribution view just does not take into account what would've happened without the presence of those ads, like incrementality does. And so it can definitely lead brands astray as they're looking to cut.

Brett Curry:

Yeah, really interesting. And yeah, PMax is notorious for leaning into remarketing or branded search. If you're not diligent about that, it can lean into both of those things, so you've got to be mindful of that. You also quoted something that totally ties into this. It's from a talk at Shoptalk, the show that you went to, and I can't remember who said it, but it was something like: if I see high ROAS, I know something is wrong, and the auto-targeting is just finding existing customers. Do you remember who actually said that? And unpack it a little bit.

Tom Leonard:

Yeah, I forget his name and I could look real quick. He worked for

Brett Curry:

Mic

Tom Leonard:

Danone, the big CPG. Yeah, I just really appreciated that quote, because I always wonder if I live in sort of a bubble of being super passionate about incrementality versus attributed metrics. But that was just really refreshing to hear, because I don't think that's the natural,

Brett Curry:

It's not

Tom Leonard:

Thought in people's

Brett Curry:

Head. Spend more,

Tom Leonard:

But I really think it should kind of spark some skepticism, especially when your goal really is to try to drive new customers.

Especially if you think about incrementality in the context of ASC or PMax, which blend retargeting and prospecting by default, and knowing diminishing returns: my first dollars, yes, they're going to be the most effective. But if they're focused on people that are already buying from me, and my goal in my head is new customers, I should be shocked that I can spend a hundred dollars and drive this amazing new customer revenue, and not think that something is up. And even over time, as I continue to spend, our BS meters should probably go up a little bit more. I don't think they do by default. So I found that comment really refreshing.

Brett Curry:

Yeah, I think that really illustrates it, right? Most of us would think, oh, ROAS is going up, great, we're printing money. Whereas maybe you should say, BS detector, something's wrong here. This campaign is leaning into customers that were going to buy anyway. And I'll give two examples here to illustrate this a little bit more. And since we've been picking on branded search so much, I'll also share a couple of ways I think we should use it.

One, if other competitors are aggressively bidding on your brand, just know that if you're not Nike, and you're not Adidas, and you're not, like, Ford or something, it's not a lock. If it's a new customer, they could be swayed by a competitor. And that's generally how we like to separate it out: let's have branded search for returning customers and make that crazy efficient, or just turn it off altogether. If it's a new customer, then again, we want it to be very efficient, but maybe we want it on, because we don't want our competitor to come in and swipe our customer. And one example of this: I did a podcast with Brian Porter, he's the co-founder of Simple Modern, great drinkware brand, he's become a friend, and they did an incrementality study, and they found, I may get these numbers a bit off, but it was like branded search was 10% incremental. So basically what that means is, if it shows that I got a hundred new customers from branded search, I probably would've gotten 90 of those if I had shut it off, right? Only 10% were incremental. So what you would need there is a 10x ROAS on branded search for it to even make sense. If it's below that, you're completely wasting money.

Pair that with, and you and I were commenting on this, the Haus analytics study. Olivia Kory and team did 190 incrementality studies involving YouTube, and they showed, with tremendous rigor, that YouTube is probably 3.42 times more incremental than platforms report, meaning if you see a one in platform, it's actually like a 3.42 in terms of incremental impact. So, wildly different between those two. But again, we're just so drawn to in-platform ROAS. Man, we'll just spend on PMax and branded search, when really we should be saying, let me lean into YouTube, or let me lean into top-of-funnel Meta.
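Brett's back-of-envelope here reduces to dividing your target "true" ROAS by a channel's incrementality factor. A quick sketch using the two figures he cites, both treated as illustrative:

```python
# If only a fraction of attributed conversions are truly incremental, the
# in-platform ROAS bar moves by 1 / incrementality. Figures are illustrative.
def breakeven_platform_roas(target_true_roas: float, incrementality: float) -> float:
    """Platform ROAS needed for the incrementality-adjusted ROAS to hit target."""
    return target_true_roas / incrementality

# Branded search at 10% incremental: platform must show 10x to truly break even.
print(breakeven_platform_roas(target_true_roas=1.0, incrementality=0.10))  # 10.0
# YouTube at ~3.42x incremental (the Haus stat): a platform 0.3 is really ~1.0.
print(breakeven_platform_roas(target_true_roas=1.0, incrementality=3.42))  # ~0.29
```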

Tom Leonard:

I think both those examples are really good, and to me it also speaks to the importance of cost per incremental, which is almost more important than percent incremental. And that's something I always use with branded search. I think you and I have a very similar feeling around branded search. There's definitely a time and a place for it. And it's one of those things where it might not matter that it's 10% incremental relative to what Google's attributing. If your attributed CPA is a dollar, and now it's really $10, but your margin when you sell a product is a thousand dollars, hammer that all day long. That cost per incremental is still extremely profitable and valuable. And same with the YouTube piece. If YouTube is four times as incremental as Google said, but your YouTube is crazy expensive, it still might not be worth it, even though it's four times

Brett Curry:

More

Tom Leonard:

Incremental than the platform was saying. And that's how I think a lot about this with connected TV, where connected TV can be super powerful, and maybe more so than linear TV. But if you can buy scatter linear TV for a tenth of the cost of CTV, well, it just has to be more than a tenth as effective, and it's accretive, it's a positive. So it becomes more of a comparison of cost per than just a blanket, how incremental is something. Which I always think is important to focus on and call out.
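Tom's cost-per-incremental point in numbers, using the hypothetical figures from his branded-search example ($1 attributed CPA, 10% incremental, $1,000 margin):

```python
# Percent incremental alone doesn't decide anything; compare the cost per
# *incremental* conversion against margin. All figures are Tom's hypotheticals.
attributed_cpa  = 1.0
incrementality  = 0.10      # only 10% of attributed conversions are real adds
margin_per_sale = 1_000.0

incremental_cpa = attributed_cpa / incrementality   # $10 per truly incremental sale
profit = margin_per_sale - incremental_cpa

print(f"Cost per incremental conversion: ${incremental_cpa:.2f}")
print(f"Profit per incremental sale:     ${profit:.2f}  -> still worth hammering")
```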

Brett Curry:

Yeah, it's so good. I mean, measuring something in terms of percentages can provide insights and help make decisions, but ultimately it's the cost per, right? Translate that into real dollars to see if it makes sense. I 100% agree with you. But I think this also goes back to, and I'll use your linear TV example, and I still love TV and connected TV, but I'll use YouTube just because I've got the numbers in my brain. With YouTube, sometimes we'll see a $5 CPM or a $7 CPM in certain audiences, compared to other channels that are 15, 20, 30, 50, whatever. And I'm like, well, if we're reaching the right person, and if the message and offer are good, how could this not work? And it's one of those things where either one of those is off, we're talking to the wrong person, or it's the wrong message, or we're just not measuring it properly, and that's where we need to look at it. So, did you have a thought on that? I've got another question on MMM here in just a second.

Tom Leonard:

Yeah, yeah, totally. But it made me think of why I'm starting to become way more bullish on any channel that's historically been hard to measure, where I think there's that arbitrage opportunity: costs are still relatively low because people haven't all moved in, because it's not easy to attribute. It'll be really interesting with the Haus example, does that inspire a lot more YouTube buyers? That's something Google should have put out long ago, but I think it would undermine search, and that's their bigger business. And I could do a whole rant, I'll save you that, but the idea of incrementality-first measurement probably wouldn't be great for the search business, so they probably haven't been able to make that case on YouTube. But you think about all the channels that have historically been harder to attribute, that's where costs are deflated, just from a supply and demand perspective. So when you can move in and get CPMs at five to seven dollars, and it's really effective, but most people measuring through attribution don't know it's really effective, that's a huge win for a certain period of time, until everybody floods in and the costs go

Brett Curry:

Up the market. I'm sure there's a lot of people that were not excited to see that study from house like dang it, that means my costs are going up. I don't like that at all. So really good man. So we talked about incrementality testing and I think you can use tools like House and then there are others. We're just talking about work magic and there's a number of others you can lean into. Full disclosure, they're pretty expensive, but you can also do stuff on your own too. If you've got someone that can measure this stuff, you can do a little bit of it on your own. What about the MMM side of things? What's kind of the easy way to start there? Is there an easy way to start? What do you recommend to people

Tom Leonard:

There? I don't know if there's an easy way to do anything. Well, I guess that's not totally true. I think there are some ways to run relatively easy incrementality tests, so that's the easier place to start, and you can always ratchet up the scientific rigor. I think the problem with looking for an easy MMM solution is that anybody can run a model with Robyn, or there are a lot of open-source packages, but just because you can run a model, it could say anything. It's not necessarily rooted in anything that can suddenly predict the future and tell you exactly the contribution from media. Whereas incrementality can do that a little more out of the box. You may have wildly wide confidence intervals, but it answers the question. It gives you the comparison: I didn't do it in this market, I did it in this market.

What is the delta? With media mix modeling, you could build a model to tell sort of any story. The proof is in the pudding: if I do the thing the model says, does it change my top line? Can I see over time that when I listen to the model, it improves my top line? So it's a lot easier to get started with incrementality testing. You can run poor man's matched-market tests: as I said, you can just pick some markets that historically behave similarly, and there's certainly some risk there. But with a model, you might think it's an amazing model, and I just don't feel like there's a great way to DIY that without some real scientific or statistical rigor. Or if you do, you've got to try to prove it over and over by taking some big swings.

And really, I sort of feel like you can get away with the feel-it kind of tests without running a true incrementality test or model. If you're a small enough business and you spend a decent amount on Facebook, maybe you're not willing to turn off Facebook, but are you willing to drastically increase spend and see if you can feel something at the top line? Okay, then what happens if you cut it in half? And start to understand those curves on your own. That's probably a less risky way than, I've never done anything in R and I'm going to try to run a media mix model. That's probably a risky proposition.
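A poor man's version of the market matching Tom mentions can be as simple as ranking candidate controls by how well their sales history tracks the test market. The data below is simulated purely for illustration; real matching would use longer series and more careful criteria than raw correlation.

```python
# A "poor man's" market match: pick the candidate control whose sales history
# best tracks the test market. Data is simulated purely for illustration.
import numpy as np

rng = np.random.default_rng(0)
weeks = 52
test_market = np.cumsum(rng.normal(1.0, 0.3, weeks)) + 100   # e.g. Denver weekly sales index

candidates = {
    "kansas_city": test_market + rng.normal(0, 1.0, weeks),          # tracks closely
    "miami":       np.cumsum(rng.normal(1.5, 0.5, weeks)) + 80,      # steeper, noisier trend
}

for name, series in candidates.items():
    r = np.corrcoef(test_market, series)[0, 1]
    print(f"{name:12s} r = {r:.3f}")

best = max(candidates, key=lambda m: np.corrcoef(test_market, candidates[m])[0, 1])
print(f"Best matched control: {best}")
```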

Brett Curry:

Yeah, that's a really good insight. I'm glad you answered the question that way. I think, yeah, lean into the poor man's incrementality test, or just lean really heavily into a channel and measure your top line, if you've got a small enough business to see that. But if you're going to lean into MMM, one, you need a couple years of data to be able to make some correlations, and you probably need to lean on someone, or a tool, with quite a bit of experience, because you can be led astray.

Tom Leonard:

And on your comment on cost, too: it's all relative. A lot of times, where you're going to need media mix modeling is when you're spending a significant amount in a significant number of channels, which you're probably only doing if you're spending a lot in total, which you're probably only doing if your revenue can support that high level of spend. Which means a tool may not be all that expensive relative to the opportunity you could derive from it, which is where I always net out.

Brett Curry:

So I'm paying 10 or 20 grand monthly for a tool, but it's allowing me to redeploy millions in ad spend. It totally and completely makes sense. So, Tom, this has been fantastic. I'm just watching the clock, and I know we're coming up against it. One, I recommend people follow you on LinkedIn. You put out some awesome content, and I love reading it.

Tom Leonard:

Thank

Brett Curry:

You. People should definitely follow you on LinkedIn. And you are, is it Tom? What is your handle on LinkedIn? You are Thomas B. Leonard. Thomas B. Leonard. That's probably confusing.

Tom Leonard:

I'm very self-conscious on LinkedIn, so thank you for saying that.

Brett Curry:

I think it's good, man. I think it's really good. I like it a lot. Yeah.

Tom Leonard:

Yeah, it's been fun to start connecting with folks. It's definitely an area I have a lot of excitement and passion for, and it's fun to have these sorts of conversations. So I appreciate you reaching out a while ago and that we could connect. Absolutely,

Brett Curry:

Man. Absolutely. So if other people are like, hey, I just want to talk to Tom, because maybe you can help my brand or my business, how can they connect with you? And who are you looking to help, or who do you feel like you can help?

Tom Leonard:

Yeah, definitely appreciate that. Reach out on LinkedIn. I spend time there, and I love reading everybody's thoughts and content. Mostly we work with consumer-facing brands that are trying to understand where to put the next dollar, or where to pull back in the scenarios where they have to. Really, it's kind of rescuing people from attribution, trying to better understand where they can get more from their ad dollars. And to your point that you teed up: now is such an interesting time, or anytime there's margin pressure, there's more scrutiny on a marketing budget. I really want to help empower marketing teams to feel more confident in what they're doing, and ultimately the finance teams to feel more confident in what the marketing team is doing. That's where I love to plug in. But I also just love to talk about this stuff, probably more than I should, so I'm always open to the conversation.

Brett Curry:

Yeah, I talk about that a lot. I've read analytics and measurement books on vacation, and my wife is like, what is wrong with you? And I'm like, it's interesting, I don't know, I like it. We're just a different breed, I suppose. But I love that. And I think this is a great way to end it: if I've got an extra dollar to spend on marketing, where do I put it? If I need to cut a dollar of spend, where do I cut it from? That's really what this approach with MMM and incrementality is about. And so I think they're necessities. I think attribution is broken and/or misleading in so many different ways. There are some correlations there, so we don't have to throw it out completely, but I do believe you need to lean into MMM and incrementality for sure. So connect with Tom on LinkedIn. And with that, we'll wrap. Tom, it's been fantastic. Thanks for the time, the insights, and the energy. Yeah,

Tom Leonard:

Thanks so much, Brett. Glad to connect.

Brett Curry:

Absolutely. And as always, thank you for tuning in. We'd love to hear from you. If you found this episode helpful, and you know someone else in the D2C space or marketing space who you think, man, they've got to listen to this, please share it. It would mean the world to me. And with that, until next time, thank you for listening.
