But, Michael, isn’t slashing the Customer Acquisition Cost our life’s purpose? You even post that on your IG stories!
This is a classic trap for performance marketers. For most of us, reducing CAC is often the KPI we are measured by, or it’s our quarterly OKR. It might even be the reason we got hired in the first place.
Then why is this a trap?
Focusing only on CAC exposes you to pitfalls around LTV and growth volume. For example:
You might find that not all customer types have good retention. Optimising for the right customer rather than any customer may increase your CAC, but your LTV will be higher – and that will likely result in a shorter payback period.
Increasing your media spend is likely to increase your CAC and decrease your ROAS. But it will also unlock more revenue growth. It’s up to you and your business goals to decide what you want to maximise (most often revenue or net profit after factoring in COGS) – and therefore to find the optimal levels of CAC and ROAS.
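To make the trade-off concrete, here's a quick sketch with made-up numbers (none of these figures come from the article): a "right customer" segment can cost more to acquire yet pay back faster, because payback period depends on contribution margin per month, not on CAC alone.

```python
# Hypothetical numbers - a sketch of the CAC vs LTV trade-off, not real benchmarks.

def payback_months(cac: float, monthly_margin: float) -> float:
    """Months of contribution margin needed to recover acquisition cost."""
    return cac / monthly_margin

# "Any customer": cheap to acquire, but low margin and poor retention.
any_customer = {"cac": 30.0, "monthly_margin": 5.0, "ltv": 45.0}
# "Right customer": pricier to acquire, but higher margin and better retention.
right_customer = {"cac": 50.0, "monthly_margin": 15.0, "ltv": 180.0}

for name, c in [("any", any_customer), ("right", right_customer)]:
    pb = payback_months(c["cac"], c["monthly_margin"])
    ratio = c["ltv"] / c["cac"]
    print(f"{name}: payback {pb:.1f} months, LTV:CAC {ratio:.1f}")
# any: payback 6.0 months, LTV:CAC 1.5
# right: payback 3.3 months, LTV:CAC 3.6
```

The pricier segment pays back in roughly half the time despite the higher CAC – which is the whole point of optimising for the right customer.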
Then why the obsession with CAC?
The answer is that between CAC and LTV, the former is the metric that you can have more influence on as a performance marketer. Big shifts in LTV represent big shifts in customer behaviour, and that’s not something you can normally dictate.
On the other hand, big shifts in CAC are more possible (via optimisations on your existing channels, opening up new efficient ones, increasing virality, etc.). That means that shortening your payback period or increasing your LTV:CAC ratio is often synonymous with lowering your CAC.
The least important factor in your performance channels is the channel’s account and campaign structure.
Precisely. Media buying is important, but if the campaign fails to deliver on its marketing fundamentals, even the most advanced setup won't save it.
I remember a chat I had with a friend and fellow performance marketer recently. He was telling me a story about a poorly-structured Google Ads account in a business he was working with. “Even though it was a mess and there was tons of wastage in the search term report,” he said, “the performance was amazing.”
“Let me guess. The offer was great?” was my response. “Exactly,” he said. “The price was 50% lower back then than it is now.”
What’s the explanation?
With such a compelling offer, the clickthrough and conversion rates were high enough that the ad wastage was a minor factor for the overall performance.
If I were to rank the factors that affect the performance of an ad campaign, I’d go with the following:
1. The offer
2. The creative
3. The landing page experience
4. The account and campaign structure
(The 2nd and 3rd factors could be ranked interchangeably depending on the case. I don’t have a strong opinion there.)
The offer is the cornerstone of performance, since it’s what ultimately decides how desirable your value proposition is.
In an extreme example, a job finder website can get away with terrible UX and UI. This is because the job seeker’s motivation to complete the conversion path is going to overcome any obstacles that arise from poorly optimised website elements.
It’s the reason why you’ll see paid ad accounts with millions in spend being structured around a single offer. For example, subscription businesses will discount their first month to enable cheaper acquisition, then look to increase their LTV and shorten their payback period.
With creative and landing pages being the factors that will lure prospects into your website and increase their chances of converting, it’s evident that the way you structure your ad account is important. But it still ranks behind the 3 factors above.
Furthermore, the more sophisticated the ad platforms become, the less granularity they require in their setup.
This means that two ad accounts with identical setups, where the only difference is the creative and landing page experience, can get radically different results.
Looking at your source/medium report in GA won’t give you accurate channel performance.
The discrepancy between Facebook Pixel conversions and the conversions Google Analytics attributes to Facebook is a classic debate among performance marketers.
There are tons of questions about this in performance marketing discussion groups, asked by junior PPCers and seasoned pros alike.
Simply put, you can’t get an accurate measure of channel performance by looking at your Source/Medium report.
This report is last click (or, to be fair, last non-direct click) and does not take into account cross-browser, cross-device or view-through conversion paths.
Let’s think of a typical scenario:
You scroll through Instagram and find an ad showcasing a skincare routine. You are impressed by the creative, click through it and browse the website.
You are then hit by retargeting ads over the next 3-4 days, and you decide you really want to start taking better care of your skin. You now remember the name of the brand so you google them on your laptop, click on the first result (the brand’s paid ad), and complete the purchase there.
On Source/Medium, you’ll see this transaction attributed to google/cpc, and specifically the brand campaign – Paid Social would show a visit with no conversion.
Think about it the other way round. If Source/Medium were a report you could count on to evaluate channel performance, the channel you’d credit for the conversion above would be… branded search, and you’d be looking to increase your budget and efforts there. Sounds illogical, right?
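The mechanics of that misattribution are easy to see in a toy model. Here's an illustrative sketch of how a last non-direct click report credits the journey above (the channel labels are made up for the example):

```python
# A toy model of GA's last non-direct click attribution - illustrative only.

def last_non_direct_click(touchpoints: list[str]) -> str:
    """Return the channel a Source/Medium-style report would credit:
    the last touchpoint in the path, skipping over '(direct)' visits."""
    for channel in reversed(touchpoints):
        if channel != "(direct)":
            return channel
    return "(direct)"

# The skincare journey from the scenario: paid social starts it,
# branded search ends it - and branded search gets all the credit.
journey = ["instagram/paid", "instagram/retargeting", "google/cpc (brand)"]
print(last_non_direct_click(journey))  # google/cpc (brand)
```

Every earlier touchpoint – the prospecting ad that introduced the brand, the retargeting ads that built the desire – contributes nothing to this report's numbers.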
With that being said, I’ve seen brands make huge (in terms of both size and impact) budget decisions based on this report. More often than not, this means cutting paid social spend in the belief that organic, brand PPC and direct traffic will stay unaffected.
What typically happens is that they see drops as high as 50% in their total traffic, then hurriedly get back to spending!
To summarise, since we all know that the buyer’s journey is far from linear, it makes no sense to try to evaluate marketing channels based solely on their impact on the very last touchpoint.
Since we just established that Google Analytics data won’t accurately reflect Facebook’s performance, surely Facebook Pixel data will, right?
(Using Facebook for simplicity, but the same can be said for all Paid Social)
While the Facebook Pixel will give a much better representation of Facebook Ads’ performance than Google Analytics (let’s ignore the latest iOS 14.5 drama for a bit), there’s still a pitfall you should avoid when evaluating performance from the channel’s ad manager.
The pitfall’s name is… Incrementality.
Facebook Ads and Google Ads attribute conversions to their campaigns based on last interaction within their ecosystem. What the platforms won’t do, though, is split the conversion value between them if they’ve both played a part in it.
They’ll both add 1 conversion to their campaigns based on their respective attribution settings.
That means that as budgets increase (especially on Retargeting Audiences and Branded Search campaigns), you’ll often find different channels claiming the very same purchases as their own.
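A quick sketch with invented order IDs shows why the platform dashboards, added together, overstate reality:

```python
# Hypothetical example of two platforms double-counting the same purchases.

# Order IDs each platform claims under its own attribution window (made up).
facebook_claimed = {"A1", "A2", "A3", "A4"}
google_claimed = {"A3", "A4", "A5"}  # A3 and A4 are claimed by both

platform_reported = len(facebook_claimed) + len(google_claimed)  # dashboards summed
actual_orders = len(facebook_claimed | google_claimed)           # deduplicated reality

print(platform_reported, actual_orders)  # 7 5
```

Sum the two ad managers and you'd report 7 conversions; your order database only has 5. The gap grows as retargeting and branded-search budgets grow.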
The easiest way you can be fooled by Facebook’s stats is by assigning higher value and bigger budgets to your Retargeting and Retention (post-purchase) campaigns.
These campaigns will naturally have higher conversion rates and higher efficiencies, but they are reaching people that have already interacted with your business in some shape or form.
The incrementality of these conversions isn’t 100%, as these audiences are already aware of your business and some of them could reach the purchase step even without seeing another retargeting ad.
Side note: Incrementality Testing inside Facebook is a particularly useful tool once you’ve scaled your Retargeting spend.
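As a back-of-the-envelope illustration of how a holdout-style incrementality test is read (all numbers invented, and assuming equally sized treated and holdout groups):

```python
# Reading a retargeting holdout test - hypothetical numbers, equal group sizes.

treated_audience = 100_000   # saw retargeting ads
holdout_audience = 100_000   # deliberately suppressed from retargeting

treated_purchases = 2_000    # 2.0% converted
holdout_purchases = 1_500    # 1.5% converted anyway (already aware of the brand)

# Only the lift over the holdout group is truly driven by the ads.
incremental_purchases = treated_purchases - holdout_purchases
incrementality = incremental_purchases / treated_purchases

print(f"incremental purchases: {incremental_purchases}")  # 500
print(f"incrementality: {incrementality:.0%}")            # 25%
```

In this sketch the ad manager would report 2,000 conversions, but only a quarter of them wouldn't have happened anyway – exactly the trap of over-valuing retargeting efficiency.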
With that said, it’s important to structure your Paid Social so that the majority of the spend (often over 80%) goes to your Prospecting campaigns.
There are exceptions to this rule (e.g. you’ll want to heavily lean on Retargeting/Retention during BFCM) but doing so ensures you are putting your budget to maximum use by reaching and converting completely new customers, as well as adding more people to your retargeting audiences to convert them later.
You need to lean on blended numbers to evaluate performance instead of trying to measure everything.
The controversial one. I can already sense the skepticism from my performance marketing peers.
The thing is, as already established in Truths 3 and 4 above, we can’t always attribute conversions to a single channel.
Apart from the difficulties in attribution though, it’s also true that activity in one marketing channel influences activity and results in another channel and that in the end they all work together towards a blended result.
We definitely don’t live in the era of “half my advertising spend is wasted, I just don’t know which half” (quote by John Wanamaker).
At the same time, we are still far from measuring everything. What’s more, it looks like we are taking steps backwards in terms of tracking nowadays with the gradual loss of cookies. This can be mitigated by sophisticated 3rd party attribution tools, but the principle still holds true.
There are also still highly impactful marketing activities that you can’t measure accurately - and a grave mistake many marketers make is skipping those activities just because they can’t measure them.
At the end of the day, your marketing is an engine with many moving parts that work together - and the result is greater than the sum of its parts.
Therefore, leaning on your blended numbers to measure performance is a better guide to making budget decisions, as you’ll also avoid the pitfalls described in Truths 3 and 4.
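In practice, "leaning on blended numbers" means dividing totals that no platform can inflate. A minimal sketch, with made-up spend and order figures (the channel names and counts are assumptions for illustration):

```python
# Blended CAC and ROAS across channels - hypothetical figures, not benchmarks.

channels = {
    "paid_social": {"spend": 50_000, "platform_claimed_orders": 900},
    "paid_search": {"spend": 30_000, "platform_claimed_orders": 700},
}
total_new_customers = 1_250  # deduplicated, straight from the order database
total_revenue = 125_000

total_spend = sum(c["spend"] for c in channels.values())
blended_cac = total_spend / total_new_customers
blended_roas = total_revenue / total_spend

print(f"blended CAC: ${blended_cac:.0f}, blended ROAS: {blended_roas:.2f}")
# blended CAC: $64, blended ROAS: 1.56
```

Note that the platforms together claim 1,600 orders against 1,250 real new customers – the blended figures sidestep that double-counting entirely, which is why they're the safer guide for budget decisions.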
And if you're looking for a tool to help you keep track of all these different metrics, give Triple Whale a try.
© Triple Whale Inc.