Measuring success on Meta
As the digital advertising ecosystem continues to evolve under growing data regulation & privacy changes, in-platform measurement has been caught squarely in the crosshairs. In particular, Meta / Facebook has had to deal with, and been the most vocal about, the largest of these changes so far: Apple’s introduction of ATT (App Tracking Transparency) to the app ecosystem with iOS 14.5 in mid-2021.
There can be no doubt that the introduction of consent, paired with gated access to advertising identifiers such as the IDFA on iOS, has impacted traditional forms of measurement. But to Meta’s credit, its approach with AEM (Aggregated Event Measurement), while it may have seemed extreme, was designed to tackle Apple’s ecosystem as a whole (iOS + Safari) rather than just the initial changes. So what does measurement actually mean on Meta nowadays? This blog post explores the opportunities available to measure success when running Meta ads.
Split Tests
Perhaps the most underrated form of testing is the traditional A/B test, known in Meta as a split test. This form of experimentation splits your audience into two mutually exclusive groups, so that users in one cell are never exposed to the other cell’s ads. The readout is correlational: you are comparing which variant performs better, not measuring lift against a holdout. With Meta being a largely logged-in, people-based platform, this is generally a more accurate read than on platforms that rely on cookies to split audiences, where the risk of overlap is much higher. There are plenty of variables to play with here: creative, placement, audience & delivery types.
Split tests are recommended for any type of advertiser, whether you are spending millions or just starting out slowly. Having a robust testing plan and framework where a split test can be part of the methodology is best practice. It is worth thinking about the implications of running split tests on a known audience (retargeting) though, where the readout can be more difficult to interpret depending on how Meta decides to group users.
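To make the readout concrete, here is a minimal sketch in Python (all figures are hypothetical) of how you might sanity-check whether the difference between two split-test cells is statistically meaningful, using a simple two-proportion z-test on their conversion rates:

```python
from math import sqrt
from statistics import NormalDist

def split_test_readout(conv_a, n_a, conv_b, n_b):
    """Two-proportion z-test on the conversion rates of two split-test cells."""
    rate_a, rate_b = conv_a / n_a, conv_b / n_b
    pooled = (conv_a + conv_b) / (n_a + n_b)            # pooled conversion rate
    se = sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (rate_b - rate_a) / se
    p_value = 2 * (1 - NormalDist().cdf(abs(z)))        # two-sided p-value
    return rate_a, rate_b, z, p_value

# Hypothetical cells: creative A vs creative B
rate_a, rate_b, z, p = split_test_readout(conv_a=420, n_a=50_000,
                                          conv_b=495, n_b=50_000)
print(f"A: {rate_a:.2%}  B: {rate_b:.2%}  z={z:.2f}  p={p:.4f}")
```

Meta’s own Experiments UI surfaces a confidence read for the winning cell; a back-of-the-envelope check like this is mainly useful when folding results into your own testing framework.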
Brand Lift
Aimed at more upper-funnel forms of marketing on Meta, a Brand Lift study, run either with Meta’s own solution or via a verified third party such as Millward Brown or Nielsen, is a very common approach to measuring the impact of advertising on brand-specific outcomes. It takes the form of exposing ads to a test group and then following up with a survey of both exposed and holdout users, with questions designed to measure ad recall, favourability or intent. Meta’s own solution presents the survey results as a visualised lift bar chart & scoring, accessible through the Experiments section of the UI. As we are measuring lift against a control, this is causation: we are looking at the impact of the advertising campaign on a specific outcome.
One caveat is the requirements to run these sorts of lift studies, which differ by region and study type. Meta is not alone here: any lift study ultimately requires a minimum number of responses to give statistically significant results, which translates into minimum spend, impression and reach thresholds to aim for once live. The beauty of a good brand lift study is the ability to repeat it over time, and to combine it with external signals such as Google Trends data or even your own first-party data, site traffic being one example.
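As a simple illustration of what sits behind those lift bar charts, here is a sketch in Python (with made-up survey numbers) computing the absolute and relative lift in positive responses between exposed and holdout respondents:

```python
def brand_lift(test_positive, test_n, control_positive, control_n):
    """Absolute and relative lift in positive survey responses, test vs control."""
    test_rate = test_positive / test_n
    control_rate = control_positive / control_n
    absolute_lift = test_rate - control_rate        # percentage-point lift
    relative_lift = absolute_lift / control_rate    # lift vs the baseline
    return test_rate, control_rate, absolute_lift, relative_lift

# Hypothetical ad-recall question: exposed vs holdout respondents
test_rate, control_rate, abs_lift, rel_lift = brand_lift(
    test_positive=1_380, test_n=4_000,       # exposed users answering "yes"
    control_positive=1_100, control_n=4_000, # holdout users answering "yes"
)
print(f"Exposed: {test_rate:.1%}  Holdout: {control_rate:.1%}  "
      f"Lift: {abs_lift:+.1%} pts ({rel_lift:+.1%} relative)")
```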
Conversion Lift
On a similar theme to Brand Lift, and still taking a causal approach, it is possible to run Conversion Lift studies on Meta. These work on the basis of a test and control split, where the control group is not exposed to ads and is then compared against the exposed test group to see whether the end conversion event saw any lift from the introduction of ads.
iOS 14.5 severely hampered Meta’s ability to group users here: if a user opts out, there is no identifier to leverage, which makes it much harder to ensure quality in how the groups are defined. Meta has therefore made two significant changes to its current form of Conversion Lift:
It is now compulsory to have the Conversions API (CAPI) implemented to run a Conversion Lift study on Meta
For the conversion event you wish to measure, the EMQ (Event Match Quality) score should be at least 4, with a preference for 6 or above
Added to this, Conversion Lift studies are no longer self-serve and must go through a setup process that ensures there is an appropriate amount of spend / volume of data to make the study worthwhile.
Despite all of these steps, Conversion Lift studies are really powerful in giving a readout of the true impact of advertising, whether within Meta’s world or from an omni-channel perspective. This is ultimately a form of incrementality, which every advertiser should be aiming for or at least trying to measure. There are plenty of case studies where consistent conversion lift testing has proven the value of Meta as a platform, but advertisers also need to be comfortable with their control group, i.e. a group of users who will not be exposed to their ads, especially during peak trading periods.
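The core arithmetic of the readout is straightforward. Here is a sketch in Python (all numbers hypothetical) that scales the holdout’s conversion rate up to the test group’s size, then derives incremental conversions, lift and cost per incremental conversion:

```python
def conversion_lift(test_conversions, test_n, control_conversions, control_n, spend):
    """Incremental conversions and cost per incremental conversion from a holdout test."""
    # Scale the control group's conversion rate up to the size of the test group
    expected_baseline = (control_conversions / control_n) * test_n
    incremental = test_conversions - expected_baseline
    lift_pct = incremental / expected_baseline
    cpi = spend / incremental if incremental > 0 else float("inf")
    return incremental, lift_pct, cpi

# Hypothetical study with a 90/10 test/control split
inc, lift, cpi = conversion_lift(
    test_conversions=5_200, test_n=900_000,
    control_conversions=480, control_n=100_000,
    spend=150_000.0,
)
print(f"Incremental conversions: {inc:,.0f}  Lift: {lift:+.1%}  "
      f"Cost per incremental conversion: {cpi:,.2f}")
```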
It is also interesting to look at Brand Lift & Conversion Lift studies together, which is an extremely efficient way of assessing the value of running a full-funnel media strategy on Meta.
Geo Lift
Another form of lift testing uses geo / location as the test variable, measuring the effectiveness of advertising at a geographic level. The technique has been around for a long time but is especially viable right now: because it works on aggregated geographic data rather than individual users, it is far less impacted by the ever-growing challenges around user-level data & privacy.
And Meta has made this a lot easier for advertisers by rolling out an open-source solution named, you guessed it, GeoLift. The documentation for this can be found here.
Again, there are considerations to take into account with this kind of solution. One of the biggest strengths of a geo-based approach is that it can be rolled out across not just Meta but any paid advertising channel, as well as offline. This can help paint a much broader picture than a siloed, channel-specific one. On the flipside, advertisers need to think about:
How they split out geos in the first place, e.g. by postcode, DMA or city
How those geo lists end up being targeted / excluded in the platform, as it is sometimes not a like-for-like match
How to factor in obvious geos that may bias a test, e.g. London is very different from every other UK city in terms of share of delivery as well as population size
How going dark in certain geos could impact overall trade
Geo testing, more commonly known as matched market testing in the US, ultimately provides really useful insights, not only for a channel like Meta but also for more emerging channels like digital out-of-home.
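GeoLift itself builds a synthetic-control counterfactual for the test markets, but the underlying intuition can be shown with a much simpler difference-in-differences sketch in Python (hypothetical sales figures):

```python
def geo_diff_in_diff(test_pre, test_post, control_pre, control_post):
    """Difference-in-differences estimate of incremental sales in test geos."""
    # How the control geos drifted between periods (trend, seasonality, etc.)
    control_growth = control_post / control_pre
    # What the test geos would have done without ads, per the control trend
    counterfactual = test_pre * control_growth
    incremental = test_post - counterfactual
    return counterfactual, incremental

# Hypothetical weekly sales totals for matched geo groups (pre vs in-flight)
counterfactual, incremental = geo_diff_in_diff(
    test_pre=200_000, test_post=238_000,
    control_pre=180_000, control_post=189_000,
)
print(f"Counterfactual: {counterfactual:,.0f}  Incremental: {incremental:,.0f}")
```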
Media Mix Modelling
The use of MMM (Media Mix Modelling) is becoming a core staple of any measurement framework, thanks to its ability to withstand the changes to user-level data / data privacy. This more macro approach looks at channels holistically, factoring both online / offline media and other variables such as seasonality or weather into a statistical model.
Historically MMM has been seen as a solution for enterprise advertisers that run across both online & offline, but with the way the ecosystem is shifting, both the solution itself & its role in measuring success are evolving. Notably, Meta has always been open to MMM with its official MMM Feed solution, which automates reporting for export into third-party MMM tools. Meta has also released an open-source solution in Robyn, which can be looked at here: a free way to get a baseline MMM in place with the ability to customise it however you like. It does require data science knowledge as well as statistical packages such as R, but the potential is huge compared with a larger vendor approach or a custom build.
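Robyn itself layers adstock, saturation curves and evolutionary hyperparameter search on top of the basics, but the core idea of an MMM regression can be sketched in a few lines of Python on simulated data (every spend figure and coefficient below is invented for illustration):

```python
import numpy as np

def adstock(spend, decay=0.5):
    """Geometric adstock: carry a share of yesterday's media effect into today."""
    out = np.zeros_like(spend, dtype=float)
    carry = 0.0
    for t, x in enumerate(spend):
        carry = x + decay * carry
        out[t] = carry
    return out

rng = np.random.default_rng(42)
weeks = 104
meta_spend = rng.uniform(10, 100, weeks)   # hypothetical weekly spend (£k)
tv_spend = rng.uniform(0, 200, weeks)
seasonality = 50 + 20 * np.sin(np.arange(weeks) * 2 * np.pi / 52)

# Simulated sales: base + seasonality + adstocked media effects + noise
sales = (500 + seasonality
         + 3.0 * adstock(meta_spend, 0.6)
         + 1.2 * adstock(tv_spend, 0.3)
         + rng.normal(0, 25, weeks))

# Fit a simple linear model: sales ~ intercept + seasonality + adstocked media
X = np.column_stack([np.ones(weeks), seasonality,
                     adstock(meta_spend, 0.6), adstock(tv_spend, 0.3)])
coefs, *_ = np.linalg.lstsq(X, sales, rcond=None)
print(f"Estimated incremental sales per adstocked £k on Meta: {coefs[2]:.2f}")
```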
Whilst MMM has a lot of positives, and is largely futureproof against the changes mentioned, it does have a weakness in latency (most MMM solutions refresh on a quarterly basis, & going any faster means more resource / cost). Alongside this, aligning MMM with the more granular in-platform read is the ultimate aim for an advertiser; this can be solved in different ways, the most popular being a form of inflator / deflator applied to platform-reported numbers. The key message is that no matter how big you are, MMM is something you should be looking at in some form right now.
Third Party Tools
Outside of looking at pure platform numbers in Ads Manager, it is commonplace for advertisers to leverage some form of third party to measure the success of Meta. Most common are site analytics tools such as Google Analytics or Adobe Analytics (formerly Omniture), fed by tracking codes appended to landing page URLs. There are also other options such as ad server tracking (Google’s CM360 or Mediaocean’s Flashtalking, as examples), multi-touch attribution tools (Amazon Attribution, Nielsen’s Visual IQ), and the ever-booming DTC trend of solutions like Rockerbox / TripleWhale / Hyros.
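Those tracking codes are typically UTM parameters in the Google Analytics convention. A quick Python sketch (hypothetical campaign values) of tagging a landing page URL:

```python
from urllib.parse import urlencode

# Hypothetical campaign values; utm_* keys follow Google Analytics' convention
landing_page = "https://www.example.com/spring-sale"
utm_params = {
    "utm_source": "facebook",
    "utm_medium": "paid_social",
    "utm_campaign": "spring_sale_2022",
    "utm_content": "carousel_ad_v2",
}
tagged_url = f"{landing_page}?{urlencode(utm_params)}"
print(tagged_url)
# https://www.example.com/spring-sale?utm_source=facebook&utm_medium=paid_social&...
```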
But the general theme here is simple: none of these will ever measure the true value of Meta, due to a mix of technical limitations and a general inability to keep pace with how the ecosystem is evolving. Yes, they may give some view of Meta alongside other channels, but the real answer sits beyond relying on these outputs alone. The only exception comes where Meta has to adopt a standard outside its control, namely the app install space on iOS, where SKAdNetwork is king.
The Next Generation
There are plenty of measurement solutions already covered, and the future looks even more interesting within the Meta world. Unfortunately it is not possible to cover this in too much detail due to NDAs, but as usual it looks like Meta is one step ahead of its competition, as well as covering its back with existing technology in the data clean room space.
We can already see that Meta is not afraid to retire solutions that are no longer viable, the latest casualty being the newer Attribution solution (an upgraded version of Atlas / Facebook Attribution for Meta O&O) that never made it out of beta & will fully sunset in August 2022.
A lot of the future product set revolves around a newer acronym, PET (Privacy Enhancing Technologies), which Meta themselves have talked about in detail.
All of these will no doubt fit into the future state of Meta measurement, alongside the adoption of server-side data flow through the Conversions API (CAPI) & the continued adoption of conversion modelling as a means to probabilistically attribute success.
Concluding Meta Measurement
As you can see, there are plenty of different approaches to measuring the success of Meta advertising, both within and outside the platform itself. There are super easy places to start: even running a simple split test is valuable to any advertiser looking to get more insight from their Meta campaigns.
But as the ecosystem evolves, your measurement strategy, not only on Meta but everywhere else, needs to be aligned both within the platform and at a more macro level. Depending on your level of investment in Meta and the types of campaigns you run, my recommendation is to work through each of these measurement opportunities as part of a measurement roadmap to prove the incremental value of Meta.
Those three core areas of in-platform measurement, lift testing & an MMM / econometrics approach should all be used together to get the truest measurement read for an advertiser on Meta, and beyond.