Campaign-level testing focuses on custom variables that affect the entire campaign structure. This includes:
Dynamic Creative Adjustments: Testing how changes in creative elements based on user data (like location) or external factors (like weather conditions) affect performance (see the sketch after this list).
Optimization Goals: Comparing different campaign goals, such as click-through rate (CTR) vs. conversion rates, to see which is more effective for your objectives.
Note: changing the overall objective isn’t A/B testing, because running parallel campaigns with different objectives is a strategic comparison rather than a simple variable adjustment.
Budget Allocation Strategies: Evaluating how different ways of distributing your budget across ad sets or audience segments influence the campaign’s outcome.
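To make “dynamic creative adjustments” concrete, here’s a minimal Python sketch of rule-based creative selection. It’s only an illustration: the weather conditions and creative names are hypothetical, and in practice the signal would come from a weather API or from Meta’s own dynamic creative tooling rather than hand-written rules.

```python
def pick_creative(weather: str, temperature_c: float) -> str:
    """Hypothetical rules mapping external conditions to a creative variant."""
    if weather == "rain":
        return "creative_rainy_day_offer"   # push umbrellas, indoor offers, etc.
    if temperature_c >= 28:
        return "creative_summer_sale"       # hot weather -> summer messaging
    return "creative_default"

print(pick_creative("rain", 16.0))   # creative_rainy_day_offer
print(pick_creative("clear", 31.0))  # creative_summer_sale
```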
At the ad set level, the focus narrows to variables that affect specific audience subsets or how your ads are delivered. This includes testing:
Standard Options:
Audience Segments: Identifying which audiences respond best to your ads.
Placements: Determining the most effective platforms or locations within platforms (e.g., Facebook News Feed, Instagram Stories) for your ads.
Schedule and Bidding Strategy: Finding the optimal times to run your ads and the most cost-effective bidding strategies.
Custom Variables for Ad Sets:
Unique budget pacing strategies: Test different budget allocations within an ad set.
Tailored audience definitions: Create specific audience segments beyond predefined options using your own data or criteria.
Dynamic landing pages: Create ad-hoc landing page content based on user characteristics like location or device (see the sketch below).
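As a rough illustration of dynamic landing pages, here’s a small Python sketch that routes users to a page variant by device and location. The routing rules and page paths are hypothetical; how you actually implement this depends entirely on your stack.

```python
def pick_landing_page(user: dict) -> str:
    """Hypothetical routing: serve a landing page variant by device and country."""
    if user.get("device") == "mobile":
        return "/lp/mobile-short-form"      # lighter page for mobile visitors
    if user.get("country") == "DE":
        return "/lp/german-localised"       # localised copy for German traffic
    return "/lp/default"

print(pick_landing_page({"device": "mobile", "country": "US"}))   # /lp/mobile-short-form
print(pick_landing_page({"device": "desktop", "country": "DE"}))  # /lp/german-localised
```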
In all cases, Meta will duplicate the campaign to isolate the variable’s influence.
Meta divides the audience you are targeting into random groups that do not overlap. It duplicates your ads and tests the ad sets against each other, changing only that one variable.
IMG
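To picture how a non-overlapping split works in principle, here’s a minimal Python sketch that deterministically buckets users into mutually exclusive groups. This is only a conceptual illustration – Meta’s actual implementation isn’t public – and the user IDs and hash-based bucketing are assumptions for the example.

```python
import hashlib

def assign_group(user_id: str, n_groups: int = 2, salt: str = "test-42") -> int:
    """Deterministically bucket a user into one of n non-overlapping groups."""
    digest = hashlib.sha256(f"{salt}:{user_id}".encode()).hexdigest()
    return int(digest, 16) % n_groups

# Every user lands in exactly one group, so the variations never overlap.
for user in (f"user_{i}" for i in range(6)):
    print(user, "-> variation", "AB"[assign_group(user)])
```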
By analysing your A/B test results, you can determine what changes you might want to make to future campaigns.
Remember: not every split test will improve performance. You might test a new design only to discover that the original was more effective. Don’t let this stop you from testing other variables.
It’s a super quick process. But keep in mind some Meta ad testing best practices:
Test high-impact ad campaign elements:
Don’t split test low-variance elements – changing one line or word in your ad creative will not result in meaningful split test results.
There is only one major difference between your ad sets:
If you test everything at once, it will be impossible to tell later which tested element caused the success or failure.
I recommend you only test one variable at a time. The goal is not to build two completely different ads when performing an A/B test. What you’re really doing is creating near-identical versions that differ by only one or two variables, and measuring which alternative performs best. This preserves the scientific integrity of your test and helps you identify the specific difference that drives better performance.
Your ad sets have the same budgets:
Don’t allocate your budget unequally to the ad sets – you need to give each variation equal opportunity to deliver results.
Your ad sets run for the full duration of the test:
Don’t conclude tests too fast (or use test budgets that are too low) – you need enough conversions to conclude that one variation outperforms the other (see the sketch below).
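How many conversions are “enough”? A common rule of thumb outside of Meta’s own tooling is a two-proportion z-test. The sketch below is a simplified, standard-library-only illustration with hypothetical conversion counts; Meta’s results page applies its own statistical model, which it doesn’t fully publish.

```python
from math import sqrt, erf

def two_proportion_z_test(conv_a: int, n_a: int, conv_b: int, n_b: int) -> float:
    """Two-sided p-value for the difference between two conversion rates."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)                 # pooled rate
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))   # standard error
    z = (p_a - p_b) / se
    return 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))       # normal approximation

# Hypothetical: 80/1000 conversions for A vs. 100/1000 for B.
print(f"p-value: {two_proportion_z_test(80, 1000, 100, 1000):.3f}")
# ≈ 0.118 – above 0.05, so don’t call a winner on these numbers yet.
```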
Which variables you can split test at each Facebook level #
You can carry out Facebook A/B testing at nearly every level of Meta Ads…
…which level depends on the variables you are trying to test.
Here’s a list that covers most of what you can test at each level:
⚠️ I don’t recommend running A/B testing across multiple campaigns. It makes the data difficult to analyse. Instead, you should set up each campaign to track different goals.
At the campaign level, you can test:
Campaign Objective
Currently, you can choose from 11 campaign objectives in Meta ads.
Note: Your Meta campaign objective determines what ad template, delivery optimisation, and bidding options your ad sets will have.
You can split test two different objectives against each other.
Here are some examples:
If you are looking to drive more leads at a lower cost, test a lead generation campaign objective against a website traffic campaign objective (where you send people to a dedicated lead generation landing page). The two objectives deliver different results: lead generation campaigns are great for driving higher lead volumes, but the leads tend to be lower quality. On the flip side, traffic campaigns send the audience to a landing page, generating fewer but higher-quality leads. It’s interesting to see which gets you more leads at a lower cost.
Another example of a split testing campaign objective is to see if you can drive more conversions through conversion campaigns or website traffic campaigns. Both versions can help you achieve your conversion goal, but one might do the same at a lower cost.
Budget (ABO vs. CBO)
You can also split test a campaign using campaign budget optimisation (CBO) against another without this option enabled.
Note: Campaign budget optimisation (CBO) automatically manages your campaign budget across ad sets to get you the overall best results.
So it’s essential to switch off CBO if you plan to split test at the ad set level. This prevents a situation where some ad sets get a lot of spend while others get very little or even none.
⚠️ When split testing a CBO campaign using the Facebook A/B testing feature, you will be notified that two new versions of the original campaign will be created.
The first thing that I like to test when I’m starting from scratch is audiences.
Note: On Facebook, the target or market that you’re going to advertise to is called “audience”.
As discussed in a previous tutorial, targeting the right audiences has a huge impact on your campaign.
Even if you have an ideal buyer persona, you should test various custom audiences for Facebook ads and keep improving it from the results.
For example, the company Survival Life thought their target market was older, white males, and that’s who they were targeting on Facebook. But once, by accident, they forgot to select males as the target (so they were targeting both men and women).
Do you know what they found?
Their best-converting ads in that campaign came from women.
Placements
You can split test different placements to determine the best place to show your ads.
The default setting is “automatic placements”, where Facebook runs its own tests and determines the best place to show your ads.
But split testing allows you to find out from the outset whether certain placements generate a better ROI than others, rather than waiting for Facebook to work it out.
Note: Each ad placement returns different results
Bidding method
⚠️ Again: Make sure that your ad sets have the same budget. If one ad set has a huge budget and the other a very small one, you won’t be able to test them properly.
At the ad level, you can test pretty much everything that will be visible to a person seeing your ad.
Note: The ad is the creative that people see on Facebook or Instagram.
This includes:
ad type
images or video
ad text
headline
call-to-action (CTA)
IMG
As said above, when you test a creative variable (using the Facebook A/B test feature), each ad variation is placed in a separate ad set.
IMG
Multiple single-variation ad sets
Pros:
Facebook’s split-testing feature is interesting, especially regarding its reliability: the accuracy of your results is top-notch. However, this comes at a cost.
Cons:
Targeting the same audience with multiple ad sets is more expensive than running just one ad set with several ads, because the ad sets compete against each other in the bidding process.
It looks ideal on paper, but with Facebook split testing you divide your audience into two or more groups, so each ad set costs more than it would combined. Plus, Facebook’s delivery can feel random and opaque, so you have to trust Facebook to handle delivery correctly. Great in theory, but tough in practice.
Also, since Facebook needs to get enough results in order to be statistically significant, you’ll need to spend a decent amount of time (and money) in order to complete that split test.
This goes against the general maxim of staying agile and moving on to the next test as fast as possible – therefore especially for accounts spending less than six figures per month, we tend to stay away from this methodology.
However, it’s possible, but not recommended*, for each ad set to contain multiple ads. You can manually add multiple Facebook ads within one ad set that shares the same targeting options.
* There is a lot of contradictory advice about this methodology. Even if it’s not a best practice, in some cases this approach could be the way to go. The A/B testing strategy for each business is unique, as each company has different goals, resources, and products.
Note: In this scenario we are NOT using the Facebook A/B test feature (which duplicates the ad set!). Instead, we are running a manual A/B test – although, due to Facebook Ads auto-optimisation, this is technically not a true A/B test (more on that later).
IMG
A single ad set — all your ad variations are within a single ad set.
Pros:
Facebook Ads auto-optimisation:
Facebook will optimise and deliver your ads based on your CTR (click-through rate) and CPA (cost per action). This gives you optimisation at the ad level.
You won’t have an equal distribution of traffic across all variants, but you get a good enough read of which ad is better.
Also, it’s paradoxical that during an A/B test Facebook suggests combining the ad sets!
Combine 2 similar ad sets affected by fragmentation
2 of your ad sets have similar setups and audiences, but different creatives. As a result, they may take longer to exit the learning phase and spend more budget before performance has optimised. Combine ad sets to help reduce the time and amount spent in the learning phase.
You are also in full control, meaning you can manually review the performance of each ad creative and make decisions based on those metrics, without directly affecting the delivery of your ad set.
Cons:
Facebook Ads auto-optimisation
Your optimisation advantage is also your drawback.
If you manually add multiple Facebook ads within one ad set that shares the same targeting options, you won’t have an equal distribution of traffic across all variants, because Facebook will soon start to deliver the ads that have the highest click-through rates (CTRs) and lowest cost per click (CPC) across your ad set.
However, sometimes Facebook makes the decision too quickly, leaving you with no relevant A/B testing results.
CXL posted a case (img below) where you can see just how early Facebook makes a decision on which ad is performing best.
After showing the ad to only 722 people, Facebook had already scaled back impressions for the second ad variation, showing it to just 334 people (compared to 680 people for the first ad variation). You’ll also notice the disparity in budget allocation between the different ads, with Facebook spending $7.27 on one ad and only $1.58 on the other.
But what’s most interesting is the cost per website click (the goal of this campaign). The ad that was deemed the loser by Facebook was actually delivering cheaper results.
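Meta hasn’t published its delivery algorithm, but a common mental model for this behaviour is a “bandit”-style allocation that keeps shifting impressions toward the early leader. Here’s a tiny epsilon-greedy simulation – all numbers hypothetical – showing how quickly traffic skews even when the two ads are nearly identical:

```python
import random

random.seed(7)
true_ctr = {"ad_A": 0.020, "ad_B": 0.022}       # hypothetical underlying CTRs
shows = {ad: 0 for ad in true_ctr}
clicks = {ad: 0 for ad in true_ctr}

for impression in range(2000):
    # Explore at random 10% of the time (and for the first 100 impressions);
    # otherwise exploit whichever ad currently looks best.
    if impression < 100 or random.random() < 0.1:
        ad = random.choice(list(true_ctr))
    else:
        ad = max(true_ctr, key=lambda a: clicks[a] / max(shows[a], 1))
    shows[ad] += 1
    clicks[ad] += random.random() < true_ctr[ad]

print(shows)   # impressions skew heavily toward the early leader,
print(clicks)  # even though the true CTRs differ by only 0.2 points
```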
To get auto-optimisation out of the way, create a new ad set for each ad variation and let them run simultaneously (or use the Facebook A/B testing feature).
Separate ad sets for each ad variation.
Testing options are exponential
For example, let’s say you wanted to test five different target audiences, five ad headlines, five ad images, and five different blocks of ad text.
As a result, you’d get five ad sets (one per audience) with 125 ads each.
5 (images) × 5 (headlines) × 5 (ad text) = 125 ads
Which means you have a total of 625 possible combinations to test.
125 (ads) × 5 (audiences) = 625 combinations
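If you want to sanity-check that combinatorics for your own test matrix, a few lines of Python will do it. A quick sketch (the variant names are placeholders):

```python
from itertools import product

audiences = [f"audience_{i}" for i in range(1, 6)]
images    = [f"image_{i}"    for i in range(1, 6)]
headlines = [f"headline_{i}" for i in range(1, 6)]
ad_texts  = [f"text_{i}"     for i in range(1, 6)]

ads_per_audience = len(list(product(images, headlines, ad_texts)))
total_combinations = ads_per_audience * len(audiences)
print(ads_per_audience, total_combinations)  # 125 625
```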
The golden rule of Facebook Ad testing: If you intend to test something at the ad level, keep your ad set and campaign variables unchanged.
There are several ways to create a split test on Facebook, depending on where you start designing your test and which variables you are trying to test.
Creating a new ad campaign: You can start the A/B test process during the campaign creation process.
Duplicating an ad set: In Facebook ads manager, you can duplicate an existing ad set or campaign, make changes to the new set, and compare performance.
Ads manager toolbar: This uses an existing ad campaign as a template for your split test.
Experiments tool: Under Experiments in Facebook, you can add the duplicate ad or create ad sets to determine the winning strategy. This tool is useful for marketers who want to fine-tune their ad before starting the test, or when you want to use several existing campaigns for the test.
Step 1: Create a new campaign. Just like creating any new campaign, start by clicking the green “Create” button.
Step 2: Turn on the “Create Split Test” toggle.
When you hit the toggle, a little window pops up that says:
“This campaign will be version A in your A/B test”
Step 3: Campaign Settings.
Set up your campaign, ad set(s), and ad(s) according to the element you want to split test and the budget you wish to allocate.
Step 4: Publish it.
Once ready to go, click the “Publish” button and you will see the “Create A/B Test” popup.
⚠️ Note: You can’t just leave the campaign in draft; you’ll have to publish it to create the A/B test.
Step 5: Create A/B test.
You’ll be prompted to edit a duplicate version to test against the original. Click the “Create A/B test” button.
Step 6: A/B Test Setup.
The language on this page is a little misleading. Basically, you need to select whether you want to make a copy of the campaign or ad set you’ve just created or, alternatively, test it against an existing asset in your ad account. Keep the default selection and click “Next”.
Step 7: Choose variable.
Here you pick the variable that you want to change in the new version and test against the original.
There are different variables you can test: creative, audience, placement or a custom variable:
Remember: creative, audience, and placement variables will duplicate your ad set, while a custom variable will duplicate the entire campaign.
Step 8: Review and Publish.
Give the test a name that describes what you’re trying to test, set a time frame of at least 24 hours, and choose the metric that will determine the winner.
Step 9: Ready to roll.
Once you’re done setting up your test, click “Duplicate Ad Set” and you should see your new ad set and its copy, with an Erlenmeyer flask icon to its left.
Now you can edit the copy you created according to the element you wish to test and let the Facebook algorithm do its magic.
Duplicate your Ad Set to A/B test it against the original #
There are two ways to do that:
Clicking the “A/B Test” button: Tick the asset you are interested in testing and click the “A/B Test” button, which is located next to the “Create” button. Alternatively, you can click the button and select the asset on the next screen.
Duplicating the asset: Hover over the desired asset and click “Duplicate.” In the popup that will appear, select New A/B Test and choose the variable you wish to split test. Then, click “Continue to Test Setup.”
In both cases, the next screen you’ll see is the “Test Setup” screen. Here you can change the campaign you’ve selected for testing and select your variable (if you haven’t done so already).
If you wish to test audiences or placements, you will need to select the ad set you wish to duplicate.
If you want to test creatives, on the other hand, you will need to select the ad you want to test.
However, note that Facebook will still duplicate the entire ad set, not just the ad.
You can also set up tests in the Experiments section of Business Manager.
So, I’ll show you how to do that:
Step 1: Navigate to “Experiments” under “Analyze and report” in Facebook Ads Manager.
IMG
Step 2a: Either duplicate an existing Facebook campaign, ad set (shown here), or ad
IMG
Step 2b: Or Create a New Campaign
In Ads Manager: create “Campaign”, select “Objective”, scroll down from the campaign level and toggle on “Create A/B Test.”
IMG
Step 3: Get Started
After clicking “New A/B Test” or “A/B Test” (depending on where you start from), click “Get Started” in the lower right corner.
IMG
Step 4: Follow Facebook Prompts for Set Up
Follow the prompts based on the variable you want to test, and which campaign, ad set, or ad level you’re interested in testing that variable in.
IMG
While setting up your test, you were prompted to select the metric by which the winner would be determined.
In the event you want to view the split test results by a different metric, select the dropdown arrow next to “View By” to preview other options.
IMG
Step 5: Rename the Ad Sets and/or Ads
Once you’ve selected your chosen variable and followed the steps to set up your split test, we recommend changing the names of the ad sets and/or ads so that each version is immediately recognisable and distinguishable from the other.
In this example, we’re running a split test at the ad set level, testing a saved audience against a lookalike audience to determine which results in the lower cost per click.
Step 6: View Results
When your test is running, you’ll be able to view your results: