For many digital marketers, hearing “split testing” immediately brings to mind A/B testing creative variations across advertising platforms. This usually means taking two nearly identical ads that differ by only one variable (e.g. the headline) and running them simultaneously in a campaign to determine which variation has the “winning” performance. The winner is determined by your KPIs, and future tests pit the winner, now the “control,” against another variation. A/B tests should be run frequently to keep your ad campaigns fresh and to optimize performance. While A/B tests can be run manually by following some form of the process I’ve just described, Facebook Ads gives marketers the ability to run split tests through one of its self-service advertising tools.
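To make that control-versus-challenger cycle concrete, here is a minimal sketch in Python. Nothing in it is tied to Facebook; the variant names and CTR values are hypothetical, and in practice the KPI figures would come from your ad platform’s reporting:

```python
# A minimal sketch of the control-vs-challenger A/B cycle described above.
# Variant names and CTR values are hypothetical; in practice the KPI comes
# from your ad platform's reporting (e.g. CTR or cost per conversion).

def run_ab_test(control: dict, challenger: dict, kpi: str) -> dict:
    """Return whichever variant wins on the chosen KPI (higher is better)."""
    return max(control, challenger, key=lambda variant: variant[kpi])

control = {"name": "Headline A", "ctr": 0.021}

# Each round pits the current control against one new variation.
for challenger in ({"name": "Headline B", "ctr": 0.025},
                   {"name": "Headline C", "ctr": 0.019}):
    control = run_ab_test(control, challenger, kpi="ctr")
    print(f"Current control: {control['name']}")
```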
What Does Facebook Split Testing Do?
Facebook Split Testing allows advertisers to easily isolate and test one variable at a time in a campaign, and to determine which audiences, placements, delivery optimizations, and creative best meet their marketing goals. A/B testing usually comes up in discussions of ad creative variation testing, but with Facebook Ads Split Testing you can optimize your campaigns around any of the aforementioned variables – and the tool does the work for you. The algorithm evenly splits your audience to eliminate any overlap, and it tests only the variable you specify for the duration of the test.
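Facebook doesn’t publish how it partitions audiences, but the core idea of an even, non-overlapping split can be sketched. Hashing each user ID into exactly one test cell is one common approach; this is an illustration of the concept, not Facebook’s actual implementation:

```python
import hashlib

def assign_cell(user_id: str, n_cells: int) -> int:
    """Deterministically assign a user to exactly one test cell.

    Hashing the user ID means each user always lands in the same cell,
    so the cells never overlap, and a good hash spreads users evenly.
    """
    digest = hashlib.sha256(user_id.encode()).hexdigest()
    return int(digest, 16) % n_cells

# Every user falls into exactly one of the two ad sets under test.
for user in ["user_001", "user_002", "user_003", "user_004"]:
    print(user, "-> ad set", assign_cell(user, n_cells=2))
```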
Split testing can be used with the following campaign objectives:
- Engagement
- Catalog sales
- Conversions
- App installs
- Reach
- Video views
- Traffic
- Lead generation
Why Should I Care? / Split Testing Best Practices
There are many benefits to split testing, including but not limited to:
- Determining whether adoption of a new product or process will work for your business goals, by testing it on a small scale before rolling it out across campaigns
- Gaining clarity on whether a singular aspect of the marketing plan is really working
- Testing hypotheses and gaining (sometimes unexpected) insights, from which you can come up with new ideas for further experimentation
- Answering a question with data (rather than guesswork)
- Iterating upon creative content for better and more effective advertising
And here are some of our best practices for getting the most out of true A/B testing:
- Test only one variable at a time, so you can be sure that a single factor drove any difference in results
- Pay attention to the statistical significance of the data, because you want to act only upon conclusive results (see the sketch after this list)
- Budget enough ad spend to gather sufficient, meaningful data
- Test for a time frame that is not too short or too long (typically 4-14 days)
- Know by which metrics you’ll measure outcomes (KPIs)
- Use dedicated targeting for the test, serving an audience that isn’t being reached by any other campaigns (to avoid overlap, which could contaminate results)
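As an illustration of the statistical-significance point above, here is a sketch of a two-proportion z-test comparing the conversion rates of two ad sets. The counts are made up, and Facebook runs its own statistics for you, but the same idea applies if you ever analyze split test results manually:

```python
from math import sqrt
from statistics import NormalDist

def two_proportion_z_test(conv_a: int, n_a: int, conv_b: int, n_b: int) -> float:
    """Return the two-sided p-value for a difference in conversion rates."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    pooled = (conv_a + conv_b) / (n_a + n_b)            # pooled rate under H0
    se = sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (p_a - p_b) / se
    return 2 * (1 - NormalDist().cdf(abs(z)))

# Hypothetical results: ad set A converted 120/4000, ad set B 150/4000.
p_value = two_proportion_z_test(120, 4000, 150, 4000)
print(f"p-value: {p_value:.3f}")   # act on the result only if it's conclusive
```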
When Should I NOT Use Split Testing?
1. When you want to test multiple creative variations with many creative assets
It is important to note that Facebook Split Testing functions very differently from Facebook’s Dynamic Creative tool (you can read about the differences in my June blog post here). If you want to test multiple creative variations at the same time, and allow Facebook to serve winning creative combinations based on machine learning, then the Dynamic Creative tool is a better use of your budget than a traditional split test.
2. When you should use Brand Lift or Conversion Lift measurement tools instead
If you have a large enough budget and specific brand lift or conversion goals, you may want to use Facebook’s Brand Lift or Conversion Lift tests in lieu of a split test. These more robust offerings can provide greater insight into how Facebook Ads (as compared to other channels) drive your business goals.
3. When you want to test minor variations that don’t answer a question about your business goals
Split testing is a way to use a dedicated budget for a specified time period to gather learnings about your marketing initiatives. It is best utilized when you have a specific, measurable question for which you need a data-driven answer. Best practice, especially for creative split testing, is to test conceptual differences rather than minor variations. For these smaller tweaks, your resources might be better spent running multiple ads in the same ad set and analyzing performance, or utilizing more than one ad set in a campaign. Remember, though, that a Facebook Split Test is the only way to get scientifically precise answers and avoid overlap.
How Do I Set Up a Split Test in the Ads Manager?
When you create a campaign, you must select the option to start a split test during the initial campaign creation; you cannot start a split test in an already-existing campaign or ad set. In the campaign guided creation flow, after choosing your campaign objective, you will see a check box for creating a split test.
Next, in the variable section, you can select the variable you want to test (audience, delivery optimization, placements, creative, or product set).
Here is Facebook’s detailed list of variable selection notes:
- If you choose audience as your variable: Under the audience section, select a saved audience or create a new one for each ad set.
- If you choose delivery optimization as your variable: Under the delivery optimization section, select your delivery and bid strategies for each ad set.
- If you choose placements as your variable: Under the placements section, select whether you would like automatic placements or choose your placements to customize where ads are shown.
- If you choose creative as your variable: You’ll make your selections for audience, placements, delivery, budget and schedule, and then click Continue. You’ll then be able to set up the different versions of your ad.
- If you choose product set as your variable: Choose the product sets you want to test (up to 5) and then make your selection for audience.
From there, you can choose your budgets (with an Even Split or Weighted Split) and schedule your test. After you continue, you will set up your ad sets and ads as normal, but with sections dedicated to whichever variable you selected.
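The budget arithmetic behind those two options is simple proportional division. Here is a small sketch allocating a hypothetical $500 test budget across three ad sets; the weights are illustrative, and Facebook collects the weighting through its own UI:

```python
def split_budget(total: float, weights: list[float]) -> list[float]:
    """Divide a total test budget across ad sets in proportion to weights."""
    return [round(total * w / sum(weights), 2) for w in weights]

total_budget = 500.00                                # hypothetical test budget

even = split_budget(total_budget, [1, 1, 1])         # Even Split
weighted = split_budget(total_budget, [50, 30, 20])  # Weighted Split (%)

# Note: rounding to cents can leave a cent or two of drift on even splits.
print("Even split:    ", even)      # [166.67, 166.67, 166.67]
print("Weighted split:", weighted)  # [250.0, 150.0, 100.0]
```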
For more detailed instructions, including the quick creation setup, see the rest of the Facebook article here.
Applying Learnings from Completed Split Tests
There are two ways to determine the winning ad set in a Facebook Split Test.
First, you can review your results in the Ads Manager, either while the test is still running or after it has completed. A split test campaign has a beaker symbol beside it, and when you click into the campaign to view it at the ad set level, the winning ad set will appear with a star beside it.
Note: In the Ads Manager, you can also apply a filter to view only ad sets that are part of a split test.
Second, Facebook sends a results email to the email address associated with the account. This email contains written and visual information about the split test’s performance. Interestingly, in one split test results email I received, Facebook told me that my ad sets performed too similarly for the results to be conclusive. Here is how Facebook explains this:
“The winning ad set is determined by comparing the cost per result of each ad set based on your campaign objective. We will also determine a percentage that represents the chance that you would get the same results if you ran the same test again.
For example, let’s say you run a creative test with one video ad, one single image ad and one carousel ad. We determine that the video ad was the winner with the lowest cost per result and a 95% chance that you would get these results again. With these results, we recommend that you adopt the winning strategy.”
Facebook will determine a winner only for results with 75% repeatability or higher. Because my ad sets performed almost identically (according to Facebook), my results did not give me a clear winner, and Facebook recommended I try another split test.
If, as in my example above, your results were low confidence, “you can test the campaign again with a longer schedule or higher budget. Testing with a longer schedule and/or higher budget can provide more data to help produce results with higher confidence” (source: Facebook).
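To make Facebook’s explanation concrete, here is a sketch of the cost-per-result comparison plus a naive estimate of the “chance you would get the same results again.” Facebook does not publish its exact method; the resampling below is just one plausible way to approximate repeatability, and every number in it is hypothetical:

```python
import random
from math import sqrt

def cost_per_result(spend: float, results: int) -> float:
    return spend / results

def repeatability(ad_sets: dict, trials: int = 10_000) -> float:
    """Estimate how often the observed winner (lowest cost per result)
    would win again, re-drawing each ad set's result count from a normal
    approximation of its binomial distribution."""
    observed = min(ad_sets, key=lambda n: cost_per_result(*ad_sets[n][:2]))
    wins = 0
    for _ in range(trials):
        redraw = {}
        for name, (spend, results, reach) in ad_sets.items():
            rate = results / reach
            sigma = sqrt(reach * rate * (1 - rate))
            resampled = max(1, round(random.gauss(results, sigma)))
            redraw[name] = cost_per_result(spend, resampled)
        wins += min(redraw, key=redraw.get) == observed
    return wins / trials

# Hypothetical (spend, results, audience reached) for each ad set.
ad_sets = {"video ad": (200.0, 80, 5000), "image ad": (200.0, 65, 5000)}

winner = min(ad_sets, key=lambda n: cost_per_result(*ad_sets[n][:2]))
# Facebook declares a winner only when repeatability is 75% or higher.
print(f"Lowest cost per result: {winner}")
print(f"Estimated repeatability: ~{repeatability(ad_sets):.0%}")
```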
Next Steps
Once you have received the results email or viewed your split test results in the Ads Manager, you can leverage those learnings by continuing to test, or by utilizing the winning variable in future campaigns. You can either create new campaigns, or keep the split test campaign running with only the winning ad set activated.