Ad copy testing is a crucial part of paid search that often gets overlooked. Constantly testing new copy is key to improving performance, and Excel can make the process much easier.
If the test is being run across multiple ad groups, the best route is to use the labels feature within AdWords.
You can select which ads to label by filtering for the specific messaging the ads contain, then checking the box next to all of those ads and creating a new label, as seen below:
You can then do the same for the test ads. This will help when running the test, as you can download all ads involved and use the label column when creating a pivot table, as seen in the raw data download below:
Once the raw data is organized in this manner, you can create a pivot table.
Depending on your goals, you can use different metrics to judge which ad copy won the test. The most common include:
- Click-through Rate – Used when traffic to the site is the main goal of the ad copy. This is probably the most commonly used metric for ad copy testing.
- Conversion Rates – Used when you want to maximize the value of each click. This is when your ad copy is being used more to weed out traffic that will not convert, and to make sure users coming to the site intend to make a purchase.
- Impressions Until Conversion – Combines CTR and Conversion Rate. Shows how the ad is doing at bringing traffic in AND getting sales or leads from that traffic.
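To make these concrete, here is a quick Python sketch of how each metric is calculated; the numbers are made up purely for illustration:

# Made-up numbers for a single ad, purely illustrative.
impressions = 10000
clicks = 250
conversions = 10

ctr = clicks / impressions               # Click-through rate: 2.50%
conversion_rate = conversions / clicks   # Conversion rate: 4.00%
impressions_until_conversion = impressions / conversions  # 1,000 impressions per conversion

print(f"CTR: {ctr:.2%}")
print(f"Conversion Rate: {conversion_rate:.2%}")
print(f"Impressions Until Conversion: {impressions_until_conversion:,.0f}")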
Once you have decided which metrics to use, you can calculate the significance level of the winning ad. For the first example, conversions per impression (similar to IUC) will be used.
Step One: Get the Conversion/Impression data. This is easy: just divide conversions by impressions.
Step Two: Get the standard error for both the control and the test ads. The formula in Excel to get the standard error is:
=SQRT((Conversions/Impressions)*(1-(Conversions/Impressions))/Impressions)
This will be performed for both the control and test ads.
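If you want to sanity-check the spreadsheet outside of Excel, the same standard error calculation looks like this in Python (a minimal sketch; the conversion and impression counts are placeholders):

from math import sqrt

def standard_error(conversions, impressions):
    # Standard error of a proportion: sqrt(p * (1 - p) / n),
    # mirroring the Excel formula above.
    p = conversions / impressions
    return sqrt(p * (1 - p) / impressions)

control_se = standard_error(conversions=40, impressions=20000)
test_se = standard_error(conversions=55, impressions=21000)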
Step Three: Get the Z-Score. The formula below is how to get the Z-Score when utilizing a conversions/impressions model for your test:
=((Control Conversions/Impressions)-(Test Conversions/Impressions))/SQRT(POWER(Control Standard Error,2)+POWER(Test Standard Error,2))
Step Four: Get the P Value. Once the Z-Score is found, the P Value is easy to retrieve with the formula listed below:
=NORMDIST(ZScore,0,1,TRUE)
(In newer versions of Excel, NORM.DIST(ZScore,0,1,TRUE) returns the same result.)
Step Five: Get Your Winner and Significance Level. In this scenario, if the P Value is greater than 0.5, the winner is the Control, and the P Value is the significance level of the winning ad. If the P Value is less than 0.5, the winner is the Test, and the significance level is 1 minus the P Value. The two formulas below show this on an Excel sheet:
=IF(PValue>0.5,"CONTROL","TEST")
=IF(PValue>0.5,PValue,1-PValue)
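For reference, here is an end-to-end sketch in Python that ties steps one through five together, using math.erf to reproduce NORMDIST. The ad_test helper name and all of the numbers are placeholders, not figures from the screenshots:

from math import sqrt, erf

def normal_cdf(z):
    # Equivalent of Excel's NORMDIST(z, 0, 1, TRUE).
    return 0.5 * (1 + erf(z / sqrt(2)))

def ad_test(control_successes, control_trials, test_successes, test_trials):
    # For the conversions-per-impression model: successes = conversions,
    # trials = impressions.
    # Step One: rate for each ad.
    p_control = control_successes / control_trials
    p_test = test_successes / test_trials
    # Step Two: standard error for each ad.
    se_control = sqrt(p_control * (1 - p_control) / control_trials)
    se_test = sqrt(p_test * (1 - p_test) / test_trials)
    # Step Three: z-score of the difference (control minus test).
    z = (p_control - p_test) / sqrt(se_control ** 2 + se_test ** 2)
    # Step Four: p-value from the standard normal CDF.
    p_value = normal_cdf(z)
    # Step Five: winner and significance level.
    winner = "CONTROL" if p_value > 0.5 else "TEST"
    significance = p_value if p_value > 0.5 else 1 - p_value
    return winner, significance

winner, significance = ad_test(40, 20000, 55, 21000)
print(f"{winner} is winning at a {significance:.2%} significance level")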
Below is an ad test setup showing the formulas in use:
Once everything is in place, rows 14-19 are hidden for presentation purposes. The final product should look something like this:
As you can see, this test shows the control ad winning, but only with a significance level of 60.23%.
If you want to set the ad test up around CTR rather than conversions per impression, simply use clicks rather than conversions: replace F6 with C6 and F7 with C7 in each of the formulas.
As for conversion rates, replace impressions with clicks: use C6 and C7 rather than D6 and D7 in the formulas shown above.
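If you built something like the hypothetical ad_test helper sketched above, swapping metrics is just a matter of which counts you pass in (again with placeholder numbers):

# CTR test: clicks are the successes, impressions are the trials.
ad_test(310, 20000, 290, 21000)

# Conversion rate test: conversions are the successes, clicks are the trials.
ad_test(40, 310, 55, 290)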
In terms of outside-the-box thinking for ad testing, one way eCommerce accounts can get a truer look at ad copy performance is to test revenue per impression. As an example of why this matters, suppose you were testing the following messages:
The original message could get more sales; however, the second message could end up bringing in more revenue, as it targets users looking for the best deal even if they have to buy 5 boxes at once.
Let's take a look at how the previous test would change if total conversion value per impression (revenue per impression) were used rather than conversions per impression:
As you can see, the result changed considerably when revenue was taken into consideration. The test ad is now the winner, clearing the 95% bar with a significance level of 98.59%. This is because the average order value of the test ad is much higher than that of the control ad.
You can see in the next image that, when using CTR as the testing metric, the control is the winner by a larger margin than the sales-per-impression model reported:
Knowing the goals of your ad copy is huge when creating ad tests, as the metrics you use to determine a winner can change the outcome completely, potentially swinging it dramatically in one direction or the other. Making sure all parties are on the same page about which metrics will be used for testing is a huge first step.
Once these goals are in place, you can use labels in the AdWords interface and Excel to reach a conclusion on which ad is the winner, and then move on to the next test.
Do you want more ideas involving Excel for your PPC accounts? Download the following whitepapers to learn more about how Excel can help you run PPC accounts more efficiently: