Ad testing is a major portion of what we do as PPCers. The large-scale impact is clear: without great ads, we won't convert sales or leads on a consistent basis. Another factor in our PPC ecosystem that must not be ignored is CTR and its contribution to quality score. At our agency, I frequently see situations where ad testing is essential as a strategy, but when the testing itself doesn't have a strategy, you can quickly get into the danger zone on conversions, quality score, and overall performance.
Let me clarify. Our clients demand results. We need to generate great performance, and oftentimes those results need to be immediate and consistent. We also often see clients with slim success margins, meaning their tolerance for testing and growing is small and needs to be monitored very closely. This can make it difficult to test ads on a large-scale basis. The reason being: we all write duds from time to time. When you are on tight margins and you write an ad that simply doesn't perform, it can suck the life out of your campaign and mask all the other great things you are doing on the account. Here's a recent real-life example:
In this case, the control ad was a clear winner in terms of conversion rate and CTR. This particular account converts right around 100 times a month, and this was a one-week snapshot of its top-performing ad group, one week being how long the test ran. For comparison, I ran the impression numbers of the test ads through the conversion rate statistics of the control ad, and during that week, we lost out on 7 conversions through this test. In a high-volume campaign, 7 conversions isn't that big of a deal, but in a campaign that comes in at 100 leads a month, you're talking a 7% drop in monthly volume in one week, from testing only one ad group. You can see how this can be troublesome, especially if you write a bunch of duds across all of your ad groups. To be fair, this control had been a top performer for some time, and the new ads were entirely new approaches written to change things up and hopefully identify a new direction for our ads. That said, the test ads simply didn't cut it, and in the end, they made it difficult to hit our month-end goals.
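To make that projection concrete, here's a minimal back-of-the-envelope sketch of the calculation. The numbers below are hypothetical, for illustration only; the real figures come from the account's own reporting.

```python
# Hypothetical one-week figures, for illustration only --
# pull the real numbers from your own AdWords reports.
control_impressions = 5000
control_conversions = 15
test_impressions = 4000    # combined impressions across the test ads
test_conversions = 5       # combined conversions across the test ads

# Project what the test ads' impressions would have produced at the
# control ad's conversions-per-impression rate.
control_conv_rate = control_conversions / control_impressions
expected_conversions = test_impressions * control_conv_rate

lost = expected_conversions - test_conversions
print(f"Expected conversions from test traffic: {expected_conversions:.1f}")
print(f"Conversions lost to the test: {lost:.1f}")
```

Running your own numbers through a quick check like this is an easy way to quantify what a failed test actually cost you.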
So what do you do? My suggestion is to rein in your testing procedures on tight-margin accounts. First and foremost, don't test every ad group at the same time. If you have a bad run of ads, you might not be able to bounce back from the performance drops and might miss your monthly goal. Set up a schedule that allows you to work through your ad testing methodically, while the performance of your other campaigns and ad groups protects you from large-scale performance decreases during your testing phases.
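As a rough sketch of what that schedule might look like (the ad group names and batch size here are purely hypothetical), you can rotate through the account in small waves so most ad groups keep running proven ads while one wave is under test:

```python
from itertools import islice

# Hypothetical ad groups; in practice, pull these from your account.
ad_groups = ["Brand", "Competitors", "Widgets - Blue", "Widgets - Red",
             "Accessories", "Clearance"]

def testing_waves(groups, batch_size=2):
    """Split ad groups into sequential testing waves so only a
    fraction of the account is ever in a test at one time."""
    it = iter(groups)
    while batch := list(islice(it, batch_size)):
        yield batch

for week, wave in enumerate(testing_waves(ad_groups), start=1):
    print(f"Week {week}: test new ads in {', '.join(wave)}")
```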
Another trick is to test one new ad at a time, but also create a copy of your control ad and run both the copy and the original at the same time. I highly recommend this for tight-margin campaigns, and for ad testing in general. For tight-margin campaigns specifically, this gives you a little more protection for your existing performance while resetting the history of the control. In this kind of split test, you would be running 33% on the test ad, 33% on the original control, and 33% on the duplicate control, which should perform similarly to the original. Essentially, you'll have two-thirds of your traffic hitting the ad you know works, while also resetting the historical advantage of the original control and letting the creative go head-to-head. Doing this simply adds another level of control to your ad testing. If you were to run an A/B test without a duplicate control, you'd be sending 50% of your traffic away from what you know works. If you tested 2 new ads against the control, you'd be sending two-thirds of your traffic away. The basic strategy is to keep more control so you can still move forward with your testing. Another option is to use AdWords Campaign Experiments (ACE) to set your control splits, which can also limit the amount of traffic you drive to your experimental ad copy.
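To see how much proven traffic each setup preserves, here's a small sketch of the arithmetic, assuming your campaign is set to rotate ads evenly:

```python
def control_traffic_share(num_test_ads: int, duplicate_control: bool) -> float:
    """Fraction of traffic still hitting proven creative under even rotation."""
    control_copies = 2 if duplicate_control else 1
    return control_copies / (num_test_ads + control_copies)

scenarios = {
    "Plain A/B test (1 test ad vs. control)": (1, False),
    "Duplicate-control test (1 test ad vs. 2 controls)": (1, True),
    "Two test ads vs. one control": (2, False),
}
for label, (n, dup) in scenarios.items():
    print(f"{label}: {control_traffic_share(n, dup):.0%} on proven creative")
```

The duplicate control doesn't just protect traffic, either: because both copies start with zero history, it also puts the head-to-head creative comparison on even footing.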
The biggest takeaway here is that ad testing has the potential to kill your performance. No one is perfect, and writing dynamite ads every time isn't going to happen for anyone. My best advice is to think through the potential impact of ad testing across all of your campaigns and ad groups. If you have the advertising budget to run a full-bore test, that is almost always my recommendation: you are going to learn faster and make improvements to the account quicker. You might have to absorb some short-term losses for long-term gains, but in the end, you'll probably accomplish the same thing in a shorter time. If you can't support short-term losses, adjust your strategy and develop a plan that lets you move through your ad tests and gradually raise the performance of your account, while allowing known performers to support consistent results along the way. Please keep in mind that this is primarily a risk-mitigation strategy; it's going to be a slower but more predictable process.