Sometimes referred to as split testing, A/B testing means running different versions of an advertisement to identify the highest-performing ads and retire the ineffective ones. In other words, it's comparing two similar but intentionally different ads to determine which elements achieve better performance. The elements to split test can include almost anything, from wording in the headlines, to ad images, to interest targeting.

The most effective A/B tests we've run usually begin with the creatives themselves. Sometimes we'll be working with a piece of content that can be advertised using more than one tone in the headline, such as light and optimistic versus ominous and suspenseful. The trick is discovering which tone best meets your key performance indicators and achieves your desired results. Ad images are another good variable for split testing. We'll often test the effectiveness of different types of images, even getting as specific as altering the color tone. Testing different ad images and headlines is typically our first step in A/B testing.

Another way to split test is by choosing whom to target the ads toward. This can be done via interest targeting, device targeting, or even by operating system. Determining which targeted group, or combination of groups, delivers the highest performance for a campaign lets us show ads only to the users most likely to interact with the content.

In short, A/B testing is an invaluable tool for creating the best possible campaigns within a controlled environment. By running two campaigns for the same piece of content side by side, each with different variables, we can monitor both and compare the data. That lets us make small tweaks where needed, achieve the highest-performing campaigns, and deliver excellent results to our clients.

Katy Bryan-Beachler