All marketers have been in a similar situation:
It’s Thursday, 3:30 in the afternoon. You’ve already handled several urgent matters, sat through a couple of meetings (which you consider quite unnecessary, but fine) and you’ve been retouching the same project for the past two hours (maybe it’s a landing page, a content grid, or a design with three rounds of client corrections). You still feel that “something is missing”; the metrics are relatively good, but it seems like you could still do something to improve it… But it’s too late to think. You have to deliver in 40 minutes and you still have five pending jobs to finish before the weekend. So you deliver it. And go on with your day.
Weeks go by and the metrics arrive. The numbers are “decent”: nothing extraordinary, but good enough for the monthly report. And when the time for the next post eventually comes, common sense tells you that if the previous process gave you good results… surely repeating it will be fine.
…right? (that was a trick question, of course not! Bad marketer, bad).
Well, why is it not a good idea? Because making decisions based on this kind of intuition, without solid data to support it, is a lot like betting: good judgment makes your bets more or less accurate, but there is always a considerable element of chance, and that is unnecessary risk.
But in the end, they are still bets. Guessing instead of systematizing. Something completely unreliable. A/B testing, or “split testing”, is one of the best solutions to this problem. And the way it works is actually quite elegant.
What is A/B Testing?
How can you implement it?
It is very easy to explain something badly: by using jargon that can mean three different things depending on the context, by leaning on abstract technicalities that are useful but tend to complicate something quite simple (“leads”, “ROI”, “CTA”, to name a few), or by explaining how to do something without ever saying why each step matters in the first place. With all that in mind, we want to give you the most comprehensible guide possible. To do so, we will walk through a fictitious case study, making each step as clear as possible and avoiding unnecessary words.
1) Choose a problem you want to solve
It can be practically any problem as long as it’s measurable: improving the number of interactions with your email campaigns, reducing the percentage of people who enter and quickly leave your page (the “bounce rate”), even finding out what type of graphic design works best for your audience. To go through this step by step, let’s take the following case as an example. You’re in charge of the social media content of a very important company (and to keep things simple, let’s give it a generic name, like Apple). Specifically, you want to find out how much of an effect a slight change in the design of your social media posts could have, and whether the extra cost in time and talent is really worth it.
2) Choose a metric to measure
Which metrics or values most clearly show the success or failure of this test? This is where your “hypothesis” comes into play: the statement that describes exactly what you’re going to measure and why. Going back to your humble job at Apple (congratulations, by the way), a sentence that describes what you’re looking for could be: “Changing the visual presentation of the content will improve the number of interactions, and therefore the number of impressions of said content”. What you’ll measure is clear: the number of interactions. The reason is also clear: the more interactions a publication gets, the greater its impact (and the number of impressions) on the audience.
3) Create an “A” and a “B” version
These test “versions” go by many names: the “control” or “prime” version, the “challenger” or “variant” version. To keep it simple, we will just call them “A” (the original version, the one that will not be changed or touched) and “B” (the “variant”, the version you will modify according to your hypothesis and compare against the original). In your hypothetical case, you would simply create two posts: one with the same design and content it would normally have, and another with the same copy and information but with the design you want to test.
4) Present both variants to similar audiences
Now that your two babies are ready to be shown to the world (and you’d better not have favorites, you bad marketer), comes the most important part of the process: showing them to your audience. The idea is exactly the same as in a survey or field study: take two representative segments of your social media audience (as similar as possible in size, age range, hours spent on social media, etc.) and present version A to one of them and version B to the other. Neither segment knows that another segment is seeing another version, nor that they are in a test at all.
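On most social platforms the ad tools handle this split for you, but if you run the test on a channel you control (a mailing list, your own site), the assignment logic is easy to sketch. A minimal Python example, where `assign_variant` is our own illustrative helper; hashing the user ID keeps the assignment stable, so the same person always lands in the same segment:

```python
import hashlib

def assign_variant(user_id: str) -> str:
    """Deterministically assign a user to version "A" or "B".

    Hashing the user ID (instead of picking at random each time)
    keeps the assignment stable: the same person always sees the
    same version, while a large audience still splits roughly 50/50.
    """
    digest = hashlib.sha256(user_id.encode("utf-8")).hexdigest()
    return "A" if int(digest, 16) % 2 == 0 else "B"

# Example: split a list of subscribers into the two segments.
subscribers = [f"user-{i}" for i in range(2000)]
segment_a = [u for u in subscribers if assign_variant(u) == "A"]
segment_b = [u for u in subscribers if assign_variant(u) == "B"]
```

The deterministic hash also means neither segment can accidentally see both versions, which would contaminate the comparison.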
5) Compare the results
You waited, and you have the results. All you have to do now is analyze them. And so, after two months of letting your posts gather interactions (on the social media of the modest international conglomerate Apple, congrats again), it’s time to check whether your hypothesis was correct. And it seems… you were right! The data reveals that the publication with the new design did indeed get more interactions. However, you are a good marketer, and you know that even when the difference looks clear, it’s always important to back your conclusions with as much data as possible.
The more data you can collect and compare, the better you will understand your audience and what they appreciate most about your content (and, in the best of cases, you will come away with new ideas to put to the test later).
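One common way to check that the gap between the two versions is real rather than noise is a standard two-proportion z-test; this isn’t specific to our steps, just the usual statistic for comparing two rates, and it can be sketched with nothing but the Python standard library. The interaction counts below are made-up illustration numbers:

```python
import math

def two_proportion_z_test(hits_a: int, n_a: int, hits_b: int, n_b: int):
    """Compare two interaction rates; return (z statistic, two-sided p-value)."""
    p_a, p_b = hits_a / n_a, hits_b / n_b
    # Pooled rate: what we'd expect if there were no real difference.
    pooled = (hits_a + hits_b) / (n_a + n_b)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    # Two-sided p-value from the normal CDF, computed via math.erf.
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return z, p_value

# Hypothetical results: version A got 200 interactions in 5,000 impressions,
# version B (the new design) got 260 interactions in 5,000 impressions.
z, p = two_proportion_z_test(200, 5000, 260, 5000)
```

A p-value below 0.05 is the conventional (if somewhat arbitrary) threshold for calling the difference significant; in this made-up example the new design clears it.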
Is it worth it?
It’s not difficult to see the benefits of implementing A/B Testing in your strategies:
- Very little risk and very low cost. It’s just a matter of creating a “copy” of whatever you are working on, with minor modifications. The biggest investment may be the wait to get the results, but that only requires some planning ahead.
- Easy to iterate. You don’t have to invest in paid content or spend talent and budget on large-scale campaigns. Once the results are in, all that’s left is to repeat the process for your next test, or to use the data for whatever you need in the future.
- Sustainable in the short and long term. Whether it’s a simple post or a large-scale multi-channel campaign, both are easy to adapt for testing. The only requirement is that the team agrees on what it wants to measure and which metrics it wants to improve.
- Cumulative results. For example, you may discover in one test that your Instagram audience interacts more with content that includes some copy within the image. If you’re curious enough, you can take those results and run a similar test on your other social media to give your data more weight. Repeat this a couple of times and, before you know it, you’ll have discovered a new way to create content that works across all of your brand’s channels.
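One practical note on the “wait to get the results” point above: you can estimate before the test starts how many impressions each variant needs, using the standard sample-size formula for comparing two proportions. The baseline rate and the lift you hope to detect below are made-up assumptions, and `sample_size_per_variant` is our own illustrative helper:

```python
import math

def sample_size_per_variant(p_a: float, p_b: float,
                            z_alpha: float = 1.96,  # 95% confidence
                            z_beta: float = 0.84    # 80% power
                            ) -> int:
    """Impressions needed per variant to detect the gap between p_a and p_b."""
    variance = p_a * (1 - p_a) + p_b * (1 - p_b)
    return math.ceil((z_alpha + z_beta) ** 2 * variance / (p_a - p_b) ** 2)

# Example: current interaction rate is 4%, and we hope the redesign lifts it to 5%.
needed = sample_size_per_variant(0.04, 0.05)
```

Note that the denominator is the squared gap between the two rates, so detecting smaller differences requires disproportionately more data; that is why tiny tweaks need long tests to validate.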
It should be said again: A/B testing is a tool, not a solution. And like any other tool, the quality of the results you get depends entirely on how you use it. The steps we offer here are merely a guide on how to use it well.
At Nomad Digital, we don’t believe in “solutions” that claim to apply to every situation. We believe the only valid solutions are the ones that work for your particular needs. Looking to improve your marketing team? Or better yet, do you want your campaigns and content to grow in a measurable and sustained way? Contact us! It will be a pleasure to be part of your team and grow together towards your goals.