When designing an A/B test, you need to focus on one element that you're going to change and keep all other elements the same, so you can have high confidence in what drove any difference in results. Specifically, best practices for an A/B test include: pick one single variable to change and change nothing else; create two or more alternate versions that change just that one variable; and then randomly split the same audience so that each member of the audience only sees one version of the variable you've changed. You run the test, get your results back, and can determine which direction you need to go in. Elements that are frequently tested include offers, phrasing or language, layout, images, colors, positioning of key elements, and even your call to action.

However, always, always, always keep the following elements the same. Time: testing of options must be done at the same time, not just the same
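To make the random-split step concrete, here is a minimal Python sketch. The email list, function name, and seed are illustrative assumptions, not part of any specific tool mentioned in this lesson:

```python
import random

def split_audience(audience, seed=None):
    """Randomly split an audience into two equal-sized groups (A and B)."""
    rng = random.Random(seed)
    shuffled = list(audience)
    rng.shuffle(shuffled)  # random order, not join date or alphabetical order
    mid = len(shuffled) // 2
    return shuffled[:mid], shuffled[mid:]

# Hypothetical email list; in practice this would be your real subscriber list.
emails = [f"user{i}@example.com" for i in range(1000)]
group_a, group_b = split_audience(emails, seed=42)
print(len(group_a), len(group_b))  # two equal groups of 500
```

The point of shuffling before splitting is exactly the point made above: both groups are drawn randomly from the same segment, so the only systematic difference between them is the version of the variable they see.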
duration or length of time, but at the same time. Many people make this mistake when testing: if you run ad A one week and run ad B the following week, it isn't necessarily a valid A/B test, because there could be different things happening in week one that weren't happening in week two. In other words, an A/B test is about isolating one variable, so we want to make sure there are no other variables that may influence our results. Audience: I've probably said this four times already, but I will emphasize it again, keep your audience the same. Randomly select your testing group from the same target segment, and make sure the groups of people who experience the two versions of your test are as similar as possible. If you're testing an email, you should split your email list randomly, not by join date, to form equal groups.

Sometimes we may still have some reservations, as the results can seem too close to easily differentiate. Statistical significance is the level of confidence one can have that the results from a test will be the same if that test is completed again. There are a number of tools you can find online that will help you quickly understand the results of an A/B test, but here's one you can use: the Bayesian A/B test calculator. Using this calculator, users would be the total number of people exposed to a tactic, and conversions would be the number of people who take the action you wanted them to take. This could be website visits or clicks on an ad. In this example, let's assume we ran an A/B test on a pay-per-click ad and want to know which ad performed better. So assume ad A had 55,000 impressions and 1,200 clicks, while ad B had 72,000 impressions and 1,700 clicks.
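A Bayesian A/B calculator of the kind described here is typically estimating the probability that one variant's true conversion rate beats the other's. Below is a minimal Python sketch of that idea using Beta posteriors with uniform priors and Monte Carlo sampling; the function name and prior choice are my assumptions, not the internals of any particular online calculator:

```python
import random

def prob_b_beats_a(users_a, conv_a, users_b, conv_b, samples=100_000, seed=0):
    """Monte Carlo estimate of P(conversion rate B > conversion rate A),
    using Beta(1 + conversions, 1 + non-conversions) posteriors for each variant."""
    rng = random.Random(seed)
    wins = 0
    for _ in range(samples):
        rate_a = rng.betavariate(1 + conv_a, 1 + users_a - conv_a)
        rate_b = rng.betavariate(1 + conv_b, 1 + users_b - conv_b)
        if rate_b > rate_a:
            wins += 1
    return wins / samples

# The numbers from the example: ad A with 55,000 impressions and 1,200 clicks,
# ad B with 72,000 impressions and 1,700 clicks.
confidence = prob_b_beats_a(55_000, 1_200, 72_000, 1_700)
print(f"P(ad B beats ad A) is approximately {confidence:.0%}")
```

Running this on the example numbers gives a probability of roughly 98% that ad B's true click-through rate is higher, which is the kind of confidence figure the calculator reports.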
We simply put those numbers into our calculator, click "make calculation," and what we see is that ad B was the better-performing ad. In fact, this calculator indicates that with 98% confidence, ad B will outperform ad A. So as we run tests, we can put the results from each test into this calculator to identify how confident we can be that one option is better than the other, and it's up to us as business leaders to decide what level of confidence, or how much risk, we're willing to accept before going down a certain path. This tool essentially allows you to remove some of that risk, or at a minimum, know how much risk you're taking before choosing to go down a certain path.