We’ve partnered with Gummicube to bring you a series of blog posts covering various areas of app marketing. This post explores improving App Store conversion through A/B Testing.
Launching an app is one challenge; getting users to install it is another altogether. The app needs to appeal to users from the moment they land on its page in the App Store or Play Store, so every visual aspect should look its best. To succeed, you’ll need to optimize your app’s page for conversions and keep refining it for the best possible performance. One good way to do this is through A/B testing.
When users find your app on the App Store or Play Store, they’ll only spend a short amount of time evaluating it before deciding whether to install or move on. By testing different messaging in your metadata and trying different creative elements in your screenshots, you can learn what drives users to install.
What is A/B Testing?
A/B testing runs two or more variants of an app’s page against each other. Typically, traffic is split evenly between the variants, but developers can choose how it is divided. The tests determine which variant performs better, so developers can keep the elements that appeal most to users.
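The traffic split described above can be sketched as a weighted random assignment. This is a minimal illustration, not any store’s actual mechanism; the variant names and weights are made up.

```python
import random

# Hypothetical sketch of assigning incoming visitors to page variants
# according to a chosen traffic split (names and weights are illustrative).
VARIANTS = ["A", "B"]
WEIGHTS = [0.5, 0.5]  # even split; e.g. [0.8, 0.2] for a more cautious rollout

def assign_variant(rng=random):
    """Pick a variant for one visitor according to the configured weights."""
    return rng.choices(VARIANTS, weights=WEIGHTS, k=1)[0]

# Simulate 10,000 visitors and count how traffic was divided
counts = {v: 0 for v in VARIANTS}
for _ in range(10_000):
    counts[assign_variant()] += 1
print(counts)
```

With a 50/50 split, each variant receives roughly half of the simulated visitors; changing the weights shifts the balance without touching the assignment logic.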
When you run your tests, have a clear goal for what you want to test. It could be different messaging, colors, layouts or something else entirely. The goal is to identify which changes impact conversions, whether for better or worse.
If two variants differ too dramatically, it will be hard to tell which difference drove the change in conversions. It could be one element, it could be several, or one change could even be detrimental while another helps. You’ll want to reduce this uncertainty so you can identify which individual elements drive success.
Taking an iterative approach and focusing on the elements you want to test can greatly benefit your A/B testing strategy. This may require several tests, if there are multiple elements to look at, but the end result is worth the effort as it identifies the best performing aspects overall.
Setting a Measurement Period
When you run tests, whether through Google Play Experiments, an A/B testing platform like Splitcube, or even a live deployment, you’ll want to set a measurement period. This gives the test time to gather feedback and establish consistent trends in how the variants perform.
If you cut it too early because the results look positive after a day or so, you could be jumping the gun and acting before you’ve gathered enough feedback. It takes time to establish trends in user behavior. If the variant does not actually convert users better than the previous version, you won’t learn this until after the changes are applied.
Trends should be observed over time to see whether they hold or whether a performance shift is just a flash in the pan. The time it takes to reach statistical confidence will vary with how many daily downloads an app receives.
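One common way to judge whether a variant’s lead is real rather than noise is a two-proportion z-test on the conversion rates. The sketch below is a standard statistical check, not a feature of any particular testing platform, and the visit and install counts are invented for illustration.

```python
import math

def z_test(conv_a, n_a, conv_b, n_b):
    """Two-sided z-test comparing the conversion rates of two variants.

    conv_*: number of installs; n_*: number of page visits.
    Returns the z statistic and the two-sided p-value.
    """
    p_a, p_b = conv_a / n_a, conv_b / n_b
    # Pooled conversion rate under the null hypothesis of no difference
    p_pool = (conv_a + conv_b) / (n_a + n_b)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    # Two-sided p-value from the normal CDF
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return z, p_value

# Illustrative numbers: variant A converts 300 of 10,000 visits,
# variant B converts 360 of 10,000
z, p = z_test(300, 10_000, 360, 10_000)
print(f"z = {z:.2f}, p = {p:.3f}")
```

A small p-value (conventionally below 0.05) suggests the difference is unlikely to be chance; with fewer daily downloads, the same lift takes longer to reach that threshold, which is why the measurement period depends on traffic.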
Determining a variant that increases conversions is great, but the tests do not end at one success. The goal is to understand what elements drive conversions, then use those to determine what else can be tested and improved upon.
Determining what elements underperform is also important, as you can identify what to keep out of future tests.
Any increase in conversions is helpful. Even if each variant only improves conversions by a few percentage points, those differences add up over time and can help your app thrive.
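The compounding effect of small wins can be shown with quick arithmetic. The baseline rate and per-test lift below are purely illustrative assumptions.

```python
# Illustrative sketch: modest per-test conversion lifts compound across
# a series of successful iterations (baseline and lift are made up).
baseline_rate = 0.20   # 20% of page visitors install
lift_per_test = 0.03   # each winning variant adds a 3% relative improvement
tests = 5

rate = baseline_rate
for i in range(1, tests + 1):
    rate *= 1 + lift_per_test
    print(f"after test {i}: {rate:.1%}")
```

Five 3% relative lifts in a row take a 20% conversion rate to roughly 23%, a meaningful gain from individually small improvements.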
A/B testing is a great strategy for improving conversions. When running tests, you’ll want to understand exactly what you’re testing, which means doing market research, competitor research and looking into general trends in your category.
You’ll also want to give yourself a test period to ensure you’re identifying a proper trend. Keep iterating after each success to build on the growth – App Store Optimization and A/B testing are iterative processes.
By continuing to experiment and determine what works best, you can gain an upper hand against the competition and win over more users.