We’re thrilled to share with you the following guest post from Neha Mittal, Digital Marketing Manager at Apptimize.
In the mobile app space, you must constantly fight to attract, engage, retain, and monetise users. There are currently about 2.8 million apps on Google Play and 2.2 million in the Apple App Store. The way to stand out in this crowd is to innovate constantly, improving your user experience to stay on par with leading apps like Facebook and Starbucks. Feature releases should not be driven by gut feeling; they can, and should, be driven by data.
So, how do you make sound, data-driven decisions?
Enter A/B testing.
We often hear mobile app leaders talk about experimentation as something they will “get to later.” Below, we explore why this is a huge mistake and why experimentation should be embedded within your product roadmap.
Remove the guesswork from app optimisation
We all know what they say about assuming… don’t do it! As a mobile app leader, you are constantly asking yourself what will actually happen when you make a change. Instead of assuming how users will behave, develop a hypothesis, run an experiment, and measure how they actually behave when given a real-world opportunity to act. These hypotheses can come from a range of sources, including your own ideas, focus groups, agencies, and your analytics solution.
As a mobile leader, the insights gained from testing hypotheses are instrumental to your product roadmap. By measuring the incremental impact of changes on user behaviour, you have data-driven insights into your user preferences and can answer key questions that impact growth: What keeps users engaged? What turns them off? How do preferences differ between segments?
Grow through retention
Mobile app leaders often prioritise A/B testing and in-app optimisation only after implementing push notifications, App Store Optimisation, and other seemingly foundational components of the mobile stack. While on the surface it makes sense to attract users first, dig deeper and it becomes clear that A/B testing needs to run in parallel with these other steps. If a product team invests significant effort in driving users to its app, and those users then have a poor in-app experience, the team is wasting money on users it will not retain.
Because A/B testing is the best way to optimise the in-app experience, it’s no surprise that testing leads to better engagement and retention, and thus overall active user growth. Take this experiment from Runtastic, which has a portfolio of over 15 health and fitness apps that combine traditional fitness models with mobile technology.
In-app membership upgrades for freemium users are a core part of Runtastic’s monetisation strategy. Runtastic wanted to A/B test and improve the conversion rate of users reaching their paywall.
Their hypothesis: users are more likely to subscribe if they are shown a user review on the paywall rather than a promotional statement.
Using Apptimize, a mobile A/B testing platform, Runtastic launched a Dynamic Variable A/B test for their Android and iOS apps. The original variant featured a promotional statement, while the test variant included a real user review along with a 5-star rating.
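Conceptually, a dynamic variable is a value with a default baked into the app that the testing service can override per variant, so a new variant ships without an app-store release. Here is a minimal generic sketch of the idea; it is not the Apptimize API, and the variable names and copy are illustrative:

```python
# Generic dynamic-variable sketch (illustrative, not a vendor API).
# The control variant falls back to the in-code default; the test
# variant receives a remote override delivered by the test service.
VARIANT_OVERRIDES = {
    "test": {"paywall_message": '"Best running app I\'ve tried!" ★★★★★'},
}

def dynamic_variable(name: str, default: str, variant: str) -> str:
    """Return the variant's override for `name`, else the baseline default."""
    return VARIANT_OVERRIDES.get(variant, {}).get(name, default)

# Control keeps the promotional statement; test shows a user review.
control_copy = dynamic_variable("paywall_message", "Go Premium today!", "control")
test_copy = dynamic_variable("paywall_message", "Go Premium today!", "test")
```

Because only the variable's value changes between variants, both paywall screens share the same code path, which keeps the comparison clean.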
The user review variant produced a 44% increase in paid subscriptions for Android users but no significant change for iOS users. Based on these results, Runtastic updated only its Android paywall screen.
User reviews were an influencing factor for Runtastic’s Android user base, but they did not motivate its iOS users. It’s important to look beyond aggregate numbers: segment your results by country, operating system, and other demographics to spot trends. What works well for one segment might not work equally well for another.
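One standard way to check segment-level results like these is to run a two-proportion z-test separately per segment. A sketch using Python's standard library and hypothetical counts (not Runtastic's actual data):

```python
from math import sqrt, erf

def two_proportion_z(conv_a: int, n_a: int, conv_b: int, n_b: int):
    """Two-sided z-test for a difference in conversion rates
    between variant A (control) and variant B (test)."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    # Two-sided p-value from the normal CDF.
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
    return z, p_value

# Evaluate each segment separately, e.g. by operating system
# (conversion counts below are made up for illustration):
segments = {
    "android": (250, 5000, 360, 5000),
    "ios":     (300, 5000, 310, 5000),
}
for name, counts in segments.items():
    z, p = two_proportion_z(*counts)
    print(name, f"z={z:.2f}", f"p={p:.4f}")
```

With these made-up counts, the Android lift is statistically significant while the iOS difference is not, which is exactly the pattern a purely aggregate analysis would have hidden.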
Apptimize did a study of their customers and found that companies that consistently test grow at over two times the industry average. Real growth comes from plugging the holes in your product through tests that improve the in-app experience and make users want to re-engage with you.
Reduce risk of new launches – don’t wonder “what if”
Imagine this: you release a new feature to your users without a control group and notice an increase in engagement after launch. You use this data to draw definitive conclusions about users’ preference for the new feature. What’s missing is the counterfactual: engagement might have changed regardless of the feature launch. You are now driving decisions with unreliable assumptions, because you don’t know what would have happened had you not released the feature to everyone.
The way to avoid this critical mistake is by running a true A/B test with a control group and a test group. This setup allows you to measure the causal impact of the feature on user experience. Without a control group, you will have unreliable data and can actually do more harm than good.
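In practice, a true A/B test starts with deterministic random assignment: each user is hashed into a control or test bucket, so the only systematic difference between the groups is the feature itself. A minimal sketch, assuming user IDs are stable strings (the function name and split are illustrative, not any vendor's implementation):

```python
import hashlib

def bucket(user_id: str, experiment: str, test_fraction: float = 0.5) -> str:
    """Deterministically assign a user to 'control' or 'test'.

    Hashing the user ID together with the experiment name keeps
    assignments stable across sessions and independent across
    experiments, so a user never flips buckets mid-test.
    """
    digest = hashlib.sha256(f"{experiment}:{user_id}".encode()).hexdigest()
    # Map the first 8 hex digits onto [0, 1).
    score = int(digest[:8], 16) / 0xFFFFFFFF
    return "test" if score < test_fraction else "control"

# The same user always lands in the same bucket for a given experiment:
assignment = bucket("user-42", "new_feature_launch")
```

Comparing engagement between the two buckets, rather than before and after launch, is what lets you attribute any difference to the feature and not to seasonality or marketing spend.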
At Apptimize’s Mobilize 2017 conference, Erin McLaine from Delivery Hero shared examples of A/B tests the team ran and the results they saw; some of them might surprise you! Watch the video for more.
Product launches are stressful for most organisations. Months of planning, and the worry of falling behind on a roadmap, build anxiety about everything that could go wrong. When you test, you can throw that old process out the window. With the right technology and process in place, you can test more, learn more, and grow faster: roll out ideas that might work, release more often, and adjust on the fly.
Mobile testing capabilities have made the historically development-intensive testing process far more efficient and less time-consuming. In particular, codeless visual testing technology lets you take an experiment live in minutes without writing a single line of code. This empowers product managers, designers, and marketers to test changes to colour, copy, layout, images, navigation items, CTAs and more, and to use those insights to recommend product updates.
Strava, a social network for athletes with a mobile app to track exercise, developed processes to support their A/B testing goals, resulting in a 10X increase in the number of tests that they run per quarter. Click here to learn about the 3 key areas that they focused on to develop this culture of experimentation.
Experimentation is a crucial asset to your product development roadmap, one that allows you to test hypotheses and roll out impactful, high quality experiences. The data gathered through A/B testing is invaluable and can serve as the basis for decision-making. With the right software in place, embedding experimentation into your product roadmap is seamless.
Apptimize empowers product teams to efficiently run A/B tests, roll out and manage new features, and deliver personalised user experiences. Based in San Francisco, Apptimize is backed by US Venture Partners, Google Ventures, Y Combinator, and others. Join companies like Vevo and Wall Street Journal; fuel data-driven growth for your mobile app with the best-in-class mobile A/B testing. Request a customised demo and sign up for their newsletter here!