In the last blog, we covered how to start synthetic monitoring for mobile apps, from preparing the necessary files for monitoring checks to setting up checks on Bitbar Monitoring. We also shared some key findings from our continuous observation of the top US retail apps.
If you followed along, you should now be familiar with Bitbar Monitoring and have at least a few simple test scripts for proactively monitoring your mobile apps.
One of the biggest advantages of synthetic monitoring is that it spots issues and bottlenecks in your mobile apps as early as possible, before your users are affected. Most of the time, these issues and bottlenecks show up in stats and metrics. For example, your policy may promise an SLA of 99.99% availability, but in reality your service uptime hovers around 99.5% for a period of time, which indicates a critical issue in delivering on your service commitment.
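To make that gap concrete, an availability percentage translates directly into allowed downtime. Here is a minimal sketch in plain Python (not a Bitbar API, just arithmetic) that converts an availability target into downtime over a 30-day month:

```python
def downtime_seconds(availability_pct, period_seconds=30 * 24 * 3600):
    """Downtime implied by an availability percentage over a period (default: 30 days)."""
    return period_seconds * (1.0 - availability_pct / 100.0)

# 99.99% availability allows roughly 4.3 minutes of downtime per month,
# while 99.5% allows roughly 3.6 hours -- about 50x more.
```

Seen this way, the difference between 99.99% and 99.5% is not a rounding error but hours of users locked out of your app every month.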
Most monitoring platforms, Bitbar Monitoring included, provide very detailed dashboards and reports for each check, but there are a few key metrics you should continuously measure and compare throughout the lifecycle of your app. These metrics help you understand overall app performance quickly.
In today’s Bitbar Mobile Performance Index blog, we take a closer look at the performance metrics we have found most relevant and how to measure them. We also discuss typical considerations when creating scripts for the most common user flows, along with tips for identifying those flows. Finally, we dive into what we found when measuring and monitoring top US retail apps.
What Metrics to Measure and How to Measure Them?
During our observations of top US retail apps, we identified a handful of key metrics that correlate most strongly with app performance and app success. Regardless of the industry you are in, we recommend tracking and improving these metrics for your business as well.
- Total Availability. Total availability reflects whether your end users can access your service or mobile app throughout the day, and from day to day over a longer period.
- Time To First Byte (TTFB). TTFB measures the responsiveness of your mobile app: how fast the app launches and receives its first responses from the backend servers.
- Reaction Time. Reaction time reflects the speed at which your mobile application reacts to user input: for example, how fast the application starts showing search results, or how fast business transactions are completed.
- Time To Load. This is the time elapsed between the moment a user launches your app and the moment they can start interacting with it. It tells you how quickly the application becomes responsive to user input.
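As a rough illustration of how two of these metrics are defined, the hypothetical helpers below (plain Python; Bitbar Monitoring computes such figures for you, this only sketches the definitions) derive total availability from a series of check outcomes and time a reaction by polling for a condition:

```python
import time

def total_availability(check_results):
    """Percentage of monitoring checks that passed (True = check passed)."""
    return 100.0 * sum(check_results) / len(check_results)

def reaction_time(action, condition, poll_interval=0.05, timeout=10.0):
    """Seconds from triggering `action` until `condition()` becomes true.

    In a real Appium script, `action` might tap the search button and
    `condition` might check that the results list is displayed.
    """
    start = time.perf_counter()
    action()
    while time.perf_counter() - start < timeout:
        if condition():
            return time.perf_counter() - start
        time.sleep(poll_interval)
    raise TimeoutError("condition not met within %.1f s" % timeout)
```

The polling approach mirrors what UI automation frameworks do internally: the measured reaction time is the user-visible delay, not just the server round trip.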
What to Consider When Creating Test Scripts
Tip #1: As mentioned in the last blog, reuse existing test scripts to get synthetic monitoring up and running in the shortest time; request one from your Dev or QA team.
Assuming you have those points in mind when preparing a monitoring check, you should also take the following into consideration to monitor your app properly and capture the metrics above.
- Upsides and downsides of API runs. Relying on API runs in Bitbar Monitoring lets you check whether your service endpoints are up and available. But exercising only the backend services will not help you find UI bugs or reveal the speed of your application as users experience it.
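A basic endpoint check of this kind is easy to sketch with the Python standard library (the function name is our own; Bitbar's API runs execute such checks on a schedule for you):

```python
import time
import urllib.request

def check_endpoint(url, timeout=10.0):
    """Return (HTTP status, time to first byte in seconds) for an endpoint."""
    start = time.perf_counter()
    with urllib.request.urlopen(url, timeout=timeout) as resp:
        resp.read(1)  # block until the first byte of the body arrives
        return resp.status, time.perf_counter() - start
```

A monitoring check would then assert, for example, that the status is 200 and the TTFB stays below an agreed threshold. Note what this does and does not tell you: the endpoint is up and reasonably fast, but the in-app experience built on top of it may still be slow or broken.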
- Selection of test frameworks. As mentioned in the last blog, although we used Appium for script creation throughout the performance benchmark, most popular open source frameworks would work. Here, though, we want to shed more light on the performance of the test framework itself.
Some testing frameworks are faster than others, and in general a mature framework is the safer choice: it spares you from analyzing the extra false negatives an immature framework can produce. Android and iOS each ship a good default test framework for this purpose: Espresso and XCUITest, respectively.
If you are wondering about the top frameworks for app testing, check the blogs here for Android frameworks and here for iOS frameworks.
- Various mobile network carriers. In any case, testing under various mobile network conditions makes all the key metrics more meaningful and helps you understand under which circumstances your app performs well or poorly. It is the only way to know what level of experience end users on different networks have when interacting with your application.
Pro Tips for Identifying Important User Flows
If you have a simple app, it might be quite straightforward to understand each of your user flows. But if your app contains many user flows, as retail apps do, it is critical to identify and prioritize the most significant ones.
For retail apps, for example, the core functionality and business-critical transaction is driving customers to buy items in the app. One of the highest priorities, then, is to streamline the flow that takes users to the items they need and lets them pay for the items in their shopping cart easily.
Tip #2: If your app offers in-app purchases, make sure you don’t put obstacles in your customers’ payment process; make it smooth as butter.
At the same time, we also decided to benchmark the search functionality of the top US retail apps to see how well and how fast users can get search results in the studied apps.
Tip #3: Providing a good user experience at POS is vital for retail apps, but the user flows that bring customers to POS, such as searching for items and adding them to the shopping cart, are also critical and need to be constantly measured and checked.
Findings on Measuring Top US Retail Apps
As we have been continuously monitoring the performance of these apps, we want to share and compare the metrics from two different monitoring points. In general, stability and performance vary widely among the studied top US retail apps.
- There’s a surprisingly huge gap between the top-performing app, at 96% availability, and the bottom-performing app, at 8% and under 10% availability at the two monitoring points. Although the team behind Retail A seems to be working on fixing issues and bottlenecks, there is still a long way to go.
- The most feature-rich app was also the most difficult to automate. Before the user could run a basic search, they had to navigate through multiple views and even log in (or create an account). This suggests that poor user experience correlates closely with poor app stability and performance.
- Even though both are built as native apps, the Retail A and Retail B apps differ widely in automation time.
- App performance is visible in how requests are handled, but the number of requests does not directly correlate with the speed of test execution.
- The best-performing apps were easy to automate and made few requests in total.
- Interestingly, the app that made the most requests was also among the fastest to execute tests against; we assume its team has taken care to optimize request handling.