App monitoring is a critical part of the mobile development and DevOps process, just as test automation is a foundational component of agile release cycles, regardless of whether you are building a mobile website or an Android/iOS native app. Failing to monitor your mobile applications can have a serious impact on your digital business.
Creating a monitoring check requires the same foresight and support from mobile teams as automated functional app testing, and the work is considerably easier if the application is already covered by automated tests.
Even though we used Appium throughout our benchmark, most other open-source testing frameworks can also be used to create checks. Also consider native frameworks such as XCUITest or Espresso for creating your checks.
Pro tip #1: To get the most out of existing resources, reuse whatever you already have for mobile test automation to save time and effort.
Preparing a Check
Designing monitoring checks is like designing short, efficient automated tests. Here are three points to keep in mind when writing checks. (Again, reuse your existing test automation assets if possible, or ask your Dev or QA team for a test script.)
Note: Checks in Bitbar Monitoring are executed from either a Maven pom.xml or a run-tests.sh Shell script.
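As a rough illustration of the run-tests.sh route, a minimal entry-point script might look like the sketch below. This is a hypothetical example, assuming a Maven-based Appium check; the test class name is illustrative, not a Bitbar convention.

```shell
#!/bin/sh
# Hypothetical run-tests.sh entry point for a monitoring check.
# Assumes the check is implemented as a Maven-based Appium test.
set -e  # any failing step marks the whole check as failed

# Run the single, focused test class that implements this check
# (class name is illustrative)
mvn test -Dtest=SearchCheckTest
```

A non-zero exit status from the script is what signals a failed check, so `set -e` keeps the script honest.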
- Keep your checks simple. Monitor only one app function per check so you get a clear picture of how long that action takes. Writing a simple test case is the key here.
- Accelerate test execution. Related to the previous point, if you use Appium for automation, find the fastest way for Appium to locate elements. Locating elements by id or class name is generally faster than using XPath, and there are other ways to speed up Appium tests as well.
- Stabilize test scripts. To make your test scripts work consistently, focus on eliminating false failures. Each false failure, whether caused by Appium or by the test itself, sends the wrong signal about your app's stability. In the worst case, it may even wake somebody up in the middle of the night!
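The stabilization point above often comes down to tolerating one-off hiccups without hiding real failures. A minimal sketch in Python (the helper and its names are illustrative, not part of any Bitbar or Appium API):

```python
import time


def retry(action, attempts=3, delay=1.0):
    """Run a flaky action up to `attempts` times, pausing between tries.

    Returns the action's result on the first success; re-raises the last
    error if every attempt fails, so a genuine failure still surfaces.
    """
    last_error = None
    for _ in range(attempts):
        try:
            return action()
        except Exception as exc:  # e.g. a transient element-lookup failure
            last_error = exc
            time.sleep(delay)
    raise last_error


# Usage sketch: wrap a fragile Appium element lookup so a transient
# hiccup does not produce a false failure in the monitoring check:
# element = retry(lambda: driver.find_element("id", "search_field"))
```

The key design choice is re-raising the last error: the check still fails loudly when the app is genuinely broken, while shrugging off one-off flakiness.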
Pro tip #2: It’s really important to keep the KISS (Keep It Simple, Stupid) rule constantly in mind while creating monitoring checks.
Creating a Great Check
When you have everything ready for running checks, getting started with mobile synthetic monitoring on Bitbar Monitoring is straightforward. Simply use your existing Bitbar Testing account credentials, go to monitoring.bitbar.com and create your first check! (If you don’t have a Bitbar account, sign up and create your account here.)
Unlike mobile app functional testing, mobile monitoring should verify that your application performs as intended in the locations where your customers actually are.
Pro tip #3: Select as many monitoring locations and carrier networks as possible, based on your customers’ profiles.
One Sample Check in Bitbar Mobile Performance Benchmark
In our Mobile Performance Benchmark we have been observing and comparing the speed and reliability of the biggest US online retailers’ apps. For example, one of the simple checks we created audits the performance of the ‘search’ functionality (e.g. searching for the string “head set”) and captures the time the search takes for analysis.
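A check like this boils down to timing a single user action. The sketch below shows the timing logic in Python, with the app interaction replaced by a stub so the example is self-contained; in a real check the action would drive the app via Appium (the helper names are illustrative):

```python
import time


def timed(action):
    """Run one check action and return (result, elapsed_seconds)."""
    start = time.monotonic()
    result = action()
    elapsed = time.monotonic() - start
    return result, elapsed


def search_stub():
    # Stand-in for the app performing a search (e.g. for "head set")
    # and the script waiting for the results list to appear.
    time.sleep(0.05)
    return ["result 1", "result 2"]


results, seconds = timed(search_stub)
print(f"search returned {len(results)} items in {seconds:.2f}s")
```

Reporting the elapsed time of exactly one action is what makes a check useful for trend analysis; bundling several actions into one measurement would blur which step got slower.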
To deliver the best end-user experience, retail companies should also test and monitor other key features, such as user login, product purchase and support functionality. These flows often depend on critical third-party services whose problems are sometimes overlooked in production.
Benchmark Checks Findings
The three retailer apps were built with different techniques and differed considerably in functionality. There were some clear winners and losers in stability, usability and speed, as we’ll see in this blog post series.
- One of the studied apps was bloated with pop-up windows showing offers and sales to the end user. While these notifications can sometimes be effective, they should not get in the way of the user’s ability to use the application. Repeatedly swapping the app’s webview forces the user to close and scroll through elements on the screen. From a test automation point of view, it makes automated testing fragile, leading to a lot of automation maintenance and frustrated users! Our suggestion: instead of replacing the webviews, change only their contents.
- In one test case the search field was quite buggy: the field got stuck and a page reload was required to continue. From the user’s point of view, this is unacceptable.
- Using a retailer app should be simple. From our observations, however, some apps require end users to start by selecting the nearest shop location, granting access to the user’s location, or even logging in before they can search for items!
- Appium is not always the best automation framework. In our experience, pressing enter from Appium sometimes interfered with an app’s type-ahead functionality and caused the app to pick the wrong search phrase from an automatically generated suggestion list. We worked around this by tapping the enter key on the on-screen soft keyboard using coordinates. This is a workable solution because the monitoring environment is well defined and there is no variation in screen resolutions or sizes. Explore the top 5 Android test automation frameworks and the top 5 iOS test automation frameworks.
- A general finding was that automation checks for apps built with HTML5 or basic webview techniques ran more slowly than those for native apps. Native apps were also easier to automate and had fewer minor bugs, providing a more consistent user experience.
- One interesting exception, however: in this particular case the retail app built with HTML5 techniques executed the test case fastest.
Learn these aspects to improve your test efficiency and effectiveness.