Here’s a sobering fact every executive in the digital space should know: a 2017 survey shows that 88% of users have abandoned (and never returned to) an app due to bugs and glitches.
In spite of all the design and testing effort put into mobile apps, with the expectation that a great product will be used by millions, this is how most apps end: abandoned and uninstalled after a single try, often with negative reviews, because users hit small, annoying issues that could have been avoided. Last year, 58% of all iOS apps crashed or froze at least once.
Folks – this should NOT be the case! It should not be the norm, but the exception.
Thankfully, app bugs and glitches are not terminal diseases. They very clearly fall under the rubric of preventable and easily curable digital diseases.
What if I told you there is a way to avoid embarrassing yourself and your product? Would you believe me if I told you that anyone who reads this article and follows every point of advice to the letter will likely reduce app glitches by 99%?
This is not a scam. This is not a joke.
At SIGOS, we work with some of the greatest and largest telecom operators and enterprises in the world. We supply software solutions leveraged by thousands of companies in over 150 countries worldwide.
We have seen all points of mobile app failure. From these experiences, we’ve compiled a list of the 21 best mobile app testing practices that are bound to improve the deployment and quality of your software release.
Are you ready to learn how to take your mobile app testing strategy to the next level?
Most analysts writing on the topic of mobile application testing strategy take a clinical approach to fixing what’s broken. They look at best practices in hardware testing and software testing to create checklists that can be handed over to any Director of Quality Assurance for implementation.
Even so, QA testing practices do not happen in a vacuum. They require institutional alignment, cross-team collaboration, changes and adaptations in company culture, breaking silos, and much more.
Instead, we map mobile testing best practices to the different efforts an organization must undertake in order to create a comprehensive and robust digital strategy within the QA testing discipline.
This article is divided across the main topics that need to be correctly aligned in order to successfully reduce QA bugs by 99%.
- Business processes built around the QA testing discipline
- QA engineering testing strategies
- External factors testing
- Performance and security testing
Let’s look into each of them more closely.
Business processes built around the QA testing discipline
Let’s get one thing clear right out of the gate: QA testers and business users are not wired in the same way. QA testers are methodical employees who look at a specific problem much like mathematicians do. The thinking goes something like this: “Given a problem to solve within specific parameters, I will solve it according to the constraints that were given to me.”
In many cases, this is a great mindset. However, business users (product managers, UX architects) have cross-discipline knowledge across different aspects of the product. Getting the two teams to work well together and agree to the same strategies across disciplines is key to the success of any product.
Remove organizational silos by doing cross-team QA test case reviews
Great institutions understand this well: QA testers should not be left in a silo. They should be part of the strategizing and product-building process. At a minimum, business users and QA specialists should review together all the test cases involved for the development of every new feature. This can begin as early as during the requirements-gathering stage or when wireframes/designs are created. At the very least, it should be done BEFORE any QA analyst ever sees a product built.
Some people have argued that a QA team is not needed at all and that engineers/product managers should be responsible for their own QA. Though ambitious in its statement, this extreme strategy is not practical in most cases. The more code you ship to production, the more likely you are to encounter different points of failure. A QA team is critical to the prevention or reduction of these points before the product goes live.
The most measured approach is to ensure that QA testers and business users review requirements, designs, and test cases, early and often, in order to ensure that every critical flow is understood across all teams. This is the most fundamental institutional QA strategy along which business users in any software organization should align themselves.
Have your product, UX, and QA teams all conduct user interface testing
User interface testing is one of the major cross-functional activities that all organizations should perform. UI testing is very simple: designs are created and approved by the product team; developers implement those designs; business users/designers/stakeholders and QA testers come together and “proofread” the code to make sure 100% alignment exists between the designs and the code released in the staging environment.
You may have noticed this strategy is being included under business processes rather than QA engineering processes. Why? Because this phase of the QA process should be owned by designers and product owners with input from QA testers.
When a design is handed over to developers, small alterations that differ from the original design are inadvertently introduced. Maybe a font size, color, padding, or animation is slightly off. Although QA testers often catch many of these issues, the people best suited to catch ALL of them are those who created the original designs.
For that reason, all organizations should have clear processes based on the concept of UI testing as a cross-functional activity wherein product owners, user experience architects, designers, and QA testers work in conjunction to make sure everything is done according to the original wires. This is an excellent example of where more (resources/alignment) is better than less.
Determine what operating systems (and versions) your app will support
If you have ever worked on a mobile application, you know this simple (and sometimes painful) truth: not all users are on the same operating system. As of 2017, the situation on the iOS side is simpler because most users are now on iOS 10 and above.
From a business perspective, the case is significantly more complicated on the Android side, where eight different OS versions are still in active use, as shown below:
This issue won’t be going away (at least not in the immediate future), which makes it, ultimately, a business issue. Someone within your organization should look at the industry data along with your own app usage data and make the hard call on what operating systems will be supported and which will not.
Even the largest companies in the world do this. Take Nest, for example, maker of one of the most popular smart thermostats in the world right now. Their iOS app only works with iOS 9 and above. As soon as the next iOS goes live, they will likely raise the bar again and support only iOS 10 and above.
The point is simple: supporting too many operating systems and versions yields diminishing returns for the company, and so does supporting too few. From a QA perspective, there are many OS-specific elements that can go wrong. In the interest of speedy delivery of new features and functionality, business executives need to determine which operating systems will be supported and review that policy every six months.
Overall, leaving some users behind is better than supporting every single OS that exists. Decide your minimum viable audience and go with it. And if you’re not sure which devices and operating systems to support, get mobile testing experts to help you figure out your unique needs. Once the list is confirmed, tools like the App Experience can help you properly test across all of them with one robust mobile testing strategy.
Get organizational alignment across the form factors your app will support
When all the many operating systems out there are considered, the number of form factors across all smartphones (e.g., screen size and resolution) is significantly higher still. Things get even more complicated when an app is published in international markets where standards differ from one country to another.
Just as with operating systems, business users need to draw a line in the sand and say, “We will only support the following screen resolutions and screen sizes.” It’s a tough call, but one that, for the sake of simplicity, must be made. Negative online app reviews and reports of bugs and glitches are often tied to unsupported versions.
Therefore, our advice is simple: make an institutional call of what screen sizes and resolutions you will support. Work with the QA team to make sure they do a great job at catching all bugs across the supported form factors. Clearly call it out – even at an application level – if a specific downloaded app resolution is not supported for a user’s phone. Better yet, restrict app installs to the form factors that you support.
Have a clearly documented user acceptance testing (UAT) strategy
User acceptance testing, also known as beta testing, is the process of using internal resources in a structured format to test a product before it goes live. Once all product managers, designers, developers, and QA testers complete their evaluation of a new feature and its functionalities, the product team should set up a formal meeting with all internal users, typically for 1-2 hours, to test the app. During that time, stakeholders get a series of test cases defined by the QA team that they need to execute. Everyone can ask questions, report issues, and see the app “live” before it actually does go live into production.
Usually this session is a formality. Most issues will be known and documented, and the development team would already be working on fixes. Unfortunately, this is not always the case. We’ve witnessed countless UAT sessions where various stakeholders have discovered new issues – albeit very small/subtle ones.
Always use your internal resources, institutional knowledge, and a new pair of eyes to test your features.
Business users – usually, product managers – should set the process in place so that no new feature goes live unless it passes user acceptance testing.
Consider investing in a staggered rollout plan
Another key business process that every company should at least discuss is the possibility of a staggered rollout. For those who are not familiar with the concept, a staggered rollout is the process of first releasing an app to a small number of users, usually internal, then to a small number of external users, and finally to everyone. For example, a company can release a beta version to all internal users for one week, then to its first 1,000 external users, and then to everyone else.
The reason some companies wisely choose this approach is to minimize the risk of a premature rollout by allowing business users to test major changes in features/functionalities with smaller groups of users before letting everyone see a feature.
This approach is especially needed when making major changes to an app (e.g., complete UX/UI overhauls, significant content strategy shifts) or to high-impact flows (e.g., calls to action, the checkout experience, a homepage redesign) that could significantly affect a company’s Key Performance Indicators.
Getting agreement across internal resources about a staggered release strategy is critical to creating processes and expectations about how to deal with testing and fixes. The reason this matters is simply because a staggered release would also change the definition of the type of issues found. A QA tester may find some page misalignment to be a block to a successful release if it goes to all users, but this could be a small issue if the release goes to internal users first.
Either way – we strongly suggest that every company consider staggered releases.
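One common way teams implement a staggered rollout is a percentage-based gate. The sketch below is illustrative, not a SIGOS API: each user is hashed into a stable bucket, so the same user always gets the same answer for a given feature, and raising the rollout percentage only ever adds users.

```python
import hashlib

def in_rollout(user_id: str, feature: str, rollout_pct: int) -> bool:
    """Deterministically bucket a user into a staged rollout.

    The same user always lands in the same bucket for a given feature,
    so raising rollout_pct from 1 to 10 to 100 only ever adds users.
    """
    digest = hashlib.sha256(f"{feature}:{user_id}".encode()).hexdigest()
    bucket = int(digest, 16) % 100  # stable bucket in [0, 100)
    return bucket < rollout_pct

# A 0% rollout includes nobody; a 100% rollout includes everybody.
print(in_rollout("user-42", "new-checkout", 0))    # False
print(in_rollout("user-42", "new-checkout", 100))  # True
```

Because the bucketing is deterministic, the business can widen the release week by week without any user flipping back and forth between the old and new experience.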
Devise a clear battery testing strategy to avoid users quitting your app
Let us be clear: battery life matters for every user out there. This is, as of 2017, the single most important factor in how people choose a smartphone. When asked about the importance of features, 89% cited battery life as “important”, with only 11% citing it as “neutral” or “not important”, according to an online panel of 1,000 Britons surveyed by the research company GMI.
Battery life for a smartphone is a huge concern, which is why there are thousands of articles on the internet about apps that drain batteries.
With various applications using geo-location, storing information and user data on the device, importing/exporting images, supporting streaming, and sharing data analytics with third party apps (Amplitude/Google Analytics/Appsee/Mixpanel and more), there are many reasons why your application may drain battery life.
Getting on a naughty list for an app is business suicide. Since battery life is so important to users, testing to ensure your app doesn’t drain that precious power source is critical to your mobile app’s success.
There are strategies business users and QA testers should follow to ensure that the app does not cause significant battery loss.
- Get the phone ready for the first test.
- Record the battery percentage level.
- Run the first test.
- Begin noting the battery usage.
- Run the other tests for each feature that historically uses significant power.
- Re-run each test at different battery levels.
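The bookkeeping behind those steps can be sketched in a few lines. This is an illustrative harness, not a SIGOS tool: on a real Android device you would read the level with `adb shell dumpsys battery`, but here the readings are supplied as plain numbers, and the 5% budget is an assumed threshold.

```python
def battery_drain_report(readings, budget_pct=5):
    """Given (test_name, pct_before, pct_after) readings recorded during each
    test run, report the drain per test and flag any test over budget."""
    drain = {name: before - after for name, before, after in readings}
    offenders = [name for name, used in drain.items() if used > budget_pct]
    return drain, offenders

# Illustrative readings taken while running each feature test.
readings = [
    ("video_streaming", 90, 81),   # 9% drain: over a 5% budget
    ("geo_tracking", 81, 77),      # 4% drain
    ("idle_background", 77, 76),   # 1% drain
]
drain, offenders = battery_drain_report(readings)
print(drain)      # {'video_streaming': 9, 'geo_tracking': 4, 'idle_background': 1}
print(offenders)  # ['video_streaming']
```

Tracking drain per feature, rather than per session, is what lets the team send a specific build back to developers with a specific culprit attached.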
Bottom line, from both a business and QA testing perspective, companies should agree on the importance of making sure their mobile apps do not drain users’ batteries and putting QA tests in place that can safeguard an app from unintended consequences (e.g., lowered app installs, app abandonment, or permissions revocations on the user side).
So far, we’ve covered a series of QA strategies on which both business users and QA testers should agree or at least discuss in order to implement the best possible QA strategy for the product. In addition to these seven considerations, there are quite a few QA strategies that sit squarely within the QA team’s purview. We cover these major strategies next.
QA engineering testing strategies
Do end-to-end regression testing for all your features
Introducing new features and functionality often causes new bugs to occur, not only in the new feature itself but also in existing functionality. It’s the old saying: one step forward, two steps back.
That’s where the most fundamental part of QA testing comes into play: regression testing. Regression testing is the process of re-testing existing flows whenever new functionality is introduced. That’s because, as everyone working in quality assurance knows, even the smallest change to a codebase can ripple out in surprising and unintended ways.
Worse still, code changes of any nature can result in different types of bugs – both functional and nonfunctional. That’s why QA developers can and should test for both performance issues and functional issues with every new feature development.
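As a toy illustration of the idea (the pricing function and test names are invented for this sketch), a regression suite keeps one check for every existing behavior and re-runs all of them whenever anything changes, so a new feature cannot silently break an old one:

```python
def apply_discount(price: float, code: str) -> float:
    """Existing pricing feature, later extended with a FREESHIP code."""
    if code == "SAVE10":
        return round(price * 0.90, 2)
    if code == "FREESHIP":   # new feature: must not disturb pricing
        return price
    return price

# Every behavior, old and new, gets a named check that is re-run on each build.
REGRESSION_SUITE = [
    ("old: SAVE10 still takes 10% off", lambda: apply_discount(100.0, "SAVE10") == 90.0),
    ("old: unknown codes change nothing", lambda: apply_discount(100.0, "XYZ") == 100.0),
    ("new: FREESHIP leaves price intact", lambda: apply_discount(100.0, "FREESHIP") == 100.0),
]

def run_suite(suite):
    """Return the names of failing checks; an empty list means no regressions."""
    return [name for name, check in suite if not check()]

print(run_suite(REGRESSION_SUITE))  # [] means no regressions
```

In practice the suite lives in a test framework and runs in CI, but the principle is the same: the suite only ever grows, and every build must keep it green.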
Use real devices in your testing
As a general rule, your QA testing team should use real devices for testing purposes. In most cases (exceptions explained in the next section), there is no good substitute for actual device testing.
For example, if your user experience includes various gestures (e.g., swipes, force touches, etc…), these interactions are best tested on actual devices of different sizes.
In addition, how bright/dim the colors are on an app or how the app looks in strenuous conditions (e.g., direct light, in the dark, etc…) can only be observed on an actual device. Finally, as mentioned above, you will want to test an app on an actual device to see if a new build has any impact on the battery performance.
QA testers should also use real devices when looking for memory-related issues. Emulators have abundant memory by design; in reality, many customers have very limited memory on their actual devices. This can make your app significantly slower in real life, which is why device testing is highly recommended.
Of course, as a company, you need to determine what devices you want to test. As a general rule, start with the most common devices and work backwards, which brings us to our next point…
Consider using emulators for some specific use cases (though certainly not all)
Most expert software developers agree that device testing is significantly better than testing your app using emulators. However, there are some situations where QA experts can and should make use of emulators.
For example, let’s say you’re dealing with an aggressive timeline for moving code to production. In that case, since emulator testing is faster to execute, it can help you more readily meet deadlines.
Likewise, simulators make app testing even easier and faster because, unlike emulators, they don’t try to simulate hardware components. Unfortunately, that gives them even greater drawbacks than emulators for mobile app testing.
For example, they cannot test device-level behaviors such as battery drain or native hardware functions (camera, machine learning core, compass, etc.). In addition, most simulators cannot mimic interruptions coming from real-life scenarios (e.g., phone calls, text messages, or emails).
Bottom line: emulators aren’t a universally bad idea but they should be used only in specific cases. For instance, you can use emulators for functional testing but use real devices to test the final look and feel of an app. Use emulators to execute different automated tests but do a final check on the devices most likely to be used by your customers.
Speed up your QA process via automated testing
The purpose of automated testing is to exercise features quickly and repeatedly without the need for QA testers to manually check every single use case.
Automated testing requires an upfront effort to create test cases that are specific enough to catch all the different issues pertaining to a new build. The benefit is that, when all is set, you can re-run the tests automatically as you move forward.
In addition, it pretty much goes without saying that automated testing is significantly more cost-effective, and faster, than manual testing. Manual testing is also prone to error: testers might not observe all the issues or could miss specific test cases.
If you are an executive who is in a position to invest in automated testing, you should also consider the benefits automated testing has for developers, not just QA testers. When developers finish a build, they can save a lot of time and energy by doing their own QA via automated integration and unit testing to ensure their work didn’t have an adverse effect on the rest of the build. That way, developers can actually discover issues even before a new build is handed over to the QA team.
In case you’re wondering what type of mobile app testing you could automate, here are a few examples of the ways companies use SIGOS’ automated testing solutions:
- Form and error validations
- Shopping cart flow
- Profile/account change verification
- Location services
Ultimately, as previously argued, each company should use both manual and automated testing. It’s not an either-or scenario. Both should be leveraged for specific tasks and scenarios.
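Form and error validation, the first item above, is a natural candidate for a first automated suite. Here is a minimal, table-driven sketch; the field rules, function name, and test cases are illustrative assumptions, not a SIGOS API:

```python
import re

def validate_signup(form: dict) -> list:
    """Return the list of invalid fields in a signup form (empty = valid)."""
    errors = []
    # Very loose email shape check, for illustration only.
    if not re.fullmatch(r"[^@\s]+@[^@\s]+\.[^@\s]+", form.get("email", "")):
        errors.append("email")
    if len(form.get("password", "")) < 8:
        errors.append("password")
    return errors

# Table-driven cases: add a row once, re-run it automatically forever.
CASES = [
    ({"email": "a@b.com", "password": "hunter2hunter2"}, []),
    ({"email": "not-an-email", "password": "hunter2hunter2"}, ["email"]),
    ({"email": "a@b.com", "password": "short"}, ["password"]),
]

for form, expected in CASES:
    assert validate_signup(form) == expected
print("all form validation cases passed")
```

The table-driven shape is the point: each new bug report becomes one more row, and the cost of re-checking every old row on every build is effectively zero.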
External factors testing
Make sure your app works well when users experience network connectivity problems
This happens all the time: you’re checking Facebook while walking down the street, start writing a response to a friend’s latest post, then step into an elevator just as you tap ‘Post’. You then get a message that your comment cannot be submitted due to poor connectivity.
This scenario shows one reason why it is really important to test and decide what the user experience should be when your customers deal with poor or unreliable network connectivity.
There are five different scenarios QA testers should take into account when testing network connectivity and how their app reacts to it:
- Only Wi-Fi connection
- Only 2G/3G/4G connection
- Only LTE connection
- No connection
- No SIM card in the device
This is not only a scenario for which every QA tester should test. It’s also a case for which business users should define an experience. You don’t want users to lose any in-process app-related task when slipping outside of a coverage area; this would certainly lead to customer frustration.
Some of the most frustrating experiences we’ve seen with poor connectivity include: app crashes, app freezes, spinning wheels, and improper messaging to the end user on why their experience is being interrupted.
Bottom line: every QA tester should account for poor connectivity in testing efforts. Also, every company should define an appropriate experience for when this incredibly likely scenario occurs.
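One common way to define that experience is an offline queue: actions taken with no connection are stored and replayed once connectivity returns, so nothing the user typed is lost. The sketch below is our own invention (class and method names included), not a prescribed design:

```python
from collections import deque

class OfflineQueue:
    """Queue user actions while offline and replay them once connectivity
    returns, so the user never loses an in-progress task."""

    def __init__(self, send):
        self.send = send        # callable that delivers one action upstream
        self.pending = deque()
        self.online = True

    def submit(self, action):
        if self.online:
            try:
                self.send(action)
                return "sent"
            except ConnectionError:
                self.online = False   # drop to offline mode, keep the action
        self.pending.append(action)
        return "queued"               # UI can say "will post when back online"

    def connectivity_restored(self):
        self.online = True
        while self.pending:
            self.send(self.pending.popleft())

# Simulate the elevator scenario with a fake network.
network_up = False
delivered = []
def send(action):
    if not network_up:
        raise ConnectionError
    delivered.append(action)

q = OfflineQueue(send)
print(q.submit("post comment"))   # queued: we were in the elevator
network_up = True
q.connectivity_restored()
print(delivered)                  # ['post comment']
```

The important product decision is the "queued" return value: the user gets honest messaging instead of a spinning wheel or a crash.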
Throttle your connection intentionally to see how users with bad reception will interact with your app
In addition to testing your app in areas of poor internet connectivity, companies should consider intentionally throttling their connection. Throttling your app is the process of intentionally slowing it down to see how the app reacts when slow data is passed between the app and the backend.
There are various tools currently available on the market that allow developers to mimic 2G, 3G, and 4G app connectivity. This tactic is especially helpful to companies with an international presence; in some international markets, bad connectivity is the norm, not the exception.
However, this is not necessarily a problem specific to certain international areas. For example, 23 million Americans have bad or virtually no reception on their phones and no internet access. Throttling your app will allow you to share the same experience as these users. It will even allow your company to define the minimal experience these users could get so you can prevent them from becoming completely frustrated with your app.
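Even back-of-the-envelope throttling math is revealing. The throughput figures below are rough illustrative numbers, not measurements, but they show why a payload that feels instant on 4G can be unusable on a throttled 2G link:

```python
# Rough, illustrative throughputs per network profile (kilobits per second).
PROFILES_KBPS = {"2g": 50, "3g": 1_000, "4g": 10_000}

def transfer_seconds(payload_kb: int, profile: str) -> float:
    """Estimated time to move a payload over a throttled link."""
    kbps = PROFILES_KBPS[profile]
    return payload_kb * 8 / kbps  # KB -> kilobits, then divide by kilobits/sec

# A 500 KB screen payload: tolerable on 4G, painful on 2G.
for profile in PROFILES_KBPS:
    print(profile, round(transfer_seconds(500, profile), 1), "s")
```

Running the numbers like this is often how teams set a payload budget: if a screen cannot load within a few seconds on the slowest supported profile, it gets trimmed before it ships.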
Make sure the app works well during interruptions and notifications
Interruptions to the use of an application happen every single day. A user receives an email, a call, a notification. The phone’s alarm goes off. There’s a low-battery notification. The operating system demands a forced update.
Whatever the scenario may be, every company should test for these external factors and decide how the application should behave.
As a general rule, the app should simply allow users to resume whatever task they were engaged in before the interruption occurred. To make sure that is indeed the case, the QA team should test each of the above scenarios individually.
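The save-and-restore contract behind that rule can be sketched as follows. The class and hook names are invented for illustration; on a real platform these would map onto the OS lifecycle callbacks (e.g., Android’s `onPause`/`onResume`):

```python
class DraftScreen:
    """Sketch of interruption handling: persist in-progress state when the OS
    interrupts the app, restore it when the user returns."""

    def __init__(self, storage):
        self.storage = storage      # any dict-like persistent store
        self.draft = ""

    def type(self, text):
        self.draft += text

    def on_interrupt(self):         # phone call, alarm, low-battery popup...
        self.storage["draft"] = self.draft

    def on_resume(self):
        self.draft = self.storage.get("draft", "")

store = {}
screen = DraftScreen(store)
screen.type("Meet at 6pm")
screen.on_interrupt()              # incoming call arrives
screen = DraftScreen(store)        # app was killed and relaunched
screen.on_resume()
print(screen.draft)                # Meet at 6pm
```

The QA test cases then follow directly: trigger each interruption type, relaunch, and assert that the draft survived.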
Test on new phones launched to market
Each year, virtually every phone manufacturer will release new devices to market. Although, most of the time, these devices are just slight upgrades over the previous year’s model, it is critical to thoroughly test your app on the new devices.
After all, as users upgrade their phones, they will typically re-download all (or most of) the applications they used on their previous phone.
As a general rule, you should keep an eye out for any of the Android and iOS devices that are coming out each year, keep track of the current percentages of your users who own different types of phones and start testing early.
Remember, some phone releases inevitably come with new features and functionalities that QA testers should examine. For example, with the iPhone X release this month, Apple introduced a new machine learning core and FaceID. The device also came with the notorious notch and a safe zone for swiping, both of which are giving developers headaches as they play catch-up with the hardware and software changes arriving with the iPhone X.
Most changes from one phone generation to the next are not as drastic as those we saw with the iPhone X. However, we still recommend that all companies test their app on new phones, especially if internal data suggests your users are likely to upgrade. Solid solutions, like SIGOS, offer Day 1 support for new devices.
Performance and security testing
App performance and security are front of mind for virtually every user out there. As mentioned before, 88% of users will not return to an application if it is not fast enough. Also, this year, Google announced that slow and buggy apps will be penalized (downranked) in Google Play.
Another mandatory point for consideration, as other authors have argued, involves app security. This is not optional; it is a critical component of any app development process.
Here are a few points QA testers and business users should consider during the QA testing phase to ensure their app performs well and is secure.
Always do load testing to avoid crashing your app when it’s most popular among your users
Load testing is the QA process that helps testers understand how their app will behave under normal user load. It is used to understand what will slow an app down, or make it freeze, when customers use it in their day-to-day lives.
There are now multiple testing tools on the market that can test apps under expected and peak load conditions. To ensure an uninterrupted app experience, business users should decide which load tests to run, how many users the system should handle at any one time, and how many concurrent sessions the app must support before it is allowed to slow down.
In addition, companies should create specific goals and targets for load testing cases. That way, if a new app build performs outside the expected key performance indicators, it can be sent back to developers before it goes to production and is optimized.
Companies that do load testing ensure their app does not freeze or get force-stopped during reasonable use. Load testing also reveals the slow points and failure points of your own backend architecture and tells you how well it scales, so you can ensure your app is not penalized for being too slow.
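A minimal load-test harness looks like the sketch below. The backend is replaced by a stand-in function so the example is self-contained; in practice you would point the workers at a staging endpoint, and the 250 ms p95 target is an assumed KPI, not a standard:

```python
import random
from concurrent.futures import ThreadPoolExecutor

def fake_request() -> float:
    """Stand-in for one backend call; returns its latency in milliseconds."""
    return random.uniform(50, 200)

def load_test(concurrent_users: int, requests_per_user: int) -> float:
    """Fire requests from many workers at once and return the p95 latency."""
    with ThreadPoolExecutor(max_workers=concurrent_users) as pool:
        latencies = list(pool.map(lambda _: fake_request(),
                                  range(concurrent_users * requests_per_user)))
    latencies.sort()
    return latencies[int(len(latencies) * 0.95)]

TARGET_P95_MS = 250  # agreed KPI: fail the build if p95 exceeds this
p95 = load_test(concurrent_users=20, requests_per_user=10)
print("p95 latency:", round(p95), "ms ->",
      "PASS" if p95 <= TARGET_P95_MS else "FAIL")
```

The pass/fail line is what makes this a gate rather than a report: a build whose p95 exceeds the agreed target goes back to developers before it ever reaches production.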
Stress test your app so you know your critical mass
Stress testing is the process of loading an app with as much data as possible to learn its breaking point so you know and understand the upper capacity limits for an app. Nearly 80% of global companies with an app use internal stress testing to support different business objectives.
Stress testing can reveal the following factors:
- Memory leaks
- Critical issues
- Data loss
- Systems synchronization issues
- Resource loss bugs
Stress testing is no joking matter, especially during times when companies expect a lot of traffic to their application. A few years back, Bonobos.com crashed during Cyber Monday due to the large number of visitors accessing the site. Target experienced the same problem in 2015.
Especially during high-traffic times of year, all companies should stress test their apps thoroughly so that users can actually take advantage of them. Not doing so can lead to lost sales and high customer dissatisfaction.
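Finding the breaking point is usually automated: ramp the load until the system fails, then narrow down the last safe level. Here is a sketch of that search, with the real service replaced by a stand-in capacity function (the 10,000-operation limit is an arbitrary illustration):

```python
def service_capacity(load: int) -> bool:
    """Stand-in for a real service: pretend it copes up to 10,000 concurrent ops."""
    return load <= 10_000

def find_breaking_point(service, start: int = 1) -> int:
    """Double the load until the service fails, then binary-search for the
    largest load the service can still handle."""
    load = start
    while service(load):
        load *= 2
    low, high = load // 2, load   # last good load and first bad one
    while low + 1 < high:
        mid = (low + high) // 2
        if service(mid):
            low = mid
        else:
            high = mid
    return low

print(find_breaking_point(service_capacity))  # 10000
```

Knowing this number before Cyber Monday, rather than during it, is the whole point of the exercise: capacity planning starts from the measured limit, not a guess.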
This article methodically goes over the critical testing strategies you should consider when testing your mobile application. There are thousands of articles out there covering this topic, but we wanted to create one introductory article that any company can leverage to build a QA strategy from scratch, or to optimize current QA efforts and avoid further painful releases.
As argued in this article, successful companies employ a variety of tactics to ship bug-free apps to production. These strategies span business, QA, engineering, and stakeholder alignment, each with different processes and priorities from a testing point of view.
It is our belief that any company following these strategies will be able to build a more reliable and bug-free app. After all, if the work and money invested in creating an amazing mobile app is to be worthwhile, it’s critical to make sure the app actually works as expected when it goes to production.