
Software Quality Assurance & Usability Testing

The role of User Testing in Software Quality Assurance.


Table of Contents:

1. The Role of User Testing in Software Quality Assurance
   1.1. Introduction
   1.2. What is 'Usability Testing'?
   1.3. Why Usability Testing Should Be Included as an Element of the Testing Cycle
2. How to Approach Usability Testing
   2.1. How to Implement Usability Testing
   2.2. The Benefits of Usability Testing
   2.3. The Role and Benefits of "Usability Testers"
3. Summary
4. Sources of Reference & Internet Links

1. The Role of User Testing in Software Quality Assurance


1.1. Introduction

My first introduction to Usability Testing came when I was a new tester in the lending department of a large financial institution. They had developed the first of a set of loan management applications (and, almost as an afterthought, decided they'd better test it). The application itself was of high quality. Technologically speaking, the software was a big step forward, away from paper forms and huge filing cabinets to an online system that would manage and track all actions previously recorded by hand. When version 1.0 was ready, it was piloted in one of the larger regional offices, the intention being to then release it gradually nationwide. However, the pilot implementation was a disaster, and the release was postponed. The intended users wouldn't use the application, and went back to doing things by hand. It quickly became clear that the reason was not that the software didn't work, but that they couldn't work the software. At first it was assumed that this was because it was such a technological leap forward - i.e. the users were unfamiliar with computers as a whole, resistant to change and reluctant to accept new technology. However, this was not the main problem - the problem was with the software itself.

A post-mortem was then carried out on the software, and I was involved as a representative of the test team. The investigation discovered that the software was not "user-friendly". Yet I, as a tester, had not considered usability or operability to be a problem. We then sat down with several of the users and got them to go through the application with us screen by screen. This showed that testers have a different viewpoint from users. I was so familiar with the system that I didn't consider some convoluted keystrokes to be a problem, until I saw them from a new user's perspective. It turned out to be a very important lesson for me - and indeed would be very educational for any tester or developer.

The lessons learnt from that exercise were then applied to all further developments, and saw the addition of "usability testing" to the system test cycle. The software was re-worked and re-released. The revamped version, although containing mostly cosmetic (non-functional) changes, proved to be a success - although some damage had been done: there was extra reluctance to accept the software because users had "heard that it wasn't much good".
 

1.2. What is 'Usability Testing'?

'Usability Testing' is defined as: "In System Testing, testing which attempts to find any human-factor problems". [1] A better description is "testing the software from a user's point of view". Essentially it means testing software to ensure that it is 'user-friendly', as distinct from testing the functionality of the software. In practical terms it includes ergonomic considerations, screen design, standardisation, etc.
 

1.3. Why Usability Testing Should Be Included as an Element of the Testing Cycle

I believe that QA have a certain responsibility for usability testing. There are several factors involved, but the main reason is the 'perspective differences' or different viewpoints of the various teams involved in the development of the software.

To demonstrate, assume a new application is developed that conforms exactly, 100%, to the design specifications - yet, unfortunately, it is not fit for use, because it is so difficult or awkward to use, or ends up so complicated, that the users don't want it or won't use it. Yet it is what the design specified. This has happened, and it will happen again.

I remember a diagram that vividly showed this: it depicted the design of a swing, with panels for "what the customer ordered", "what the development team built", "what the engineers installed" and so on, illustrating the different perspectives of the various people involved.

This is especially true where the business processes that drive the design of the new application are very complex (for example bespoke financial applications).

Secondly, when a totally new or custom application is being developed, how many of the coders themselves (1) have actual first-hand experience of the business processes/rules that form the basis of the application being developed, and/or (2) will actually end up using the finished product? Answer: usually none. (3) How many of the test team have first-hand experience or expert knowledge of the underlying business logic/processes? Answer: usually minimal.

Even if the testers are experts in their area, they may miss the big picture, so I think that usability testing is a sub-specialty best not left to the average tester. Specific personnel should be made responsible for Usability Testing.

Thirdly, apart from the usual commercial considerations, the success of some new software will depend on how well it is received by the public - whether they like the application. Obviously, if the software is bug-ridden, its popularity will suffer; but even a high-quality development's popularity will still depend on its usability (albeit to a lesser degree). It would be a pity (though it wouldn't be the first time) if an application failed because it wasn't readily accepted - because it was not user-friendly, or because it was too complex or difficult to use.
 

2. How to Approach Usability Testing

2.1. How to Implement Usability Testing

The best way to implement usability testing is twofold - first from a design and development perspective, then from a testing perspective.

From a design viewpoint, usability can be tackled by: (1) Including actual users as early as possible in the design stage. If possible, a prototype should be developed; failing that, screen layouts and designs should be reviewed on-screen and any problems highlighted. The earlier potential usability issues are discovered, the easier they are to fix.

(2) Following on from the screen reviews, standards should be documented, e.g. screen layout and labelling/naming conventions. These should then be applied throughout the application.
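
One lightweight way to keep such standards applied consistently is to record them in a shared module that both the UI code and automated checks can import. This is only a sketch; the names and values below are invented for illustration, not taken from any real standard:

    // Hypothetical shared UI standards module (TypeScript).
    // All values here are illustrative assumptions, not real requirements.
    export const UI_STANDARDS = {
      labelCase: 'Title Case',       // labelling convention for field captions
      dateFormat: 'DD/MM/YYYY',      // one date format across all screens
      minButtonHeightPx: 44,         // minimum click/touch target height
      maxFieldsPerScreen: 10,        // keep the main input screens uncluttered
    };

Screens and tests can then reference UI_STANDARDS rather than hard-coding values, so a change to the standard propagates everywhere.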

Where an existing system or systems are being replaced or redesigned, usability issues can be avoided by using similar screen layouts - if users are already familiar with the layout, the new system will present less of a challenge and be more easily accepted (provided, of course, that the layout is not why the system is being replaced).

(3) Including provisions for usability within the design specification will assist later usability testing. Usually for new application developments, and nearly always for custom application developments, the design team should either have an excellent understanding of the business processes/rules/logic behind the system being developed, or include users with first-hand knowledge of them. However, although they design the system, they rarely include specific usability provisions in the specifications.

An example of a usability consideration within the functional specification may be as simple as specifying a minimum size for the 'Continue' button.
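
Where such a rule exists, it can be verified in an automated UI test. The following is only a sketch: the Playwright tooling, the '#continue' selector, the placeholder URL and the 44x120 pixel minimums are all assumptions made for illustration:

    // Hypothetical Playwright test asserting a minimum button size.
    import { test, expect } from '@playwright/test';

    test("'Continue' button meets the minimum size in the spec", async ({ page }) => {
      await page.goto('https://example.com/apply');      // placeholder URL
      const box = await page.locator('#continue').boundingBox();
      expect(box).not.toBeNull();
      expect(box!.height).toBeGreaterThanOrEqual(44);    // assumed minimum height (px)
      expect(box!.width).toBeGreaterThanOrEqual(120);    // assumed minimum width (px)
    });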

(4) At the unit testing stage, there should be an official review of the system, where most of these issues can be dealt with more easily. At this stage, with screen layout and design already reviewed, the focus should be on how a user navigates through the system. This should identify potential issues such as having to open an additional window where one would suffice. More commonly, though, the issues identified at this stage relate to the default or most common actions. For example, a system designed to cope with multiple eventualities may have 15 fields on the main input screen, yet 7 or 8 of these fields are only required in rare instances. These fields could then be hidden unless triggered, or moved to another screen altogether.
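
A minimal sketch of the "hidden unless triggered" idea, assuming a plain HTML form; the checkbox, the field IDs and the CSS class are invented for illustration:

    // Hypothetical example: reveal rarely-used fields only when a trigger
    // checkbox (e.g. 'This loan has a guarantor') is ticked.
    const trigger = document.querySelector<HTMLInputElement>('#has-guarantor');
    const rareFields = document.querySelectorAll<HTMLElement>('.guarantor-only');

    if (trigger) {
      trigger.addEventListener('change', () => {
        // Keep the main screen down to the common fields; show the rest on demand.
        rareFields.forEach(field => {
          field.hidden = !trigger.checked;
        });
      });
    }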

(5) All the previous actions could be performed at an early stage if prototyping is used. This is probably the best way to identify potential usability/operability problems. Nothing lessens the importance of user-centred design, but you can solve usability problems before they reach the QA stage (thereby cutting the cost of reworking the product) by using prototypes (even paper prototypes) and other "discount usability" testing methods.
 

(6) From a testing viewpoint, usability testing should be added to the testing cycle by including a formal "User Acceptance Test". This is done, when the software is near release quality, by getting several actual users to sit down with the software and attempt to perform "normal" working tasks. I say "normal" working tasks because testers will have been exercising the system using test cases - i.e. not from a user's viewpoint. User testers must always take the customer's point of view in their testing.
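
To make "normal" working tasks concrete, a session can be organised around a short list of real day-to-day jobs, with an observer recording the outcome per task. Everything in this sketch (the task wording, the fields, the sample data) is invented for illustration:

    // Hypothetical UAT session record: tasks mirror real work, not test steps.
    interface UatTask {
      description: string;     // phrased as the user's actual job
      completed: boolean;      // finished without help from the observer?
      minutesTaken: number;    // time on task
      observations: string[];  // where the user hesitated or got lost
    }

    const session: UatTask[] = [
      { description: 'Register a new loan application for an existing customer',
        completed: true, minutesTaken: 6, observations: [] },
      { description: 'Record a repayment and print the receipt',
        completed: false, minutesTaken: 14,
        observations: ['Expected the print option on the main screen'] },
    ];

The observations column is often the most valuable output: it captures exactly the "couldn't work the software" problems described in the introduction.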

User Acceptance Testing (UAT) is an excellent exercise: not only will it give you the users' initial impression of the system and tell you how readily they will take to it, it will also tell you whether the end product matches their expectations, so that there are fewer surprises at release.

(7) Another option to consider is to include actual users as testers within the test team. One financial organization I was involved with seconded actual users to the test team as "Business Experts". I found their input as actual "user testers" invaluable.

(8) The final option is to include, as user testers, people who are eventually going to be (a) using the system themselves and/or (b) responsible for training users and effectively "selling" it to them.
 

2.2. The Benefits of Usability Testing

The benefits of including usability considerations in the development of computer software are immense, but often unappreciated. They are too numerous to list; I'd say it's similar to the coat of paint on a new car - the car will work without the paint, but it doesn't look good. To summarise, usability work makes the software more "user friendly". The end result will be:

  • Better quality software.
  • Software that is easier to use.
  • Software that is more readily accepted by users.
  • A shorter learning curve for new users.


2.3. The Role and Benefits of "Usability Testers"

Apart from discovering and preventing possible usability issues, the addition of 'Usability Testers' to the test team can have a very positive effect on the team itself. Several times I have seen testers become so familiar with the "quirks" of the software that they fail to report a possible error or usability issue, often because they think "it's always been like that" or "isn't that the way it's supposed to be?". These problems can be alleviated by including user testers in the test team.

They can also help to:

  • Refocus the testers and increase their awareness of usability issues by providing a fresh viewpoint
  • Provide and share their expert knowledge, training the testers in the background and purpose of the system
  • Provide a "realistic" element to the testing, so that test scenarios are not just "possible permutations".

 

 

3. Summary

1. Usability evaluation should be incorporated earlier in the software development cycle, to minimize resistance to changes in a hardened user interface;
2. Organizations should have an independent usability evaluation of software products, to avoid the temptation to overlook problems in order to release the product;
3. Multiple categories of dependent measures should be employed in usability testing, because subjective measurement is not always consonant with user performance (see the sketch after this list); and
4. Even though usability testing at the later stages of development may not impact software changes, it is useful for pointing out areas where training is needed to overcome deficiencies in the software.
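
As a rough illustration of point 3, a usability test can track objective performance and subjective satisfaction separately and compare them. The field names and numbers below are invented for the sketch:

    // Hypothetical per-user results mixing objective and subjective measures.
    interface UserResult {
      tasksCompleted: number;      // objective: tasks finished, out of 10
      avgMinutesPerTask: number;   // objective: time on task
      satisfaction: number;        // subjective: 1 (poor) to 5 (excellent)
    }

    const results: UserResult[] = [
      { tasksCompleted: 9, avgMinutesPerTask: 4.0, satisfaction: 3 },
      { tasksCompleted: 5, avgMinutesPerTask: 9.5, satisfaction: 4 },  // likes it, struggles with it
    ];

    // Averaging each measure separately shows why one category is not enough:
    // a user may rate the system highly yet perform poorly, or vice versa.
    const mean = (xs: number[]) => xs.reduce((a, b) => a + b, 0) / xs.length;
    console.log('avg tasks completed:', mean(results.map(r => r.tasksCompleted)));
    console.log('avg satisfaction:', mean(results.map(r => r.satisfaction)));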

In my experience, the greater the involvement of key users, the more pleased they will be with the end product. Getting management to commit their key people to this effort can be difficult, but it makes for a better product in the long run.
 

4. Sources of Reference & Internet Links

4.1. Publications

"The Case for Independent Software Usability Testing: Lessons Learned from a Successful Intervention". Author: David W. Biers.

Originally published: Proceedings of the Human Factors Society 33rd Annual Meeting, 1989, pp. 1218-1222

Republished: G. Perlman, G. K. Green, & M. S. Wogalter (Eds.) Human Factors Perspectives on Human-Computer Interaction: Selections from Proceedings of Human Factors and Ergonomics Society Annual Meetings, 1983-1994, Santa Monica, CA: HFES, 1995, pp. 191-195.

http://www.acm.org/~perlman/hfeshci/Abstracts/89:1218-1222.html
 

NASA Usability Testing Handbook
http://aaa.gsfc.nasa.gov/ViewPage.cfm?selectedPage=48&selectedType=Product

 

 
1. A Practical Guide to Usability Testing.
Joseph S. Dumas & Janice C. Redish. Norwood, NJ: Ablex Publishing, 1993. ISBN 0-89391-991-8. This step-by-step guide provides checklists and offers insights for every stage of usability testing.
2. Usability Engineering.
Jakob Nielsen. Boston, MA: Academic Press, 1993. ISBN 0-12-518405-0. This book immediately sold out when it was first published. It is a practical handbook for people who want to evaluate systems.
3. Usability Inspection Methods.
Jakob Nielsen & Robert L. Mack (Eds.). New York: John Wiley & Sons, 1994. ISBN 0-471-01877-5. This book contains chapters contributed by experts on usability inspection methods such as heuristic evaluation, cognitive walkthroughs, and others.
4. Cost-Justifying Usability.
Randolph G. Bias & Deborah J. Mayhew (Eds.). Boston: Academic Press, 1994. ISBN 0-12-095810-4. This edited collection contains 14 chapters devoted to demonstrating the importance of usability evaluation to the success of software development.
5. Usability in Practice: How Companies Develop User-Friendly Products.
Michael E. Wiklund (Ed.). Boston: Academic Press, 1994. ISBN 0-12-751250-0. This collection of contributed chapters describes the usability practices of 17 companies: American Airlines, Ameritech, Apple, Bellcore, Borland, Compaq, Digital, Dun & Bradstreet, Kodak, GE Information Services, GTE Labs, H-P, Lotus, Microsoft, Silicon Graphics, Thompson Consumer Electronics, and Ziff Desktop Information. It amounts to the broadest usability lab tour ever.

[1] MDA computing glossary - http://www.mdagroup.com/computing/usabilit.htm
 

 
