
Performance Measurement 

What is it?

Some usability tests are designed to gather hard, quantitative data. Most of the time this data takes the form of performance metrics: how long does it take to select a block of text with a mouse, touchpad, or trackball? How does the placement of the backspace key influence the error rate?

Often these metrics are used as goals during the design of a product. Goals can be stated as stipulations, for example, "Users shall be able to connect to the Internet without errors or having to call the toll-free number," or "75% of users shall be able to complete the basic task in less than one hour." These benchmarks are devised during initial usability testing, either of a previous release or of a competitor's product.
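A benchmark like the second one above is easy to check mechanically once you have the data. The sketch below uses invented completion times for eight participants; the numbers and the 75%/one-hour thresholds are taken from the example goal, not from any real study.

```python
# Check the benchmark "75% of users shall be able to complete the
# basic task in less than one hour." The times below are invented.
completion_minutes = [42, 55, 61, 38, 47, 72, 50, 44]  # one entry per participant

under_an_hour = sum(1 for t in completion_minutes if t < 60)
fraction = under_an_hour / len(completion_minutes)

print(f"{fraction:.0%} of users finished in under an hour")
print("Goal met" if fraction >= 0.75 else "Goal not met")
```

With this sample, six of eight participants finish in under an hour, so the 75% goal is just met.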

How do I do it?

You begin by following the basic usability test concepts of determining a purpose, identifying test objectives, designing the tests, and running the experiment. For performance metrics, though, consider the following additional issues:

Objectives must be quantifiable

As before, the test objectives have to be expressed in testable terms, but when measuring performance, they also have to be quantifiable. For example, you could ask the question, "What's more efficient, keyboard shortcuts or toolbar buttons?" A question worded this way could be tested with two interfaces, one using keyboard shortcuts and the other using buttons. You'd measure each user's performance by timing how long it took to execute a number of commands, and by logging error rates.
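The measurement itself can be sketched as a small harness: time each trial, record the condition, elapsed time, and error count, then summarize per condition. The trial data below is invented for illustration; in a real test it would come from observed sessions.

```python
import time
from statistics import mean

def timed_trial(task, *args):
    """Time a single task; returns (elapsed seconds, task result)."""
    start = time.perf_counter()
    result = task(*args)
    return time.perf_counter() - start, result

# One record per trial: which interface the participant used, how long
# the command sequence took, and how many errors they made.
# These numbers are invented for illustration.
trials = [
    {"condition": "shortcuts", "seconds": 12.0, "errors": 0},
    {"condition": "shortcuts", "seconds": 11.0, "errors": 1},
    {"condition": "buttons",   "seconds": 15.5, "errors": 2},
    {"condition": "buttons",   "seconds": 14.5, "errors": 1},
]

def summarize(condition):
    """Mean time and mean error count for one interface condition."""
    times = [t["seconds"] for t in trials if t["condition"] == condition]
    errs = [t["errors"] for t in trials if t["condition"] == condition]
    return mean(times), mean(errs)

for cond in ("shortcuts", "buttons"):
    secs, errs = summarize(cond)
    print(f"{cond}: mean {secs:.1f} s, mean {errs:.1f} errors")
```

Keeping the raw per-trial records, rather than only the summary means, lets you re-analyze the data later (for example, to check variance or exclude an outlier).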

Experimental design is really important

Since the goal of a performance measurement test is to gather valid quantifiable data, your experimental design must be valid as well. Quantitative tests assume that your change in the independent variable (for example, the presence of keyboard shortcuts or toolbar buttons) influences the dependent variable (the time it takes to execute commands using one of the two options). This influence is called the experimental effect. However, if other factors are introduced into the design, the effect may be confounded: the other factors taint the result, so it is no longer statistically valid. Your design must take possible confounding factors into account and eliminate those sources of tainting.

Data doesn't tell the whole story

Testing solely for the purpose of procuring performance data doesn't seem to be as common as it used to be, for several reasons. Performance testing requires very rigorous test designs and extensive resources, and most companies don't have the time or money to do research of this kind. Also, the things tested are often at a very granular level. Does it really matter if it's half a second faster to use a keyboard shortcut than a toolbar button? Maybe: if you're designing call center software, then half a second saved per call, amortized over thousands of operators across the country, could add up to millions of dollars per year. But for most office productivity applications, half a second isn't really important.
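The call-center arithmetic is easy to run through. Every figure below (operator count, call volume, working days, labor cost) is an invented assumption chosen only to show how a half-second saving scales:

```python
# Rough arithmetic behind the call-center example. All of these
# figures are invented assumptions, not measured data.
operators = 50_000      # operators nationwide
calls_per_day = 150     # calls each operator handles per day
work_days = 250         # working days per year
seconds_saved = 0.5     # seconds saved per call by the faster widget
hourly_cost = 20.0      # loaded labor cost per operator-hour, in dollars

hours_saved = operators * calls_per_day * work_days * seconds_saved / 3600
annual_savings = hours_saved * hourly_cost

print(f"{hours_saved:,.0f} operator-hours, roughly ${annual_savings:,.0f} per year")
```

Under these assumptions the half second is worth several million dollars a year; in a single office of twenty people, the same arithmetic yields a figure too small to justify a rigorous performance test.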

When should I use this technique?

Performance measurement is used in the initial stages of design to provide benchmarks for the design process. It's also used during the design cycle to measure the work done thus far against those benchmarks.

Who can tell me more?

The following references provide more information:

Dumas, Joseph S., and Redish, Janice C., A Practical Guide to Usability Testing, 1993, Ablex, Norwood, NJ. ISBN 0-89391-991-8 (paper)

Lindgaard, Gitte, Usability Testing and System Evaluation: A Guide for Designing Useful Computer Systems, 1994, Chapman and Hall, London, U.K. ISBN 0-412-46100-5

Rubin, Jeffrey, Handbook of Usability Testing, 1994, John Wiley and Sons, New York, NY. ISBN 0-471-59403-2 (paper)

All content copyright © 1996 - 2016 James Hom