Basically, heuristic evaluation is a fancy name for having a bunch of experts scrutinize the interface and evaluate each of its elements against a list of commonly accepted principles, called heuristics. Early lists of heuristics were quite long, resulting in tedious evaluation sessions and tired experts. These long lists rather defeated the purpose of the method, which was to save time and money compared with full-blown usability testing. Nielsen distilled his list down to ten heuristics that have served him and others well in evaluating designs.
Some caveats apply when selecting your experts. You'll want experts who are, well, experts, and know what they're doing. These folks should have a broad background in usability evaluation and human-computer interface (HCI) design. It might be hard to find an expert who knows the subject matter of the product ("domain knowledge") in addition to HCI, but if you can, you'll get a lot out of that person. For example, in evaluating do-it-yourself tax software, could you find a person who is an expert in both HCI and tax accounting?
The expert will go through the interface at least twice, looking at each element of the interface (for example, each menu item, button, control, or affordance) and evaluating its design, location, implementation, and so on against the list of heuristics.
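If it helps to picture the bookkeeping, here's a minimal sketch in Python of one evaluator logging element-by-heuristic checks against Nielsen's ten heuristics. The Finding record and the sample entries are made up for illustration; they aren't part of any standard tool. The 0-to-4 severity scale is Nielsen's usual one, from "not a problem" up to "usability catastrophe."

    from dataclasses import dataclass

    # Nielsen's ten heuristics, as short labels.
    NIELSEN_HEURISTICS = [
        "Visibility of system status",
        "Match between system and the real world",
        "User control and freedom",
        "Consistency and standards",
        "Error prevention",
        "Recognition rather than recall",
        "Flexibility and efficiency of use",
        "Aesthetic and minimalist design",
        "Help users recognize, diagnose, and recover from errors",
        "Help and documentation",
    ]

    @dataclass
    class Finding:
        element: str    # e.g., "Save button" or "File menu"
        heuristic: str  # the heuristic this element violates
        severity: int   # 0 (not a problem) through 4 (usability catastrophe)
        note: str       # the evaluator's comment

    # One evaluator's log from a pass through the interface (made-up data).
    log = [
        Finding("Delete button", "Error prevention", 3,
                "No confirmation before destroying the user's data"),
        Finding("Progress dialog", "Visibility of system status", 2,
                "Long operations give no feedback at all"),
    ]

    for f in log:
        # Keep every finding tied to the agreed-upon list of heuristics.
        assert f.heuristic in NIELSEN_HEURISTICS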
The evaluators can capture and report their findings in a few different ways:

Structured Report: The expert writes up a formal report about his or her findings. This is probably the easiest to digest, since the evaluator will have compiled all of his or her notes and summarized everything in the report, but it might delay the turnaround time.
Verbalized Findings: While evaluating the interface, the expert dictates his or her findings to another person. Although this adds the cost of another person's time, it can surface problems that might be missed if the experts had to write everything down themselves. Plus, unstructured comments like "What the #{*&%+@$ was the designer thinking?!?" can get captured this way.
Categories: Before sending the experts off to do their evaluations, everybody agrees on specific categories of problems to log. While this makes the findings really easy to analyze, it probably misses some problems that the other methods would catch.
The experts usually reconvene afterward to discuss their individual findings. Most of the time, you'll get back a summary report of all the usability problems found, even if individual evaluators disagreed on whether a particular item was a problem. Most reports list the heuristic(s) each problem violates, giving you an idea of how to fix it.
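To give a rough idea of how those individual logs roll up into a summary report, here's a small hypothetical Python sketch (real teams often just use a spreadsheet) that tallies how many evaluators flagged each problem and notes the heuristic it violates. The evaluator names and findings are invented for illustration.

    from collections import defaultdict

    # Each evaluator's findings: (problem description, heuristic violated).
    evaluator_logs = {
        "Evaluator A": [
            ("No undo after delete", "User control and freedom"),
            ("Jargon in error dialogs", "Match between system and the real world"),
        ],
        "Evaluator B": [
            ("No undo after delete", "User control and freedom"),
        ],
        "Evaluator C": [
            ("Jargon in error dialogs", "Match between system and the real world"),
            ("Inconsistent button labels", "Consistency and standards"),
        ],
    }

    # Count which evaluators reported each (problem, heuristic) pair.
    tally = defaultdict(set)
    for evaluator, findings in evaluator_logs.items():
        for problem, heuristic in findings:
            tally[(problem, heuristic)].add(evaluator)

    # Print the summary, most-reported problems first.
    for (problem, heuristic), who in sorted(tally.items(),
                                            key=lambda kv: -len(kv[1])):
        print(f"{len(who)}/{len(evaluator_logs)} evaluators: "
              f"{problem} (violates: {heuristic})")

Problems flagged by several evaluators independently are usually the ones worth fixing first.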
Instone, Keith, "Site Usability Evaluation".
Instone, Keith, "Usability Heuristics for the Web".
Nielsen, Jakob, "Heuristic Evaluation: How-To".
Nielsen has a bunch of other papers on his site too at http://www.useit.com/papers/heuristic/. Check them out.
Nielsen, Jakob, and Mack, Robert L., eds., Usability Inspection Methods, John Wiley and Sons, New York, NY, 1994. ISBN 0-471-01877-5 (hardcover).