Self-Reporting Logs 

What is it?

Self-reporting logs are paper-and-pencil journals in which users are asked to log their actions and observations while interacting with a product. Like journaled sessions, this technique allows you to perform user evaluation at a distance. Unlike journaled sessions, though, this technique requires much more work on the part of your subject users.

You'd use journaled sessions when you need detailed information from the remote tests; for example, the actual mouse movements or the sequence of dialog boxes and menu items the user accessed. Obviously, asking the user to record all of their actions in a log, down to each individual click, is out of the question. (Although if you're lucky enough to find someone meticulous enough to do that, ask yourself: is that person really representative of your user population? Good luck...)
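To see what that level of detail looks like, consider how a journaled session's instrumented prototype does the recording automatically. The following sketch is a hypothetical Python/Tkinter example (the buttons, file name, and log format are all made up, not from any real product) that appends a timestamped record of every click to a file; it's exactly the kind of record you'd never ask a human evaluator to keep by hand.

import time
import tkinter as tk

# Hypothetical sketch: the automatic click logging an instrumented
# journaled-session prototype might do. The widgets, file name, and
# log format below are made-up examples.

LOG_FILE = "session_journal.log"

def log_click(event):
    """Append a timestamped record of each click to the journal file."""
    with open(LOG_FILE, "a") as f:
        f.write(f"{time.strftime('%H:%M:%S')}  click on "
                f"{event.widget.winfo_class()} at ({event.x}, {event.y})\n")

root = tk.Tk()
root.title("Prototype")
tk.Button(root, text="Save").pack(padx=40, pady=10)
tk.Button(root, text="Print").pack(padx=40, pady=10)

# Record every left-button click anywhere in the prototype window.
root.bind_all("<Button-1>", log_click)
root.mainloop()

A self-reporting log trades away all of that precision for simplicity: paper, pencil, and the user's own words.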

Self-reporting logs, therefore, are best used when you don't have the time or resources to provide the interactive package required for journaled sessions, or when the level of detail provided by journaled sessions isn't needed. For example, you might want just general perceptions and observations from a broad section of users.

The main disadvantage of this technique is that there is no observer to "see" what the user is doing--you miss the user's facial expressions, and even the spoken comments inadvertently made during difficult portions of the session.

How do I do it?

Provide users with a prototype of the product, a script of the tasks they are to perform with it, and a journal in which to record their tasks and observations. It helps to build stopping points into the test tasks where users can pause and write down their observations.

Of course, provide a pre-paid mailing envelope for your evaluators to return their log.
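The journal itself can be as simple as a printed page per task, with a few prompts and some ruled lines. Here's a minimal, hypothetical Python sketch of one way to generate such a log from your task script; the tasks and prompts below are placeholders, so substitute your own.

# Hypothetical sketch: build a printable self-reporting log from a task
# script. The tasks and prompts are placeholders for your own test plan.

TASKS = [
    "Install the product",
    "Create a new document and save it",
    "Print the document you just created",
]

PROMPTS = [
    "What did you expect to happen?",
    "What actually happened?",
    "Anything confusing or frustrating?",
]

def build_log(tasks, prompts):
    """Return the printable log text, one section per task."""
    lines = ["SELF-REPORTING LOG",
             "Name: ______________________    Date: ______________", ""]
    for number, task in enumerate(tasks, start=1):
        lines.append(f"Task {number}: {task}")
        lines.append("-" * 60)
        for prompt in prompts:
            lines.append(f"  {prompt}")
            lines.extend(["  " + "_" * 56] * 2)   # ruled lines for writing
        lines.append("")                          # stopping point between tasks
    return "\n".join(lines)

if __name__ == "__main__":
    with open("self_report_log.txt", "w") as f:
        f.write(build_log(TASKS, PROMPTS))

Print one copy per evaluator and send it along with the task script and the return envelope.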

When should I use this technique?

This technique is best used in the early stages of development--probably even pre-development--when the information you're attempting to gather is more preferential than empirical. You'll want to ensure that your user pool is straightforward and honest, so you can assume their logs actually depict what they'd do with the product.

Who can tell me more?

Click on any of the following links for more information:

Castillo, José, Remote Usability Evaluation Home Page, 1998.

José has a ton of remote evaluation stuff on his page.

Nielsen, Jakob, Usability Engineering, 1993, Academic Press/AP Professional, Cambridge, MA
ISBN 0-12-518406-9 (paper)


All content copyright © 1996 - 2019 James Hom