Usability Evaluation Methods

Evaluation methods assess a product's usability, which includes the dimensions of usefulness, learnability, efficiency, and user satisfaction.

Authors classify evaluation methods differently. The table below presents four classification schemes, aligning approximately equivalent terms.

| Rosson and Carroll | Lewis and Rieman | Nielsen and Mack | Preece, Rogers and Sharp |
| --- | --- | --- | --- |
| Analytical methods | Evaluating without Users | Formal methods | Predictive / Modeling user's task performance |
| | | Informal methods | Predictive / Asking experts |
| Empirical methods | Evaluating with Users | Empirical methods | Usability testing |
| | | | Field studies |
| | | Automatic methods | |

Preece, Rogers and Sharp (PRS) also describe a "quick and dirty" evaluation paradigm, which generally seems to refer to informal empirical methods.

Here I follow the organization used by Rosson and Carroll and add references to our text.

Analytic methods

These methods are usually conducted by HCI specialists and do not involve human participants performing tasks; they rely largely on the specialists' judgment. Not only do these methods identify potential usability problems, they also provide an understanding of each problem.

Common methods include:

  • Heuristic evaluation. A usability specialist systematically reviews the interface, applying a list of guidelines called heuristics. Here are some examples from Nielsen's list (in Usability Inspection Methods, edited by Nielsen and Mack):
    • Match between system and real world. The interface should use the same language, task order, and procedures that are present in the users' environment.
    • Recognition rather than recall. The system should allow users to choose from options rather than requiring them to recall commands and options.
    • Flexibility and efficiency of use. The system should give users options for entering commands, such as accelerators (keyboard shortcuts) for experienced users.
  • Walkthroughs. These are task-based methods for checking whether a user would be able to figure out how to complete selected tasks. In a cognitive walkthrough, the HCI specialist asks four analysis questions for each action needed to complete the task. Walkthroughs primarily address learnability.
  • Keystroke-level model (KLM). Sometimes called the GOMS KLM, developed by Card, Moran and Newell. The task is broken down into primitive operators (keystrokes, pointing, mental preparation, and so on) whose standard times are summed to predict how long selected tasks would take. This method assumes expert, error-free usage; a small worked sketch follows this list.
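
To make the keystroke-level model concrete, here is a minimal sketch in Python. The operator times are typical published estimates from the KLM literature; the task sequence, function name, and file-deletion scenario are illustrative assumptions rather than anything specified in the text.

```python
# Minimal KLM sketch: predict expert task time by summing primitive operator times.
# Operator times below are typical published estimates in seconds; exact values
# vary by source and by user skill, so treat the result as a rough prediction.
KLM_OPERATORS = {
    "K": 0.28,  # press a key or button (average skilled typist)
    "P": 1.10,  # point with the mouse to a target on screen
    "H": 0.40,  # home hands between keyboard and mouse
    "M": 1.35,  # mental preparation before an action
}

def predict_time(operator_sequence):
    """Return the predicted expert completion time (seconds) for a task."""
    return sum(KLM_OPERATORS[op] for op in operator_sequence)

# Hypothetical task: move a hand to the mouse, point at a file icon, click it,
# then move back to the keyboard and press the Delete key.
sequence = ["M", "H", "P", "K", "M", "H", "K"]
print(f"Predicted time: {predict_time(sequence):.2f} s")  # about 5.2 s
```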

Nielsen and Mack divide these methods into two categories: informal methods (e.g. heuristic evaluation and cognitive walkthrough) and formal methods (e.g. the keystroke-level model).

Empirical methods

Empirical methods involve collecting data on human usage. There are direct methods (recording actual usage) and indirect methods (recording accounts of usage).

Direct methods (Observing users)

  • Usability test. A practitioner asks representative users to complete prescribed tasks and records their behavior. This is perhaps the most common evaluation method; a sketch of summarizing typical measurements follows this list.
  • Field observation
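
As an illustration of the quantitative data a usability test can yield, here is a minimal sketch in Python that summarizes task success rate and completion time. The participant records, field names, and numbers are made up for illustration and are not drawn from the text.

```python
# Minimal sketch: summarize common usability-test measures for a single task.
# A real test would log success, completion time, and errors per participant.
results = [
    {"participant": "P1", "success": True,  "time_s": 74,  "errors": 1},
    {"participant": "P2", "success": True,  "time_s": 58,  "errors": 0},
    {"participant": "P3", "success": False, "time_s": 120, "errors": 3},
    {"participant": "P4", "success": True,  "time_s": 66,  "errors": 0},
]

success_rate = sum(r["success"] for r in results) / len(results)
completed_times = [r["time_s"] for r in results if r["success"]]
mean_time = sum(completed_times) / len(completed_times)

print(f"Task success rate: {success_rate:.0%}")                     # 75%
print(f"Mean time for successful completions: {mean_time:.0f} s")   # 66 s
```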

Indirect methods (Asking users)

  • Interview
  • Questionnaire

In addition to their use in the Needs Analysis phase, these methods can be conducted at the end of a usability test to gather the users' opinions of the product's usability, including its usefulness and their satisfaction with it.
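
One widely used post-test questionnaire is the System Usability Scale (SUS); it is not named in the text, so the following Python sketch is only an illustration of how questionnaire responses can be turned into a single score. The example responses are made up.

```python
# Minimal sketch of System Usability Scale (SUS) scoring. Each of the ten items
# is rated 1-5; odd items are positively worded, even items negatively worded.
def sus_score(responses):
    """Return a 0-100 SUS score from ten 1-5 Likert responses (item 1 first)."""
    if len(responses) != 10:
        raise ValueError("SUS has exactly ten items")
    total = 0
    for item, rating in enumerate(responses, start=1):
        total += (rating - 1) if item % 2 == 1 else (5 - rating)
    return total * 2.5

# Hypothetical responses from one participant.
print(sus_score([4, 2, 5, 1, 4, 2, 5, 1, 4, 2]))  # 85.0
```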

Automatic methods

A link checker is one example. More automated evaluation tools are likely to appear in the future.
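
To illustrate what an automatic method can look like, here is a minimal link-checker sketch using only the Python standard library. The page URL is a placeholder and the LinkExtractor and check_links names are illustrative assumptions; a production checker would also handle redirects, robots.txt, rate limiting, and malformed markup more carefully.

```python
# Minimal link-checker sketch: fetch one page, extract its anchor links, and
# report any that fail to load. Standard library only; error handling is basic.
from html.parser import HTMLParser
from urllib.error import HTTPError, URLError
from urllib.parse import urljoin
from urllib.request import urlopen

class LinkExtractor(HTMLParser):
    """Collect href values from <a> tags."""
    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href" and value:
                    self.links.append(value)

def check_links(page_url):
    """Print the status of every HTTP(S) link found on page_url."""
    html = urlopen(page_url, timeout=10).read().decode("utf-8", errors="replace")
    parser = LinkExtractor()
    parser.feed(html)
    for href in parser.links:
        url = urljoin(page_url, href)  # resolve relative links
        if not url.startswith(("http://", "https://")):
            continue  # skip mailto:, javascript:, and similar schemes
        try:
            status = urlopen(url, timeout=10).status
            print(f"OK   {status}  {url}")
        except (HTTPError, URLError) as err:
            print(f"FAIL {err}  {url}")

check_links("https://example.com/")  # placeholder page to check
```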