[Sigia-l] User Test Cost - Does this sound reasonable?

Marc Rettig mrettig at well.com
Tue Jun 1 10:05:16 EDT 2004


> I'd be interested to find out if other people have tried alternatives to
> writing reports, especially for distributed teams? What success have you
> had and what obstacles have you encountered?

A couple of things come to mind.

The first is....
For a series of tests that stretched over 18 months for the same team, I wrote
reports, but we built a little "weave the results into the process" routine around
that activity.

1. The day after the test, the core team (myself as facilitator, lead IA, lead
tech, lead graphic designer) went through all of our notes and made a sticky note
for each observation. By the end of the day, there would be hundreds of notes on
the wall. We would do a rough clustering and settle on a very small number of
things that we could all agree definitely needed attention. This was a)
exhausting, b) tedious, and c) kind of fun.

2. I went off and refined the clusters, then wrote a report organized around
them. As it happened, the clusters often mapped either to pages or
transactions, but sometimes they were more general. For example, we had insights
about the use of photographs that had implications for the whole site.

The report was structured as a set of insights and implications. The insights
presented the observations from the test, giving the team ammunition should they
need to justify their priorities to stakeholders. The implications were messages
to the team about what needed attention, based on the team's sticky-note
conversations and my own recommendations. There were *no* solutions in this
document.

3. The report was used as a sort of checklist for planning the next phase of work
for the team. They worked through each block of implications, and either
conceived possible solutions or assigned solution-generation out as a task. It
was gratifying to me to come back two weeks after the test and see lots of copies
of the report all dog-eared, torn apart, scribbled in, and otherwise mutilated.
They were real working documents.

Since the same people were doing these tests over and over again, with practice
we made two modifications to the above. Because of (a) and (b) above, people
started capturing observations directly onto stickies during the test. That saved
some time. It probably decreased the "everybody heard about everything" factor
you get from doing it the next day, but we were in a hurry and had an aversion to
tedium. The other change was to do something other than clustering.
We covered a loooong conference table with printouts of each page being tested.
Then we attached each sticky directly to the page the observation had to do with,
and laid out blank sheets for general issues.

This is the long way of agreeing with Jared. It's key to get the results into the
flow of the team's work. Otherwise, why bother? In my past practice there has
usually been a report, but it has been designed as a tool for the team; its form
was shaped to fit into the next steps of work.

............
The other thing is....

This business of usability test results is much like the work of communicating
results from user research. And again, it's key that the results become part of
the team culture. Reports are bad at this. What's better? I usually employ a set
of things:
1. If there is a report or presentation or whatever, it is created as a sort of
piece of theater. By which I don't mean a play. I mean it's a work of
communication design, intended to create an experience for the "audience"/team
that will give them a convincing illusion of what it's like to be the user. Video
clips plus discussion can work. Having people read quotes can work. Huge walls of
photographs and quotes and documents can work. Mocking up a workplace, a sort of
model office, can work. And so on.

2. The reports are short, and they give guidelines, principles, or
recommendations that the team can actually relate directly to their work. The
full report is usually accompanied by a set of posters, a slide show, any number
of things that make it easy for team members to trip over the principles in the
course of their day.

3. We enter the results into the working documents that will be used during the
next phase of work. So, for example, task models will be annotated with insights
from research. You could do the same with the results of usability tests. What
else? Storyboards get an annotation layer or three that come from research. And
so on.

.......
In general, anything like this -- field research, usability tests, and so on --
must be *translated* into the design. If you simply give a report and walk away,
the burden of translation is placed wholly on the team. They may or may not be
any good at this, depending on their experience and team culture. Whether you use
an outside firm or not (there are advantages to both), if you're not going to
attend to how the results get translated into changes in the design, you might as
well save your time and money. And if you limit yourself to traditional "reports"
as big fat documents, you increase the burden on your team to read, digest, and
determine the implications of the data.

Grins,
Marc Rettig
mrettig at well.com
www.marcrettig.com




