[Sigia-l] User Test Cost - Does this sound reasonable?
Todd R. Warfel
lists at mk27.com
Mon May 24 15:39:04 EDT 2004
On May 24, 2004, at 7:31 AM, Jared M. Spool wrote:
> I'd be interested to find out if other people have tried alternatives
> to writing reports, especially for distributed teams? What success
> have you had and what obstacles have you encountered?
>
> Jared
I'm mostly in agreement with Jared on this one, assuming that the
"team" attended (observed) the usability test sessions and that, after
each participant or two, the group as a whole (team + facilitators)
got together for a quick debrief, a.k.a. "here's what we noticed."
This is key: more often than not, a good facilitator will pick up on a
significant behavior that the observing client misses.
We've observed drop-down menu behaviors that confused participants
(e.g., when presented with multiple drop-down menus, not being sure
whether they needed to use one or all of them; not being sure whether
"releasing" a drop-down would perform an action, or whether they had
to actually hit the "submit" button). Details like these can really
impact design decisions.
We've still found that reports are very valuable. More importantly,
the format of the report has a significant impact on its value. Over
the past couple of years, Molich (http://www.dialogdesign.dk/cue.html)
has been doing comparative analyses of usability test formats. We've
been following that research and have continued to modify our report
formats based on it and on feedback from our clients (what works,
what doesn't?).
We've found the following guidelines to be extremely useful to the
success, usefulness, and yes, even usability of our reports:
1) Keep the report under 30pp total - the meat of the report should be
no longer than 8-12pp. Over 12pp, people tend to round-file it.
2) If a report would exceed 30pp, consider a condensed "summary
report" that just hits the highlights, plus a separate comprehensive
report. In our experience, the comprehensive report will rarely get
used.
3) Structure:
3a. Index (TOC) (1p)
3b. Executive summary (1p)
3c. Introduction (purpose of the study, what were we trying to test
and why) (1p)
3d. Method and approach (1p)
3e. Participant and environment criteria (participant demographics,
equipment used, setting, scenario) (1p)
3f. Results (3-8pp)
3g. Ease-of-use ratings with high, low, and average scores (1-2pp)
(see the score roll-up sketch after this list)
3h. Recommendations (1p)
3i. Appendix A: test scripts
3j. Appendix B: screen shots
3k. Appendix C: test task results with high, low, and average scores
4) Clear, simple visual representation of the data - we use
non-traditional visualization techniques (you won't find any pie
charts or line graphs in our reports), which give the "big picture" at
a glance. Additionally, we use soft colours found in nature, which
helps relax clients reading the report - strange perhaps, but it
works. And finally, our reports are done in landscape, not portrait,
mode - it just seems to work better for us and our clients - they
actually read them. A rough sketch of the kind of at-a-glance chart I
mean follows below.
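To make the high/low/average roll-up in 3g and Appendix C concrete,
here's a minimal sketch of the idea. To be clear, this isn't our
actual tooling: the 1-5 rating scale, the task names, and the scores
are all invented for illustration.

from statistics import mean

# Hypothetical data: one 1-5 ease-of-use rating per participant, per
# task. Scale, tasks, and scores are assumptions, not real results.
ratings = {
    "Find a product": [4, 5, 3, 4, 5],
    "Add to cart":    [2, 3, 1, 2, 4],
    "Check out":      [3, 4, 4, 3, 5],
}

for task, scores in ratings.items():
    print(f"{task:15}  high={max(scores)}  low={min(scores)}  "
          f"avg={mean(scores):.1f}")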
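And on item 4: the visuals are easier to show than describe, though I
won't reproduce our actual charts here. Treat the following as a
generic sketch of the spirit of the advice only - one at-a-glance
summary per task, soft muted colour, landscape proportions. The data,
the colour values, and matplotlib itself are all stand-ins, not what
we actually use.

import matplotlib.pyplot as plt

# Hypothetical per-task summaries on a 1-5 scale: (low, average, high).
summaries = {
    "Find a product": (3, 4.2, 5),
    "Add to cart":    (1, 2.4, 4),
    "Check out":      (3, 3.8, 5),
}

fig, ax = plt.subplots(figsize=(10, 4))  # landscape proportions
for y, (task, (lo, avg, hi)) in enumerate(summaries.items()):
    ax.hlines(y, lo, hi, color="#b5c9a8", linewidth=6)  # low-to-high range
    ax.plot(avg, y, "o", color="#5a7d5a")               # average marker
ax.set_yticks(range(len(summaries)))
ax.set_yticklabels(list(summaries))
ax.set_xlim(1, 5)  # show the whole rating scale for context
ax.set_xlabel("Ease-of-use rating (1 = hard, 5 = easy)")
ax.set_title("Task ease-of-use at a glance: low-high range and average")
fig.tight_layout()
plt.show()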
Cheers!
Todd R. Warfel
User Experience Architect
MessageFirst | making products easier to use
--------------------------------------
Contact Info
voice: (607) 339-9640
email: twarfel at messagefirst.com
web: www.messagefirst.com
aim: twarfel at mac.com
--------------------------------------
In theory, theory and practice are the same.
In practice, they are not.