[Sigia-l] Evaluating the evaluators

Fred Beecher fbeecher at gmail.com
Mon Sep 4 16:16:03 EDT 2006


On 9/4/06, Jared M. Spool <jspool at uie.com> wrote:
> At 01:55 PM 9/4/2006, Peter Jones wrote:
> >In large, multi-consultant projects, independent usability evaluators
> >sometimes conduct parallel tests to those done by the lead contracting firm.
> >I have been hired to be the evaluator when a large, expensive design firm
> >(fox) also proposed to do their own testing (henhouse).
>
> There's a technical name for this technique. It's called "Wasting Money."

Agreed, if the tests are truly *parallel.* But if you're talking about
testing at the end of the design phase, it makes a lot of sense to have
an uninvolved usability researcher evaluate the design to determine its
usability... especially for a large project like the one described in
this scenario.

Another possible avenue for evaluating the evaluators would be to look
at what your analytics package has to say. True, no matter how well
configured it is, it can't tell you whether something is usable, but it
can tell you whether the changes made at the behest of the usability
team resulted in an increase in the actions you want visitors to take.
The problem with this approach, however, is that it can only convey
information post-launch... and it requires a significant amount of time
for a usage pattern to emerge.
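To make that concrete, here's a minimal sketch of the kind of check an
analytics comparison boils down to: a two-proportion z-test on
conversion counts before and after the usability team's changes. The
function name and the counts are illustrative assumptions, not anything
a particular analytics package exposes.

```python
import math

def conversion_lift(pre_conversions, pre_visits, post_conversions, post_visits):
    """Compare conversion rates before and after a redesign using a
    two-proportion z-test. Returns (lift, z). All names and numbers
    here are illustrative; real analytics tools report these counts
    in their own ways."""
    p_pre = pre_conversions / pre_visits
    p_post = post_conversions / post_visits
    # Pooled proportion under the null hypothesis of "no change."
    pooled = (pre_conversions + post_conversions) / (pre_visits + post_visits)
    se = math.sqrt(pooled * (1 - pooled) * (1 / pre_visits + 1 / post_visits))
    z = (p_post - p_pre) / se
    return p_post - p_pre, z

# Hypothetical example: 300/10,000 conversions before, 380/10,000 after.
lift, z = conversion_lift(300, 10000, 380, 10000)
print(f"lift = {lift:.4f}, z = {z:.2f}")  # z above ~1.96 suggests a real increase
```

Even then, a statistically visible lift only tells you *that* behavior
changed, not *why* -- which is the gap the usability testing itself has
to fill.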

- Fred
