The value of research (Was: RE: [Sigia-l] Prominence in Short Lists)

Joe 10 joe at joe10.com
Fri Apr 11 11:56:03 EDT 2003


There are a few caveats about what makes research sound which most on 
this list probably know, but they are worth noting for those who may not.

- The sample selection can have direct effects on the quality of the study
- Studies based on shoddy methodology can produce results which range 
from inaccurate to damaging.
- Studies run by novices can have such detrimental effects on the 
study participants as to render the findings unusable.
- There are good statistics and bad. See:
http://halltennis.joe10.com/archives/000014.html
and/or read "How to Lie with Statistics," a thin, pleasant book.

The notion that "some testing is better than nothing" is not always 
true. See Mayhew's "You Get What You Pay For" article at:
http://www.taskz.com/ucd_usability_testing_indepth.php

The goal of any study is to produce results that have "external 
validity": the ability to say with confidence, "the findings we 
observed can be mapped to this other population," be it general or 
very targeted. Sample selection is crucial, and I have seen many 
research projects fail at exactly this juncture.

PeterM has recently written a piece on this which does as good a job 
as any of drawing out a methodology in common practice in the Web biz:
http://www.adaptivepath.com/publications/essays/archives/000102.php

The problem with many samples that come from market-research 
recruiters is their regional nature. I can't get into the effects of 
regional bias and keep this short and readable, but if your entire 
sample is drawn from one geographical region, the study's external 
validity will (most likely) be suspect.

Now, there are two general types of studies. 1) Studies which attempt 
to determine some preference data from a targeted group of people and 
2) Studies which attempt to determine the cognitive "why" across a 
larger population. I frequently see people doing the former but 
thinking they are doing the latter.

The trend toward small studies (which I believe in, by the way, when 
it comes to getting frequent, actionable results) borne out by NNG, 
UIE and others may be sufficient for the very specific preference 
studies we tend to do when perfecting an interface (and by 
sufficient, I mean "not much better than the opinions of a 
practitioner who is both a domain expert and an interaction 
expert"), but we should not fall into the trap of thinking we are 
doing truly statistically valid research with these small sample 
groups.
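To make the point about small samples concrete, here is a minimal 
sketch (the numbers and function name are my own illustration, not 
from any of the studies mentioned above) that computes a Wilson score 
confidence interval for a task-success rate. With 4 of 5 users 
succeeding, the 95% interval spans roughly 38% to 96%, which shows 
how little a five-person study can actually claim statistically:

```python
import math

def wilson_interval(successes, n, z=1.96):
    """Wilson score 95% confidence interval for a binomial proportion.

    More reliable than the normal approximation at small n, which is
    exactly the regime of typical discount-usability studies.
    """
    p = successes / n
    z2 = z * z
    denom = 1 + z2 / n
    center = (p + z2 / (2 * n)) / denom
    half = z * math.sqrt(p * (1 - p) / n + z2 / (4 * n * n)) / denom
    return center - half, center + half

# Hypothetical small study: 4 of 5 participants complete the task.
lo, hi = wilson_interval(4, 5)
print(f"Observed 80% success; 95% CI: {lo:.1%} to {hi:.1%}")
```

The interval is nearly 60 points wide, so the study is fine for 
spotting problems to fix, but far too imprecise to generalize a 
success rate to a larger population.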

later,
/Joe

At 5:13 PM -0400 4/10/03, Dan Saffer wrote:
>I do agree that a professional designer has to consider the context of
>any problem. Where we differ, I guess, is that I feel that solid
>usability testing and other research can and should be one of the
>factors (but not necessarily the only one!) that a designer weighs when
>making her decisions.
>
>I also dislike rigid rules of UI, but guidelines and research derived
>from well-run tests by subject experts, be they in usability, cognitive
>psych, library science, graphic design, etc. can help shape designs by
>creating a set of Best Practices to follow. Every craft has these.
>
>Radio buttons might work best vertically and horizontally *in certain
>circumstances*. But what is useful to know is that in most cases,
>vertically is best because of reason X, reason Y, and reason Z. Reasons
>that were discovered through testing, possibly the extensive testing you
>suggest. If I want to scatter radio buttons all over the page randomly,
>I should at least be aware that it goes against best practices and do so
>for reasons of context.
>
>Your highly caffeinated pal,
>
-- 

Joe Tennis
Information Design Honcho
Joe 10 Group
2430 5th Street
Studio L
Berkeley, CA 94710
510-649-1744
joe @ joe10.com
http://www.joe10.com?cpn=sig
User Centered Design, Strategy and Marketing
for Web, Wireless and Interactive Media

Get your E-Metrics in order:
http://metrics.joe10.com/
