[Sigia-l] A matter of reliability
andrew at friendlymanual.com
Sun Jan 15 03:20:12 EST 2006
Quoting Listera <listera at rcn.com>:
> You can't believe everything you read with a "scientific" label? Sacred
> cows? Must be a "best practice" kinda thang. :-)
For me the interesting part was "the small size of many studies, for instance,
often leads to mistakes". As IAs we sometimes test our design assumptions with
real live end users, and the budget generally doesn't allow us to test with
statistically significant numbers of them. We do our best with what we have, as
always, but sometimes we misidentify exactly who to include in our limited test
groups. Identifying 80% of the design issues is mostly OK, but I think we are
generally dodging a bullet here rather than getting it as good as it can be.
Sure, we can only do what we can do, and the client is not going to pay for the
thousands of user interviews it would sometimes take by the IA team, but this
still worries me. There are times when "good enough" isn't good enough. Do we
stop accepting projects that don't allow statistically significant user
interviews? I know I won't. But it still worries me.
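
For anyone wondering where figures like "80% of the design issues" come from, a
quick sketch of the usual cumulative problem-discovery model may help. This is
the Nielsen and Landauer formula, 1 - (1 - p)^n, where p is the chance that a
single test user hits a given problem; the p = 0.31 default below is their
published average, not a number from my own projects, so treat it as an
illustration only:

    # Cumulative problem-discovery model (Nielsen & Landauer):
    # share of problems found by n users = 1 - (1 - p)^n,
    # where p is the chance one user encounters a given problem.

    def proportion_found(n_users: int, p: float = 0.31) -> float:
        """Expected share of usability problems uncovered by n test users."""
        return 1 - (1 - p) ** n_users

    if __name__ == "__main__":
        for n in (1, 3, 5, 8, 15):
            print(f"{n:2d} users -> {proportion_found(n):.0%} of problems found")

With those assumptions, five users get you to roughly 84%, and the curve
flattens quickly after that - which is exactly why the small-sample approach
usually looks "good enough" on paper, and why it worries me when the model's
assumptions (a uniform p, the right users in the pool) don't hold.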
Cheers, Andrew
PS: the bit on the dangers of peer review was interesting too. If a limited
"gene pool" of reviewers keeps feeding one another's incorrect assumptions back
and forth, then there is a reality problem.