[Sigia-l] Sigia-l Digest, Vol 33, Issue 24

Steven Pautz spautz at gmail.com
Thu Jun 21 13:08:43 EDT 2007


On 6/21/07, Christopher Fahey <chris.fahey at behaviordesign.com> wrote:
> <snip>
> And in any event,
> anything we build that is a matter of life and death is likely to be
> evaluated and operated by a person who is certified in their field --
> i.e., a doctor or a pilot. If a doctor screws up a diagnosis because
> they misread a crappy UI, make no mistake: it's the doctor's fault, not
> the IA's.

While I fully agree with your distinction regarding life-and-death jobs
(and the different role and relevance certification plays in them), I don't
think this particular example supports your argument very well. We should
never place blame exclusively on the user for *any* erroneous interaction,
regardless of the user's certification or presumed expertise.

If changing the crappy UI (or any other aspect of the system or environment)
would change the rate of "user error", then, statistically speaking, the user
cannot bear the entirety of the blame.

Whether an IA (or anyone else associated with the system) should receive
some portion of the blame is a separate (and off-topic) question, but to
blame the user (and the user alone) is to say that design is irrelevant --
and if design is irrelevant, then why bother designing? =P

The way I see it, certification works for doctors and engineers because the
variables they work with (human physiology and physics) are largely static.
This doesn't apply to IA, because the relevant variables (e.g., purpose,
audience, context) can and do change significantly. Certification cannot
work for IA because we can't predict how those variables will differ or
change.

----------------------------------------
Steven Pautz
Future Junior Designer (hopefully)
