[Sigia-l] How to Lose Friends and Infuriate People ;) (was "A Brief History...")

infoarchitect at ourbrisbane.com infoarchitect at ourbrisbane.com
Wed Aug 27 05:46:55 EDT 2003


Where to begin?

Ok.

Peter M wrote:
> Can I just say how much I hate the phrase "human factors," and the degree to
> which it is indicative of the narrow-mindedness of the field? It drips of a
> certain Taylorist quality, as if you are meant to find out the "human
> factor" in some larger system. ("Users" suffers a different, though
> similar-ish problem).

Yes, you can.  It's your right to.  However, Human Factors - the very broad field defined as the study of humans interacting with systems (not limited to computers) - is the field from which UCD came, like it or not.

> A fairly simple one. There aren't enough of us to do it right, so of course
> those who practice one or the other will find their responsibilities spilling
> over into "other" disciplines.

Fair comment.  I live and work in Australia at the moment - most people here have to be a 'jack-of-all-trades'.  That said, to resurrect one of Ziya's metaphors, I don't think many people would want a vet-by-trade to operate on them.  The vet would require further specialist knowledge and/or surgical training.  You shouldn't just 'do' another specialist's job without some 'substantiated' knowledge and/or training.

> Aha! And here the elitism of "trained" user-centered designers begins to
> rear its head. I would argue that such folks *are* qualified to begin
> working in this space. You've got to start somewhere. And if you're some
> project manager drone in an organization that doesn't recognize "usability,"
> and you want to build a better product, then more power to you -- get out,
> take a class, read a book, and do what you can.

Yes.  Everyone has to start somewhere - I have no problems with that - we all had to.  What 
worries me is that people are starting, but not going any further.  They sit squarely in the 
realms of 'unconscious incompetence' - ignorant of what they are doing wrong and what
other 'tools' are available.  It's not their fault - they don't know what they don't know.  
All I was saying is, perhaps those of us with a little more experience could offer some help 
to open their eyes.

> Diluting? How?

In too many ways to mention.  Many 'practitioners' that I have met employ what Bill so aptly 
described as "user-directed design."  I hope, if nothing else, that we can all agree that 
users are NOT designers.  Yet there are 'practitioners' who listen to every word a user says.  
Hell, I've even observed them asking things like "So, where do you think this button should 
be?" or "Should this border be blue?  What colour would you prefer?" and "What do you think 
you'd do next?"

We are meant to be designers - "user-centred designers" in fact.  Our job (please correct me if you think I'm wrong) is to understand - using a number of methods, mostly involving observation - how targeted people attempt to reach a goal.  We then go away and design ways in which we can best support them in reaching that goal.

But I digress...

By "diluting", I mean, “To lessen the force, strength, purity of”, and/or “To decrease the 
value of”
 
Creating a persona or user-profile without performing some form of contextual inquiry is 
diluting.  Listening to a user and taking their advice verbatim (as opposed to observing and 
analysing user behaviours) is diluting.  Collecting data because you are told to, then not 
knowing how to analyse it is diluting.  ‘Leading’ users by the way you subconsciously word 
your questions (“Do you think this is good?” or “Does this confuse you?” as opposed to “What 
do you think of this?”) is diluting.  Using 5 users to test an enterprise system "because Jake 
said so" is diluting.  The list goes on…

…and what’s wrong with these little, picky things?  They produce biased, incorrect data.  
Designs based upon this erroneous data don’t help anyone achieve their goals in a more 
efficient manner – they just create ‘different’ problems.

> Which is all that ought to be asked in any real-world situation. Satisficing
> goes a long way. Striving for perfection will needlessly bog you down.

Nice use of a buzzword there.  Yes, commercial reality dictates that we can't follow a complete methodology in many cases.  Projects with large budgets catering for UCD and user research are rare.  Some techniques and steps have to be skipped.  That's fine - but for goodness' sake (and the sake of the industry's reputation), if you have to reduce what you can employ to make a good product, at least know what to do (which technique), how to do it (with rigour) and why you are doing it (what are the attributes and goals of the technique).

I have no gripes with the contents of the paper I referenced from Nielsen.  I agree that
sometimes we have to bring it down to only a few, simple techniques, but if these are 
performed incorrectly – if they produce erroneous data upon which we base our designs – what’s 
the point?

> The second you begin dealing with controlled, uncontrolled, and confounding
> variables, the second you're entering the realm of science, which, I'd
> argue, is not very interesting to the majority of this group. The
> user-centered design we practice did not come out of academia, but out of
> engineering, where you try to do the best you can given limited resources
> (be it time, money, or people). It's not an attempt to scientifically
> validate a solution, but simply to get the best product out there.

If discovering ways to create better designs - even if that means 'resorting to scientific methods' - is "not very interesting to the majority of this group", then one would have to question why they are here, why they lurk on discussion lists such as this, wouldn't one?

User-centred design actually came from a marriage of engineering and cognitive psychology - a marriage necessitated "when equipment complexity began to exceed the limits of human ability for safe operation".

engineering. n. The application of scientific and mathematical principles to practical ends 
such as the design, manufacture, and operation of efficient and economical structures, 
machines, processes, and systems

Sorry, but engineering and cognitive psychology are both 'sciences' - they both have roots in 'academia'.  I'm sure you wouldn't want to be working in a high-rise designed by an engineer who had no idea how to calculate the modulus of elasticity or rigidity of the beams that support it - just because he didn't want to "attempt to scientifically validate a solution, but simply to get the best product out there."

I agree.  On limited resources we should still try to get the best product out there - but there's a big difference between designs based upon 'validated' data (otherwise referred to as "informed design") and those based upon 'good intentions' (otherwise known as "guesswork").

My only point was that with the popularity of our industry has come an increased need for 
practitioners – we should help them become better practitioners where we can.  

There are many out there who not only don't know what they're doing, but worse - don't know how to find out more (or even that 'more' exists).  A number of institutions and organisations (UPA, AIfIA, HFES, etc.) are now trying to put together an accepted curriculum to train these newcomers (and 'old hands' with no relevant background/training alike) and have them 'certified'.  Why?  Companies aren't getting the results they did before the explosion in this industry.  Designs are going back to being based on opinion rather than empirical, or even qualified subjective, evidence.  This has to be addressed.  Our industry is starting to lose its value.  If companies could distinguish 'certified practitioners' from those who don't wish to further themselves (and would prefer to remain 'hacks'), then our industry would be able to reclaim its credibility.

> This was all better said in the most recent issue of Interactions magazine
> by Dennis Wixon, who works in Microsoft's games division.

> I consider it easily the most important usability article I've read all year
> (if not for a few years), because Dennis gets beyond the pseudo-science to
> recognize that, hey, we're all just trying to do the best job we have given
> the resources, and you can take your "number of users" metrics and your
> "controlling for variables" tests and shove them, because they're getting in
> my way of getting a good product out on time and on budget.

I can't comment on this article per se, as I have not been able to read it, but I'm not sure I'd trust the opinion of someone who does not even understand that engineering IS scientific method: "He reminds us that our practices derive from engineering, not scientific method."

Oh - and all this is due to Spool's "Eight is not Enough" - a debate based upon my very argument: that techniques have been diluted to the point where they are no longer useful.  He brings to light a simple point known as 'statistical significance'.  No, most companies won't invest enough money to support this - that's fine - we work with what we can and try our best to get "a good product out on time and on budget" - but this 'big issue' arose simply because people diluted the technique to the point where they forgot where it came from and what its core attributes are.
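For reference, the '5 users' heuristic that Spool attacks traces back to Nielsen and Landauer's problem-discovery model, in which the proportion of usability problems found by n users is estimated as 1 - (1 - L)^n, where L is the probability that a single user uncovers any given problem (Nielsen's published average is about 0.31).  A minimal sketch in Python - illustrative only; the value of L for any real system, particularly a large enterprise one, must be measured, not borrowed:

    # Illustrative sketch of the Nielsen/Landauer problem-discovery model:
    # expected fraction of usability problems uncovered by n test users,
    # assuming each problem is found by any single user with independent
    # probability lam (0.31 is Nielsen's published cross-project average).
    def proportion_found(n: int, lam: float = 0.31) -> float:
        return 1.0 - (1.0 - lam) ** n

    for n in (1, 3, 5, 8, 15):
        print(f"{n:2d} users -> {proportion_found(n):.0%} of problems")

At L = 0.31, five users surface roughly 85% of problems - hence "five is enough".  But drop L to 0.10 - and Spool's findings suggest real-world values for large sites can be in that range - and five users find barely 40%.  That is exactly what I mean by forgetting a technique's core attributes.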

“NEXT!”

Peter V wrote:
> Indeed. Testing is, for most of us, meant to inform design. Not to
> gather scientifically useful/valid results.

Really?  Do you design by testing?

test. n.  A procedure for critical evaluation; a means of determining the presence, quality, 
or truth of something; a trial   

It seems to me that there is a fear of ‘science’ amongst some people here.  If we take that 
word away from what Peter V said, it would seem that he is not interested in 
gathering “useful/valid results”.  Isn’t that the point of a test?  

“We form a hypothesis, then run a trial with subjects to test its validity.” 
Translate that into simple design-centric terms and we get:
“We create a design, put it in front of users, and see if what we designed was right.”

But how do you distinguish between right and wrong - valid or invalid?  Gut feeling?  Opinion?  No - these just reflect your emotional sway or bias towards your (or your colleagues') work.  I hate to say it, but the methods you use for testing came from 'science'.  As scientific methods, they require a hypothesis, controlled and uncontrolled variables, and measures to limit the potential for confounding variables.
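To make that concrete, here is a minimal sketch (Python, with invented numbers - a hypothetical illustration, not anyone's actual study) of a standard two-proportion z-test comparing task-completion rates between two design variants.  This is the kind of controlled comparison those methods assume:

    import math

    # Hypothetical task-completion counts for two design variants (invented data).
    success_a, n_a = 18, 25   # design A: 18 of 25 participants completed the task
    success_b, n_b = 11, 25   # design B: 11 of 25 participants completed the task

    p_a, p_b = success_a / n_a, success_b / n_b
    p_pool = (success_a + success_b) / (n_a + n_b)   # pooled rate under the null hypothesis
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_a - p_b) / se

    # Two-sided p-value from the normal approximation.
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    print(f"A={p_a:.0%}, B={p_b:.0%}, z={z:.2f}, p={p_value:.3f}")

With samples this small the normal approximation is rough - which is precisely the point about statistical significance above.  Run a 'quick test' with a handful of users and no hypothesis, and you cannot tell a real difference from noise; you have gathered data, not evidence.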

Again, I say that we need to educate our peers as to why they are using these methods, how to 
use them properly, and how to create better designs by doing so.

“NEXT!”

Ziya wrote:
>> Can I just say how much I hate the phrase "human factors," and the degree to 
>> which it is indicative of the narrow-mindedness of the field?
> Hallelujah.
> And don't forget "HCI."

The narrow-mindedness lies where?  In Human Factors; in HCI; or in the eyes of people who have 
not dared to explore these domains out of what seems to be a fear of the word ‘science’?

>> The user-centered design we practice did not come out of academia, but out of 
>> engineering... 
> Hallelujah, again.
> But don't forget the design profession, either.

Sorry, Ziya, but user-centred design did not come from the design profession.  It was introduced to it from that narrow-minded area known as Human Factors - to inform better design.

Ziya also wrote:
> > people don't know what they really want or need...
> So therefore the solution is to give them post-its and crayons and let them
> discover their inner child?

No.  The solution is for practitioners to go back to the foundations of their profession (HCI 
and Human Factors), learn what the proven techniques are, when to employ them, how to employ 
them, and how to analyse/interpret the results.  Then go forth and produce informed, validated 
designs. 

*phew!*

Best regards,

Ash Donaldson
User Experience Designer


