[Sigia-l] validating an IA?

Leisa Reichelt leisa.reichelt at gmail.com
Mon Mar 27 00:47:46 EST 2006


ah, Eric - you beat me to it! (warning... long and possibly rambly post ahead!)

To answer your last question first - I'm not sure if validation is the
new ROI, but it certainly is becoming more and more prevalent (in my
experience). My feeling is that 'clients' don't necessarily feel
qualified to make judgements on the IA themselves, or in some cases,
don't have the time/inclination, so they hire in an 'expert'.

this is good and bad. From a professional perspective, I love having
these 'IA Smackdowns' (my term), where I get to meet with my peers and
learn more about how they work and get feedback on my work from them -
I couldn't afford to pay for that if I wanted to! (You do have to take
a v. positive approach to these things, otherwise you'd be a nervous
wreck!)

On the client side, however, I think their business expertise, and
often user expertise, is invaluable to the project and can't be
'outsourced'. Ultimately, your client needs to be intimately involved
in the IA development... outsourcing validation doesn't allow them to
escape this process.

I've been on two ends of 'validation' lately - having had to
'validate' my own IA using an independent testing resource, and having
external consultants come in to 'validate' my work. (can anyone smell
govt.?). I haven't yet had the opportunity to 'validate' someone
else's IA... so I may not be best qualified to answer your
question. Here's my experience to date:

all the consultants who have 'validated' me have done so via
interviews... I articulate my strategy to them, show them sitemaps,
wireframes, process flows, justify it all with my research etc.

they ask many questions; I answer many questions. We make some
adjustments (sometimes), and at the end they 'see that it is good',
and report accordingly.

I have also had cause to ask other consultants to give me an 'expert
review' - particularly for international projects where language and
culture could be problematic. Here I provide them with sitemaps,
wireframes and a structured format for response. This has also been v.
effective, particularly where user testing was prohibitively
expensive.

Using 'users' to validate an IA is where it starts to come unstuck,
for me... I don't find that a card sort is an appropriate method to
validate. It's a great research tool... and correspondingly, every
time you card sort you get questions... and potentially learnings.

This is all good, except that it can screw with your project timeline
and methodology... I mean, theoretically you're supposed to get your
sitemap signed off before you go and invest your time in intensive
wireframing, right? (yes, even though you have 'unofficial' wireframes
that you'd have developed in concert with your sitemap). You can't
'officially' test your 'unofficial' wireframes, can you?! They're not
highly enough developed. There are known 'issues' that you haven't
resolved yet (which, supposedly, means you're not ready to test).

Anyway - it's getting all murky and off topic now. You can read more at
the blog post Eric linked to before if you like (comments welcome!)

hrm. helpful?


~~~~~~~~~~~~~~~~~~~~~~~
Leisa Reichelt
leisa.reichelt at gmail.com
www.disambiguity.com

