[Sigia-l] What Matters in Nav Bar Usability
Jonathan Baker-Bates
jonathan at bakerbates.com
Sat Jun 22 07:56:22 EDT 2013
By the way, Cennydd Bowles wrote up a quick summary of A/B test pitfalls
from a UX point of view that's worth a read if you haven't seen it:
http://www.cennydd.co.uk/2009/statistical-significance-other-ab-test-pitfalls/
The only thing I'd add to what he says is the issue of bugs. Even
moderately simple-looking tests can come in with results that leave
everyone flabbergasted, only for it to turn out that the test was invalid
because there was a bug in one of the variants. Things like booking forms,
search results pages, and anything involving 3rd-party systems or APIs are
worth being suspicious of if results appear to defy common sense. That's
assuming all statistical rigour has otherwise been applied, of course.
One thing I also note about A/B tests is that it's very rare for people to
run them again (on the same site/context) at a later date to see whether
they can replicate the results. In theory this isn't needed if the correct
statistical analysis has been performed, and in practice re-testing is
tricky for cost reasons. However, from my own experience, if it happened
more often we might have a rather different attitude to the practice of
A/B testing (he says, darkly).
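
To make the stop rule point from my message below concrete, here is a
quick simulation sketch (Python, entirely my own illustration; the rates,
visitor counts and function names are invented). Both variants have the
same underlying conversion rate, yet an analyst who peeks at the running
test and stops the moment p < 0.05 will declare a "winner" far more often
than 5% of the time. Those are exactly the wins that won't replicate on a
re-test:

import math
import random

def two_proportion_p_value(conv_a, n_a, conv_b, n_b):
    """Two-sided p-value for a two-proportion z-test."""
    pooled = (conv_a + conv_b) / (n_a + n_b)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    if se == 0:
        return 1.0
    z = abs(conv_b / n_b - conv_a / n_a) / se
    return 2 * (1 - 0.5 * (1 + math.erf(z / math.sqrt(2))))

def peeking_aa_test(rate=0.05, visitors_per_arm=10000, peeks=20, alpha=0.05):
    """One simulated A/A test: True if any interim peek 'finds' a winner."""
    conv_a = conv_b = n = 0
    step = visitors_per_arm // peeks
    for _ in range(peeks):
        for _ in range(step):
            n += 1
            conv_a += random.random() < rate  # identical rate in both arms
            conv_b += random.random() < rate
        if two_proportion_p_value(conv_a, n, conv_b, n) < alpha:
            return True  # test stopped early, "winner" declared
    return False

random.seed(1)
trials = 500
false_wins = sum(peeking_aa_test() for _ in range(trials))
print("False 'winners' under peeking: %.0f%%" % (100.0 * false_wins / trials))
# A single pre-planned analysis would give roughly 5% false positives; with
# 20 peeks the rate is typically several times that.

And that's before any bugs in the variants.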
Jonathan
On 18 June 2013 23:24, Jonathan Baker-Bates <jonathan at bakerbates.com> wrote:
> Without proper information about the underlying data for the tests, I'd be
> inclined to ignore the evidence presented here.
>
> For example, they say the IBM test raised "... site section engagement by
> over 219% at more than 99.9% confidence." Not only is that an astronomically
> huge win that immediately raises alarm bells (most such tests that produce
> uplifts do so in the 5 to 50 *basis point* range, i.e. 0.05 to 0.5
> percentage points), but without knowing what "engagement" means, whether the
> sample size was adequate, or what the stop rule was, a 99.9% confidence
> figure is meaningless. They don't even say if there was a control (in fact
> they imply there wasn't, but perhaps it's always the A version). I therefore
> have some suspicions about the whichtestwon.com site they take the examples
> from. For one thing, I find it pretty hard to
> believe that a company like Urban Outfitters would casually publicise the
> fact that they discovered a UI change that raised conversion by 144 basis
> points!
>
> Even with the data, and assuming it was sound, I'd be surprised if you
> could replicate most of these tests unless you were looking at directly
> competing sites. When I've investigated this myself, I've been struck by
> how what works in one context doesn't necessarily work in another.
>
> Jonathan
>
>
>
>
> On 18 June 2013 21:54, Tom Donehower <tdonehower at gmail.com> wrote:
>
>> Great article about the usability of nav bars. Some strong evidence here
>> about the "hide and seek" nature of the mega dropdowns Jared Spool
>> mentions, and how on-page links can dramatically improve engagement.
>>
>>
>> http://designm.ag/resources/what-really-matters-in-navigation-bar-usability/
>>
>> Best,
>> --
>> -Tom