Traditional usability testing involves several steps: setting up or getting access to a usability lab, recruiting test participants, scheduling test sessions, creating test scripts, conducting the test sessions, consolidating the findings, and then making recommendations.
The challenge with traditional usability recommendations is proving their value. Without quantifiable results, it’s hard to show how the recommendations will translate into more sales or increased conversion. Running an experiment provides quantifiable support for the findings of a usability test.
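To make the idea of quantifiable support concrete, here is a minimal sketch of how an experiment’s result might be checked for significance. It assumes a simple two-variant conversion test; the sample sizes and conversion counts are hypothetical, and a two-proportion z-test is just one common way to evaluate such a result.

```python
from math import sqrt, erf

def conversion_z_test(conv_a, n_a, conv_b, n_b):
    """Two-proportion z-test: is variant B's conversion rate
    significantly different from control A's?"""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)  # pooled rate under the null
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    # two-sided p-value from the standard normal CDF
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
    return p_b - p_a, z, p_value

# hypothetical data: 200/5000 control conversions vs 260/5000 in the variant
lift, z, p = conversion_z_test(200, 5000, 260, 5000)
```

A result like this turns a usability finding (“the new checkout copy seems clearer”) into a measured lift with an attached confidence level, which is far easier to defend than a recommendation alone.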
When your usability sessions are geared toward finding problems and highlighting opportunities, there’s huge value in running occasional usability tests alongside your experiments.
Deciding which of the problems you observe in a usability test are real can be a challenge. When a participant reports a problem in a session, perhaps because she was prompted or because she feels she should say something, it’s difficult to tell whether that problem would actually stop her from completing her order in a real-world scenario.
To give you the best chance of determining real problems that will make good experiments, observe participants’ actions rather than asking for verbal feedback and note where they appear to get lost or hit bumps.
If you can identify features or options that customers don’t care about, you can experiment with removing them to free up real estate for something customers will find useful, or with giving them more prominence to learn whether greater visibility adds value.
Social game developer wooga uses A/B testing to refine their games on a weekly basis after launch. According to Wired, wooga’s “core discipline is A/B or split testing, in which new features are introduced to a selection of users, and their reactions measured. Features remain only if users engage with them. If they don’t respond, wooga tries new features until they do.”
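The split testing wooga describes depends on consistently assigning each user to one variant. The sketch below shows one common way to do that, deterministic bucketing by hashing the user id together with an experiment name; the function name and variant labels are illustrative, not wooga’s actual implementation.

```python
import hashlib

def assign_variant(user_id: str, experiment: str,
                   variants=("control", "treatment")):
    """Deterministically bucket a user into a variant.

    Hashing the experiment name with the user id means a user
    always sees the same variant of a given experiment, while
    different experiments split the audience independently.
    """
    digest = hashlib.sha256(f"{experiment}:{user_id}".encode()).hexdigest()
    bucket = int(digest, 16) % len(variants)
    return variants[bucket]

# the same user always lands in the same bucket for a given experiment
variant = assign_variant("user-42", "new-feature")
```

Because assignment is a pure function of the inputs, no per-user state needs to be stored, and engagement metrics can later be grouped by recomputing each user’s variant.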