Composr Supplementary: Feedback methods that don't lead you down the garden path

Written by Chris Graham
One hard lesson I've learnt while working on Composr is never to rely on unsolicited user feedback to gauge success and drive product development. This tutorial will explain the issues as they relate to my own experience with Composr development, and detail better feedback methods that are hopefully applicable to the wider world.

Here are a few hard realities that need to be examined:
  • By my estimation, on average more than 10 people have to experience a functionality-breaking bug before 1 person bothers to report it. Therefore you cannot assume things work just because nobody is complaining. Evidence: we often find bugs via our automated error system, not through human contact (see the sketch after this list).
  • If a bug is just a little bit annoying, or a particularly deep interface is ugly but usable, chances are nobody at all will report it. This means you can never drive user experience improvements from user feedback alone. The kind of user who reports small issues is usually someone highly engaged, like an early adopter, so as your success grows you probably won't see a proportionate increase in detailed, quality feedback. Most people are just too busy, or too focused on things that deliver value directly to them. In fact, once someone has learnt their way around an issue it is arguably to their advantage not to explain it, because doing so throws away a bit of earned competitive advantage (this selfishness annoys me, but I think it happens).
  • Many people give only negative feedback (people who want to drive positive change through criticism), or only positive feedback (particularly friendly people who like to thank developers). This makes it very hard to gauge satisfaction and success accurately. Personally, from a utility perspective, I dislike both: negative feedback is bad PR, and the really glowing feedback that often gets posted overlooks real problems, so it doesn't help me either.
  • The kind of people who give general feedback are the most engaged and thoughtful subgroup of users: a tiny minority. Their feedback is usually a request for new features, which distracts from more important objectives (little usability details, design issues, and so on) and risks creating bloatware (see second-system syndrome). Worse, it prioritises the concerns of a minority, and can lead to a never-ending software development process where you spend far too much money without meeting the essential needs of the wider market any better.
  • Some feedback is simply bizarre. For example, quite a few times we have been sincerely asked to build games or game arcades into Composr. Bringing great business tools to the public via Open Source delivers really useful features that everybody needs; bringing games into business websites would actively discourage business adoption of Composr (and adoption by just about anyone who is not creating a gaming website).
  • If you only listen to existing users, you are ignoring the potential users you never managed to convince, and whatever stopped them from adopting. Geoffrey Moore discusses the problem of moving beyond early adopters in his famous book, Crossing the Chasm.
  • Even if you use surveying as your feedback method, you will either get feedback only from the most engaged users, or, if you award a prize, lazy feedback from people who are only in it for the reward.
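To illustrate the automated error system mentioned in the first bullet, here is a minimal sketch of the idea in Python: an uncaught-exception hook that phones errors home without requiring any action from the user. The endpoint URL and payload fields here are hypothetical, and Composr's real implementation is quite different, but the principle is the same.

  import json
  import sys
  import traceback
  import urllib.request

  # Hypothetical endpoint; a real system would use its own error-relay URL.
  REPORT_URL = "https://example.com/error-report"

  def report_uncaught(exc_type, exc_value, exc_tb):
      """Phone uncaught errors home, so bugs surface even when users stay silent."""
      payload = json.dumps({
          "error": "".join(traceback.format_exception(exc_type, exc_value, exc_tb)),
          "version": "1.0.0",  # include the product version so reports can be triaged
      }).encode("utf-8")
      request = urllib.request.Request(
          REPORT_URL,
          data=payload,
          headers={"Content-Type": "application/json"},
      )
      try:
          urllib.request.urlopen(request, timeout=5)
      except OSError:
          pass  # never let telemetry failure mask the original error
      # Still surface the error locally as normal.
      sys.__excepthook__(exc_type, exc_value, exc_tb)

  sys.excepthook = report_uncaught

The key design point is that the telemetry must never get in the user's way: it times out quickly and swallows its own failures, because a bug report is worthless if collecting it makes the original error worse.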

So you need to throw away any idea of deciding your priorities through unsolicited feedback from existing users alone. Always look at the feedback and give it very serious consideration, but never use it to decide your roadmap.

So what is a better feedback method?

You need to work out what your model of your target customer is, and then seek out those kinds of people for usability testing. You'll be amazed how different their feedback is when you work from this fresh, external perspective. Watch them using the product: give them tasks to perform and see how they get on. You will see them get stuck on things you never imagined.

For software testing I recommend using Try My UI, testing with relevant people you know, and also hiring people on Upwork and asking them to record themselves performing tasks with Jing.

It's fairly easy to find resolutions once you know the real issues that will impact adoption.
