karl taylor


Most advice about user surveys is really bad.

I know, because I’ve been part of the problem on this one.

Don’t believe me? Here’s a video I found and reuploaded from a few years ago. It’s in a folder with a very strongly worded readme all about what was going on at the time.

[embed]https://youtu.be/LBc0rOJhJ3Y[/embed]

The trouble with advice like this is that user surveys tend to happen in one of two ways.

You’ve either got a highly mechanized feedback system (think coupon codes on the bottom of fast-food receipts) or an incredibly laid-back “user interview.”

I don’t think it will be very controversial to assert that contemporary thinking has largely come to consider the former (the coupon survey) a waste of time. The process is too impersonal. The feedback is too inconsistent. If you’re still relying on this method, it’s well past time to start considering an alternative.

Still, the fundamentals are sound. The fine folks at ChartMogul put together a handy runthrough of NPS that I find myself passing along rather frequently.
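The core of NPS is genuinely simple: respondents rate you 0–10, and the score is the percentage of promoters (9–10) minus the percentage of detractors (0–6). A minimal sketch in Python, with an illustrative helper name of my own choosing:

```python
def nps(scores):
    """Net Promoter Score: % promoters (9-10) minus % detractors (0-6).

    Passives (7-8) count toward the total but neither add nor subtract.
    """
    if not scores:
        raise ValueError("need at least one response")
    promoters = sum(1 for s in scores if s >= 9)
    detractors = sum(1 for s in scores if s <= 6)
    return round(100 * (promoters - detractors) / len(scores))

# 4 promoters, 2 passives, 4 detractors out of 10 responses
print(nps([10, 9, 9, 10, 7, 8, 3, 5, 6, 0]))  # 0
```

The resulting score lands between -100 (all detractors) and +100 (all promoters), which is why a “positive NPS” is often treated as the baseline goal.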

There’s been a lot of great work on the topic of user interviews, however. Eleonora Zucconi put together a fantastic collection in “46 Interview Questions For User Experience Researchers…,” Teo Yu Sheng breaks down how you should think about asking questions in “5 Steps To Create Good User Interview Questions…,” and I’d be remiss if I didn’t mention the vibrant discussion on Charles Liu’s “Never Ask What They Want.”

Learning to write in this way is a good skill to pick up, but it won’t keep you from making one of the biggest mistakes I see teams run into during the interview process.

Formulating your questions to get real feedback takes practice, but once you’ve figured it out it can be tempting to use a slightly different framing with each person you interview.

The trouble is, inconsistent surveying tends to generate inconsistent data.

That’s why it’s so important that you set your goals with the understanding that feedback happens on a spectrum.

In some circumstances, automated feedback may actually have some utility. It’s probably ideal for “transactional” uses, for example. In other cases (say, exploring a new feature roadmap) you might want something far less formal than a regimented user interview.

Trying to force each application into one predetermined feedback rail is a mistake. You won’t have the same answer every time, and that’s the point. Instead, try to focus on picking the right feedback mechanism for the task at hand.
