How not to survey your customers

A great customer turns bad survey questions into valuable feedback.

Blog [David Norfolk says: I don't like most surveys I see – they're normally from the armed and dangerous wing of the PR industry – but they can be useful when done properly. So I was interested when Richard Collins (he wrote a piece for Reg Dev on test-driven development here earlier this year) pointed me at this piece from one of his staff, and I thought it was worth sharing with our readers – Editor]

Bob Cramblitt says:

We’ve all seen them, been frustrated by them, and sometimes been the culprits producing them: the surveys from hell, larded with vague questions, multiple-choice options that don’t match any of your actual choices, off-target assumptions, and conclusions reached before the results are in.

Red Gate Software thought its survey was different – more enlightened. And most of it was. But there were two questions so poorly designed that people felt compelled to comment, none more elegantly than Mark McGinty.

“It’s amazing that people even took the time to complain, much less honour us with the kind of insight Mark provided,” says Richard Collins, the Red Gate brand manager who compiled the survey. “We not only received a nice tutorial on how a survey question should and shouldn’t be phrased, but we have a great user perspective on issues such as price, performance and ease of use.”

Bob Cramblitt: The offending questions

Let’s first dispense with the offending questions, so we can delve into the wisdom of The Great McGinty.

Question 14 followed up on questions about home improvement and the tools that respondents own or wished they owned (there was a method to this madness, but it would take too much time to explain). It asked respondents to rank those qualities they look for in a power tool from 1 (most important) to 9 (least important). The choices were product design, wide range of accessories, power, handling, never breaks down, what the professionals use, multi-functionality, accessible price, and precision.

Question 15 got back to software, asking respondents to rank the qualities they look for in a software tool from 1 (most important) to 9 (least important). Choices included feature rich, support and upgrades, good reputation, speed, attractive UI, good price, ease of use, never crashes, and accuracy.

Both questions forced respondents to assign a separate rank to each characteristic, in a strict order of importance – no two qualities could share a value.
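
To make that constraint concrete, here is a minimal sketch (in Python, purely illustrative – it is not Red Gate’s actual survey logic) of what such a forced ranking demands, using the Question 15 qualities:

    # What questions 14 and 15 enforced: every quality gets its own
    # distinct rank from 1 (most important) to 9 (least important).
    # Illustrative sketch only, not Red Gate's survey code.
    QUALITIES = [
        "feature rich", "support and upgrades", "good reputation",
        "speed", "attractive UI", "good price", "ease of use",
        "never crashes", "accuracy",
    ]

    def is_valid_forced_ranking(ranks):
        """Accept only a strict 1-to-9 permutation over all nine qualities."""
        return (set(ranks) == set(QUALITIES)
                and sorted(ranks.values()) == list(range(1, 10)))

    # Rating "speed" and "accuracy" as equally important is simply
    # unrepresentable: the respondent is forced to invent an order.
    print(is_valid_forced_ranking(dict(zip(QUALITIES, range(1, 10)))))  # True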

Leaving out some complimentary remarks about Red Gate products, [Of course – Ed] here is McGinty’s response, edited for length and consistency.

Mark McGinty's contribution:

The problem with these questions, McGinty says, isn’t that they are unclear. It’s that the importance of those characteristics can’t often (maybe can’t ever) be placed in an exact order. And even if they could, that order would be shifting constantly, depending on variations in context that are practically infinite.

Performance and price point are similar in that the importance of either/both tends to be relative to what else is available, and at which end of the extreme they happen to live. The slower something is at completing its job[s], the more important its performance becomes to me, because it causes me to focus on that aspect.

Likewise with price – price is always something of a hinge point in a decision to buy. Does that make it the most important characteristic? Well, maybe – it depends on how much it costs! :-) I once eval'ed this truly stellar protocol analyzer at a client’s request, then just about soiled myself when I found out it cost $25,000 to license! Very difficult for me to picture a positive ROI for that one.

So at a certain level of exorbitance, price can become the most important factor. I think the best case is when something is priced moderately enough so that the specific price of that specific something is rendered fairly unimportant – it’s so much easier that way. In a perfect world price would never be the deciding factor, but this world has constraints, and as such, it’s important.

Performance is special in that it invariably depends on various environmental factors, each of which is subject to ongoing price vs. performance analysis. In the case of SQL Compare, for example, I know I could boost performance by paying for more upstream bandwidth. Case in point, two identical databases: one on my local system took around six seconds to make a snapshot; the other across a WAN link took a minute and a half. My connectivity bill has nothing to do with SQL Compare, but it heavily affects its performance.
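
A rough back-of-the-envelope check shows why the link, not the tool, dominates here. The payload size and link speeds below are illustrative assumptions; only the six-second and ninety-second timings come from McGinty’s anecdote:

    # Back-of-the-envelope: transfer time swamps everything else over
    # a slow link. Payload and link speeds are assumed, not measured.
    PAYLOAD_MB = 10.0        # assumed data pulled per snapshot
    LAN_MBIT_PER_S = 100.0   # assumed local network
    WAN_MBIT_PER_S = 1.0     # assumed WAN uplink

    def transfer_seconds(megabytes, mbit_per_s):
        """Seconds to move the payload over a link of the given speed."""
        return megabytes * 8.0 / mbit_per_s

    print(f"LAN: {transfer_seconds(PAYLOAD_MB, LAN_MBIT_PER_S):.1f} s")  # 0.8 s
    print(f"WAN: {transfer_seconds(PAYLOAD_MB, WAN_MBIT_PER_S):.1f} s")  # 80.0 s

On those assumed numbers the same operation is two orders of magnitude slower before the tool itself does any work at all – which is McGinty’s point.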

Escaping the “lizards”

Ease of use is yet another quagmire – an ambiguous term. I tend to focus on how intuitive a UI is in the context of the problem it’s meant to solve. I also focus on how tedious it becomes to use repeatedly. I prefer complexity that equates to power over mindless “lizards” (my pet name for wizards) that pretend to enable anyone to do anything.

I don’t want a UI that tries to spoon-feed a concept to me every time I use it – like the way the .zip shell extension works in XP: I know a .zip contains files/folders, and I know I’ll need a target folder in which to extract them; why must I endure this relentless UI? To do 25 or more of them in a sitting is easy (intellectually), yet at the same time, excruciating!
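
That tedium, incidentally, is exactly the sort of thing a few lines of script sidestep. A minimal sketch that batch-extracts a folder of archives with no wizard in sight (the "incoming" and "extracted" folder names are hypothetical):

    # Batch-extract every .zip in a folder, no per-archive wizard.
    # Minimal sketch; the source and target paths are hypothetical.
    import zipfile
    from pathlib import Path

    src = Path("incoming")
    dst = Path("extracted")

    for archive in sorted(src.glob("*.zip")):
        target = dst / archive.stem          # one target folder per archive
        target.mkdir(parents=True, exist_ok=True)
        with zipfile.ZipFile(archive) as zf:
            zf.extractall(target)            # no prompting, 25 in a blink
        print(f"extracted {archive.name} -> {target}")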

I like a UI that I can figure out as I use it. If I need help before I can even scratch the surface of some app, I’m usually a little bent by that alone, depending upon how well I understand the concepts involved. Conversely, I have no qualms about using an online reference, provided it’s adequately keyworded and reasonably detailed, to figure out how to do intricate tasks. Sometimes things that aren’t immediately intuitive turn out to be that way after all, given a good explanation.

Ease of use is not a simple characteristic to quantify. Determining whether or not something fits that bill depends heavily on its purpose and intended users. As such, it’s difficult to say how important this is, either before or after purchase – it may turn out to be way more important than anyone anticipated! Bottom line, its place in the order of importance can’t be pinned without some attached context.

Mcginty's final word on surveys

A survey is a set of questions designed to extract my opinions, so my underlying assumptions are: a) there are no wrong answers; and, b) there should be no questions I can’t answer. In practice, it works out that way more often than not.

Question 15 in the Red Gate survey, however, offered me no way to respond that didn’t substantially misrepresent my opinion in some way.

Surveys must allow multiple characteristics to be assigned equal importance. It must be possible for the underlying inference logically drawn from the answer[s] to a question to be entirely positive [or negative]; exclusivity of choices cannot force the answer to defy real life or actual opinion.
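
For contrast, a question design that honours that requirement only needs each quality rated on its own independent scale, so ties are expressible. A minimal sketch (the five-point scale and the example answers are illustrative assumptions):

    # Independent ratings instead of a forced ranking: ties are legal.
    # Illustrative sketch; the scale and example answers are assumptions.
    SCALE = range(1, 6)  # 1 = unimportant ... 5 = essential

    def is_valid_rating(ratings):
        """Each quality is rated on its own; equal importance is fine."""
        return all(r in SCALE for r in ratings.values())

    answers = {
        "good price": 5, "never crashes": 5,   # a legitimate tie at the top
        "ease of use": 4, "speed": 4,
        "attractive UI": 2,
    }
    print(is_valid_rating(answers))  # True, no order was forced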

Bob Cramblitt: So much for the garbage in, garbage out cliché.

It just goes to show, Cramblitt says, that if you engage people with good intention, they might not only forgive your survey sins, but reward you for them. It helps, of course, if one of those people is a Mark McGinty.

Bob Cramblitt is a technology writer and marketing consultant at Red Gate, and has blithely inflicted some bad surveys on unfortunate readers.

Mark McGinty landed his first real programming position in 1990, working on a Department of Defense mission planning app written for Windows 2.11. Since then he’s worked as a contractor on numerous projects (C, C++, VB, HTML, VBS, ECMAScript and T-SQL) for almost as many companies, with some occasional network consulting projects in between. For the last seven years one client has monopolized his time with work that includes optimizing schema and SQL statements.
