You really should probe me more...

I recently had cause to contact my corporate credit card provider due to a late charge I incurred. No names mentioned, but they're pretty big in the corporate card business.

I won't even begin to go into how the late charge occurred, as it stemmed from a whole load of crazy processes for claiming back expenses - I'll save that piece of analysis for another day.

What surprised me about this particular interaction was that there were no questions asked (by the agent). Within 10 seconds the representative had cancelled the objectionable charge. Brilliant!

If there was any complaint to be levelled, it was that he didn't really explain the implications (if any) of doing so - for example, I had been told earlier that this goodwill gesture was a one-time allowance. I.e. don't make a habit of incurring late charges. (I don't intend to. Whether the expenses claim process will live up to this is another matter.)

What pleased me even more was the customer survey form that came through afterwards.

It contained the usual stuff about how I rated the transaction. That, of course, is the bit that is least useful for driving out failure in the organisation. If you only ask how the transaction with the agent went, then what you find out is that they are very good at handling failure - but you never find out why the failure is occurring.

So, I was even more pleased to see that the survey also asked me how many attempts it had taken to get my enquiry resolved. This is a good start, because the organisation at least has the chance of learning a few new things:

1) That this agent actually, finally, SOLVED the problem for me, where the previous contacts had not. He deserves special credit for that. Solve-rate is a really key concept for organisations, especially contact centres, for understanding customer experience, failure modes and performance. Few measure it. (A sketch of how one might compute it follows this list.)

2) By understanding that this call was part of a sequence of interactions, the organisation can see that failure occurred somewhere else in the process previously. They can begin to look at the root causes of that failure and start to address it.
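As an aside, solve-rate is cheap to compute once you can tie repeat contacts about the same enquiry together. Here is a minimal sketch - the contact log, the issue_id field and the "last contact in the sequence is the one that solved it" proxy are all my own illustrative assumptions, not anything this organisation (or any real contact centre platform) necessarily uses:

```python
from dataclasses import dataclass

@dataclass
class Contact:
    issue_id: str   # ties repeat contacts about the same enquiry together
    agent_id: str
    timestamp: int  # orders contacts within an issue

def solve_rate(contacts: list[Contact]) -> dict[str, float]:
    # Proxy for "solved": the final contact for an issue is the one after
    # which the customer never had to ring back.
    last: dict[str, Contact] = {}
    for c in contacts:
        if c.issue_id not in last or c.timestamp > last[c.issue_id].timestamp:
            last[c.issue_id] = c

    handled: dict[str, int] = {}  # contacts each agent took
    solved: dict[str, int] = {}   # contacts that ended an issue's sequence
    for c in contacts:
        handled[c.agent_id] = handled.get(c.agent_id, 0) + 1
        if last[c.issue_id] is c:
            solved[c.agent_id] = solved.get(c.agent_id, 0) + 1

    return {agent: solved.get(agent, 0) / n for agent, n in handled.items()}

# The late-charge story above: two apparently satisfactory calls,
# but only the second one actually ended the sequence.
log = [
    Contact("late-charge", "agent-1", timestamp=1),
    Contact("late-charge", "agent-2", timestamp=2),
]
print(solve_rate(log))  # {'agent-1': 0.0, 'agent-2': 1.0}
```

Even this naive version surfaces the point: both calls would score well on a per-transaction survey, yet only one of them solved anything. Note, though, that a low solve-rate on its own still can't distinguish agent failure from process failure - which is exactly where this survey falls down next.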

However, where this survey fell down, and really missed a trick, was in how it asked me about that sequence of events. My last call occurred not because the previous call was unsatisfactory or failed to resolve my problem - indeed, to all intents and purposes, the original call did solve it. So, on the surface, it looks like I had two completely satisfactory calls with the organisation. How, then, can there be any sense of failure? Surely, if we look at this chain of interactions, it will come out as first class?

The reason is that the first call made a promise about something that would happen - and then it didn't happen. The first call apparently resolved the problem, but something broke afterwards and the promise was never followed through.

Sadly, said organisation are going to struggle to decode this, because they haven't asked whether the failure was in the ability of the representative to solve my problem, or in a behind-the-scenes process issue. That one single question could tell them whether their efforts need to go into improving the performance of their individual agents, or into fixing system and process failings.

This is another classic case of "you get what you measure". (I.e. the world takes on the shape of the window you view it through.) Because they are measuring my interaction in terms of the experience with the representative, my survey response is going to imply that the first representative did not perform satisfactorily. That is in fact untrue: the first agent was superb too. It was something unseen in the background that went wrong.

So, sadly, by questioning me in the way they have, they've set themselves off in the wrong direction, looking at agent performance rather than systemic and process failure. Shame - it could have been so easy.

Let that be a lesson to all who design customer surveys.