Kevin Keohane – CommScrum London
There’s a probably apocryphal anecdote that in 1977, an internal audit of the Royal Artillery showed that 4 crew were required for certain cannons. A gun captain, someone to aim it, someone to load it, and someone … to hold the horses so they didn’t run off when the thing fired. Needless to say, gun crews were reduced to 3 (100 years after the fact).
Somehow I think the same principle applies to the state of employee research.
Twenty years ago, when internal communications/employee communications was just emerging with people like Roger D’Aprix and John Smythe as champions, one of the big questions was: How do we prove our value? How do we show what we do has merit and adds business value? Can I justify my existence as a professional communicator?
The IABC played an instrumental role in the development of the quantification of the value of employee communications (don’t even bother applying for a Gold Quill if you haven’t measured your project). A whole industry grew out of this with companies like ISR (later bought by Towers Watson) delivering monolithic “annual employee surveys”. Harvard Business Review reset the state of play in 1997 with the seminal Sears Service-Profit Chain case study, and The Gallup Organisation later found great success with “The Gallup 12”.
In short, we were saying “We need numbers to be taken seriously by people who are serious about numbers.” So we went out and found ways of generating data. And we sold the thing in to management. They liked the numbers bit.
And now … employee research is serious, big business. But I think the tail is now wagging the dog. Clients are creaking under the cumbersome, slow process and the masses of marginally useful data generated. Actions are planned around moving “drivers of employee engagement” in the right direction, successes are celebrated … and yet other business KPIs don’t often respond accordingly. In some cases the employee survey has crowded out all other forms of useful tactical measurement.
We’ve lost sight of why we were measuring these things in the first place. The last employee survey I looked at — and its data outputs — were pretty useless when removed from the context of deploying an employee survey to compare to last year’s … some of the questions made no sense at all.
It’s not that measurement isn’t important; I’m just wondering whether we need to really reassess how we go about the whole thing. Are we measuring the right things? Are we measuring them in the right ways? Are we connecting all the HR-ish internal employee stuff well enough to the external brand equity, customer experience and financial performance stuff (beyond The Gallup 12 + Human Sigma connection)? For some reason there always seems to be a yawning chasm between HR-led measurement and Customer-experience-led measurement.
The good news is it seems like the big annual surveys are seen to be long in the tooth and other, more focussed and timely approaches are emerging like employee panels, pulse checks, etc. And it’s refreshing to see people like Mark Schumann acknowledging that there is an issue to be addressed here.
Is it a myth that “What gets measured gets managed?”
Mike Klein – CommScrum European Railpass
It’s not just a case of measurement overkill for overkill’s sake, Kevin… There is a real downside to overmeasurement – it kills off an organization’s willingness to research itself in meaningful and targeted ways, and it often yields measures only tenuously connected to the activities they claim to capture.
I remember a few years ago when I met with a potential client and wanted to do a short, qualitative study to find out what the pressing issues were and the language used to describe them. The response: “Nope! We just did the Q12 survey. That’s all we’re going to do.” Never mind that the Q12 survey had “sweet FA” to do with my proposed project. I was being asked to fly blind for fear of awakening the business’ terminal case of the dreaded “survey fatigue.”
If I were to advise communicators on their most important research needs, I’d make four recommendations, in this order:
1) Measure a direct relationship between what you and your team do and specific actions and savings that more than make up the cost of your team. This should give you enough headroom to do the other things you think need to be done.
2) Measure the stuff you think is important–even if it means using qualitative and small sample stuff to avoid awakening the “survey fatigue” monster
3) Carefully – and judiciously – measure the stuff your core sponsors think is important, but always remind them that you and your team have already “covered their costs”. This may get you some slack, and reduce your business’ fetish for overmeasurement.
4) Make sure you have a seat at the table where the measurement decisions are made. The last thing you want is either to be cut out of measurement activities that are going to happen anyway or, worse, to find your activities measured in a way that doesn’t respect the context in which they take place.
Dan Gray – CommScrum London
You’ve surprised me, Kevin – this is a lot more polite than I was expecting!
In response to your specific question at the end, I don’t think it is a myth that what gets measured gets managed, nor should it be. But, as you say, you’ve got to be measuring the right things in the first place.
I seem to remember reading something from Melcrum recently – a piece with Mars’ global engagement director – who described how their use of the Gallup 12 survey had led to some appalling behaviours by managers, essentially blackmailing their subordinates to give high scores. ‘Nuff said really.
That’s what can happen when you elevate engagement to an end in itself, to be measured in its own right, rather than what it is – a delivery mechanism; a means to delivering improved business performance.
That’s the biggest problem with the mindset of the measurement-mad, ‘Human Capitalist’ crowd, as you so brilliantly characterised them in your IC taxonomy. It’s as if they took one look at the Service-Profit Chain and thought, “That’s it. Case proven. As long as we can show that people are ‘engaged’, everything else can be taken as read.”
So an employee has a best friend at work. So what? It might give you a broad brush indicator of an employee’s happiness and propensity for greater discretionary effort, but it tells you nothing about where that effort is directed and whether that’s contributing directly to the achievement of the organisation’s strategy and goals.
That’s what the C-suite wants and needs to know about – whether or not it’s creating value for the business – so, ultimately, if it isn’t linked to specific strategic objectives, then why are you measuring it?
Lindsay Uittenbogaard – Commscrumming from Canada this time 🙂
Agree with all of these measurement traps and opportunities. I’d just like to add that I feel it is the communicator’s job to keep folk anchored to the fact that not everything can be measured, and so measurement has its place. I once heard a senior business leader saying “If you can’t measure it, don’t do it.” SO irritating.
If communication is fundamentally about two things – avoiding disconnects and misunderstandings – then how do you really know when that happened well, or the cost of the loss when it didn’t (compared with the cost of the communication effort involved in trying to bridge those gaps)? More importantly, when communication works well, how do you measure the costs that were saved as a result of disconnects or misunderstandings that were avoided?
Of course, because we are individual people with different perspectives, interests, cultures, priorities and working hours / locations, there will always be varying degrees of communication success, much of which can be attributed to levels of attentiveness and opportunity on the part of the receivers / decoders / responders rather than the delivery of the communication itself.
I think we’ve got to accept the vague nature of our work, use data as a guide, and encourage people to ‘believe’ in common-sense lines of communication logic rather than hinging their support on hard data.