Employee Survey Interpretation – 101
An executive of a financial institution was examining his employee survey results. He was very pleased with the findings, especially the comparisons of his institution to the outside world on common items, or norms. As we went over each item his smile grew larger and larger as he saw that his institution was among the top performing organizations, until we got to pay. When people in his organization were asked to rate their pay, they responded in a typical, about-average fashion. He asked, “How can I be such a top performer in so many areas and be merely average on how people rate their pay?” I asked him what pay strategy his organization used, and without skipping a beat he replied, “To pay about average.” He was expecting a halo effect: that exceptional responses in one area of the survey would bleed over into other areas and give more average findings a boost. I indicated that people actually read the items and differentiate among them, scoring average on average-performing items and above average on the strengths of the organization.
Oftentimes, as survey responses come back, we can struggle to interpret the meaning behind the numbers. While not everyone needs to use the same approach to gain insight into employee survey data, here is an outline of the approach I use to obtain a preliminary view of an organization’s findings.
The first thing I like to do is look at the results in an absolute fashion, meaning simply: what is the percent favorable, neutral, and unfavorable? I like to start down at the item level to get a better sense of what is happening in the organization, looking at the details and then working my way up to the dimensions or indices included in the survey. One common misperception about survey findings is that the results are like a school report card, where traditionally a score of 90% or better is an “A”, 80% a “B”, and 70% a “C”. This is not an accurate analogy; a better one is to say that survey results are more like election results: if you get more than 50% you win (in a two-party election).
I first look at the favorable side of the item responses (typically the top 2 boxes on a 5-point scale). 50%–65% favorable I would call “moderately favorable”, 65%–75% I would call “favorable”, and more than 75% I would call “strongly favorable”. I then turn to the other end of the spectrum and look at the responses in the bottom 2 boxes. I get a bit concerned when about 1 out of 5 people (20%) are negative; as the negative grows toward about 1 out of 3 (33%) my concern grows with it, and when it hits 2 out of 5 (40%) the organization needs to very clearly understand the issues that the item’s responses represent. (I find it very helpful to picture the responses in terms of headcount: 1 out of 2 people, for instance, feel this way. Then I ask myself: is that acceptable? What does that mean in terms of organizational performance?) What I literally do as a first pass is take a sheet of paper and draw a line down the middle. On one side I list the items that are at 75% or more favorable, and on the other side I list those that are 25% or more negative. These cut scores are not cast in stone, and I will modify them if the data for the organization is exceptionally high or low.
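The first-pass sort described above can be sketched in a few lines of code. This is a minimal illustration, not a tool the author describes; the item names and percentages are hypothetical, and the cut scores default to the 75%/25% guidelines from the text.

```python
def favorability_label(pct_favorable):
    """Map a percent-favorable score to the descriptive bands above."""
    if pct_favorable > 75:
        return "strongly favorable"
    if pct_favorable >= 65:
        return "favorable"
    if pct_favorable >= 50:
        return "moderately favorable"
    return "below moderate"

def first_pass(items, strength_cut=75, issue_cut=25):
    """Split items into the two columns of the paper exercise:
    strengths (>= strength_cut% favorable) and issues (>= issue_cut% negative)."""
    strengths = [name for name, (fav, neu, unfav) in items.items() if fav >= strength_cut]
    issues = [name for name, (fav, neu, unfav) in items.items() if unfav >= issue_cut]
    return strengths, issues

# Hypothetical results: item -> (% favorable, % neutral, % unfavorable)
results = {
    "I am treated with respect": (82, 10, 8),
    "My pay is fair": (48, 22, 30),
    "I see career opportunities here": (40, 25, 35),
}
strengths, issues = first_pass(results)
```

The cut scores are parameters rather than constants, matching the point that they are not cast in stone and can shift when an organization’s data runs exceptionally high or low.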
I also examine the total pattern of responses across all of the response choices, taking special note of where the middle “neutral” choice is larger than about 25%, and I keep an eye out for a few other patterns. For instance, if you have a roughly equal distribution across the favorable, neutral, and unfavorable choices, you are looking at a response pattern called “mixed”: ask 3 people their opinion on the topic and you get 3 different answers. This could be indicative of a lack of clarity or of very uneven performance on the item. Large neutrals are valid, and they usually do not mean that someone has no opinion; they mean that someone has an opinion and that opinion is neutral. For instance, if you ask a room full of people to rate the taste of vanilla ice cream on a scale of 1–3 (to keep it simple), where 1 is “I love it, it is my favorite flavor”, 3 is “I dislike it, I will do anything to avoid eating it”, and 2 is “not my favorite, but if it is in front of me it will disappear”, the vast majority of people choose 2. Does this mean they have no opinion on the taste of vanilla ice cream? Of course not. They have an opinion, and the opinion is middle of the road, not strongly positive or negative. It is valid, it is real, and it is neither negative nor positive.
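These pattern checks can also be expressed as a simple rule, sketched below. The tolerance for “roughly equal” is my own assumption (10 points), since the text says only “roughly equal”; the 25% neutral threshold comes from the text.

```python
def response_pattern(fav, neu, unfav, neutral_cut=25, spread_tol=10):
    """Classify the shape of an item's response distribution.

    'mixed'         -> favorable/neutral/unfavorable roughly equal
                       (ask 3 people, get 3 different answers)
    'large neutral' -> middle choice above ~25% (a real, valid opinion)
    'typical'       -> everything else
    """
    if max(fav, neu, unfav) - min(fav, neu, unfav) <= spread_tol:
        return "mixed"
    if neu > neutral_cut:
        return "large neutral"
    return "typical"
```

A 35/33/32 split comes back “mixed”, while a 50/30/20 split comes back “large neutral”; neither is treated as missing data.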
Once I have my two lists I take a step back and try to see how the items listed hang together, if at all. For instance, is the positive side of the page dominated by items dealing with treatment or customer focus, while the negative side deals with items surrounding employees’ career opportunities?
Once that is done, I make a second pass through the items, this time doing a comparative analysis. I compare the responses on each item to the larger group of which the organization is a part, so a department within a plant will be contrasted against the results for the entire plant. I will also contrast the findings against external norms and internal benchmarks. Often these surveys are census surveys with very large numbers involved, so statistical significance is not much help when trying to decide whether a group is similar to, more favorable than, or less favorable than the comparison group. If you go back into organizations after surveys have been run over time, interview the people living within them, and ask when the place began to “feel” different, the following pattern tends to emerge. At about a 5-point difference (more or less favorable), people begin to say things like “can’t quite put my finger on it, but we may be improving” (or declining, as the case may be). Five points seems to be a “just noticeable difference”. At 10 points people say things like “things are moving in the right (or wrong) direction”. And at 15 points people use words like “feels completely different”. So again I take out my paper and draw a line down the middle; on one side I list where the organization is 10 points or more higher than the comparison, and on the other side where it is 10 points or more lower. (I realize you can use Excel for these exercises; it is just that old habits die hard.)
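The 5/10/15-point rule of thumb above can be sketched as a small function. The wording of the returned labels is my own paraphrase of the quotes in the text; the thresholds are the ones stated there.

```python
def perceived_difference(org_pct, comparison_pct):
    """Translate a point gap versus a comparison group into the rough
    'felt' difference people report: ~5 points is just noticeable,
    ~10 points reads as clear movement, ~15 points feels completely different."""
    gap = org_pct - comparison_pct
    size = abs(gap)
    direction = "higher" if gap > 0 else "lower"
    if size < 5:
        return "no noticeable difference"
    if size < 10:
        return f"just noticeable ({direction})"
    if size < 15:
        return f"clearly moving ({direction})"
    return f"feels completely different ({direction})"
```

With this in hand, the second paper exercise is just filtering items where the gap is 10 points or more in either direction.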
So now I have a list of key strengths (more than 75% favorable), key issues (more than 25% negative), and where the organization I am reviewing is more favorable and less favorable than the comparison groups in a meaningful way. I repeat these steps at the dimension level. I next put the findings through my exception filters.
One very common finding is that more senior people tend to be more favorable in their survey responses than less senior ones. While not every survey asks for that demographic, for those that do, looking for exceptions to that finding can add insight. (There are some exceptions to this pattern; when asking about quality or customer service, for example, it is not unusual to find the more senior folks to be more critical.) So I look at the responses surrounding strategy, communications, decision making, etc., and I check whether the expected pattern holds. Where it doesn’t hold, say, if middle managers look more like hourly workers on understanding strategy, I make note of it. What I am looking for here are exceptions to typically seen patterns, for these exceptions add insight into what is going on within the organization.
Another exception filter I use is reserved for “zero tolerance items”: health and safety items, harassment or discrimination items, ethics items, and so on; in essence, items where anything less than 100% favorable is just not acceptable. For these I throw out the guidelines listed above and list out any item that falls short of 100%.
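The zero-tolerance filter amounts to one line of logic, shown below with hypothetical item names and scores. Note that it deliberately ignores the 75%/25% cut scores used everywhere else.

```python
def zero_tolerance_flags(pct_favorable, zero_tolerance_items):
    """Flag any zero-tolerance item (safety, harassment, ethics)
    scoring below 100% favorable, regardless of the usual cut scores."""
    return [item for item in zero_tolerance_items if pct_favorable[item] < 100]

# Hypothetical scores: item -> % favorable
scores = {
    "I can report safety concerns without fear": 88,
    "I am treated fairly regardless of background": 100,
}
flags = zero_tolerance_flags(scores, list(scores))
```

An 88% favorable score would look respectable under the normal guidelines, which is exactly why these items need their own filter.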
A third filter has to do with strategy, though in many cases only someone internal to the client company can apply it. Surveys can tell you the state of the environment within the organization at this moment in time. What they cannot tell you is, given the pressures and challenges facing the organization over the next, say, 5 years, where it needs to get to in a strategic sense. The survey is a good jumping-off point, but one role of management is to decide strategically that to compete successfully in their market niche, with their products, this is where we need to be on certain items, maybe items on innovation, customer focus, or responsiveness. An interpretation of the results benefits from taking that into consideration. A closely related filter has to do with what the organization needs to be “the best in the world” at. I would argue that no organization has the ability, the resources, the time, and the funds to be the best in the world at everything. In fact, some of the items on the survey may be somewhat contradictory to be “the best at”. For instance, if you are the most innovative or most responsive, it is difficult or impossible also to be the best value; being the best value requires cost cutting that tends to get in the way of being the most innovative or responsive. So a strategic decision to be made is: what will we do exceptionally well, to be the best in the world at, and what is it okay to be average at?
I then examine the items that my perusal has brought to my attention through another template or framework that I call Message, Performance, and Future. Here is how I define Message, Performance, and Future.
Message: These are items that have to do with how the organization describes itself to employees and their role in it. They deal with clarity regarding what the organization is about, how it will operate, and how each person contributes to delivering on those goals. Is there an inspiring mission? Importantly, are the organizational communications delivering that Message consistently throughout all levels of the organization? Are policies and practices in line with that Message? Is it clear what each person’s role is in support of the Message?
Performance: These are items that deal with people getting what they need, in the broadest sense, to be able to deliver on that Message: to get the job done. Performance should be thought of broadly, including such areas as teamwork, effective management, communications, decision making, training, equipment, resources, processes, and procedures.
Future: These are items that give people a sense of a longer-term benefit to being associated with the organization: that they have a positive Future and a sense of belonging, of being valued by the organization. These are the compelling reasons for them to stick around with the organization for the long term.
The items that I have now placed onto my various lists can fall (sometimes with a bit of gray) into one of these categories. It can be very helpful for an organization to see that their issues are all about Message or are restricted to Performance for instance. It can help point them in the right direction from an action standpoint.
While no two organizations are absolutely identical, no two analyses need be either; but a consistent approach to analysis, even if it is just personal preference, can make the interpretation of your findings a bit easier.