Archive for the ‘Perception’ Category
Fifty years ago, in 1964, the US Civil Rights Act was signed into law by President Lyndon B. Johnson. The purpose of the law was to make discrimination based on race, religion, color, national origin, and gender illegal. Other protected classes were added over time, such as age in 1967. Beyond simply making discrimination illegal, the legislation was attempting a feat of social engineering: changing behavior. And while one could argue that tremendous strides have been made, make no mistake about it, there is still plenty of discrimination going on today.
But questions arise for those of us who work in the space of behaviors and attitudes. Can attitudes and opinions, can thought patterns and morality, be created by legislation? Do legislation and prosecution for violations of that legislation create morality, or only an illusion of morality? And if people are behaving according to moral principles, but in their hearts feel differently, do we care?
While we could argue endlessly about whose standards of morality, or which cultures and norms, we will accept as “moral”, putting all that aside for a moment, the answer from a social engineering perspective is very clearly that legislation can change behaviors, and over time those behavior changes will result in attitudinal shifts. In other words, legislation does have the power to affect behaviors, and partly through the power of cognitive dissonance, partly through the power of modeling others in the community, over time thought patterns can be altered. Perhaps not for everyone, and not in every instance, but changing behaviors can lead to attitudinal shifts in a large population.
The attempt to legislate behavior is nothing new, as there were many ancient legal codes aimed at instructing people how to live their lives in an attempt to instill order in society. One well known early attempt at legislating morality occurred under the Babylonian ruler Hammurabi about 3800 years ago. The Code of Hammurabi consisted of 282 laws by which people were to live their lives. Hammurabi’s code was the source of the saying “an eye for an eye”. (“If a man put out the eye of another man, his eye shall be put out.”) And it is likely the earliest instance of medical reimbursement legislation. (“If a physician make a large incision with an operating knife and cure it, or if he open a tumor (over the eye) with an operating knife, and saves the eye, he shall receive ten shekels in money”). But medical malpractice carried stiff penalties under Hammurabi. (“If a physician make a large incision with the operating knife, and kill him, or open a tumor with the operating knife, and cut out the eye, his hands shall be cut off”).
An even older set of laws, originating about 300 years before Hammurabi, was created by the king of Ur and called the code of Ur-Nammu. Some of those very ancient laws we would recognize today (“If a man commits a murder, that man must be killed”). And some would be somewhat foreign to us today (“If a man is accused of sorcery he must undergo ordeal by water; if he is proven innocent, his accuser must pay 3 shekels”).
And almost 1,000 years later, Moses brought down a set of laws from Mt. Sinai which also was aimed at describing to people how they were expected to behave and live their lives, a moral code (e.g. “You shall not murder”).
While there were certainly differences among these legal codes, there were also some very interesting similarities. For instance, look across these three sets of moral codes, originating centuries apart, at what they have to say about bearing false witness.
- Ur-Nammu (4100 years ago) – “If a man appeared as a witness, and was shown to be a perjurer, he must pay fifteen shekels of silver.”
- Hammurabi (3800 years ago) – “If any one bring an accusation of any crime before the elders, and does not prove what he has charged, he shall, if it be a capital offense charged, be put to death.”
- Moses (approx. 3000 years ago) – “You shall not bear false witness against your neighbor.”
- And today in the USA (18 U.S. Code § 1621) perjury is still a crime – “…is guilty of perjury and shall, except as otherwise expressly provided by law, be fined under this title or imprisoned not more than five years, or both.”
Apparently bearing false witness has been an ongoing problem since the dawn of civilization, or there would have been no need to call it out specifically in each of these moral codes.
More recently the case for legislating morality can be seen with the advent of laws in favor of marriage equality and other equal benefits for the LGBTQ community. In this particular case it seems that the attitudes of the population in general were ahead, and perhaps still are ahead of those in various legislative bodies in the USA. There are of course segments of the population who vehemently oppose equal rights, just as there were those who supported Jim Crow laws in the south. What will likely happen to that group? As LGBTQ rights become more widespread, and people/states are held accountable for violation of those rights, the act of behaving in a fashion supportive of those rights will be seen as:
- normal – people will want to be similar (including in attitudes) to the vast majority of people they are surrounded by (paraphrasing Tversky & Kahneman 1974: “People will maintain a belief in a position when surrounded by a community of like-minded believers”).
And again, potentially not everyone’s beliefs will positively shift in every instance (even among those suffering from cognitive dissonance), but across the larger population continuing shifts in attitudes could be measured.
As an aside, in the world of survey research, once we have reached a 51% response rate, we use this notion to our advantage to drive additional responses, sending out reminders along the lines of, “The majority of people have completed the survey; don’t miss this opportunity to voice your thoughts.” It works.
Today the US military is struggling with the issue of sexual harassment in its ranks. The military code (e.g. Article 93, regarding cruelty and maltreatment) has various statutes in place by which personnel can face court-martial trials for sexual harassment offenses. But the rules have rarely been enforced with harsh measures, especially for those with higher rank. Can the military legislate attitudinal shifts among service members? Can they eliminate sexual harassment by simply telling people “don’t do it”? That is a necessary step. And certainly enforcement must be more uniform across the military and the legislation must be seen as having some teeth. But the military must also build standards of behavior that become “normal” and which don’t include sexual harassment behaviors. Once the behaviors are in place attitudes can shift. If all you do is work on attitudes but the old behavioral standards are still there, the attitude shifts will not “take”.
Legislating morality is possible, but over the long term true shifts in attitudes can only happen if they are supported by the corresponding behaviors.
© 2014 by Jeffrey M. Saltzman. All rights reserved.
Visit OV: www.orgvitality.com
IBM factory employees, more than 1,000 of them, are on strike in China. These are workers associated with a piece of IBM that made servers, a business recently sold to another company, Lenovo, for 2.3 billion dollars. When you hear about a story like this, your mind is immediately drawn to working conditions or salary as a potential point of contention and the reason why the workers may have walked off the job. But as the New York Times reports (3/6/2014), the workers on strike were carrying banners with phrases such as “Workers are not a commodity” and “Give us back our respect.” When I first saw this I was a little surprised. But then as I thought about it, it made complete sense.
Pay, benefits and physical conditions are often the driving force of labor unrest when they are felt to be poor or substandard. And there are certainly plenty of workers in China who still toil under horrible conditions with low pay. However, as the article points out, in the last decade the average worker salary in China has risen fivefold. And as Chinese population growth continues to slow, workers are not queuing up for each opening like they once were, requiring jobs to be more attractive to entice people. The upward tick in pay and conditions has likely removed those issues or shifted them to a lower priority on the typical grievance list. In standard Maslow-hierarchy fashion, the more middle-class standard of living that the workers now enjoy has shifted their concerns from physical needs to higher-order needs such as respectful treatment.
Interestingly in the USA, up until fairly recently, when workers went on strike it was often about treatment. Then with the decline of pension plans, cuts to wages, as well as an increasing health care burden being shouldered by employees, labor unrest began to turn back towards the basics of pay and benefits. A few years ago, in 2005, when subway workers in NYC walked off the job it was all about pensions and health care as the union wanted to resist “roll backs” in the standards of pay and benefits that their members had previously received. Union workers, by the way, often rate pay and benefits more favorably on employee surveys because in general unions have been successful at getting higher levels of pay and benefits for their workers. And when you pay a worker more they rate that pay more favorably. Go figure.
I am not going to argue that pay and benefits are no longer an issue for workers around the world. In many places around the world there has been a “race-to-the-bottom” with transnational corporations searching out the lowest costs of labor to build products or provide services. And their freedom to allocate resources across borders has made that search global in nature. Workers, with more restrictive mobility to move across national boundaries, cannot relocate as easily to areas with labor shortages where pay rates would presumably be higher. This imbalance, where corporations can relocate more easily than workers, creates only a semblance of a free market with the advantage belonging to the corporations. Certainly in the USA with stagnant wages, pay and benefits are once again becoming more and more of an issue. Global Employment Trends, put out by the International Labor Organization states that in 2011 approximately 30% of the world’s workforce earned less than $2 per day and 14% earned less than $1.25 per day, defined as extreme poverty. So pay and benefits are going to stay a global issue for the foreseeable future.
When a worker is walking a picket line with a sign that says “Workers are not a commodity” what exactly do they mean? A fungible worker is someone who can be substituted by another worker with the same skill set and ability. With fungible workers it does not matter if I have worker A or worker B; they are equivalent and can both do the job as assigned. A fungible worker is a commodity. If they leave they are easily replaced, and they are laid off or rehired as conditions require. If something is a commodity, like milk or a loaf of bread, I want to buy it at the lowest possible price. People, though, don’t want to be treated as though they are nothing more than a gallon of milk.
A while back I was hired by a Fortune 50 firm that was rapidly expanding through Asia to help them understand the differing drivers of what would make them an “employer of choice” in various Asian countries. Since they were a USA-headquartered company, we used the USA employees as a control group and looked for differences by country. While there were slight differences country by country (mostly in dimension rank order and not dimension inclusion), the largest difference was found in China, and it centered on the notion of the company becoming part of the employee’s “family”, along with a greater sense of community, of all being in this together, than was found elsewhere. So beyond being engaged with their work, these employees in China felt that the workplace was part of their family, part of how they defined themselves. So here you have IBM selling off part of their family to another company, and the employees did not like it.
This notion of “company as family” is not restricted to China. In many organizations in the USA, especially small ones, perhaps still run by founders, you get employees stating that the company feels like family. A sense of we are all in this together. And as the company grows, and perhaps founders retire, they begin to lose that sense and it is quite common to have employees complain that what is wrong with the company is the loss of this sense of family.
When the IBM workers on the picket line in China ask for their respect back, they are quite possibly asking for their family back.
“ish” seems to be gaining in popularity. At least it appears that way to me when I occasionally hear my high schooler chatting with her friends. Cool, groovy, far-out, rad, are out and “ish” seems to be in, along with “literally”. Not that ish is new. “ish” has, in the distant past of parental youth, meant “approximately”. “When would you like dinner?” “Seven-ish”, has been around for a long time. But “ish” is now being attached to all sorts of words to mean “sort of” or is even being used as a standalone word. “Did you get your homework done?” “Yes-ish”. “How did today go at school?” “ish.” If I respond with “Do you literally mean ish?” I am the recipient of the rolling eyeball “you are so out of touch” look. “ish”, one is left wondering exactly what that means, though the basic gist is certainly there.
In science and organizational decision-making we try to be as “un-ish” as we possibly can be. We want to manage, make decisions, prove our point, and develop our facts by relying on incontrovertible proof, on evidence that the course of action we select or the points we are trying to prove simply cannot be denied. Except that is not how humans often draw conclusions. In one study, a friend of mine tracked HR professionals’ “best outcome” and “worst outcome” decisions and found that each contained a “leap-of-faith”. Meaning that even after all the facts were assembled and all the evidence was in, a leap-of-faith was required to make a decision. Mostly because it is impossible to have complete knowledge; in the absence of omniscience, a leap-of-faith is needed to get the job done, or you would forever be analyzing and never taking action.
In research, one study builds on another. A follow-up study may contradict the original, but over a period of time, slowly the preponderance of evidence builds, pointing the way to the best course of action, or uncovering a “truth” by which the world operates. This process can take time. Remember, for decades cigarette makers denied that smoking cigarettes caused any health issues, and they commissioned their own studies to prove that point. This last week CVS, a major drug store chain, announced that it would stop selling cigarettes, and the only analysis to be found was whether the approximately 2 billion dollars in lost business would be made up by a positive shift in CVS’s reputation. No one, at least in the news reports I saw, disputed the science anymore that cigarettes are bad for your health.
Making sense of the world, though, is quite different from understanding the world, and when people’s understanding is incomplete or based on a shaky foundation, their interpretations of what is going on can go astray. The Greeks, for instance, knew, and it made perfect sense to them, that thunder and lightning were caused by Zeus, the king of their gods. Knowing what we now know, it may be difficult to understand how the ancient Greeks really felt about that. But it was not some cute little story that they used at bedtime for the children, while the adults winked at each other. This is what they truly believed: when it thundered, Zeus was speaking. To them this interpretation of the world made sense, for it explained events as they experienced them, even though from our perspective they did not understand the way the world really worked. Today we talk about these Greek beliefs as mythology. One can’t help but wonder which of today’s beliefs will be thought of as mythology a thousand or so years from now.
Each human develops their own mythology of the way the world works and on April 22nd I am going to be conducting a complimentary webinar on “People at Work – Myths vs. Realities”. Feel free to register and join me for what is hopefully going to be an interesting-ish conversation.
Also on February 18th, Scott Brooks and I will be conducting a complimentary webinar on “Why Employee Engagement is not Strategic” and we both would love to see you there.
A friend recently sent me a video which illustrates some new experiments going on in an area called experimental philosophy. Like the well-publicized experiments in behavioral economics, the vignettes bring us to the conclusion that humans are not always rational creatures and that we often think in a contradictory fashion. The video shows two vignettes of a business manager who doesn’t care about the environment, all he cares about is maximizing profit. In the first vignette he states that he doesn’t care about the environment and takes action that causes environmental harm, while maximizing profit. He is then blamed by viewers of the vignette for causing intentional environmental harm. In the second he states that he doesn’t care about the environment then takes action which improves the environment, while maximizing profit, for which he gets no credit. In viewing the vignettes, I found myself drawn into the same judgments as the other viewers, you just can’t help it. Because he is only driven by maximizing profit, when the businessman states he doesn’t care about the environment and then causes harm, your brain readily ascribes intentional fault. But when the businessman states that he doesn’t care about the environment, only profit, and then does environmental good, your brain refuses to ascribe any credit. In this case the environmental effect is viewed as unintentional. You get blamed when you knowingly and uncaringly cause harm, but you do not get credit when you knowingly (but uncaringly) do good. There was no intentionality to do good, hence no credit is ascribed, even though good was done.
The kind of contradiction that this points to is the kind of contradiction that all humans are capable of making. It is part of our shared genetic heritage and it is how all of our brains are wired. For instance, if you ask an abortion opponent about science, it is quite easy to create the following situation. Paint a picture about a scientific investigation that has unequivocal findings supporting the abortion opponent’s point of view and they will state that abortion supporters should change their point of view due to the scientific research. But if a scientific investigation unequivocally proves the fallacy of their position the abortion opponent will reject the science. They will accept the science only when it supports their previously held notions. Conservatives who form a higher proportion of abortion opponents (and paradoxically death penalty supporters) are not the only ones who fall into this cognitive trap.
Those who support the notion that we are in the middle of major climate change, which human behavior is significantly impacting, will often point to the reams of scientific evidence supporting that fact. Opponents, who state that humans are not altering the environment, will find the one or two outlier studies and hold them up as somehow equivalent to the preponderance of evidence, the thousands of studies that state otherwise. People who support implementing environmental regulation, who are often more liberal, scoff at the unscientific notions held by their opponents, stating that science should take precedence.
Yet, those same liberals may hold onto the notion that Genetically Modified Organisms (GMOs), should be kept out of our food chain because of the harm they might do, that somehow they are unhealthy. Bending to this pressure, just this week the maker of Cheerios has announced that no GMOs will be used in the manufacture of its product.
There is no scientific evidence that GMOs have any detrimental effect on human beings. Humans have made modifying their food sources, modifications that require genetic changes, into an ongoing art form which we have been practicing for tens of thousands of years. In fact you would be very hard pressed to find any food whatsoever that you put into your mouth that has not been genetically modified from its original state by humans over time. That natural organic turkey you just ate bears no resemblance to its wild ancestors; neither do the tomatoes, potatoes, carrots, celery, onions, apples, bread made from wheat, or anything else you might have eaten along with it. GMOs have the potential to deliver critical vitamins to poverty-stricken areas, to cure or prevent disease, and to create drought- and disease-resistant crops that could help billions on this planet who cannot take food for granted. So how can some scoff at climate change opponents for ignoring science, when they themselves ignore the science surrounding GMOs? We humans are very good at rationalizing and dealing with our mental contradictions.
Is it possible to teach people to think more rationally? Not to fall victim to their mental traps? The answer appears to be yes. The research says that by first learning about these mental traps, studying them and their underpinnings, and then practicing decision-making, thinking patterns and decision-making can be improved. But stubbornly held notions are not going away any time soon.
There has been much coverage in the news media about “profiling”, especially in NYC during this mayoral primary campaign. Much of it has been critical of Mayor Bloomberg and the NYC police department which utilizes a stop-and-frisk program in an attempt to have a positive impact on crime in the city. Each candidate during this primary season is staking out a pro- or con- position on stop-and-frisk. A recent federal court ruling has stated that the NYC program violates people’s rights against unreasonable search. “The judge ruled that police officers have for years been systematically stopping innocent people in the street without any objective reason to suspect them of wrongdoing” (August 12, 2013, New York Times). The stop-and-frisk program is said to be based on profiling, that people who fit certain characteristics or profiles are picked out by the police department and “selected” for participation in the program. The huge number of people who are stopped suggests that the program is in need of an overhaul. It appears to be not so much an exercise in profiling as it is an exercise in stopping a large number of people and seeing what turns up, and apparently a federal judge agrees. But as a consequence of this and other bad publicity over the years profiling has been cast as evil.
If profiling is evil, that means we are all evil. Profiling is a built-in feature of being human. Each of us uses profiling every single day to assist with quickly categorizing the vast quantities of information which impinge upon us and to help us make decisions that range from very simple to quite complex. Whether an application of profiling is good or evil is based upon how it is used and the characteristics utilized in creating the profile. Sigmund Freud, for instance, stated, “When you meet a human being, the first distinction you make is ‘male or female?’ and you are accustomed to making that distinction with unhesitating certainty.” You make that distinction with unhesitating certainty because of a profile you carry around regarding which body shapes, facial characteristics, etc. are classified as female and which as male. Taken in total, those features and characteristics represent a profile of maleness and femaleness.
Freud’s view of profiling maleness and femaleness tends towards physical characteristics, but there are other categories, such as demographics and behaviors which are also used to build profiles. Levitt and Dubner in Super Freakonomics describe a profile that was created in the UK to help pick out potential Islamic fundamentalist terrorists. Now, if you are trying to pick out potential Islamic fundamentalist terrorists, chances are that they won’t have a name like Buffy Willis, or Bruce Fleming, and in fact the research shows that Muslim names predominate among Islamic terrorists (no surprise), just as Irish names predominated among Irish terrorists who were fighting the British previously.
That particular characteristic, name type, is very superficial and can be very misleading as a profiling tool, as it tends to sweep into its net a huge number of false positives, those with Muslim names who are not terrorists (the vast majority). The researcher that Dubner and Levitt cite ended up creating a terrorist profile with predictive ability (being able to pick a terrorist out of the crowd) by adding, to the demographic characteristics, a behavioral characteristic which they cannot disclose. At the risk of getting a little technical here, they state that most of the profile variables being used to predict who is a potential terrorist and who is not tended to be binary (you either have that characteristic or you don’t), and a shortcoming of binary variables is that they don’t have much statistical variance (the degree to which, over a large number of people, you will get a wide range of responses). When you don’t have much statistical variance you lose predictive power. The behavioral variable that enabled more accurate profiling of terrorists was on a continuum, with higher levels of this particular behavior being much more common among those with terrorist tendencies. This gradation of responses, from lower levels being less likely to indicate a terrorist to higher levels being increasingly likely, has a much greater amount of variance and hence predictive power.
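The statistical point can be sketched with a toy simulation (entirely hypothetical data, not the UK study): take one graded behavioral score, coarsen it into a yes/no variable, and compare how well each version of the same underlying trait correlates with a rare outcome. Dichotomizing throws away variance, and with it predictive power.

```python
import random


def pearson(xs, ys):
    """Pearson correlation between two equal-length lists."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / n
    sx = (sum((x - mx) ** 2 for x in xs) / n) ** 0.5
    sy = (sum((y - my) ** 2 for y in ys) / n) ** 0.5
    return cov / (sx * sy)


random.seed(0)
n = 20_000

graded, binary, outcome = [], [], []
for _ in range(n):
    score = random.gauss(0, 1)             # behavior measured on a continuum
    graded.append(score)
    binary.append(1 if score > 0 else 0)   # the SAME trait, coarsened to yes/no
    # Rare outcome driven by the behavior plus noise: risk rises with the score.
    outcome.append(1 if score + random.gauss(0, 1) > 3 else 0)

r_graded = pearson(graded, outcome)
r_binary = pearson(binary, outcome)

# The graded version of the same underlying trait predicts better,
# because the binary version has discarded most of the variance.
print(f"graded r = {r_graded:.3f}, binary r = {r_binary:.3f}")
```

For a normally distributed trait split at its median, dichotomization attenuates the correlation by roughly a factor of 0.8, which is why the graded variable wins here regardless of the particular random seed.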
On a more mundane level, one research project we conducted for a high technology firm sought to define what profile would be seen between those sales people who made the President’s Club by exceeding their beginning of the year quota by a given amount, vs. those sales people who fell short. By far, the best differentiator was not gender or tenure or some other demographic variable, but rather a behavioral characteristic, the degree to which the sales person sold in a collaborative fashion. Selling in a collaborative fashion is not binary but rather exists along a continuum and those sales people with higher and higher amounts of this particular characteristic, in this company’s environment, were more successful. The ability to sell collaboratively has all sorts of selection, training, performance management, technology and support systems implications which are then potentially able to be fine-tuned by a client company.
Here is another example of how behavioral variables on a continuum are better than the binary ones commonly used in profiling. If you have ever passed through airport security in the USA vs. Israel you will notice a distinct difference in how passengers are screened. In the USA you are essentially screened for objects, metal objects or liquids that you are not supposed to carry into the airline gate area. You may be asked one or two questions about whether someone has given you anything to carry for them or where your bags have been, but the screening process is about objects (either you have them or you don’t, a binary variable) and not about behaviors. In Israel the focus of the screening is on your behaviors: why are you there, where are you going, what were you doing, tell me about your family, do you speak Hebrew, where did you learn it, etc. The screening process is about your behaviors, past, present and future, which are scored and very hard to fabricate out of whole cloth. The risk score determines if the person needs additional follow-up screening. Israel’s security screening track record is exemplary.
With all the commotion about profiling these days, it is clear why some would desire for profiling to just go away, viewing any kind of profiling as inherently bad. But in reality we profile constantly and it is not going to go away. Profiles you have in your head of store characteristics, which may determine where you will shop according to the type, quantity, quality and costs of the goods, are common. Which of your neighbors will end up being your close friends, based upon profiles of shared interests and characteristics, happens all the time. A doctor diagnoses a patient by examining and logically thinking through profiles of symptoms commonly seen in various illnesses. The list goes on and on. The issues around profiling are not about it being inherently good or evil, but rather how we choose to implement it and the validity we are able to ascribe to it in day-to-day practice.
© 2013 by Jeffrey M. Saltzman. All rights reserved.
Visit OV: www.orgvitality.com
Daniel Kahneman coined the acronym WYSIATI, an abbreviation for “What you see is all there is”. It is one of the human biases that he explores when he describes how human decision-making is not entirely based on rational thought. Traditionally, economists believed in the human being as a rational thinker, that decisions and judgments would be carefully weighed before being taken. And much of traditional economic theory is based on that notion. Dr. Kahneman’s life’s work (along with his co-author Dr. Amos Tversky) explodes that notion and describes many of the shortcomings of human decision-making. He found that many human decisions rely on automatic or knee-jerk reactions, rather than deliberative thought. And these automatic reactions (he calls them System 1 thinking) are based on heuristics, or rules of thumb, that we develop or have hard-wired into our brains. System 1 thinking is very useful in that it can help the individual deal with the onslaught of information that impinges on us each and every day, but the risk comes when a decision one faces should be thought through rather than based on a knee-jerk reaction.
System 1 decisions are easy, they are comfortable, and unfortunately they can also be wrong. But wrong in the sense that if one learned how to take a step back and allow for more deliberative thought prior to the decision, some of these wrong decisions or judgments could be avoided. A simple example from Dr. Kahneman’s book “Thinking Fast and Slow” will illustrate the point.
“A bat and a ball together cost $1.10. The bat costs $1.00 more than the ball. How much does the ball cost?” Fifty percent of the students who were posed this simple question, students attending either Harvard or Yale, got it wrong. Eighty percent of the students at other universities who were asked this question got it wrong. This is System 1 thinking at its finest and most error-prone. It is fast, easy, comfortable; it lets you come up with a quick answer or decision, but one that is likely wrong. Knowing who reads this blog, I’ll let you figure out the answer yourself.
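For readers who want to check their answer after working it out on their own (spoiler in the code below), the puzzle is just a pair of simultaneous equations, and the check takes a few lines:

```python
# Bat-and-ball check -- work it out yourself first, spoiler below.
# Let b be the ball's price. The bat costs b + 1.00, and together they
# cost 1.10, so b + (b + 1.00) = 1.10, which gives 2b = 0.10.
ball = (1.10 - 1.00) / 2
bat = ball + 1.00

assert abs(ball + bat - 1.10) < 1e-9   # together they cost $1.10
assert abs(bat - ball - 1.00) < 1e-9   # the bat costs $1.00 more
# The intuitive System 1 answer (ball = $0.10, bat = $1.00) passes the
# first check but fails the second: the bat would cost only $0.90 more.

print(f"ball = ${ball:.2f}, bat = ${bat:.2f}")
```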
WYSIATI is the notion that we form impressions and judgments based on the information that is available to us. For instance we form impressions about people within a few seconds of meeting them. In fact, it has been documented that without careful training interviewers who are screening job applicants will come to a conclusion about the applicant within about 30 seconds of beginning the interview. And when tested these initial notions are often wrong. Interviewers who are trained to withhold judgment about someone do a better job at applicant screening, and the longer that judgment is delayed the better the decision.
This notion of course flies in the face of Malcolm Gladwell’s best seller “Blink”, in which he talks about the wonders of humans’ ability to come to decisions instantly, and a whole generation of managers has eagerly embraced his beliefs – including a few CEOs I know. Why? It is easy, it is intuitive, it is comfortable, and it plays to the notion that I am competent and confident in my work. The only problem is that when put to serious scientific scrutiny, it is often wrong.
A few months ago I introduced this concept to an HR group I was speaking to. I explained how untrained HR people, in a rush to judgment, will jump to conclusions about someone, perhaps too rapidly. One 30-year HR veteran insisted that this may be all well and good but that it of course did not apply to her. After all, with her 30 years of experience her rush to judgment was of course going to be accurate. She “just knew” who was going to be a good employee. I let it drop, and I think I was labeled a trouble-maker by the group. That is a label I can embrace.
We tend to develop stories based on the information at hand, piecing the information we do have into a narrative, often without asking the question, “What information am I missing?” In the area of survey research I have often seen researchers confidently presenting the “drivers” of one type of behavior or another, say for instance the drivers of employee engagement. But since the analysis is based on a “within” survey design, the only drivers that can possibly emerge are those that were asked about in the survey in the first place. So the researcher, in designing the 30-50 item survey, is limiting the drivers to the items they decided to include. The researcher likely had in their head a model of what is important in driving engagement when designing the questionnaire, a model that was itself built from another questionnaire of 30-50 items or fewer. It becomes a tautology: it is true because I tested it and it came out as true, but the only thing I tested is what I already believed.
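The tautology is easy to demonstrate: a driver analysis can only rank the items it was given. A minimal Python sketch (the item names, the simulated responses, and the correlation-based ranking are all invented here for illustration, not taken from any real survey):

```python
import random

random.seed(0)

# Hypothetical survey: five items the researcher chose to ask about.
# Any real driver the researcher did NOT ask about is invisible by construction.
ITEMS = ["manager_support", "pay", "growth", "recognition", "workload"]

n = 200
data = {item: [random.random() for _ in range(n)] for item in ITEMS}
# Simulated engagement score, driven mostly by the "growth" item plus noise.
engagement = [0.8 * data["growth"][i] + 0.2 * random.random() for i in range(n)]

def corr(x, y):
    """Pearson correlation of two equal-length lists."""
    mx, my = sum(x) / len(x), sum(y) / len(y)
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return sxy / (sx * sy)

# Rank "drivers" of engagement by correlation strength.
drivers = sorted(ITEMS, key=lambda it: corr(data[it], engagement), reverse=True)
print(drivers)
```

Whatever the data look like, `drivers` is a reordering of `ITEMS`: the analysis confirms the researcher’s model because the model dictated what was measured.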
There are techniques that can be applied that lead towards more deliberative and better decision-making processes. If you were walking briskly down a busy road and someone asked you “how much is 17 x 24?” you would do what every other human would do to figure that out: you would stop and think.
“Recognition is based on knowledge, familiarity is based on feeling”
Oliver Sacks – The Mind’s Eye
I was reading Dr. Sacks’s latest book over a recent vacation, and when I got to this sentence I had to pause for a while and really think about it. “Recognition is based on knowledge, familiarity is based on feeling.” Recognition in this context means recognizing a location, a person or an object. Some people have trouble, in varying degrees, recognizing the faces of people they know. The inability to recognize the face of someone who should be familiar to you is called prosopagnosia, and there is a growing body of evidence that the incidence of prosopagnosia in the general population is much higher than previously thought, and that its severity follows a normal distribution. This affliction is not binary; you don’t either have it or not, but rather can have prosopagnosia to varying degrees, as it exists on a continuum of severity.
We all spend our days recognizing the objects, people, and even the tasks that surround us. For instance, you can recognize a specific person or just some attributes of the person, such as young/old or female/male. You can also recognize the foods you eat, the cars you drive, the pen you write with, or the tasks you undertake to carry out your job. But when those things we recognize seem “familiar”, they evoke emotions or feelings. I recognize the face of my mother, and she evokes certain feelings in me, which makes her seem familiar.
Recognition and familiarity are independent and are processed by two different portions of our brains. This becomes evident in people with Capgras syndrome. These are people who can recognize a face, such as that of a spouse or child, but because the face does not evoke the emotions of familiarity, they assume the person is an imposter. A man can see his wife and recognize her face as his wife’s, but assumes it is not really his wife because the face is not evoking the feelings he would normally associate with seeing her. The person must be an imposter!
In the work environment you might recognize a task you have to carry out, but independent of that recognition is the sense of familiarity the task might generate. You might recognize, for instance, the steps you have to undertake to perform a tune-up on a car, but it is not until you have done it over and over that the task achieves a sense of familiarity. The same could be said of a surgeon removing a gall bladder, an accountant preparing a tax return, a taxi driver heading to the airport, etc.
The question this posed for me concerns the measurement of employee perceptions of the workplace. Employees can recognize the tasks to be performed very early in their training for a job. But when does a task feel familiar? And is employee engagement dependent on a task generating the emotional component of familiarity, or merely on recognition of the task? Can someone be engaged in their work if the work does not carry a sense of familiarity? We know that normatively the most engaged employees tend to be the ones you just hired, those who would have the least familiarity with their tasks, which might seem odd given the above. And employee engagement declines, sometimes precipitously, at about the 12-18 month mark of employment. It often continues its decline, hitting bottom at the 3-5 year mark, with a corresponding spike in turnover. The 3-5 year mark is also when many organizations report that employees are really beginning to contribute significantly on the job.
But here is some speculation for you. An employee gets hired and is very engaged from day one, with that engagement driven by the excitement of a new activity, for some a new beginning. They begin to learn the tasks associated with the job, and over a relatively short period the tasks and the work environment begin to generate feelings of familiarity. Short-term engagement, driven by excitement, gives way to long-term engagement, driven by familiarity. At this point the work environment can live up to expectations, generating positive emotions surrounding that sense of familiarity, or it can fall short, generating negative feelings. And by and large it is very difficult for each and every work environment to live up to everyone’s individual expectations, and so the norm is that employee engagement declines as people become more familiar with their jobs and often have to deal with the day-to-day frustrations that newer employees tend to be shielded from.
We don’t have to be satisfied with the norm though. And there are certainly benefits to be gained by those organizations that understand how to buck the trend, maintaining or creating a sense of positive familiarity with the work environment as the employee’s experience with, and contribution to, the organization grows.
© 2013 by Jeffrey M. Saltzman. All rights reserved.
Visit OV: www.orgvitality.com