Jeffrey Saltzman's Blog

Enhancing Organizational Performance

O Robot



  0. A robot may not harm humanity, or, by inaction, allow humanity to come to harm.
  1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
  2. A robot must obey any orders given to it by human beings, except where such orders would conflict with the First Law.
  3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.

Isaac Asimov’s Three Laws of Robotics, with the additional Zeroth Law

I just finished reading a science fiction book by one of my favorite writers, Greg Bear. Hull Zero Three is about a massive ship sent to colonize another planet in our galaxy, that colonization being mankind’s last, best hope for survival after having systematically exploited and destroyed the Earth. That makes it a fairly typical story as these things go, but when one of the colonists wakes early to find the ship badly damaged, he determines, over the course of the novel, that a civil war had occurred on the ship while he “slept.” Most of those who were to colonize the new planet were kept unaware of the methods and goals that the designers of the ship were willing to employ to assure mankind’s survival, and when those goals and methods were uncovered, dissension in the ranks ensued. The civil war, internal to the ship, was fought over differing visions of what the ultimate goals of the ship should be and how they should be accomplished.

So here you have a senior group, the designers/managers of the mission, who felt it was necessary to keep overall organizational goals and methods secret from those who were to carry out the mission, the “real” goals being on a “need to know” basis in order to assure the mission’s success. Sounds like fiction, doesn’t it? Who could believe that senior managers of an organization would not be clear with others within the organization regarding ultimate organizational goals and methodologies? Hidden agendas, ulterior motives, political manipulations, or simply poor communications are the makings of a good story, unless of course that story unfolds in the real world, in your company or organization.

The pursuit of efficiency, and the corresponding breakdown of work into subcomponents with each subcomponent performed by an expert in that task, was one reason people lost sight of the bigger picture: what the organization stood for, its goals, and how it was going to go about accomplishing those goals. Craftsmanship can be lost when an individual’s tasks are performed in isolation from the other tasks required for overall organizational functioning. It then becomes very easy to perform blindly, overlooking or literally not seeing ineffective, or perhaps even distasteful, illegal, or immoral practices occurring elsewhere within the organization. A sales group that has no idea what operations can actually deliver, or a marketing group similarly divorced from the organization’s actual capabilities, is not a rare occurrence. Overall operational quality can go by the wayside if my view of my job is simply to put bolt A into slot B, and my perception is that quality will be delivered by the quality control department, or that others will worry about things beyond my own task. Conversely, those in operations/engineering/service delivery may be oblivious to the need to manufacture or provide what will actually sell, or to stay closely in tune with what customers want. What you begin to develop is O Robot: the organization acting in robot-like fashion to develop, market, sell, and deliver its products and/or services, with each simplistic robot/employee doing the individual element for which it was organizationally programmed, or for which, out of interest, skill, or willingness to expend effort, it has programmed itself.

But I feel like I might be doing robots a disservice. Isaac Asimov’s fictional robots were much more advanced than that, behaving according to the Laws of Robotics stated above. And large advances are being made in the real world to make robots behave in a more human-like fashion. According to a paper in the journal Interaction Studies (2007), among the traits a robot will need to exhibit to be viewed as more human-like are:

  • Acting with autonomy
  • Containing intrinsic value – being valued simply for being, not only for what it can do
  • Being held morally accountable for its actions
  • Engaging in reciprocal relationships – adjusting its expectations and desires as it interacts with others
  • Demonstrating creativity
  • Imitating others’ behaviors normatively – out of a desire to fit in socially
  • Distinguishing or identifying actions that break social conventions

It might be considered a step up if humans operated consistently by, or valued others according to, a similar list of the traits we expect our future robots to exhibit in order to seem more human.

In the 1970s, there was for a time the notion of job enrichment. It was all the rage. Organizations, it was felt, had gone too far in breaking down jobs into elemental components, and in order to achieve happier, more satisfied, or more engaged workers, what was necessary was to enrich their jobs to make them less robot-like. Workers were therefore given more of a “whole” piece of work to do. You don’t hear much about job enrichment today, do you? It was not carried out very well in the majority of cases, and in some cases workers whose jobs were enriched were not happier but went on strike for higher wages and/or benefits, since more was being expected of them. This occurred not because enriching jobs was wrong, but because of fundamentally poor management practices or poor implementation of the job enrichment scheme. In some cases workers were given new skills and responsibilities but were in many ways still treated and compensated as unskilled labor, destroying any sense of fairness or equity they may have had.

An organization’s ethics is a broad and somewhat nebulous concept, but can generally be stated as the values to which the organization subscribes. How it behaves from a legal perspective is only one piece of the ethics equation. Over the years I have found that each organizational member’s view of ethics can be quite varied and is very dependent on where within the organization that member sits. A blue-collar worker’s definition of ethical behavior is indeed different from management’s, and ethical definitions will differ between professionals and administrative assistants or supervisors, etc. And there are often differences in understanding, or in the salience of communication, as different groups think about what is ethical to them. Forgive me, Isaac Asimov, for taking liberties with your Laws of Robotics, but given the way some organizations attempt to operate in robot-like fashion, I could not help but adapt the Robotic Laws into Organizational Laws that might just form a basic foundation for big-picture ethical behavior in organizations.

Organizational Laws

  1. An organization may not harm the Earth, or, by inaction, allow the Earth to come to harm.
  2. An organization may not harm humanity, or, by inaction, allow humanity to come to harm.
  3. An organization may not harm a constituent (e.g. employee, customer, citizen, supplier, member), or through inaction allow a constituent to come to harm.
  4. An organization must follow societal laws and regulations except where such laws and regulations would conflict with the First, Second, or Third Law.
  5. An organization must protect its own existence as long as such protection does not conflict with the other Laws.
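Asimov’s central conceit, which carries over to the Organizational Laws above, is a strict precedence ordering: a lower law yields whenever it conflicts with a higher one. Purely as an illustration (the `Action` structure, flag names, and `evaluate` function are my own invented sketch, not anything from the Laws themselves), that precedence can be modeled as checking a proposed action against each law in priority order and rejecting it at the first, highest-priority violation:

```python
from dataclasses import dataclass

@dataclass
class Action:
    """A proposed organizational action, with a flag for each kind of harm.

    All of these names are illustrative assumptions, not part of the Laws.
    """
    harms_earth: bool = False
    harms_humanity: bool = False
    harms_constituent: bool = False
    breaks_societal_law: bool = False
    threatens_own_existence: bool = False

# The Organizational Laws in strict priority order: an action is rejected
# by the first (highest-priority) law it violates, which is how the
# "except where such ... would conflict with" clauses resolve.
LAWS = [
    ("Law 1: may not harm the Earth",            lambda a: a.harms_earth),
    ("Law 2: may not harm humanity",             lambda a: a.harms_humanity),
    ("Law 3: may not harm a constituent",        lambda a: a.harms_constituent),
    ("Law 4: must follow societal laws",         lambda a: a.breaks_societal_law),
    ("Law 5: must protect its own existence",    lambda a: a.threatens_own_existence),
]

def evaluate(action: Action) -> str:
    """Return the first law the action violates, or 'Permitted'."""
    for name, violated in LAWS:
        if violated(action):
            return f"Rejected under {name}"
    return "Permitted"
```

So an action that both harms the Earth and breaks a regulation is rejected under Law 1, not Law 4; the ordering of the list, not the number of violations, decides which law speaks. This only captures the prohibitions, of course, not the positive “by inaction” duties, which are much harder to encode.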

One has to wonder: if these relatively simple Organizational Laws were widely followed, would we be better off today in terms of our environment and society? Or are these notions too simplistic? How do rewards and punishments fit into the organizational role? What is the role of the organization if it decides to punish one member for harming another? And what is the role of rewarding members differentially based on merit? And so on. Possibly too simplistic, but I have to say I am intrigued by the overall framework. Thoughts?

© 2011 by Jeffrey M. Saltzman. All rights reserved.

