Jeffrey Saltzman's Blog

Enhancing Organizational Performance

Privacy, Persuasion and Fundamental Rights


Perhaps not surprisingly, it started with a lie. In 1957, on a hot summer day in a Fort Lee, NJ movie theater, James Vicary claimed to have run an experiment in which he inserted frames into a movie that flashed the words “Eat Popcorn” and “Drink Coca Cola” on the screen. He claimed this subliminal (literally, below threshold) advertising resulted in huge increases in the sales of popcorn (up 58%) and Coca Cola (up 18%). Vicary stated that subliminal communication was so powerful, and had such potentially dangerous uses, that he suggested warning the public when subliminal techniques were in use, and even seemed to think that some sort of governmental regulation might be needed.

Congress held hearings, and legislation was proposed but not passed. The public felt they were being manipulated. Norman Cousins, editor of The Saturday Review, warned his readers about subliminal communications. Among the uses his article warned of was the potential to manipulate voting patterns for political candidates and influence the outcome of an election.

On the fifth anniversary of his “experiment”, Vicary admitted that it was a hoax, a ruse, and that his goal was to revive his failing consulting practice (Advertising Age, Sept 17, 1962). Apparently, his thinking was that it did not matter whether his findings were real, just that his potential clients believed they were. By this time, he was the director of survey research for Dun & Bradstreet as he attempted to resurrect his career as a psychologist. Some question whether the insertion of the words ever took place at all.

So, is subliminal perception and its ability to influence people pure bunk? Thijs Verwijmeren et al. (Journal of Consumer Psychology, April 2011) concluded that subliminal advertising can have some limited effect, but it is not all that powerful. Subliminal ads, for instance, can’t make you do something you don’t want to do. Others question the very concept: if something can’t be perceived, because it is subliminal, how can it possibly affect behavior? The notion is that you can affect the subconscious mind without the conscious mind being aware of the effect. Regardless of the efficacy of this particular technique, the temptation to influence people, to change their behaviors, continues through various other avenues and methods.

Classical economic theory states that humans are rational thinkers who make decisions in their best economic interest after considering all the facts. Much economic policy over the years has been based on this concept, and of course it is wrong, for humans are anything but cool, rational thinkers as they work through their decisions. We all take shortcuts in our decision-making, using biases, heuristics, or rules of thumb to get through the day (these are decisions or judgments we make without necessarily being conscious that we are making them). Without them, the number of decisions you would be required to make would simply paralyze you. Daniel Kahneman and Amos Tversky, two psychologists, drove those points home with a substantial body of research that gave rise to the field of Behavioral Economics. One cornerstone of their work was their definition and description of the System 1 and System 2 types of thinking that humans use to make decisions.

System 1 thinking is automatic decision-making. It is quick and easy, requiring little to no effort when making a decision or passing judgment. System 1 thinking speeds decision-making and allows you to make thousands of unthinking decisions each and every day. The path you will drive to work, what you are likely to order in your morning coffee, whether you put butter or cream cheese on your English muffin: these are all quick, effortless System 1 decisions. System 2 is when you use deliberative thought to make a decision. For instance, if you are ordering a new PC or Mac and are like most people, you methodically work through your options and the associated costs before deciding what equipment to buy and how to configure it. You may check with friends and read reviews; part of your decision may be based on brand loyalty, on a “coolness” factor that you perhaps can’t quite articulate, or on one of our many human biases, such as WYSIATI, which stands for “What you see is all there is”, meaning you choose from the options before you and tend not to look for less obvious or unseen options. And you can be assured that the manufacturers of these devices are doing their best to influence your decision.

The speed limit sign says “30 MPH”, the sign on the escalator says “Stand Right, Walk Left”, signs in the parking garage say “small cars only”, and the express checkout line at the grocery store says “8 items or less”. We are informed that in order to enroll our kindergarten-aged child in school we have to show proof of vaccination. All around us, every day, there are attempts to influence our behavior, to modify what we are doing, or to inform us of what is allowable and what is not. While not perfectly so, these rules tend to be imposed when your behavior has the potential to negatively impact others around you, either directly or indirectly. The reason you are not trained to drive with a “use your best judgment on what your speed should be” rule is that (1) you may not be familiar with the road you are on, (2) left to their own devices, some people’s judgment (especially that of younger or inexperienced drivers) might not be that good, and (3) there is tremendous potential for harm to others if you make a poor decision. So a sign is posted that informs you of an appropriate speed for that road (and there are consequences for violating that speed limit). Your child’s ability to spread disease and contribute to epidemics (not in a good way) is the reason for the vaccination requirement (and it does not matter if what you personally believe about vaccinations and autism goes against all the known science: no vaccination, no school, another penalty).

Writing in Scientific American (March 30, 2018), Marcello Ienca and Effy Vayena describe Cognitive Liberty as “the freedom to control one’s own cognitive dimension (including preferences, choices and beliefs) and to be protected from manipulative strategies that are designed to bypass one’s cognitive defenses.” This was written in response to the Cambridge Analytica scandal, in which, in an attempt to influence the last presidential election, at least 87 million Facebook users, unbeknownst to them, were targeted with customized digital ads and other manipulative information in a manner that “circumvents user’s awareness of such influence”. And that is the key difference between a speed limit sign or a vaccination requirement and subliminally trying to get you to drink more Coca Cola. One approach is direct and in-your-face; it is transparent. The other tries to influence you without your realizing you are being influenced. They continue, “most of the current online ecosystem, is an arm’s race to the unconscious mind: notifications, microtargeted ads, autoplay plugins, are all strategies to induce addictive behavior, hence to manipulate”.

The Cambridge Analytica CEO, in an undercover interview with Channel 4 News in the UK, stated that it did not matter whether something was real and factual, just that people believed it was. People, of course, are more inclined to believe information that supports their existing viewpoint, whether it is real or not. Remember Pizzagate, the falsehood spread during the election by the alt-right and by certain websites that specialize in spreading falsehoods: that Hillary Clinton was running a child sex ring in the basement of a pizza parlor. Except she wasn’t, and the accused pizza parlor did not even have a basement. Nevertheless, a true believer went to that pizza parlor with his gun and started firing, in a System 1 thinking pattern. He never paused to consider the information he was receiving in a rational manner. The shooter viewed himself as a “good guy with a gun” going to stop bad people, except it was all a delusion meant to influence behavior, a delusion crafted by the inappropriate use of big data to find people’s fears, to manipulate them, and to capitalize on them. So here you have potentially deadly consequences of falsehoods spread on social media, for which there is no penalty.

It wasn’t the first time and it certainly won’t be the last. And to underscore that this was no accident, during the last presidential election you also had Republican operatives making statements in TV interviews such as “we are not going to let facts determine the outcome of this election”, or that we are presenting “alternative facts”. In other words, just as with James Vicary’s “Drink Coca Cola” ruse, it did not matter whether it was real, just that a population critical to your success, potential customers or, in the case of an election, potential voters, believed it was real. And because of that orientation, and the lack of regulation or penalties around it, the spread of disinformation, enabled by social media and Russia attacking our democracy, reached unprecedented levels.

Our technology, once again, is much more advanced than the social structures we surround it with, at least at first. Partly that is because the pace of innovation is quite a bit faster than the pace of social structure change. But this is nothing new. For instance, when we as a species started writing down our stories, our gurus at that time were those who could read. They assumed a special, elevated status within society, and because of their skill set they were the bearers of the “word”, able to manipulate and persuade the masses, for who could argue with someone about a text that you could not read? The priests of today are those who can create and harness the technology that can influence the masses, and those who can build smart systems to enable that to happen more effectively. In a special section on AI appearing in The Economist (March 31st–April 6th, 2018), it was estimated that each capable AI tech person in a company today increases the value of that company by 5 to 10 million dollars, so it is no wonder that sophisticated AI talent today draws six- and seven-figure salaries.

Sander van der Linden, in Psychological Weapons of Mass Persuasion (Scientific American, April 10, 2018), quotes a study covering 3.5 million people which “found that psychologically tailored advertising, i.e. matching the content of a persuasive message to an individual’s broad psychographic profile, resulted in 40% more clicks and 50% more online purchases than mismatched or unpersonalized messages”. We have come a long way, with the help of our technology, from flashing “Eat Popcorn” and “Drink Coca Cola” on a screen. More importantly, he states that these messages, when carried over into a political environment, can either suppress voting for the candidate they are targeted against or swing some potential voters to switch candidates. When elections are often decided by a percentage point or two, that small effect can have a large impact. In addition to the USA presidential election, it now appears that Britain’s EU exit vote was influenced using the same techniques.
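The arithmetic behind that last point can be made concrete. Here is a minimal back-of-the-envelope sketch; all the numbers in it are illustrative assumptions of my own, not figures from the studies cited above:

```python
# Hypothetical sketch: how a small persuasion effect among a microtargeted
# slice of the electorate can shift the margin of a close two-candidate race.
# All parameter values below are assumed for illustration.

def swing(total_voters, targeted_share, switch_rate):
    """Fraction of the overall vote margin moved when `switch_rate` of the
    `targeted_share` of voters switch candidates. Each switcher moves the
    margin by two votes: one lost by candidate A, one gained by candidate B."""
    switched = total_voters * targeted_share * switch_rate
    return 2 * switched / total_voters

# Suppose 10% of a million-voter electorate is microtargeted,
# and the messaging flips just 1 in 50 of those targeted.
margin_shift = swing(1_000_000, 0.10, 0.02)
print(f"Margin shifts by {margin_shift:.1%}")  # prints: Margin shifts by 0.4%
```

A shift of a few tenths of a percent sounds negligible until it lands in a race decided by a point or two, which is exactly the scenario described above.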

So yes, we are at risk. In an unregulated, wild-west technology world, elections can be affected; Cognitive Liberty, our very democracy, can be undermined; and autocrats, dictators, or would-be tin-pot dictators with effective social media disinformation and targeted voting campaigns can be voted in. What can we do?

Tim Cook, the CEO of Apple, stated in a recent town hall meeting in Chicago that “privacy is a human right”. That aligns him with his predecessor, Steve Jobs, who in 2010 stated, “Privacy means people know what they’re signing up for, in plain English and repeatedly. I believe people are smart and some people want to share more data than other people do. Ask them. Ask them every time. Make them tell you to stop asking them if they get tired of your asking them. Let them know precisely what you’re going to do with their data.”

The European Union has just enacted the GDPR, or General Data Protection Regulation, which aligns with the sentiments of both Steve Jobs and Tim Cook. In a nutshell, among the GDPR requirements are that individuals give permission for their data to be collected; that they be informed of what data is being collected, how it will be used, and how and for how long it will be stored; that at any time they can see what data you hold on them; and that they can demand to be erased from your systems. There are many other requirements as well, and violations result in large fines. In an editorial, The Economist recently called for the USA to adopt the EU data protection regulations.
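As a thought experiment, those obligations map naturally onto a data model. The following is a minimal, hypothetical sketch (the class and method names are my own invention, and this is nowhere near a real compliance implementation) of consent-gated collection, the right of access, and the right to erasure:

```python
# Hypothetical sketch of GDPR-style data handling, not legal or compliance
# advice: no consent means no collection, subjects can see everything held
# on them, and an erasure request removes their record entirely.

from dataclasses import dataclass, field

@dataclass
class SubjectRecord:
    consent_given: bool
    purpose: str          # what the data will be used for
    retention_days: int   # how long it will be stored
    data: dict = field(default_factory=dict)

class DataStore:
    def __init__(self):
        self._records = {}

    def collect(self, person_id, record):
        # Collection is refused outright without explicit consent.
        if not record.consent_given:
            raise PermissionError("no consent: data may not be collected")
        self._records[person_id] = record

    def access_request(self, person_id):
        # Right of access: the person can see exactly what is held on them.
        return self._records.get(person_id)

    def erasure_request(self, person_id):
        # Right to erasure ("right to be forgotten").
        self._records.pop(person_id, None)
```

The point of the sketch is the shape of the obligations, not the code: consent, purpose, and retention travel with the data itself, and access and erasure are first-class operations rather than afterthoughts.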

But in addition to protecting the data, we can work towards making people savvier about what they see on social media and how to distinguish reality from delusion or disinformation. For instance, some possibilities include (and I am sure if a group put their minds to it, many more would emerge):

  • Transparency can be increased, for instance, just like restaurants in NYC get cleanliness ratings, social media sites can get ratings regarding the veracity of the information they carry. Is the information verified in any fashion or is it just put out there?
  • Sites that label themselves as news, or TV stations that label themselves as news, should adhere to certain journalistic standards in order to keep that designation. Each program should be clearly labeled as meeting “news standards” or clearly labeled “opinion”, not just at the start of the show but the whole time the show is on.
  • News organizations used to adhere to a separation of church and state, meaning the news side of the business should not be influenced by the business side. To achieve a certain news rating, this standard would have to be met.
  • One method towards getting people to become better consumers of information is to educate them on how humans consume information and make decisions. It is a first step towards taking them out of System 1 thinking when appropriate and having them activate System 2 thinking.
  • Penalties can be implemented for knowingly spreading false information.

Social media now has the power to cause great harm to others and to our society. As the saying goes, with great power comes great responsibility, but so far social media has not proven itself capable of operating in that responsible fashion. We are in a variation of a “Tragedy of the Commons” moment when it comes to social media. The tragedy of the commons describes a situation where individuals acting independently put their own self-interest above a common interest in a shared resource. Because each is concerned only with their own interest, they each use the resource until it is despoiled and of no use to anyone. Collective action is needed to save the resource so it can be used by all to mutual benefit. While originally the concept of the commons was based on shared, unregulated grazing ground, over the years it has morphed to mean “any shared and unregulated resource such as atmosphere, oceans, rivers, fish stocks, or even an office refrigerator”. While social media may seem an unlimited resource, it too will be despoiled if it is used by individuals only for their own self-interest, without regard to the harm it causes others and society.

 

Written by Jeffrey M. Saltzman

April 15, 2018 at 4:28 pm
