The Upside of Irrationality: The Unexpected Benefits of Defying Logic at Work and at Home
Dan Ariely
Behavioral economist and New York Times bestselling author of Predictably Irrational Dan Ariely returns to offer a much-needed take on the irrational decisions that influence our dating lives, our workplace experiences, and our general behaviour, up close and personal.

In The Upside of Irrationality, Ariely explores the many ways in which our behaviour often leads us astray in terms of our romantic relationships, our experiences in the workplace, and our temptations to cheat. Blending everyday experience with groundbreaking research, he explains how expectations, emotions, social norms and other invisible, seemingly illogical forces skew our reasoning abilities.

Among the topics Dan explores are:

• What we think will make us happy and what really makes us happy;
• How we learn to love the ones we are with;
• Why online dating doesn’t work, and how we can improve on it;
• Why learning more about people makes us like them less;
• Why large bonuses can make CEOs less productive;
• How to really motivate people at work;
• Why bad directions can help us;
• How we fall in love with our ideas;
• How we are motivated by revenge; and
• What motivates us to cheat.

Drawing on the same experimental methods that made Predictably Irrational such a hit, Dan emphasizes the important role that irrationality plays in our day-to-day decision making—not just in the financial marketplace, but in the most hidden aspects of our lives.
The Upside of Irrationality
The Unexpected Benefits of Defying Logic at Work and at Home
International bestselling author
Dan Ariely
Copyright
HarperCollinsPublishers
1 London Bridge Street
London SE1 9GF
www.harpercollins.co.uk
First published by HarperCollinsPublishers 2010
© Dan Ariely 2010
Dan Ariely asserts the moral right to be identified as the author of this work
A catalogue record of this book is available from the British Library
All rights reserved under International and Pan-American Copyright Conventions. By payment of the required fees, you have been granted the non-exclusive, non-transferable right to access and read the text of this e-book on screen. No part of this text may be reproduced, transmitted, downloaded, decompiled, reverse engineered, or stored in or introduced into any information storage and retrieval system, in any form or by any means, whether electronic or mechanical, now known or hereinafter invented, without the express written permission of HarperCollins e-books.
Ebook Edition © MAY 2010 ISBN: 9780007354795
Version 2016-11-25
Find out more about HarperCollins and the environment at www.harpercollins.co.uk/green
To my teachers, collaborators, and students, for making research fun and exciting.
And to all the participants who took part in our experiments over the years—you are the engine of this research, and I am deeply grateful for all your help.
Table of Contents
Cover Page
Title Page
Copyright
Dedication
INTRODUCTION Lessons from Procrastination and Medical Side Effects
Part I THE UNEXPECTED WAYS WE DEFY LOGIC AT WORK
CHAPTER 1 Paying More for Less
CHAPTER 2 The Meaning of Labor
CHAPTER 3 The IKEA Effect
CHAPTER 4 The Not-Invented-Here Bias
CHAPTER 5 The Case for Revenge
Part II THE UNEXPECTED WAYS WE DEFY LOGIC AT HOME
CHAPTER 6 On Adaptation
CHAPTER 7 Hot or Not?
CHAPTER 8 When a Market Fails
CHAPTER 9 On Empathy and Emotion
CHAPTER 10 The Long-Term Effects of Short-Term Emotions
CHAPTER 11 Lessons from Our Irrationalities
Thanks
List of Collaborators
Notes
Bibliography and Additional Readings
Index
Also by Dan Ariely
About the Publisher
INTRODUCTION Lessons from Procrastination and Medical Side Effects
I don’t know about you, but I have never met anyone who never procrastinates. Delaying annoying tasks is a nearly universal problem—one that is incredibly hard to curb, no matter how hard we try to exert our willpower and self-control or how many times we resolve to reform.
Allow me to share a personal story about one way I learned to deal with my own tendency to procrastinate. Many years ago I experienced a devastating accident. A large magnesium flare exploded next to me and left 70 percent of my body covered with third-degree burns (an experience I wrote about in Predictably Irrational). As if to add insult to injury, I acquired hepatitis from an infected blood transfusion after three weeks in the hospital. Obviously, there is never a good time to get a virulent liver disease, but the timing of its onset was particularly unfortunate because I was already in such bad shape. The disease increased the risk of complications, delayed my treatment, and caused my body to reject many skin transplants. To make matters worse, the doctors didn’t know what type of liver disease I had. They knew I wasn’t suffering from hepatitis A or B, but they couldn’t identify the strain. After a while the illness subsided, but it still slowed my recovery by flaring up from time to time and wreaking havoc on my system.
Eight years later, when I was in graduate school, a flare-up hit me hard. I checked into the student health center, and after many blood tests the doctor gave me a diagnosis: it was hepatitis C, which had recently been isolated and identified. As lousy as I felt, I greeted this as good news. First, I finally knew what I had; second, a promising new experimental drug called interferon looked as if it might be an effective treatment for hepatitis C. The doctor asked whether I’d consider being part of an experimental study to test the efficacy of interferon. Given the threats of liver fibrosis and cirrhosis and the possibility of early death, it seemed that being part of the study was clearly the preferred path.
The initial protocol called for self-injections of interferon three times a week. The doctors told me that after each injection I would experience flulike symptoms including fever, nausea, headaches, and vomiting—warnings that I soon discovered to be perfectly accurate. But I was determined to kick the disease, so every Monday, Wednesday, and Friday evening over the next year and a half, I carried out the following ritual: Once I got home, I would take a needle from the medicine cabinet, open the refrigerator, load the syringe with the right dosage of interferon, plunge the needle deep into my thigh, and inject the medication. Then I would lie down in a big hammock—the only interesting piece of furniture in my loftlike student apartment—from which I had a perfect view of the television. I kept a bucket within reach to catch the vomit that would inevitably come and a blanket to fend off the shivering. About an hour later the nausea, shivering, and headache would set in, and at some point I would fall asleep. By noon the next day I would have more or less recovered and would return to my classwork and research.
Along with the other patients in the study, I wrestled not only with feeling sick much of the time, but also with the basic problem of procrastination and self-control. Every injection day was miserable. I had to face the prospect of giving myself a shot followed by a sixteen-hour bout of sickness in the hope that the treatment would cure me in the long run. I had to endure what psychologists call a “negative immediate effect” for the sake of a “positive long-term effect.” This is the type of problem we all experience when we fail to do short-term tasks that will be good for us down the road. Despite the prodding of conscience, we often would rather avoid doing something unpleasant now (exercising, working on an annoying project, cleaning out the garage) for the sake of a better future (being healthier, getting a job promotion, earning the gratitude of one’s spouse).
At the end of the eighteen-month trial, the doctors told me that the treatment was successful and that I was the only patient in the protocol who had always taken the interferon as prescribed. Everyone else in the study had skipped the medication numerous times—hardly surprising, given the unpleasantness involved. (Lack of medical compliance is, in fact, a very common problem.)
So how did I get through those months of torture? Did I simply have nerves of steel? Like every person who walks the earth, I have plenty of self-control problems and, every injection day, I deeply wanted to avoid the procedure. But I did have a trick for making the treatment more bearable. For me, the key was movies. I love movies and, if I had the time, I would watch one every day. When the doctors told me what to expect, I decided to motivate myself with movies. Besides, I couldn’t do much else anyway, thanks to the side effects.
Every injection day, I would stop at the video store on the way to school and pick up a few films that I wanted to see. Throughout the day, I would think about how much I would enjoy watching them later. Once I got home, I would give myself the injection. Then I would immediately jump into my hammock, make myself comfortable, and start my mini film fest. That way, I learned to associate the act of the injection with the rewarding experience of watching a wonderful movie. Eventually, the negative side effects kicked in, and I didn’t have such a positive feeling. Still, planning my evenings that way helped me associate the injection more closely with the fun of watching a movie than with the discomfort of the side effects, and thus I was able to continue the treatment. (I was also fortunate, in this instance, that I have a relatively poor memory, which meant that I could watch some of the same movies over and over again.)
THE MORAL OF this story? All of us have important tasks that we would rather avoid, particularly when the weather outside is inviting. We all hate grinding through receipts while doing our taxes, cleaning up the backyard, sticking to a diet, saving for retirement, or, like me, undergoing an unpleasant treatment or therapy. Of course, in a perfectly rational world, procrastination would never be a problem. We would simply compute the values of our long-term objectives, compare them to our short-term enjoyments, and understand that we have more to gain in the long term by suffering a bit in the short term. If we were able to do this, we could keep a firm focus on what really matters to us. We would do our work while keeping in mind the satisfaction we’d feel when we finished our project. We would tighten our belts a notch and enjoy our improved health down the line. We would take our medications on time and hope to hear the doctor say one day, “There isn’t a trace of the disease in your system.”
Sadly, most of us often prefer immediately gratifying short-term experiences over our long-term objectives. We routinely behave as if sometime in the future, we will have more time, more money, and feel less tired or stressed. “Later” seems like a rosy time to do all the unpleasant things in life, even if putting them off means eventually having to grapple with a much bigger jungle in our yard, a tax penalty, the inability to retire comfortably, or an unsuccessful medical treatment. In the end, we don’t need to look far beyond our own noses to realize how frequently we fail to make short-term sacrifices for the sake of our long-term goals.
WHAT DOES ALL of this have to do with the subject of this book? In a general sense, almost everything.
From a rational perspective, we should make only decisions that are in our best interest (“should” is the operative word here). We should be able to discern among all the options facing us and accurately compute their value—not just in the short term but also in the long term—and choose the option that maximizes our best interests. If we’re faced with a dilemma of any sort, we should be able to see the situation clearly and without prejudice, and we should assess pros and cons as objectively as if we were comparing different types of laptops. If we’re suffering from a disease and there is a promising treatment, we should comply fully with the doctor’s orders. If we are overweight, we should buckle down, walk several miles a day, and live on broiled fish, vegetables, and water. If we smoke, we should stop—no ifs, ands, or buts.
Sure, it would be nice if we were more rational and clearheaded about our “should”s. Unfortunately, we’re not. How else do you explain why millions of gym memberships go unused or why people risk their own and others’ lives to write a text message while they’re driving or why…(put your favorite example here)?
THIS IS WHERE behavioral economics enters the picture. In this field, we don’t assume that people are perfectly sensible, calculating machines. Instead, we observe how people actually behave, and quite often our observations lead us to the conclusion that human beings are irrational.
To be sure, there is a great deal to be learned from rational economics, but some of its assumptions—that people always make the best decisions, that mistakes are less likely when the decisions involve a lot of money, and that the market is self-correcting—can clearly lead to disastrous consequences.
To get a clearer idea of how dangerous it can be to assume perfect rationality, think about driving. Transportation, like the financial markets, is a man-made system, and we don’t need to look very far to see other people making terrible and costly mistakes (due to another aspect of our biased worldview, it takes a bit more effort to see our own errors). Car manufacturers and road designers generally understand that people don’t always exercise good judgment while driving, so they build vehicles and roads with an eye to preserving drivers’ and passengers’ safety. Automobile designers and engineers try to compensate for our limited human ability by installing seat belts, antilock brakes, rearview mirrors, air bags, halogen lights, distance sensors, and more. Similarly, road designers put safety margins along the edge of the highway, some festooned with cuts that make a brrrrrr sound when you drive on them. But despite all these safety precautions, human beings persist in making all kinds of errors while driving (including drinking and texting), suffering accidents, injuries, and even death as a result.
Now think about the implosion of Wall Street in 2008 and its attendant impact on the economy. Given our human foibles, why on earth would we think we don’t need to take any external measures to try to prevent or deal with systematic errors of judgment in the man-made financial markets? Why not create safety measures to help keep someone who is managing billions of dollars, and leveraging this investment, from making incredibly expensive mistakes?
EXACERBATING THE BASIC problem of human error are technological developments that are, in principle, very useful but that can also make it more difficult for us to behave in a way that truly maximizes our interests. Consider the cell phone, for example. It’s a handy gadget that lets you not only call but also text and e-mail your friends. If you text while walking, you might look at your phone instead of the sidewalk and risk running into a pole or another person. This would be embarrassing but hardly fatal. Allowing your attention to drift while walking is not so bad; but add a car to the equation, and you have a recipe for disaster.
Likewise, think about how technological developments in agriculture have contributed to the obesity epidemic. Thousands of years ago, as we burned calories hunting and foraging on the plains and in the jungles, we needed to store every possible ounce of energy. Every time we found food containing fat or sugar, we stopped and consumed as much of it as we could. Moreover, nature gave us a handy internal mechanism: a lag of about twenty minutes between the time when we’d actually consumed enough calories and the time when we felt we had enough to eat. That allowed us to build up a little fat, which came in handy if we later failed to bring down a deer.
Now jump forward a few thousand years. In industrialized countries, we spend most of our waking time sitting in chairs and staring at screens rather than chasing after animals. Instead of planting, tending, and harvesting corn and soy ourselves, we have commercial agriculture do it for us. Food producers turn the corn into sugary, fattening stuff, which we then buy from fast-food restaurants and supermarkets. In this Dunkin’ Donuts world, our love of sugar and fat allows us to quickly consume thousands of calories. And after we have scarfed down a bacon, egg, and cheese breakfast bagel, the twenty-minute lag time between having eaten enough and realizing that we’re stuffed allows us to add even more calories in the form of a sweetened coffee drink and a half-dozen powdered-sugar donut holes.
Essentially, the mechanisms we developed during our early evolutionary years might have made perfect sense in our distant past. But given the mismatch between the speed of technological development and human evolution, the same instincts and abilities that once helped us now often stand in our way. Bad decision-making behaviors that manifested themselves as mere nuisances in earlier centuries can now severely affect our lives in crucial ways.
When the designers of modern technologies don’t understand our fallibility, they design new and improved systems for stock markets, insurance, education, agriculture, or health care that don’t take our limitations into account (I like the term “human-incompatible technologies,” and they are everywhere). As a consequence, we inevitably end up making mistakes and sometimes fail magnificently.
THIS PERSPECTIVE OF human nature may seem a bit depressing on the surface, but it doesn’t have to be. Behavioral economists want to understand human frailty and to find more compassionate, realistic, and effective ways for people to avoid temptation, exert more self-control, and ultimately reach their long-term goals. As a society, it’s extremely beneficial to understand how and when we fail and to design, invent, and create new ways to overcome our mistakes. As we gain some understanding about what really drives our behaviors and what steers us astray—from business decisions about bonuses and motivation to the most personal aspects of life such as dating and happiness—we can gain control over our money, relationships, resources, safety, and health, both as individuals and as a society.
This is the real goal of behavioral economics: to try to understand the way we really operate so that we can more readily observe our biases, be more aware of their influences on us, and hopefully make better decisions. Although I can’t imagine that we will ever become perfect decision makers, I do believe that an improved understanding of the multiple irrational forces that influence us could be a useful first step toward making better decisions. And we don’t have to stop there. Inventors, companies, and policy makers can take the additional steps to redesign our working and living environments in ways that are naturally more compatible with what we can and cannot do.
In the end, this is what behavioral economics is about—figuring out the hidden forces that shape our decisions, across many different domains, and finding solutions to common problems that affect our personal, business, and public lives.
AS YOU WILL see in the pages ahead, each chapter in this book is based on experiments I carried out over the years with some terrific colleagues (at the end of the book, I have included short biographies of my wonderful collaborators). In each of these chapters, I’ve tried to shed some light on a few of the biases that plague our decisions across many different domains, from the workplace to personal happiness.
Why, you may ask, do my colleagues and I put so much time, money, and energy into experiments? For social scientists, experiments are like microscopes or strobe lights, magnifying and illuminating the complex, multiple forces that simultaneously exert their influences on us. They help us slow human behavior to a frame-by-frame narration of events, isolate individual forces, and examine them carefully and in more detail. They let us test directly and unambiguously what makes human beings tick and provide a deeper understanding of the features and nuances of our own biases.
There is one other point I want to emphasize: if the lessons learned in any experiment were limited to the constrained environment of that particular study, their value would be limited. Instead, I invite you to think about experiments as an illustration of general principles, providing insight into how we think and how we make decisions in life’s various situations. My hope is that once you understand the way our human nature truly operates, you can decide how to apply that knowledge to your professional and personal life.
In each chapter I have also tried to extrapolate some possible implications for life, business, and public policy—focusing on what we can do to overcome our irrational blind spots. Of course, the implications I have sketched are only partial. To get real value from this book and from social science in general, it is important that you, the reader, spend some time thinking about how the principles of human behavior apply to your life and consider what you might do differently, given your new understanding of human nature. That is where the real adventure lies.
READERS FAMILIAR WITH Predictably Irrational might want to know how this book differs from its predecessor. In Predictably Irrational, we examined a number of biases that lead us—particularly as consumers—into making unwise decisions. The book you hold in your hands is different in three ways.
First—and most obviously—this book differs in its title. Like its predecessor, it’s based on experiments that examine how we make decisions, but its take on irrationality is somewhat different. In most cases, the word “irrationality” has a negative connotation, implying anything from mistakenness to madness. If we were in charge of designing human beings, we would probably work as hard as we could to leave irrationality out of the formula; in Predictably Irrational, I explored the downside of our human biases. But there is a flip side to irrationality, one that is actually quite positive. Sometimes we are fortunate in our irrational abilities because, among other things, they allow us to adapt to new environments, trust other people, enjoy expending effort, and love our kids. These kinds of forces are part and parcel of our wonderful, surprising, innate—albeit irrational—human nature (indeed, people who lack the ability to adapt, trust, or enjoy their work can be very unhappy). These irrational forces help us achieve great things and live well in a social structure. The title The Upside of Irrationality is an attempt to capture the complexity of our irrationalities—the parts that we would rather live without and the parts that we would want to keep if we were the designers of human nature. I believe that it is important to understand both our beneficial and our disadvantageous quirks, because only by doing so can we begin to eliminate the bad and build on the good.
Second, you will notice that this book is divided into two distinct parts. In the first part, we’ll look more closely at our behavior in the world of work, where we spend much of our waking lives. We’ll question our relationships—not just with other people but with our environments and ourselves. What is our relationship with our salaries, our bosses, the things we produce, our ideas, and our feelings when we’ve been wronged? What really motivates us to perform well? What gives us a sense of meaning? Why does the “Not-Invented-Here” bias have such a foothold in the workplace? Why do we react so strongly in the face of injustice and unfairness?
In the second part, we’ll move beyond the world of work to investigate how we behave in our interpersonal relations. What is our relationship to our surroundings and our bodies? How do we relate to the people we meet, those we love, and faraway strangers who need our help? And what is our relationship to our emotions? We’ll examine the ways we adapt to new conditions, environments, and lovers; how the world of online dating works (and doesn’t); what forces dictate our response to human tragedies; and how our reactions to emotions in a given moment can influence patterns of behavior long into the future.
The Upside of Irrationality is also very different from Predictably Irrational because it is highly personal. Though my colleagues and I try to do our best to be as objective as possible in running and analyzing our experiments, much of this book (particularly the second part) draws on some of my difficult experiences as a burn patient. My injury, like all severe injuries, was very traumatic, but it also very quickly shifted my outlook on many aspects of life. My journey provided me with some unique perspectives on human behavior. It presented me with questions that I might not otherwise have considered but that, because of my injury, became central to my life and the focus of my research. Far beyond that, and perhaps more important, it led me to study how my own biases work. In describing my personal experiences and biases, I hope to shed some light on the thought process that has led me to my particular interests and viewpoints and to illustrate some of the essential ingredients of our common human nature—yours and mine.
AND NOW FOR the journey…
Part I THE UNEXPECTED WAYS WE DEFY LOGIC AT WORK
CHAPTER 1 Paying More for Less
Why Big Bonuses Don’t Always Work
Imagine that you are a plump, happy laboratory rat. One day, a gloved human hand carefully picks you out of the comfy box you call home and places you into a different, less comfy box that contains a maze. Since you are naturally curious, you begin to wander around, whiskers twitching along the way. You quickly notice that some parts of the maze are black and others are white. You follow your nose into a white section. Nothing happens. Then you take a left turn into a black section. As soon as you enter, you feel a very nasty shock surge through your paws.
Every day for a week, you are placed in a different maze. The dangerous and safe places change daily, as do the colors of the walls and the strength of the shocks. Sometimes the sections that deliver a mild shock are colored red. Other times, the parts that deliver a particularly nasty shock are marked by polka dots. Sometimes the safe parts are covered with black-and-white checks. Each day, your job is to learn to navigate the maze by choosing the safest paths and avoiding the shocks (your reward for learning how to safely navigate the maze is that you aren’t shocked). How well do you do?
More than a century ago, psychologists Robert Yerkes and John Dodson performed different versions of this basic experiment in an effort to find out two things about rats: how fast they could learn and, more important, what intensity of electric shocks would motivate them to learn fastest. We could easily assume that as the intensity of the shocks increased, so would the rats’ motivation to learn. When the shocks were very mild, the rats would simply mosey along, unmotivated by the occasional painless jolt. But as the intensity of the shocks and discomfort increased, the scientists thought, the rats would feel as though they were under enemy fire and would therefore be more motivated to learn more quickly. Following this logic, we would assume that when the rats really wanted to avoid the most intense shocks, they would learn the fastest.
We are usually quick to assume that there is a link between the magnitude of the incentive and the ability to perform better. It seems reasonable that the more motivated we are to achieve something, the harder we will work to reach our goal, and that this increased effort will ultimately move us closer to our objective. This, after all, is part of the rationale behind paying stockbrokers and CEOs sky-high bonuses: offer people a very large bonus, and they will be motivated to work and perform at very high levels.
SOMETIMES OUR INTUITIONS about the links between motivation and performance (and, more generally, our behavior) are accurate; at other times, reality and intuition just don’t jibe. In Yerkes and Dodson’s case, some of the results aligned with what most of us might expect, while others did not. When the shocks were very weak, the rats were not very motivated, and, as a consequence, they learned slowly. When the shocks were of medium intensity, the rats were more motivated to quickly figure out the rules of the cage, and they learned faster. Up to this point, the results fit with our intuitions about the relationship between motivation and performance.
But here was the catch: when the shock intensity was very high, the rats performed worse! Admittedly, it is difficult to get inside a rat’s mind, but it seemed that when the intensity of the shocks was at its highest, the rats could not focus on anything other than their fear of the shock. Paralyzed by terror, they had trouble remembering which parts of the cage were safe and which were not and, so, were unable to figure out how their environment was structured.
The graph below shows three possible relationships between incentive (payment, shocks) and performance. The light gray line represents a simple relationship, where higher incentives always contribute in the same way to performance. The dashed gray line represents a diminishing-returns relationship between incentives and performance.
The solid dark line represents Yerkes and Dodson’s results. At lower levels of motivation, adding incentives helps to increase performance. But as the level of the base motivation increases, adding incentives can backfire and reduce performance, creating what psychologists often call an “inverse-U relationship.”
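To make these shapes concrete, here is a minimal stylized sketch of the three curves; the functional forms are illustrative assumptions of mine, not ones Yerkes and Dodson estimated:

\[
P_{\text{linear}}(I) = aI, \qquad
P_{\text{diminishing}}(I) = a\ln(1 + I), \qquad
P_{\text{inverse-U}}(I) = aI - bI^{2}, \qquad a, b > 0,
\]

where \(I\) is the incentive level and \(P\) is performance. The inverse-U form rises until \(I^{*} = a/(2b)\) and declines beyond that point, which is the pattern the solid dark line describes.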
Yerkes and Dodson’s experiment should make us wonder about the real relationship between payment, motivation, and performance in the labor market. After all, their experiment clearly showed that incentives can be a double-edged sword. Up to a certain point, they motivate us to learn and perform well. But beyond that point, motivational pressure can be so high that it actually distracts an individual from concentrating on and carrying out a task—an undesirable outcome for anyone.
Of course, electric shocks are not very common incentive mechanisms in the real world, but this kind of relationship between motivation and performance might also apply to other types of motivation: whether the reward is being able to avoid an electrical shock or the financial rewards of making a large amount of money. Let’s imagine how Yerkes and Dodson’s results would look if they had used money instead of shocks (assuming that the rats actually wanted money). At small bonus levels, the rats would not care and not perform very well. At medium bonus levels, the rats would care more and perform better. But, at very high bonus levels, they would be “overmotivated.” They would find it hard to concentrate, and, as a consequence, their performance would be worse than if they were working for a smaller bonus.
So, would we see this inverse-U relationship between motivation and performance if we did an experiment using people instead of rats and used money as the motivator? Or, thinking about it from a more pragmatic angle, would it be financially efficient to pay people very high bonuses in order to get them to perform well?
The Bonus Bonanza
In light of the financial crisis of 2008 and the subsequent outrage over the continuing bonuses paid to many of those deemed responsible for it, many people wonder how incentives really affect CEOs and Wall Street executives. Corporate boards generally assume that very large performance-based bonuses will motivate CEOs to invest more effort in their jobs and that the increased effort will result in higher-quality output. But is this really the case? Before you make up your mind, let’s see what the empirical evidence shows.
To test the effectiveness of financial incentives as a device for enhancing performance, Nina Mazar (a professor at the University of Toronto), Uri Gneezy (a professor at the University of California at San Diego), George Loewenstein (a professor at Carnegie Mellon University), and I set up an experiment. We varied the amount of financial bonuses participants could receive if they performed well and measured the effect that the different incentive levels had on performance. In particular, we wanted to see whether offering very large bonuses would increase performance, as we usually expect, or decrease performance, analogous to Yerkes and Dodson’s experiment with rats.
We decided to offer some participants the opportunity to earn a relatively small bonus (equivalent to about one day’s pay at their regular pay rate). Others would have a chance to earn a medium-sized bonus (equivalent to about two weeks’ pay at their regular rate). The fortunate few, and the most important group for our purposes, could earn a very large bonus, equal to about five months of their regular pay. By comparing the performances of these three groups, we hoped to get a better idea of how effective the bonuses were in improving performance.
I know you are thinking “Where can I sign up for this experiment?” But before you make extravagant assumptions about my research budget, let me tell you that we did what many companies are doing these days—we outsourced the operation to rural India, where the average person’s monthly spending was about 500 rupees (approximately $11). This allowed us to offer bonuses that were very meaningful to our participants without raising the eyebrows and ire of the university’s accounting system.
Once we decided where to run our experiments, we had to select the tasks themselves. We thought about using tasks that were based on pure effort, such as running, doing squats, or lifting weights, but since CEOs and other executives don’t earn their money by doing those kinds of things, we decided to focus on tasks that required creativity, concentration, memory, and problem-solving skills. After trying out a whole range of tasks on ourselves and on some students, we selected these six:
1 Packing Quarters: In this spatial puzzle, the participant had to fit nine quarter-circle wedges into a square. Fitting eight of them is simple, but fitting all nine is nearly impossible.
2 Simon: A bold-colored relic of the 1980s, this is (or was) a common electronic memory game requiring the participant to repeat increasingly longer sequences of lit-up colored buttons without error.
3 Recall Last Three Numbers: Just as it sounds, this is a simple game in which we read a sequence of numbers (23, 7, 65, 4, and so on) and stopped at a random moment. Participants had to repeat the last three numbers.
4 Labyrinth: A game in which the participant used two levers to control the angle of a playing surface covered with a maze and riddled with holes. The goal was to advance a small ball along a path and avoid the holes.
5 Dart Ball: A game much like darts but played with tennis balls covered with the looped side of Velcro and a target covered with the hooked side so that the balls would stick to it.
6 Roll-up: A game in which the participant moved two rods apart in order to move a small ball as high up as possible on an inclining slope.
Having chosen the games, we packed six of each type into a large box and shipped them to India. For some mysterious reason, the people at customs in India were not too happy with the battery-powered Simon games, but after we paid a 250 percent import tax, the games were released and we were ready to start our experiment.
We hired five graduate students in economics from Narayanan College in the southern Indian city of Madurai and asked them to go to a few of the local villages. In each of these, the students had to find a central public space, such as a small hospital or a meeting room, where they could set up shop and recruit participants for our experiment.
One of the locations was a community center, where Ramesh, a second-year master’s student, got to work. The community center was not fully finished, with no tiles on the floors and unpainted walls, but it was fully functional and, most important, it provided protection from wind, rain, and heat.
Ramesh positioned the six games around the room and then went outside to hail his first participant. Soon a man walked by, and Ramesh immediately tried to interest him in the experiment. “We have a few fun tasks here,” he explained to the man. “Would you be interested in participating in an experiment?” The deal sounded suspiciously like a government-sponsored activity to the passerby, so it wasn’t surprising that the fellow just shook his head and continued to walk on. But Ramesh persisted: “You can make some money in this experiment, and it’s sponsored by the university.” And so our first participant, whose name was Nitin, turned around and followed Ramesh into the community center.
Ramesh showed Nitin all the tasks that were set up around the room. “These are the games we will play today,” he told Nitin. “They should take about an hour. Before we start, let’s find out how much you could get paid.” Ramesh then rolled a die. It landed on 4, which according to our randomization process placed Nitin in the medium-level bonus condition, which meant that the total bonus he could make from all six games was 240 rupees—or about two weeks’ worth of pay for the average person in this part of rural India.
Next, Ramesh explained the instructions to Nitin. “For each of the six games,” he said, “we have a medium level of performance we call good and a high level of performance we call very good. For each game in which you reach the good level of performance, you will get twenty rupees, and for each game in which you reach the very good level of performance you will get forty rupees. In games in which you don’t even reach the good level, you will get nothing. This means that your payment will be somewhere between zero rupees and two hundred forty rupees, depending on your performance.”
Nitin nodded, and Ramesh picked the Simon game at random. In this game, one of the four colored buttons lights up and plays a single musical tone. Nitin was supposed to press the lighted button. Then the device would light the same button followed by another one; Nitin would press those two buttons in succession; and so on through an increasing number of buttons. As long as Nitin remembered the sequence and didn’t make any mistakes, the game kept going and the length of the sequence increased. But once Nitin got a sequence wrong, the game would end and Nitin’s score would be equal to his largest correct sequence. In total, Nitin was allowed ten tries to reach the desired score.
“Now let me tell you what good and very good mean in this game,” Ramesh continued. “If you manage to correctly repeat a sequence of six steps on at least one of the ten times you play, that’s a good level of performance and will earn you twenty rupees. If you correctly repeat a sequence of eight steps, that’s a very good level of performance and you will get forty rupees. After ten attempts, we will begin the next game. Is everything clear about the game and the rules for payment?”
Nitin was quite excited about the prospect of earning so much money. “Let’s start,” he said, and so they did.
The blue button was the first to light up, and Nitin pressed it. Next came the yellow button, and Nitin pressed the blue and yellow buttons in turn. Not so hard. He did fine when the green button lit up next but unfortunately failed on the fourth button. On his next attempt, he did not do much better. On his fifth attempt, however, he remembered a sequence of seven, and on his sixth he managed to get a sequence of eight. Overall, the game was a success, and he was now 40 rupees richer.
The next game was Packing Quarters, followed by Recall Last Three Numbers, Labyrinth, Dart Ball, and finally Roll-up. By the end of the hour, Nitin had reached a very good performance level on two of the games and a good performance level on two others. But he failed to reach the good level of performance for two of the games. In total, he made 120 rupees—a little more than a week’s pay—so he walked out of the community center a delighted man.
The next participant was Apurve, an athletic and slightly balding man in his thirties and the proud father of twins. Apurve rolled the die and it landed on 1, a number that, according to our randomization process, placed Apurve in the low-level bonus condition. This meant that the total bonus he could make from all six games was 24 rupees, or about one day of pay.
The first game Apurve played was Recall Last Three Numbers, followed by Roll-up, Packing Quarters, Labyrinth, and Simon, and ending with Dart Ball. Overall, he did rather well. He reached a good performance level in three of the games and a very good performance level in one. This put him on more or less the same performance level as Nitin, but, thanks to the unlucky roll of the die, he made only 10 rupees. Still, he was happy to receive that amount for an hour of playing games.
When Ramesh rolled the die for the third participant, Anoopum, it landed on 5. According to our randomization process, this placed him in the highest-level bonus condition. Ramesh explained to Anoopum that for each game in which he reached the good level of performance he would be paid 200 rupees and that he would receive 400 rupees for each game in which he reached the very good score. Anoopum made a quick calculation: six games multiplied by 400 rupees equaled 2,400 rupees—a veritable fortune, roughly equivalent to five months’ pay. Anoopum couldn’t believe his good luck.
The first randomly selected game for Anoopum was Labyrinth. Anoopum was instructed to place a small steel ball at the start position and then use the two knobs to advance the small ball through the maze while helping it avoid the trap holes. “We’ll play this game ten times,” Ramesh said. “If you manage to advance the ball past the seventh hole, we’ll call this a good level of performance, for which you will be paid two hundred rupees. If you manage to advance the ball past the ninth hole, we’ll call that a very good level of performance, and you will get four hundred rupees. When we’ve finished with this game, we’ll go on to the next. Everything clear?”
Anoopum nodded eagerly. He grabbed the two knobs that controlled the tilt of the maze surface and stared at the steel ball in its “start” position as if it were prey. “This is very, very important,” he mumbled. “I must succeed.”
He set the ball rolling; almost immediately, it fell into the first trap. “Nine more chances,” he said aloud to encourage himself. But he was under the gun, and his hands were now trembling. Unable to control the fine movements of his hands, he failed time after time. Having flubbed Labyrinth, he saw the wonderful images of what he would do with his small fortune slowly dissolve.
The next game was Dart Ball. Standing twenty feet away, Anoopum tried to hit the Velcro center of the target. He hurled one ball after another, throwing one from below like a softball pitch, another from above as in cricket, and even from the side. Some of the balls came very close to the target, but none of his twenty throws stuck to the center.
The Packing Quarters game was sheer frustration. In a minuscule two minutes, Anoopum had to fit the nine pieces into the puzzle in order to earn 400 rupees (if he took four minutes, he could earn 200 rupees). As the clock ticked, Ramesh read out the remaining time every thirty seconds: “Ninety seconds! Sixty seconds! Thirty seconds!” Poor Anoopum tried to work faster and faster, applying more and more force to fit all nine of the wedges into the square, but to no avail.
At the end of the four minutes, the Packing Quarters game was abandoned. Ramesh and Anoopum moved on to the Simon game. Anoopum felt somewhat frustrated, but he braced himself and tried his utmost to focus on the task at hand.
His first attempt with Simon resulted in a two-light sequence—not very promising. But, on the second try, he managed to recall a sequence of six. He beamed, because he knew that he had finally made at least 200 rupees, and he had eight more chances to make it to 400. Feeling as though he was finally able to do something well, he tried to increase his concentration, willing his memory to a higher plane of performance. In the next eight attempts, he was able to remember sequences of six and seven, but he never made it to eight.
With two more games to go, Anoopum decided to take a short break. He went through calming breathing exercises, exhaling a long “Om” with each breath. After several minutes, he felt ready for the Roll-up game. Unfortunately, he failed both the Roll-up game and the Recall Last Three Numbers task. As he left the community center, he comforted himself with the thought of the 200 rupees he had earned—a nice sum for a few games—but his frustration at not having gotten the larger sum was evident on his furrowed brow.
The Results: Drumroll, Please…
After a few weeks, Ramesh and the other four graduate students finished the data collection in a number of villages and mailed me the performance records. I was very eager to take a first look at the results. Was our Indian experiment worth the time and effort? Would the different levels of bonuses tally with the levels of performance? Would those who could receive the highest bonuses perform better? Worse?
For me, taking a first peek into a data set is one of the most exciting experiences in research. Though it’s not quite as thrilling as, say, catching a first glimpse of one’s child on an ultrasound, it’s easily more wonderful than opening a birthday present. In fact, for me there’s a ceremonial aspect to viewing a first set of statistical analyses. Early on in my research career, after spending weeks or months collecting data, I would enter all the numbers into a data set and format it for statistical analysis. Weeks and months of work would bring me to the point of discovery, and I wanted to be sure to celebrate the moment. I would take a break and pour myself a glass of wine or make a cup of tea. Only then would I sit down to celebrate the magical moment when the solution to the experimental puzzle I had been working on was finally revealed.
That magical moment is infrequent for me these days. Now that I’m no longer a student, my calendar is filled with commitments and I no longer have time to analyze experimental data myself. So, under normal circumstances, my students or collaborators take the first pass at the data analysis and experience the rewarding moment themselves. But when the data from India arrived, I was itching to have this experience once again. So I persuaded Nina to give me the data set and made her promise that she would not look at the data while I worked on it. Nina promised, and I reinstated my data analysis ritual, wine and all.
BEFORE I TELL you the results, how well do you think the participants in the three groups did? Would you guess that those who could earn a medium-level bonus did better than those who were faced with the small one? Do you think those hoping for a very large bonus did better than those who could achieve a medium-level one? We found that those who could earn a small bonus (equivalent to one day of pay) and the medium-level bonus (equivalent to two weeks’ worth of work) did not differ much from each other. We concluded that since even our small payment was worth a substantial amount to our participants, it probably already maximized their motivation. But how did they perform when the very large bonus (the amount equivalent to five months of their regular pay rate) was on the line? As you can tell from the figure above, the data from our experiment showed that people, at least in this regard, are very much like rats. Those who stood to earn the most demonstrated the lowest level of performance. Relative to those in the low- or medium-bonus conditions, they achieved good or very good performance less than a third of the time. The experience was so stressful to those in the very-large-bonus condition that they choked under the pressure, much like the rats in the Yerkes and Dodson experiment.
The graph below summarizes the results for the three bonus conditions across the six games. The “very good” line represents the percentage of people in each condition who achieved this level of performance. The “earnings” line represents the percentage of total payoff that people in each condition earned.
Supersizing the Incentive
I should probably tell you now that we didn’t start out running our experiments in the way I just described. Initially, we set out to place some extra stress on our participants. Given our limited research budget, we wanted to create the strongest incentive we could with the fixed amount of money we had. We chose to do this by adding the force of loss aversion to the mix. Loss aversion is the simple idea that the misery produced by losing something that we feel is ours—say, money—outweighs the happiness of gaining the same amount of money. For example, think about how happy you would be if one day you discovered that due to a very lucky investment, your portfolio had increased by 5 percent. Contrast that fortunate feeling to the misery that you would feel if, on another day, you discovered that due to a very unlucky investment, your portfolio had decreased by 5 percent. If your unhappiness with the loss would be higher than the happiness with the gain, you are susceptible to loss aversion. (Don’t worry; most of us are.)
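As a rough illustration (a stylized formulation from standard treatments of loss aversion, not something we measured in our experiment), the idea is often written as a kinked value function:

\[
v(x) =
\begin{cases}
x, & x \ge 0,\\
\lambda x, & x < 0,
\end{cases}
\qquad \lambda > 1,
\]

so a 5 percent loss feels roughly \(\lambda\) times as bad as a 5 percent gain feels good; typical empirical estimates put \(\lambda\) somewhere around 2.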
To introduce loss aversion into our experiment, we prepaid participants in the small-bonus condition 24 rupees (6 times 4). Participants in the medium-bonus condition received 240 rupees (6 times 40), and participants in the very-large-bonus condition were prepaid 2,400 rupees (6 times 400). We told them that if they got to the very good level of performance, we would let them keep all of the payment for that game; if they got to the good level of performance, we would take back half of the amount per game; and if they did not even reach the good level of performance, we would take back the entire amount per game. We thought that our participants would feel more motivated to avoid losing the money than they would by just trying to earn it.
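To see how the arithmetic works, take a hypothetical medium-bonus participant (not one of our actual subjects): prepaid 240 rupees, he reaches the very good level on two games (keeping 40 + 40), the good level on two games (keeping 20 + 20 and returning 20 + 20), and fails the remaining two (returning 40 + 40). He walks away with 120 rupees and hands back 120.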
Ramesh carried out this version of the experiment in a different village with two participants. But he went no further because this approach presented us with a unique experimental challenge. When the first participant stepped into the community center, we gave him all the money he could conceivably make from the experiment—2,400 rupees, equivalent to about five months’ salary—in advance. He didn’t manage to do any task well, and, unfortunately for him, he had to return all the money. At that point we looked forward to seeing if the rest of the participants would exhibit a similar pattern. Lo and behold, the next participant couldn’t manage any of the tasks either. The poor fellow was so nervous that he shook the whole time and couldn’t concentrate. But this guy did not play according to our rules, and at the end of the session he ran away with all of our money. Ramesh didn’t have the heart to chase him. After all, who could blame the poor guy? This incident made us realize that including loss aversion might not work in this experiment, so we switched to paying people at the end.
There was another reason why we wanted to prepay participants: we wanted to try to capture the psychological reality of bonuses in the marketplace. We thought that paying up front was analogous to the way many professionals think about their expected bonuses every year. They come to think of the bonuses as largely given and as a standard part of their compensation. They often even make plans for spending the money. Perhaps they eye a new house with a mortgage that would otherwise be out of reach or plan a trip around the world. Once they start making such plans, I suspect that they might be in the same loss-aversion mind-set as the prepaid participants.
Thinking versus Doing
We were certain that there would be some limits to the negative effect of high reward on performance—after all, it seemed unlikely that a significant bonus would reduce performance in all situations. And it seemed natural to expect that one limiting factor (what psychologists call a “moderator”) would be the level of mental effort the task required. The more cognitive skill involved, we thought, the more likely that very high incentives would backfire. We also thought that higher rewards would more likely lead to higher performance when it came to noncognitive, mechanical tasks. For example, what if I were to pay you for every time you jump in the next twenty-four hours? Wouldn’t you jump a lot, and wouldn’t you jump more if the payment were higher? Would you reduce your jumping speed or stop while you still had the ability to keep going if the amount were very large? Unlikely. In cases where the tasks are very simple and mechanical, it’s hard to imagine that very high motivation would backfire.
This reasoning is why we included a wide range of tasks in the experiment and why we were somewhat surprised that the very high reward level resulted in lower performance on all our tasks. We had certainly expected this to be the case for the more cognitive tasks such as the Simon and Recall Last Three Numbers games, but we hadn’t expected the effect to be just as pronounced for the tasks that were more mechanical in nature, such as the Dart Ball and Roll-up games. How could this be? One possibility was that our intuition about mechanical tasks was wrong and that, even for those kinds of tasks, very high incentives can be counterproductive. Another possibility was that the tasks we considered to have a low cognitive component (Dart Ball and Roll-up) still required some mental skill, and that we needed to include purely mechanical tasks in the experiment.
With these questions in mind, we next set out to see what would happen if we took one task that required some cognitive skills (in the form of simple math problems) and compared it to a task that was based on pure effort (quickly clicking on two keyboard keys). Working with MIT students, we wanted to examine the relationship between bonus size and performance when the task was purely mechanical, as opposed to a task that required some mental ability. Given my limited research budget, we could not offer the students the same range of bonuses we had offered in India. So we waited until the end of the semester, when the students were relatively broke, and offered them a bonus of $660—enough money to host a few parties—for a task that would take about twenty minutes.
Our experimental design had four parts, and each participant took part in all four of them (this setup is what social scientists call a within-participant design). We asked the students to perform the cognitive task (simple math problems) twice: once with the promise of a low bonus and once with the promise of a high bonus. We also asked them to perform the mechanical task (clicking on a keyboard) twice: once with the promise of a low bonus and once with the promise of a high bonus.
What did this experiment teach us? As you might expect, we saw a difference between the effects of large incentives on the two types of tasks. When the job at hand involved only clicking two keys on a keyboard, higher bonuses led to higher performance. However, once the task required even some rudimentary cognitive skills (in the form of simple math problems), the higher incentives led to a negative effect on performance, just as we had seen in the experiment in India.
The conclusion was clear: paying people high bonuses can result in high performance when it comes to simple mechanical tasks, but the opposite can happen when you ask them to use their brains—which is usually what companies try to do when they pay executives very high bonuses. If senior vice presidents were paid to lay bricks, motivating them through high bonuses would make sense. But people who receive bonus-based incentives for thinking about mergers and acquisitions or coming up with complicated financial instruments could be far less effective than we tend to think—and there may even be negative consequences to really large bonuses.
To summarize, using money to motivate people can be a double-edged sword. For tasks that require cognitive ability, low to moderate performance-based incentives can help. But when the incentive level is very high, it can command too much attention and thereby distract the person’s mind with thoughts about the reward. This can create stress and ultimately reduce the level of performance.
AT THIS POINT, a rational economist might argue that the experimental results don’t really apply to executive compensation. He might say something like “Well, in the real world, overpaying would never be an issue because employers and compensation boards would take lowered performance into account and never offer bonuses that could make motivation inefficient. After all,” the rational economist might claim, “employers are perfectly rational. They know which incentives help employees perform better and which incentives don’t.”
This is a perfectly reasonable argument. Indeed, it is possible that people intuitively understand the negative consequence of high bonuses and would therefore never offer them. On the other hand, much like many of our other irrationalities, it is also possible that we don’t exactly understand how different forces, including financial bonuses, influence us.
In order to try to find out what intuitions people have about high bonuses, we described the India experiment in detail to a large group of MBA students at Stanford University and asked them to predict the performance in the small-, medium-, and very-large-bonus conditions. Without knowing our results, our “postdictors” (that is, predictors after the fact) expected that the level of performance would increase with the level of payment—mispredicting the effects of the very high bonuses on performance.
These results suggested that the negative effect of high bonuses is not something that people naturally intuit. They also suggested that compensation is an area in which we need to rely on stringent empirical investigation rather than on intuitive reasoning. But would companies and boards of directors abandon their own intuitions when it comes to setting salaries and use empirical data instead? I doubt it. In fact, whenever I have a chance to present some of our findings to high-ranking executives, I am continually surprised by how little they know or think about the efficacy of their compensation schemes and how little interest they have in figuring out how to improve them.
What about Those “Special People”?
A few years ago, before the financial crisis of 2008, I was invited to give a talk to a select group of bankers. The meeting took place in a well-appointed conference room at a large investment company’s office in New York City. The food and wine were delicious and the views from the windows spectacular. I told the audience about different projects I was working on, including the experiments on high bonuses in India and at MIT. They all nodded their heads in agreement with the theory that high bonuses might backfire—until I suggested that the same psychological effects might also apply to the people in the room. They were clearly offended by the suggestion. The idea that their bonuses could negatively influence their work performance was preposterous, they claimed.
I tried another approach and asked for a volunteer from the audience to describe how the work atmosphere at his firm changes at the end of the year. “During November and December,” the fellow said, “very little work gets done. People mostly think about their bonuses and about what they will be able to afford.” In response, I asked the audience to try on the idea that the focus on their upcoming bonuses might have a negative effect on their performance, but they refused to see my point. Maybe it was the alcohol, but I suspect that those folks simply didn’t want to acknowledge the possibility that their bonuses were vastly oversized. (As the prolific author and journalist Upton Sinclair once noted, “It is difficult to get a man to understand something when his salary depends upon his not understanding it.”)
Somewhat unsurprisingly, when presented with the results of these experiments, the bankers also maintained that they were, apparently, superspecial individuals; unlike most people, they insisted, they work better under stress. It didn’t seem to me that they were really so different from other people, but I conceded that perhaps they were right. I invited them to come to the lab so that we could run an experiment to find out for sure. But, given how busy bankers are and the size of their paychecks, it was impossible to tempt them to take part in our experiments or to offer them a bonus that would have been large enough to be meaningful for them.
Without the ability to test bankers, Racheli Barkan (a professor at Ben-Gurion University in Israel) and I looked for another source of data that could help us understand how highly paid, highly specialized professionals perform under great pressure. I know nothing about basketball, but Racheli is an expert, and she suggested that we look at clutch players—the basketball heroes who sink a basket just as the buzzer sounds. Clutch players are paid much more than other players, and are presumed to perform especially brilliantly during the last few minutes or seconds of a game, when stress and pressure are highest.
With the help of Duke University men’s basketball Coach Mike Krzyzewski (“Coach K”), we got a group of professional coaches to identify clutch players in the NBA (the coaches agreed, to a large extent, about who is and who is not a clutch player). Next, we watched videos of the twenty most crucial games for each clutch player in an entire NBA season (by most crucial, we meant that the score difference at the end of the game did not exceed three points). For each of those games, we measured how many points the clutch players had scored in the last five minutes of the first half, when pressure was relatively low. Then we compared that number to the number of points scored during the last five minutes of the game, when the outcome was hanging by a thread and stress was at its peak. We also noted the same measures for all the other “nonclutch” players who were playing in the same games.
We found that the nonclutch players scored more or less the same in the low-stress and high-stress moments, whereas there was actually a substantial improvement for clutch players during the last five minutes of the games. So far it looked good for the clutch players and, by analogy, the bankers, as it seemed that some highly qualified people could, in fact, perform better under pressure.
But—and I’m sure you expected a “but”—there are two ways to score more points in the last five minutes of a game. An NBA clutch player can either improve his success percentage (which would indicate a sharpening of performance) or shoot more often with the same percentage (which suggests no improvement in skill, just a change in the number of attempts). So we looked separately at whether the clutch players actually shot better or just more often. As it turned out, the clutch players did not improve their skill; they just tried many more times. Their field goal percentage did not increase in the last five minutes (meaning that their shots were no more accurate); nor was it the case that the nonclutch players got worse.
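A quick back-of-the-envelope calculation makes the distinction concrete. The numbers below are made up purely for illustration (and I assume two points per basket for simplicity); they are not our actual NBA data. The point is that total points are roughly attempts times success rate, so a player can score more in the final minutes simply by shooting more, with no improvement in accuracy.

```python
# Illustrative only: how point totals can rise without any gain in accuracy.
def expected_points(attempts, success_rate, points_per_basket=2):
    # Total points are (roughly) attempts x success rate x points per basket.
    return attempts * success_rate * points_per_basket

first_half_window = expected_points(attempts=5, success_rate=0.45)   # low-pressure stretch
final_minutes     = expected_points(attempts=8, success_rate=0.45)   # high-pressure stretch

print(first_half_window, final_minutes)   # 4.5 vs. 7.2: more points, identical accuracy
```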
At this point you probably think that clutch players are guarded more heavily during the end of the game and this is why they don’t show the expected increase in performance. To see if this were indeed the case, we counted how many times they were fouled and also looked at their free throws. We found the same pattern: the heavily guarded clutch players were fouled more and got to shoot from the free-throw line more frequently, but their scoring percentage was unchanged. Certainly, clutch players are very good players, but our analysis showed that, contrary to common belief, their performance doesn’t improve in the last, most important part of the game.
Obviously, NBA players are not bankers. The NBA is much more selective than the financial industry; very few people are sufficiently skilled to play professional basketball, while many, many people work as professional bankers. As we’ve seen, it’s also easier to get positive returns from high incentives when we’re talking about physical rather than cognitive skills. NBA players use both, but playing basketball is more of a physical than a mental activity (at least relative to banking). So it would be far more challenging for the bankers to demonstrate “clutch” abilities when the task is less physical and demands more gray matter. Also, since the basketball players don’t actually improve under pressure, it’s even more unlikely that bankers would be able to perform to a higher degree when they are under the gun.
A CALL FOR LOWER BONUSES
One congressman publicly questioned the ethics of very large bonuses when he addressed the annual awards dinner of the trade newspaper American Banker at the New York Palace Hotel in 2004. Representative Barney Frank of Massachusetts, who, at the time, was the senior Democrat on the House Financial Services Committee (he’s currently the chairman) and hardly your run-of-the-mill, flattering “Thank you all so much for inviting me” speaker, began with a question: “At the level of pay that those of you who run banks get, why the hell do you need bonuses to do the right thing?” He was answered by an abyss of silence. So he went on: “Do we really have to bribe you to do your jobs? I don’t get it. Think what you are telling the average worker—that you, who are the most important people in the system and at the top, your salary isn’t enough, you need to be given an extra incentive to do your jobs right.”
As you may have guessed, two things happened, or rather did not happen, after this speech. First, no one answered his questions; second, no standing ovation was given. But Frank’s point is important. After all, bonuses are paid with shareholders’ money, and the effectiveness of those expensive payment schemes is not all that clear.
Public Speaking 101
The truth is that all of us, at various times, struggle and even fail when we perform tasks that matter to us the most. Consider your performance on standardized tests such as the SAT. What was the difference between your score on the practice tests and your score on the real SAT? If you are like most people, the result on your practice tests was most likely higher, suggesting that the pressure of wanting to perform well led you to a lower score.
The same principle applies to public speaking. When preparing to give a speech, most people do just fine when they practice their talk in the privacy of their offices. But when it’s time to stand up in front of a crowd, things don’t always go according to plan. The hypermotivation to impress others can cause us to stumble. It’s no coincidence that glossophobia (the fear of public speaking) is right up there with arachnophobia (fear of spiders) on the scary scale.
As a professor, I have had a lot of personal experience with this particular form of overmotivation. Early in my academic career, public speaking was difficult for me. During one early presentation at a professional conference in front of many of my professors, I shook so badly that every time I used the laser pointer to emphasize a particular line on a projected slide, it raced all over the large screen and created a very interesting light show. Of course, that just made the problem worse and, as a result, I learned to make do without a laser pointer. Over time and with a lot of experience, I became better at public speaking, and my performance doesn’t suffer as much these days.
Despite years of relatively problem-free public speaking, I recently had an experience where the social pressure was so high that I flubbed a talk at a large conference in front of many of my colleagues. During one session at a conference in Florida, three colleagues and I were going to present our recent work on adaptation, the process through which people become accustomed to new circumstances (you’ll read more about this phenomenon in chapter 6, “On Adaptation”). I had carried out some studies in this area, but instead of talking about my research findings, I planned to give a fifteen-minute talk about my personal experience in adapting to my physical injuries and present some of the lessons I had learned. I practiced this talk a few times, so I knew what I was going to say. Aside from the fact that the topic was more personal than is usual in an academic presentation, I did not feel that the talk was that much different from others I have given over the years. As it turned out, the plan did not match the reality in the slightest.
I started the lecture very calmly by describing my talk’s objective, but, to my horror, the moment I started describing my experience in the hospital, I teared up. Then I found myself unable to speak. Avoiding eye contact with the audience, I tried to compose myself as I walked from one side of the room to the other for a minute or so. I tried again, but I could not get the words out. After some more pacing and another attempt, I was still unable to speak without crying.
It was clear to me that the presence of the audience had amplified my emotional memory. So I decided to switch to an impersonal discussion of my research. That approach worked fine, and I finished my presentation. But it left me with a very strong impression about my own inability to predict the effects of my own emotions, when combined with stress, on my ability to perform.
WITH MY PUBLIC failure in mind, Nina, Uri, George, and I created yet another version of our experiments. This time, we wanted to see what would happen when we injected an element of social pressure into the experimental mix.
In each session of this experiment, we presented eight students at the University of Chicago with thirteen sets of three anagrams, and paid them for each of the anagrams they solved. As an example, try to rearrange the letters of the following meaningless words to form meaningful ones (do this before you look at the footnote):
1. SUHOE
Your solution:
2. TAUDI
Your solution:
3. GANMAAR
Your solution:
In eight of the thirteen trials, participants solved their anagrams working alone in private cubicles. In the other five trials, they were instructed to stand up, walk to the front of the room, and try to solve the anagrams on a large blackboard in plain view of the other participants. In these public trials, performing well on the anagrams was more important, since the participants would not only receive the payment for their performance (as in the private trials) but would also stand to reap some social rewards in the form of the admiration of their peers (or be humiliated if they failed in front of everyone). Would they solve more anagrams in public—when their performance mattered more—or in private, when there was no social motivation to do well? As you’ve probably guessed, the participants solved about twice as many anagrams in private as in public.
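If you want to verify an answer to one of the scrambled words above without peeking at the footnote, here is a tiny sketch of a checker. It relies on a simple fact: two strings are anagrams exactly when their sorted letters match. The example words below are my own and are not among the three puzzles, so nothing is spoiled.

```python
# Two strings are anagrams exactly when their sorted letters match.
def is_anagram(candidate, scrambled):
    return sorted(candidate.upper()) == sorted(scrambled.upper())

print(is_anagram("silent", "LISTEN"))   # True  -- same letters, different order
print(is_anagram("listed", "LISTEN"))   # False -- an 'n' is missing, a 'd' is extra
```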
THE PSYCHIATRIST AND concentration camp survivor Viktor Frankl described a related example of choking under social pressure. In Man’s Search for Meaning, Frankl wrote about a patient with a persistent stutter who, try as he might, could not rid himself of it. In fact, the only time the poor fellow had been free of his speech problem was once when he was twelve years old. In that instance, the conductor of a streetcar had caught the boy riding without a ticket. Hoping the conductor would pity him for his stutter and let him off, the boy tried to stutter—but because the pressure to speak fluently was gone, he found himself unable to produce a stutter! In a related example, Frankl describes a patient with a fear of perspiring: “Whenever he expected an outbreak of perspiration, this anticipatory anxiety was enough to precipitate excessive sweating.” In other words, the patient’s high social motivation to be sweat-free ironically led to more perspiration or, in economic terms, to lower performance.
In case you’re wondering, choking under social pressure is not limited to humans. A variety of our animal friends have been put to similar tests, including no one’s favorite—the cockroach—who starred in one particularly interesting study. In 1969, Robert Zajonc, Alexander Heingartner, and Edward Herman wanted to compare the speed at which roaches would accomplish different tasks under two conditions. In one, they were alone and without any company. In the other, they had an audience in the form of a fellow roach. In the “social” case, the other roach watched the runner through a Plexiglas window that allowed the two creatures to see and smell each other but that did not allow any direct contact.
One task that the cockroaches performed was relatively easy: the roach had to run down a straight corridor. The other, more difficult task required the roach to navigate a somewhat complex maze. As you might expect (assuming you have expectations about roaches), the insects performed the simpler runway task much more quickly when another roach was observing them. The presence of another roach increased their motivation, and, as a consequence, they did better. However, in the more complex maze task, they struggled to navigate their way in the presence of an audience and did much worse than when they performed the same complex task alone. So much for the benefits of social pressure.
I don’t suppose that the knowledge of shared performance anxiety will endear roaches to you, but it does demonstrate the general ways in which high motivation to perform well can backfire (and it may also point to some important similarities between humans and roaches). As it turns out, overmotivation to perform well can stem from electrical shocks, from high payments, or from social pressures, and in all these cases humans and nonhumans alike seem to perform worse when it is in their best interest to truly outdo themselves.
Where Do We Go from Here?
These findings make it clear that figuring out the optimal level of rewards and incentives is not easy. I do believe that the inverse-U relationship originally suggested by Yerkes and Dodson generally holds, but obviously there are additional forces that could make a difference in performance. These include the characteristics of the task (how easy or difficult it is), the characteristics of the individual (how easily they become stressed), and characteristics related to the individual’s experience with the task (how much practice a person has had with this task and how much effort they need to put into it). Either way, we know two things: it’s difficult to create the optimal incentive structure for people, and higher incentives don’t always lead to the highest performance.
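For readers who like to see the shape of this argument, here is a minimal sketch of an inverted-U curve relating incentive level to performance. The particular function (a simple quadratic) and the numbers in it are illustrative assumptions of mine, not a model fitted to our data; the only point is that performance climbs with motivation up to some peak and then declines.

```python
# A toy inverted-U ("Yerkes-Dodson"-style) curve; illustrative, not fitted to data.
def performance(incentive, peak_at=50.0, max_performance=1.0):
    # Performance peaks when the incentive equals peak_at and falls off
    # symmetrically on either side (floored at zero for readability).
    return max(0.0, max_performance - ((incentive - peak_at) / peak_at) ** 2)

for incentive in (0, 25, 50, 75, 100, 150):
    print(f"incentive={incentive:>3}  performance={performance(incentive):.2f}")
```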
I want to be clear that these findings don’t mean that we should stop paying people for their work and contributions. But they do mean that the way we pay people can have powerful unintended consequences. When corporate HR departments design compensation plans, they usually have two goals: to attract the right people for the job and to motivate them to do the best they can. There is no question that these two objectives are important and that salaries (in addition to benefits, pride, and meaning—topics that we will cover in the next few chapters) can play an important role in fulfilling these goals. The problem lies with the type of compensation people receive. Some forms, such as very high bonuses, can create stress because they cause people to overfocus on the compensation itself, ultimately reducing their performance.
TO TRY TO get a feeling for how a high salary might change your behavior and influence your performance, imagine the following thought experiment: What if I paid you a lot of money, say $100,000, to come up with a very creative idea for a research project in the next seventy-two hours? What would you do differently? You would probably swap some of your regular activities for others. You would not bother with your e-mail; you wouldn’t check Facebook; you wouldn’t leaf through a magazine. You would probably drink a lot of coffee and sleep much less. Maybe you would stay at the office all night (as I do from time to time). This means that you would work more hours, but would doing any of this help you be more creative?
Hours spent working aside, let’s consider how your thought process would change during those critical seventy-two hours. What would you do to make yourself more creative and productive? Would you close your eyes harder? Would you visualize a mountaintop? Bite your lip harder? Breathe deeply? Meditate? Would you be able to chase away random thoughts more easily? Would you type faster? Think more deeply? Would you do any of those things, and would they really lead you to a higher level of performance?
This is just a thought experiment, but I hope it illustrates the idea that though a large amount of money would most likely get you to work many hours (which is why high payment is very useful as an incentive when simple mechanical tasks are involved), it is unlikely to improve your creativity. It might, in fact, backfire, because financial incentives don’t operate in a simple way on the quality of output from our brains. Nor is it at all clear how much of our mental activity is really under our direct control, especially when we are under the gun and really want to do our best.
NOW LET’S IMAGINE that you need a critical, lifesaving surgery. Do you think that offering your medical team a sky-high bonus would really result in improved performance? Would you want your surgeon and anesthesiologist to think, during the operation, about how they might use the bonus to buy a sailboat? That would clearly motivate them to get the bonus, but would it get them to perform better? Wouldn’t you rather they devoted all of their mental energy to the task at hand? How much more effective might your doctors be in what the psychologist Mihály Csíkszentmihályi called a “state of flow”—when they are fully engaged and focused on the task at hand and oblivious to anything else? I’m not sure about you, but for important tasks that require thinking, concentration, and cognitive skill, I would take a doctor who’s in a flow state any day.
A Few Words about Small and Large Decisions
For the most part, researchers like me carry out laboratory-based experiments. Most of these involve simple decisions, short periods of time, and relatively low stakes. Because traditional economists usually do not like the answers that our lab experiments produce, they often complain that our results do not apply to the real world. “Everything would change,” they say, “if the decisions were important, the stakes were higher, and people tried harder.” But to me, that’s like saying that people always get the best care in the emergency room because the decisions made there are often literally life and death. (I doubt many people would argue that this is the case.) Absent empirical evidence one way or the other, such criticism of laboratory experiments is perfectly reasonable. It is useful to have some healthy skepticism about any results, including those generated in relatively simple lab experiments. Nevertheless, it is not clear to me why the psychological mechanisms that underlie our simple decisions and behaviors would not be the same ones that underlie more complex and important ones.
CARING AS A DOUBLE-EDGED SWORD
First Knight, a movie that came out in 1995 starring Sean Connery and Richard Gere, demonstrates one extreme way of dealing with the way motivation affects performance. Richard Gere’s character, Sir Lancelot, is a vagabond expert swordsman who duels to pay the bills. Toward the beginning of the film, he sets up a kind of mini sparring clinic where the villagers pay to test their skills against him while he dispenses witty advice for their improvement. At one point, Lancelot suggests that someone out there must be better than he, and wouldn’t that person love to win the gold pieces he happens to have clinking around in a bag?
Finally, an enormous blond man named Mark challenges him. They fight furiously for a brief time. Then, of course, Lancelot disarms Mark. The latter, confused, asks Lancelot how he managed to disarm him and whether it was a trick. Lancelot smilingly says that that’s just how he fights, no trick to it. (Well, there is one mental trick, as we discover later.) When Mark asks Lancelot to teach him, Lancelot pauses for a moment before giving his lesson. He offers Mark three tips: first, to observe the man he’s fighting and learn how he moves and thinks; second, to await the make-or-break moment in the match and go for it then. Up to that point, Mark smiles and nods happily, sure he can learn to do those things. Lancelot’s final tip, however, is a little more difficult to follow. He tells his eager student that he can’t care about living or dying. Mark stares into his face, astonished; Lancelot smiles sadly and walks off into the sunset like a medieval cowboy.
Judging from this advice, it seems that Lancelot fights better than anyone else because he has found a way to bring the stress of the situation to zero. If he doesn’t care whether he lives or dies, nothing rides on his performance. He doesn’t worry about living past the end of the fight, so nothing clouds his mind and affects his abilities—he is pure concentration and skill.
Seen from this perspective, the findings presented in this chapter suggest that our tendency to behave irrationally and in ways that are undesirable might increase when the decisions are more important. In our India experiment, the participants behaved very much as standard economics would predict when the incentives were relatively low. But they did not behave as standard economics would predict when it really mattered and the incentives were highest.
COULD ALL THIS mean that sometimes we might actually behave less rationally when we try harder? If that’s so, what is the correct way to pay people without overstressing them? One simple solution is to keep bonuses low—something those bankers I met with might not appreciate. Another approach might be to pay employees on a straight salary basis. Though it would eliminate the consequences of overmotivation, it would also eradicate some of the benefits of performance-based payment. A better approach might be to keep the motivating element of performance-based payment but eliminate some of the nonproductive stress it creates. To achieve this, we could, for example, offer employees smaller and more frequent bonuses. Another approach might be to offer employees a performance-based payment that is averaged over time—say, the previous five years, rather than only the last year. This way, employees in their fifth year would know 80 percent of their bonus in advance (based on the previous four years), and the immediate effect of the present year’s performance would matter less.
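To see the arithmetic behind the averaging idea, here is a minimal sketch. The performance scores, the five-year window, and the dollars-per-point rate are all illustrative assumptions of mine, not figures from any real pay plan; the sketch only shows that, by year five, four of the five inputs are already locked in, so roughly 80 percent of the bonus is known in advance and a single good or bad year moves the total only modestly.

```python
# Illustrative only: a bonus based on the average performance score over the
# trailing five years, rather than on the most recent year alone.
def trailing_average_bonus(performance_history, window=5, dollars_per_point=1000):
    recent = performance_history[-window:]
    return dollars_per_point * sum(recent) / len(recent)

past_four_years = [80, 90, 85, 95]        # already locked in by year five
for this_year in (60, 100):               # a bad year versus a great year
    bonus = trailing_average_bonus(past_four_years + [this_year])
    print(f"this year's score={this_year:>3}  bonus=${bonus:,.0f}")
# A 40-point swing in the current year moves the bonus by only $8,000.
```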
Whatever approach we take to optimize performance, it should be clear that we need a better understanding of the links between compensation, motivation, stress, and performance. And we need to take our peculiarities and irrationalities into account.
P.S. I WOULD like to dedicate this chapter to my banker friends, who repeatedly “enjoy” hearing my opinion about their salaries and are nevertheless still willing to talk to me.
CHAPTER 2 The Meaning of Labor (#ulink_479b101a-9d06-551e-884f-acf34d82727c)
What Legos Can Teach Us about the Joy of Work
On a recent flight from California, I was seated next to a professional-looking man in his thirties. He smiled as I settled in, and we exchanged the usual complaints about shrinking seat sizes and other discomforts. We both checked our e-mail before shutting down our iPhones. Once we were airborne, we got to chatting. The conversation went like this:
HE: So how do you like your iPhone?
ME: I love it in many ways, but now I check my e-mail all the time, even when I am at traffic lights and in elevators.
HE: Yeah, I know what you mean. I spend much more time on e-mail since I got it.
ME: I’m not sure if all these technologies make me more productive, or less.
HE: What kind of work do you do?
Whenever I’m on a plane and start chatting with the people sitting next to me, they often ask or tell me what they do for a living long before we exchange names or other details about our lives. Maybe it’s a phenomenon more common in America than in other places, but I’ve observed that fellow travelers everywhere—at least the ones who make conversation—often discuss what they do for a living before talking about their hobbies, family, or political ideology.
The man sitting next to me told me all about his work as a sales manager for SAP, a large business management software firm that many companies use to run their back-office systems. (I knew something about the technology because my poor, suffering assistant at MIT was forced to use it when the university switched to SAP.) I wasn’t terribly interested in talking about the challenges and benefits of accounting software, but I was taken by my seatmate’s enthusiasm. He seemed to really like his job. I sensed that his work was the core of his identity—more important to him, perhaps, than many other things in his life.
ON AN INTUITIVE level, most of us understand the deep interconnection between identity and labor. Children think of their potential future occupations in terms of what they will be (firemen, teachers, doctors, behavioral economists, or what have you), not about the amount of money they will earn. Among adult Americans, “What do you do?” has become as common a component of an introduction as the anachronistic “How do you do?” once was—suggesting that our jobs are an integral part of our identity, not merely a way to make money in order to keep a roof over our heads and food in our mouths. It seems that many people find pride and meaning in their jobs.
In contrast to this labor-identity connection, the basic economic model of labor generally treats working men and women as rats in a maze: work is assumed to be annoying, and all the rat (person) wants to do is to get to the food with as little effort as possible and to rest on a full belly for the most time possible. But if work also gives us meaning, what does this tell us about why people want to work? And what about the connections among motivation, personal meaning, and productivity?
Sucking the Meaning out of Work
In 2005, I was sitting in my office at MIT, working on yet another review, when I heard a knock at the door. I looked up and saw a familiar, slightly chubby face belonging to a young man with brown hair and a funny goatee. I was sure I knew him, but I couldn’t place him. I did the proper thing and invited him in. A moment later I realized that he was David, a thoughtful and insightful student who had taken my class a few years earlier. I was delighted to see him.
Once we were settled in with coffee, I asked David what had brought him back to MIT. “I’m here to do some recruiting,” he said. “We’re looking for new blood.” David went on to tell me what he’d been up to since graduating a few years earlier. He’d landed an exciting job in a New York investment bank. He was making a high salary and enjoying fantastic benefits—including having his laundry done—and loved living in the teeming city. He was dating a woman who, from his description, seemed to be a blend of Wonder Woman and Martha Stewart, though admittedly they had been together for only two weeks.
“I also wanted to tell you something,” he said. “A few weeks ago, I had an experience that made me think back to our behavioral economics class.”
He told me that earlier that year he’d spent ten weeks on a presentation for a forthcoming merger. He had worked very hard on analyzing data, making beautiful plots and projections, and he had often stayed in the office past midnight polishing his PowerPoint presentation (what did bankers and consultants do before PowerPoint?). He was delighted with the outcome and happily e-mailed the presentation to his boss, who was going to make the presentation at the all-important merger meeting. (David was too low in the hierarchy to actually attend the meeting.)
His boss e-mailed him back a few hours later: “Sorry, David, but just yesterday we learned that the deal is off. I did look at your presentation, and it is an impressive and fine piece of work. Well done.” David realized that his presentation would never see the light of day but that this was nothing personal. He understood that his work had shone, because his boss was not the kind of person who gave undeserved compliments. Yet, despite the commendation, he was distraught over the outcome. The fact that all his effort had served no ultimate purpose created a deep rift between him and his job. All of a sudden he didn’t care as much about the project in which he had invested so many hours. He also found that he didn’t care as much about the other projects he was working on. In fact, this “work to no end” experience seemed to have colored David’s overall approach to his job and his attitude toward the bank. He’d quickly gone from feeling useful and happy in his work to feeling dissatisfied, as though his efforts were futile.
“You know what’s strange?” David added. “I worked hard, produced a high-quality presentation, and my boss was clearly happy with me and my work. I am sure that I will get very positive reviews for my efforts on this project and probably a raise at the end of the year. So, from a functional point of view, I should be happy. At the same time, I can’t shake the feeling that my work has no meaning. What if the project I’m working on now gets canceled the day before it’s due and my work is deleted again without ever being used?”
Then he offered me the following thought experiment. “Imagine,” he said in a low, sad voice, “that you work for some company and your task is to create PowerPoint slides. Every time you finish, someone takes the slides you’ve just made and deletes them. As you do this, you get paid well and enjoy great fringe benefits. There is even someone who does your laundry. How happy would you be to work in such a place?”
I felt sorry for David, and in an attempt to comfort him, I told him a story about my friend Devra, who worked as an editor at one of the major university presses. She had recently finished editing a history book—work she had enjoyed doing and for which she had been paid. Three weeks after she submitted the final manuscript to the publishing house, the head editor decided not to print it. As was the case with David, everything was fine from a functional point of view, but the fact that no readers would ever hold the book in their hands made her regret the time and care she had put into editing it. I was hoping to show David that he was not alone. After a minute of silence he said, “You know what? I think there might be a bigger issue around this. Something about useless or unrequited work. You should study it.”
It was a great idea, and in a moment, I’ll tell you what I did with it. But before we do that, let’s take a detour into the worlds of a parrot, a rat, and contrafreeloading.
Will Work for Food
When I was sixteen, I joined the Israeli Civil Guard. I learned to shoot a World War II–era Russian carbine rifle, set up roadblocks, and perform other useful tasks in case the adult men were at war and we youth were left to protect the home front. As it turned out, the main benefit of learning how to shoot was that from time to time it excused me from school. In those years in Israel, every time a high school class went on a trip, a student who knew how to use a rifle was asked to join it as a guard. Since this duty also meant substituting a few days of classes with hiking and enjoying the countryside, I was always willing to volunteer, even if I had to give up an exam for the call of duty.
On one of these trips I met a girl, and by the end of the trip I had a crush on her. Unfortunately, she was one class behind me in school and our schedules did not coincide, making it difficult for me to see her and learn whether she felt the same about me. So I did what any moderately resourceful teenager would do: I discovered an extracurricular interest of hers and made it mine as well.
About a mile from our town lived a guy we called “Birdman” who had endured a miserable and lonely childhood in Eastern Europe during the Holocaust. Hiding from the Nazis in the forest, he found much comfort in the animals and birds around him. After he eventually made it to Israel, he decided to try to make the childhood of the kids around him much better than his, so he collected birds from all over the globe and invited children to come and experience the wonders of the avian world. The girl I liked used to volunteer in the Birdman’s aviary, and so I joined her in cleaning cages, feeding the birds, telling visitors stories about them, and—most amazingly—watching the birds hatch, grow, and interact with one another and the visitors. After a few months, it became clear that the girl and I had no future but the birds and I did, so I continued to volunteer for a while.
Some years later, after my main hospitalization period, I decided to get a parrot. I selected a relatively large, highly intelligent Mealy Amazon parrot and named her Jean Paul. (For some reason, I decided that female parrots should have French male names.) She was a handsome bird; her feathers were mostly green with some light blue, yellow, and red at the tips of her wings, and we had lots of fun together. Jean Paul loved talking and flirting with nearly everyone who happened by her cage. She would come near me to be petted any time I passed her cage, bowing her head very low and exposing the back of her neck, and I would try to produce baby talk as I ruffled the feathers on her neck. Whenever I took a shower, she would perch in the bathroom and twitch happily when I splashed water drops at her.
Jean Paul was intensely social. Left alone in her cage for too long, she would pluck at her own feathers, something she did when she was bored. As I discovered, parrots have a particularly acute need to engage in mental activity, so I invested in several toys specifically designed to preclude parrot boredom. One such puzzle, called SeekaTreat, was a stack of multicolored wooden tiers of decreasing size that formed a kind of pyramid; the tiers were connected through the center with a cord. Within each tier, there were half-inch-deep “treat wells” designed to hold tasty parrot treats. To get at the food, Jean Paul had to lift each tier and uncover the treat, which was not very easy to do. Over the years, the SeekaTreat and other toys like it kept Jean Paul happy, curious, and interested in her environment.
THOUGH I DIDN’T know it at the time, there was an important concept behind the SeekaTreat. “Contrafreeloading,” a term coined by the animal psychologist Glen Jensen, refers to the finding that many animals prefer to earn food rather than simply eating identical but freely accessible food.
To better understand the joy of working for food, let’s go back to the 1960s when Jensen first took adult male albino rats and tested their appetite for labor. Imagine that you are a rat participating in Jensen’s study. You and your little rodent friends start out living an average life in an average cluster of cages, and every day, for ten days, a nice man in a white lab coat gives you 10 grams of finely ground Purina lab crackers precisely at noon (you don’t know it’s noon, but you eventually pick up on the general time). After a few days of this pattern, you learn to expect food at noon every day, and your rat tummy begins rumbling right before the nice man shows up—exactly the state Jensen wants you in.
Once your body is conditioned to eating crackers at noon, things suddenly change. Instead of being fed at the time of your maximal hunger, you have to wait another hour, and at one o’clock, the man picks you up and puts you in a well-lit “Skinner box.” You are ravenous. Named after its original designer, the influential psychologist B. F. Skinner, this box is a regular cage (similar to the one you are used to), but it has two features that are new to you. The first is an automated food dispenser that releases food pellets every thirty seconds. Yum! The second is a bar that for some reason is covered with a tin shield.
At first, the bar isn’t very interesting, but the food dispenser is, and that is where you spend your time. The food dispenser releases food pellets every so often for twenty-five minutes, until you have eaten fifty food pellets. At that point you are taken back to your cage and given the rest of your food for the day.
The next day, your lunch hour passes by again without food, and at 1:00 P.M. you are placed back into the Skinner box. You’re ravenous but unhappy because this time the food dispenser doesn’t release any pellets. What to do? You wander around the cage, and, passing the bar, you realize that the tin shield is missing. You accidentally press the bar, and immediately a pellet of food is released. Wonderful! You press the bar again. Oh joy!—another pellet comes out. You press again and again, eating happily, but then the light goes off, and at the same time, the bar stops releasing food pellets. You soon learn that when the light is off, no matter how much you press the bar, you don’t get any food.
Just then the man in the lab coat opens the top of the cage and places a tin cup in a corner of the cage. (You don’t know it, but the cup is full of pellets.) You don’t pay attention to the cup; you just want the bar to start producing food again. You press and press, but nothing happens. As long as the light is off, pressing the bar does you no good. You wander around the cage, cursing under your rat breath, and go over to the tin cup. “Oh my!” you say to yourself. “It’s full of pellets! Free food!” You begin chomping away, and then suddenly the light comes on again. Now you realize that you have two possible food sources. You can keep on eating the free food from the tin cup, or you can go back to the bar and press it for food pellets. If you were this rat, what would you do?
Assuming you were like all but one of the two hundred rats in Jensen’s study, you would decide not to feast entirely from the tin cup. Sooner or later, you would return to the bar and press it for food. And if you were like 44 percent of the rats, you would press the bar quite often—often enough to earn more than half of your pellets that way. What’s more, once you started pressing the bar, you would not return so easily to the cup with the abundant free food.
Jensen discovered (and many subsequent experiments confirmed) that many animals—including fish, birds, gerbils, rats, mice, monkeys, and chimpanzees—tend to prefer a longer, more indirect route to food than a shorter, more direct one. That is, as long as fish, birds, gerbils, rats, mice, monkeys, and chimpanzees don’t have to work too hard, they frequently prefer to earn their food. In fact, among all the animals tested so far the only species that prefers the lazy route is—you guessed it—the commendably rational cat.
This brings us back to Jean Paul. If she were an economically rational bird and interested only in expending as little effort as possible to get her food, she would simply have eaten from the tray in her cage and ignored the SeekaTreat. Instead, she played with her SeekaTreat (and other toys) for hours because it provided her with a more meaningful way to earn her food and spend her time. She was not merely existing but mastering something and, in a sense, “earning” her living.
THE GENERAL IDEA of contrafreeloading contradicts the simple economic view that organisms will always choose to maximize their reward while minimizing their effort. According to this standard economic view, spending anything, including energy, is considered a cost, and it makes no sense that an organism would voluntarily do so. Why would an animal work when it can get the same food—maybe even more food—for free?
When I described contrafreeloading to one of my rational economist friends (yes, I still have some of these), he immediately explained to me how Jensen’s results do not, in fact, contradict standard economic reasoning. He patiently told me why this research was irrelevant to questions of economics. “You see,” he said, as one would to a child, “economic theory is about the behavior of people, not rats or parrots. Rats have very small brains and almost nonexistent neocortices, so it is no wonder that these animals don’t realize that they can get food for free. They are just confused.”
“Anyway,” he continued, “I am sure that if you were to repeat Jensen’s experiment with normal people, you would not find this contrafreeloading effect. And I am a hundred percent positive that if you had used economists as your participants, you would not see anyone working unnecessarily!”
He had a valid point. And though I felt it was possible to generalize about the way we relate to work from those animal studies, it was also clear to me that some experiments on adult human contrafreeloading were in the cards. (It was also clear that I should not do the experiment on economists.)
What do you think? Do humans, in general, exhibit contrafreeloading, or are they more rational? What about you?
“Small-M” Motivations
After David left my office, I started thinking about his and Devra’s disappointments. The lack of an audience for their work had made a big difference in their motivation. What is it aside from a paycheck, I wondered, that confers meaning on work? Is it the small satisfaction of focused engagement? Is it that, like Jean Paul, we enjoy feeling challenged by whatever it is we’re doing and satisfactorily completing a task (which creates a small level of meaning with a small m)? Or maybe we feel meaning only when we deal with something bigger. Perhaps we hope that someone else, especially someone important to us, will ascribe value to what we’ve produced? Maybe we need the illusion that our work might one day matter to many people. That it might be of some value in the big, broad world out there (we might call this Meaning with a large M)? Most likely it is all of these. But fundamentally, I think that almost any aspect of meaning (even small-m meaning) can be sufficient to drive our behavior. As long as we are doing something that is somewhat connected to our self-image, it can fuel our motivation and get us to work much harder.
Consider the work of writing, for example. Once upon a time, I wrote academic papers with an eye on promotion. But I also hoped—and still hope—that they might actually influence something in the world. How hard would I work on an academic paper if I knew for sure that only a few people would ever read it? What if I knew for sure that no one would ever read my work? Would I still do it?
I truly enjoy the research I do; I think it’s fun. I’m excited to tell you, dear reader, about how I have spent the last twenty years of my life. I’m almost sure my mother will read this book, and I’m hoping that at least a few others will as well. But what if I knew for sure that no one would ever read it? That Claire Wachtel, my editor at HarperCollins, would decide to put this book in a drawer, pay me for it, and never publish it? Would I still be sitting here late at night working on this chapter? No way. Much of what I do in life, including writing my blog posts, articles, and these pages, is driven by ego motivations that link my effort to the meaning that I hope the readers of these words will find in them. Without an audience, I would have very little motivation to work as hard as I do.
BLOGGING FOR TREATS
Now think about blogging. The number of blogs out there is astounding, and it seems that almost everyone has a blog or is thinking about starting one. Why are blogs so popular? It’s not just that so many people have the desire to write; after all, people wrote long before blogs were invented. It’s also that blogs have two features that distinguish them from other forms of writing. First, they provide the hope or the illusion that someone else will read one’s writing. After all, the moment a blogger presses the “publish” button, the blog can be consumed by anybody in the world, and with so many people connected, somebody, or at least a few people, should stumble upon the blog. Indeed, the “number of views” statistic is a highly motivating feature in the blogosphere because it lets the blogger know exactly how many people have at least seen the posting. Blogs also provide readers with the ability to leave their reactions and comments—gratifying for both the blogger, who now has a verifiable audience, and the reader-cum-writer. Most blogs have very low readership—perhaps only the blogger’s mother or best friend reads them—but even writing for one person, compared to writing for nobody, seems to be enough to compel millions of people to blog.
Building Bionicles
A few weeks after my conversation with David, I met with Emir Kamenica (a professor at the University of Chicago) and Dražen Prelec (a professor at MIT) at a local coffee shop. After discussing a few different research topics, we decided to explore the effect of devaluation on motivation for work. We could have examined Large-M Meaning—that is, we could have measured the value that people who are developing a cure for cancer, helping the poor, building bridges, and otherwise saving the world every day place on their jobs. But instead, and maybe because the three of us are academics, we decided to set up experiments that would examine the effects of small-m meaning—effects that I suspect are more common in everyday life and in the workplace. We wanted to explore how small changes in the work of people like David the banker and Devra the editor affected their desire to work. And so we came up with an idea for an experiment that would test people’s reactions to small reductions in meaning for a task that did not have much meaning to start with.
ONE FALL DAY in Boston, a tall mechanical engineering student named Joe entered the student union at Harvard University. He was all ambition and acne. On a crowded bulletin board boasting flyers about upcoming concerts, lectures, political events, and roommates wanted, he caught sight of a sign reading “Get paid to build Legos!”
As an aspiring engineer, Joe had always loved building things. Drawn to anything that required assembling, Joe had naturally played with Legos throughout his childhood. When he was six years old, he had taken his father’s computer apart, and a year later, he had disassembled the living room stereo system. By the time he was fifteen, his penchant for taking objects apart and putting them back together again had cost his family a small fortune. Fortunately, he had found an outlet for his passion in college, and now he had the opportunity to build with Legos to his heart’s content—and get paid for it.
A few days later, at the agreed-upon time, Joe showed up to take part in our experiment. As luck would have it, he was assigned to the meaningful condition. Sean, the research assistant, greeted Joe as he entered the room, directed him to a chair, and explained the procedure to him. Sean showed Joe a Lego Bionicle—a small fighting robot—and then told Joe that his task would involve constructing this exact type of Bionicle, made up of forty pieces that had to be assembled in a precise way. Next, Sean told Joe the rules for payment. “The basic setup,” he said, “is that you will get paid on a diminishing scale for each Bionicle you assemble. For the first Bionicle, you will receive two dollars. After you finish the first one, I will ask you if you want to build another one, this time for eleven cents less, which is a dollar eighty-nine. If you say that you want to build another one, I will hand you the next one. The process will continue in the same way, and for each additional Bionicle you build, you will get eleven cents less, until you decide that you don’t want to build any more Bionicles. At that point, you will receive the total amount of money for all the robots you’ve created. There is no time limit, and you can build Bionicles until the benefits you get no longer outweigh the costs.”
Joe nodded, eager to get started. “And one last thing,” Sean warned. “We use the same Bionicles for all of our participants, so at some point before the next participant shows up, I will have to disassemble all the Bionicles you build and place the parts back in their boxes for the next participant. Everything clear?”
Joe quickly opened the first box of plastic parts, scanned the assembly instructions, and began building his first Bionicle. He obviously enjoyed assembling the pieces and seeing the weird robotic form take shape. Once finished, he arranged the robot in a battle position and asked for the next one. Sean reminded him how much he would make for the next Bionicle ($1.89) and handed him the next box of pieces. Once Joe started working on the next Bionicle, Sean took the construction that Joe had just finished and placed it in a box below the desk where it was destined to be disassembled for the next participant.
Like a man on a mission, Joe continued building one Bionicle after another, while Sean continued storing them in the box below the table. After he’d finished assembling ten robots, Joe announced that he’d had his fill and collected his pay of $15.05. Before Joe took off, Sean asked him to answer a few questions about how much he liked Legos in general and how much he had enjoyed the task. Joe responded that he was a Lego fan, that he had really enjoyed the task, and that he would recommend it to his friends.
The next person in line turned out to be a young man named Chad, an exuberant—or perhaps overcaffeinated—premed student. Unlike Joe, Chad was assigned to a procedure that among ourselves we fondly called the “Sisyphean” condition. This was the condition we wanted to focus on.
Sean explained the terms and conditions of the study to Chad in exactly the same way he had to Joe. Chad grabbed the box, opened it, removed the Bionicle’s assembly instruction sheet, and carefully looked it over, planning his strategy. First he separated the pieces into groups, in the order in which they would be needed. Then he began assembling the pieces, moving quickly from one to another. He went about the task cheerily, finished the first Bionicle in a few minutes, and handed it to Sean as instructed. “That’s two dollars,” Sean said. “Would you like to build another one for a dollar eighty-nine?” Chad nodded enthusiastically and started working on his second robot, using the same organized approach.
THE MYTH OF SISYPHUS
We used the term “Sisyphean” as a tribute to the mythical king Sisyphus, who was punished by the gods for his avarice and trickery. Besides murdering travelers and guests, seducing his niece, and usurping his brother’s throne, Sisyphus also tricked the gods.
Before he died, Sisyphus, knowing that he was headed to the Underworld, made his wife promise to refrain from offering the expected sacrifice following his death. Once he reached Hades, Sisyphus convinced kindhearted Persephone, the queen of the Underworld, to let him return to the upper world, so that he could ask his wife why she was neglecting her duty. Of course, Persephone had no idea that Sisyphus had intentionally asked his wife not to make the sacrifice, so she agreed, and Sisyphus escaped the Underworld, refusing to return. Eventually Sisyphus was captured and carried back, and the angry gods gave him his punishment: for the rest of eternity, he was forced to push a large rock up a steep hill, in itself a miserable task. Every time he neared the top of the hill, the rock would roll backward and he would have to start over.
Of course, our participants had done nothing deserving of punishment. We simply used the term to describe the condition that the less fortunate among them experienced.
While Chad was putting together the first pieces of his next Bionicle (pay attention, because this is where the two conditions differed), Sean slowly disassembled the first Bionicle, piece by piece, and placed the pieces back into the original box.
“Why are you taking it apart?” Chad asked, looking both puzzled and dismayed.
“This is just the procedure,” Sean explained. “We need to take this one apart in case you want to build another Bionicle.”
Chad returned his attention to the robot he was building, but his energy and excitement about building Bionicles was clearly diminished. When he finished his second construction, he paused. Should he build a third Bionicle or not? After a few seconds, he said he would build another one.
Sean handed Chad the original box (the one Chad had assembled and Sean had disassembled), and Chad got to work. This time he worked somewhat faster, but he abandoned his earlier strategy of sorting the pieces first; perhaps he felt the extra organizational step was no longer worth the effort.
Meanwhile, Sean slowly took apart the second Bionicle Chad had just finished and placed the parts back into the second box. After Chad finished the third Bionicle, he looked it over and handed it to Sean. “That makes five sixty-seven,” Sean said. “Would you like to make another?”
Chad checked his cell phone for the time and thought for a moment. “Okay,” he said, “I’ll make one more.”
Sean handed him the second Bionicle for the second time, and Chad set about rebuilding it. (All the participants in his condition built and rebuilt the same two Bionicles until they decided to call it quits.) Chad managed to build both his Bionicles twice, for a total of four, for which he was paid $7.34.
After paying Chad, Sean asked him, as he did with all participants, whether he liked Legos and had enjoyed the task.
“Well, I like playing with Legos, but I wasn’t wild about the experiment,” Chad said with a shrug. He tucked the payment into his wallet and quickly left the room.
What did the results show? Joe and the other participants in the meaningful condition built an average of 10.6 Bionicles and received an average of $14.40 for their time. Even after they reached the point where their earnings for each Bionicle were less than a dollar (half of the initial payment), 65 percent of those in the meaningful condition kept on working. In contrast, those in the Sisyphean condition stopped working much sooner. On average, that group built 7.2 Bionicles (68 percent of the number built by the participants in the meaningful condition) and earned an average of $11.52. Only 20 percent of the participants in the Sisyphean condition constructed Bionicles when the payment was less than a dollar per robot.
In addition to comparing the number of Bionicles our participants constructed in the two conditions, we wanted to see how the individuals’ liking of Legos influenced their persistence in the task. In general, you would expect that the more a participant loved playing with Legos, the more Bionicles he or she would complete. (We measured this by the size of the statistical correlation between these two numbers.) This was, indeed, the case. But it also turned out that the two conditions were very different in terms of the relationship between Legos-love and persistence in the task. In the meaningful condition the correlation was high, but it was practically zero in the Sisyphean condition.
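For readers curious about what "the size of the statistical correlation" means in practice, here is a minimal sketch that computes a Pearson correlation between Lego-love ratings and the number of robots built. The numbers are invented purely for illustration; they are not the study's data, and the 1-to-7 rating scale is my assumption.

    # Illustrative only: invented ratings and counts, not the study's data.
    import math

    def pearson(xs, ys):
        """Pearson correlation coefficient between two equal-length lists."""
        n = len(xs)
        mean_x, mean_y = sum(xs) / n, sum(ys) / n
        cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
        sd_x = math.sqrt(sum((x - mean_x) ** 2 for x in xs))
        sd_y = math.sqrt(sum((y - mean_y) ** 2 for y in ys))
        return cov / (sd_x * sd_y)

    lego_love    = [2, 4, 5, 6, 7, 3, 6]      # hypothetical 1-7 ratings
    robots_built = [4, 7, 9, 10, 12, 5, 11]   # hypothetical robot counts

    print(round(pearson(lego_love, robots_built), 2))  # close to 1: strong link

A value near 1, as in this made-up sample, is what the meaningful condition looked like; the Sisyphean condition produced a value near zero.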
What this analysis tells me is that if you take people who love something (after all, the students who took part in this experiment signed up for an experiment to build Legos) and you place them in meaningful working conditions, the joy they derive from the activity is going to be a major driver in dictating their level of effort. However, if you take the same people with the same initial passion and desire and place them in meaningless working conditions, you can very easily kill any internal joy they might derive from the activity.
IMAGINE THAT YOU are a consultant visiting two Bionicles factories. The working conditions in the first Bionicles factory are very similar to those in the Sisyphean condition (which, sadly, is not very different from the structure of many workplaces). After observing the workers’ behavior, you would most likely conclude that they don’t like Legos much (or maybe they have something specific against Bionicles). You also observe their need for financial incentives to motivate them to continue working on their unpleasant task and how quickly they stop working once the payment drops below a certain level. When you deliver your PowerPoint presentation to the company’s board, you remark that as the payment per production unit drops, the employees’ willingness to work dramatically diminishes. From this you further conclude that if the factory wants to increase productivity, wages must be increased substantially.
Next, you visit the second Bionicles factory, which is structured more like the meaningful condition. Now imagine how your conclusions about the onerous nature of the task, the joy of doing it, and the level of compensation needed to persist in it might be different.
We actually conducted a related consultant experiment by describing the two experimental conditions to our participants and asking them to estimate the difference in productivity between the two factories. They basically got it right, estimating that the total output in the meaningful condition would be higher than the output in the Sisyphean condition. But they were wrong in estimating the magnitude of the difference. They thought that those in the meaningful condition would make one or two more Bionicles, but, in fact, they made an average of 3.5 more. This result suggests that though we can recognize the effect of even small-m meaning on motivation, we dramatically underestimate its power.
In this light, let’s think about the results of the Bionicles experiment in terms of real-life labor. Joe and Chad loved playing with Legos and were paid at the same rate. Both knew that their creations were only temporary. The only difference was that Joe could maintain the illusion that his work was meaningful and so continued to enjoy building his Bionicles. Chad, on the other hand, witnessed the piece-by-piece destruction of his work, forcing him to realize that his labor was meaningless.
All the participants most likely understood that the whole exercise was silly—after all, they were just making stuff from Legos, not designing a new dam, saving lives, or developing a new medication—but for those in Chad’s condition, watching their creations being deconstructed in front of their eyes was hugely demotivating. It was enough to kill any joy they’d accrued from building the Bionicles in the first place. This conclusion seemed to tally with David’s and Devra’s stories; the translation of joy into willingness to work seems to depend to a large degree on how much meaning we can attribute to our own labor.
NOW THAT WE had ruined the childhood memories of half of our participants, it was time to try another approach to the same experiment. This time the experimental setup was based more closely on David’s experience. Once again, we set up a booth in the student center, but this time we tested three conditions and used a different task.
We created a sheet of paper with a random sequence of letters on it and asked the participants to find instances where the letter S was followed by another letter S. We told them that each sheet contained ten instances of consecutive Ss and that they would have to find all ten instances in order to complete a sheet. We also told them about the payment scheme: they would be paid $0.55 for the first completed page, $0.50 for the second, and so on (for the twelfth page and thereafter, they would receive nothing).
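As a rough sketch of the mechanics just described, the snippet below counts consecutive-S instances on a sheet and computes the declining per-sheet rate. Two details are my assumptions rather than facts from the text: overlapping pairs (as in "SSS") each count as an instance, and the rate simply bottoms out at zero from the twelfth sheet onward.

    # Sketch of the letter-pair task and its declining pay schedule.
    # Assumptions: overlapping "SS" pairs each count as an instance, and the
    # per-sheet rate is floored at zero from the twelfth sheet onward.
    def count_ss(sheet: str) -> int:
        """Count positions where an S is immediately followed by another S."""
        return sum(1 for a, b in zip(sheet, sheet[1:]) if a == "S" and b == "S")

    def sheet_rate(n: int, start: float = 0.55, step: float = 0.05) -> float:
        """Payment for the n-th completed sheet (1-indexed)."""
        return round(max(0.0, start - step * (n - 1)), 2)

    print(count_ss("KSSAQSSB"))                      # 2 instances
    print([sheet_rate(n) for n in (1, 2, 10, 12)])   # [0.55, 0.5, 0.1, 0.0]

Under this assumed schedule the tenth sheet pays 10 cents and the twelfth and later sheets pay nothing, consistent with the scheme described above.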
In the first condition (which we called acknowledged), we asked the students to write their names on each sheet prior to starting the task and then to find the ten instances of consecutive Ss. Once they finished a page, they handed it to the experimenter, who looked over the sheet from top to bottom, nodded in a positive way, and placed it upside down on top of a large pile of completed sheets. The instructions for the ignored condition were basically the same, but we didn’t ask participants to write their names at the top of the sheet. After completing the task, they handed the sheet to the experimenter, who placed it on top of a high stack of papers without even a sidelong glance. In the third, ominously named shredded condition, we did something even more extreme. Once the participant handed in their sheet, instead of adding it to a stack of papers, the experimenter immediately fed the paper into a shredder, right before the participant’s eyes, without even looking at it.
We were impressed by the difference a simple acknowledgment made. Based on the outcome of the Bionicles experiment, we expected the participants in the acknowledged condition to be the most productive. And indeed, they completed many more sheets of letters than their fellow participants in the shredded condition. When we looked at how many of the participants continued searching for letter pairs after they reached the pittance payment of 10 cents (which was also the tenth sheet), we found that about half (49 percent) of those in the acknowledged condition went on to complete ten sheets or more, whereas only 17 percent in the shredded condition completed ten sheets or more. Indeed, it appeared that finding pairs of letters can be either enjoyable and interesting (if your effort is acknowledged) or a pain (if your labor is shredded).
But what about the participants in the ignored condition? Their labor was not destroyed, but neither did they receive any form of feedback about their work. How many sheets would those individuals complete? Would their output be similar to that of the individuals in the acknowledged condition? Would they take the lack of reaction badly and produce an output similar to that of the individuals in the shredded condition? Or would the results of those in the ignored condition fall somewhere between the other two?
The results showed that participants in the acknowledged condition completed on average 9.03 sheets of letters; those in the shredded condition completed 6.34 sheets; and those in the ignored condition (drumroll, please) completed 6.77 sheets (and only 18 percent of them completed ten sheets or more). The amount of work produced in the ignored condition was much, much closer to the performance in the shredded condition than to that in the acknowledged condition.
THIS EXPERIMENT TAUGHT US that sucking the meaning out of work is surprisingly easy. If you’re a manager who really wants to demotivate your employees, destroy their work in front of their eyes. Or, if you want to be a little subtler about it, just ignore them and their efforts. On the other hand, if you want to motivate people working with you and for you, it would be useful to pay attention to them, their effort, and the fruits of their labor.
There is one more way to think about the results of the letter-pair experiment. The participants in the shredded condition quickly realized that they could cheat, because no one bothered to look at their work. In fact, had they behaved rationally, participants in the shredded condition should have cheated, persisted in the task the longest, and made the most money. The fact that the acknowledged group worked the longest and the shredded group worked the least further suggests that when it comes to labor, human motivation is complex. It can’t be reduced to a simple “work for money” trade-off. Instead, we should recognize that the effects of meaning on labor, and of eliminating meaning from labor, are more powerful than we usually expect.
The Division and Meaning of Labor
I found the consistency between the results of the two experiments, and the substantial impact of such small differences in meaning, rather startling. I was also taken aback by the almost complete lack of enjoyment that the participants in the Sisyphean condition derived from building Legos. As I reflected on the situations facing David, Devra, and others, my thoughts eventually lighted on my administrative assistant.
On paper, Jay had a simple enough job description: he managed my research accounts, paid participants, ordered research supplies, and arranged my travel schedule. But the information technology Jay had to use turned his job into a sort of Sisyphean task. The SAP accounting software he used daily required him to fill in numerous fields on the appropriate electronic forms and send these e-forms to other people, who filled in a few more fields and sent the forms on to someone else, who approved the expenses and passed them to yet another person, who actually settled the accounts. Not only was poor Jay doing only a small part of a relatively meaningless task, but he never had the satisfaction of seeing the work completed.
Why did the nice people at MIT and SAP design the system this way? Why did they break tasks into so many components, put each person in charge of only small parts, and never show them the overall progress or completion of their tasks? I suspect it all has to do with the ideas of efficiency brought to us by Adam Smith. As Smith argued in 1776 in The Wealth of Nations, division of labor is an incredibly effective way to achieve higher efficiency in the production process. Consider, for example, his observations of a pin factory:
…the division of labour has been very often taken notice of, the trade of the pin-maker; a workman not educated to this business (which the division of labour has rendered a distinct trade), nor acquainted with the use of the machinery employed in it (to the invention of which the same division of labour has probably given occasion), could scarce, perhaps, with his utmost industry, make one pin in a day, and certainly could not make twenty. But in the way in which this business is now carried on, not only the whole work is a peculiar trade, but it is divided into a number of branches, of which the greater part are likewise peculiar trades. One man draws out the wire, another straights it, a third cuts it, a fourth points it, a fifth grinds it at the top for receiving the head; to make the head requires two or three distinct operations; to put it on, is a peculiar business, to whiten the pins is another; it is even a trade by itself to put them into the paper; and the important business of making a pin is, in this manner, divided into about eighteen distinct operations, which, in some manufactories, are all performed by distinct hands, though in others the same man will sometimes perform two or three of them. I have seen a small manufactory of this kind where ten men only were employed, and where some of them consequently performed two or three distinct operations. But though they were very poor, and therefore but indifferently accommodated with the necessary machinery, they could, when they exerted themselves, make among them about twelve pounds of pins in a day. There are in a pound upwards of four thousand pins of a middling size. Those ten persons, therefore, could make among them upwards of forty-eight thousand pins in a day.
When we take tasks and break them down into smaller parts, we create local efficiencies; each person can become better and better at the small thing he does. (Henry Ford and Frederick Winslow Taylor extended the division-of-labor concept to the assembly line, finding that this approach reduced errors, increased productivity, and made it possible to produce cars and other goods en masse.) But we often don’t realize that the division of labor can also exact a human cost. As early as 1844, Karl Marx—the German philosopher, political economist, sociologist, revolutionary, and father of communism—pointed to the importance of what he called “the alienation of labor.” For Marx, an alienated laborer is separated from his own activities, from the goals of his labor, and from the process of production. This makes work an external activity that does not allow the laborer to find identity or meaning in his work.
I am far from being a Marxist (despite the fact that many people think that all academics are), but I don’t think we should wholly discount Marx’s idea of alienation in terms of its role in the workplace. In fact, I suspect that the idea of alienation was less relevant in Marx’s time, when, even if employees tried hard, it was difficult to find meaning at work. In today’s economy, as we move to jobs that require imagination, creativity, thinking, and round-the-clock engagement, Marx’s emphasis on alienation adds an important ingredient to the labor mix. I also suspect that Adam Smith’s emphasis on the efficiency in the division of labor was more relevant during his time, when the labor in question was based mostly on simple production, and is less relevant in today’s knowledge economy.