Zucked: How Users Got Used and What We Can Do About It
Roger McNamee
This is the dramatic story of how a noted tech venture capitalist, an early mentor to Mark Zuckerberg and investor in his company, woke up to the serious damage Facebook was doing to our society and set out to try to stop it.
If you had told Roger McNamee three years ago that he would soon be devoting himself to stopping Facebook from destroying democracy, he would have howled with laughter. He had mentored many tech leaders in his illustrious career as an investor, but few things had made him prouder, or been better for his fund's bottom line, than his early service to Mark Zuckerberg. Still a large shareholder in Facebook, he had every good reason to stay on the bright side. Until he simply couldn't.
Zucked is McNamee's intimate reckoning with the catastrophic failure of the head of one of the world's most powerful companies to face up to the damage he is doing. It's a story that begins with a series of rude awakenings. First there is the author's dawning realization that the platform is being manipulated by some very bad actors. Then there is the even more unsettling realization that Zuckerberg and Sheryl Sandberg are unable or unwilling to share his concerns, polite as they may be to his face. And then comes Brexit and the election of Donald Trump, and the emergence of one horrific piece of news after another about the malign ends to which the Facebook platform has been put. To McNamee's shock, Facebook's leaders still duck and dissemble, viewing the matter as a public relations problem. Now thoroughly alienated, McNamee digs into the issue, and fortuitously meets up with some fellow travellers who share his concerns and help him sharpen his focus. Soon he and a dream team of Silicon Valley technologists are charging into the fray, to raise consciousness about the existential threat of Facebook, and the persuasion architecture of the attention economy more broadly – to our public health and to our political order. 
Zucked is both an enthralling personal narrative and a masterful explication of the forces that have conspired to place us all on the horns of this dilemma. This is the story of a company and its leadership, but it's also a larger tale of a business sector unmoored from normal constraints, at a moment of political and cultural crisis, the worst possible time to be given new tools for summoning the darker angels of our nature and whipping them into a frenzy. This is a wise, hard-hitting, and urgently necessary account that crystallizes the issue definitively for the rest of us.
COPYRIGHT
HarperCollinsPublishers
1 London Bridge Street
London SE1 9GF
www.harpercollins.co.uk
First published in the US by Penguin Press, an imprint of Penguin Random House LLC 2019
This UK edition published by HarperCollinsPublishers 2019
FIRST EDITION
© Roger McNamee 2019
Cover layout design © HarperCollinsPublishers 2019
Cover photograph © Mack15/Getty Images (thumb icon), Shutterstock.com (globe)
A catalogue record for this book is available from the British Library
Roger McNamee asserts the moral right to be identified as the author of this work
“The Current Moment in History,” remarks by George Soros delivered at the World Economic Forum meeting, Davos, Switzerland, January 25, 2018. Reprinted by permission of George Soros.
While the author has made every effort to provide accurate telephone numbers, internet addresses, and other contact information at the time of publication, neither the publisher nor the author assumes any responsibility for errors or for changes that occur after publication. Further, the publisher does not have any control over and does not assume any responsibility for author or third-party websites or their content.
All rights reserved under International and Pan-American Copyright Conventions. By payment of the required fees, you have been granted the nonexclusive, non-transferable right to access and read the text of this e-book on screen. No part of this text may be reproduced, transmitted, downloaded, decompiled, reverse engineered, or stored in or introduced into any information storage retrieval system, in any form or by any means, whether electronic or mechanical, now known or hereinafter invented, without the express written permission of HarperCollins e-books.
Find out about HarperCollins and the environment at www.harpercollins.co.uk/green
Source ISBN: 9780008318994
Ebook Edition © February 2019 ISBN: 9780008319021
Version 2019-01-17
ADVANCE PRAISE FOR ZUCKED
“Roger McNamee’s Zucked fully captures the disastrous consequences that occur when people running companies wielding enormous power don’t listen deeply to their stakeholders, fail to exercise their ethical responsibilities, and don’t make trust their number one value.”
—Marc Benioff, chairman and co-CEO of Salesforce
“McNamee puts his finger on serious problems in online environments, especially social networking platforms. I consider this book to be a must-read for anyone wanting to understand the societal impact of cyberspace.”
—Vint Cerf, internet pioneer
“Roger McNamee is an investor with the nose of an investigator. This unafraid and unapologetic critique is enhanced by McNamee’s personal association with Facebook’s leaders and his long career in the industry. Whether you believe technology is the problem or the solution, one has no choice but to listen. It’s only democracy at stake.”
—Emily Chang, author of Brotopia
“Roger McNamee is truly the most interesting man in the world—legendary investor, virtuoso guitarist, and damn lucid writer. He’s written a terrific book that is both soulful memoir and muckraking exposé of social media. Everyone who spends their day staring into screens needs to read his impassioned tale.”
—Franklin Foer, author of World Without Mind
“A frightening view behind the scenes of how absolute power and panoptic technologies can corrupt our politics and civic commons in this age of increasing-returns monopolies. Complementing Jaron Lanier’s recent warnings with a clear-eyed view of politics, antitrust, and the law, this is essential reading for activists and policymakers as we work to preserve privacy and decency and a civil society in the internet age.”
—Bill Joy, cofounder of Sun Microsystems, creator of the Berkeley Unix operating system
“Zucked is the mesmerizing and often hilarious story of how Facebook went from young darling to adolescent menace, not to mention a serious danger to democracy. With revelations on every page, you won’t know whether to laugh or weep.”
—Tim Wu, author of The Attention Merchants and The Curse of Bigness
DEDICATION
To Ann, who inspires me every day
EPIGRAPH
Technology is neither good nor bad; nor is it neutral.
—Melvin Kranzberg’s First Law of Technology
We cannot solve our problems with the same thinking we used when we created them.
—Albert Einstein
Ultimately, what the tech industry really cares about is ushering in the future, but it conflates technological progress with societal progress.
—Jenna Wortham
CONTENTS
Cover
Title Page
Copyright
Praise
Dedication
Epigraph
Prologue
1 The Strangest Meeting Ever
2 Silicon Valley Before Facebook
3 Move Fast and Break Things
4 The Children of Fogg
5 Mr. Harris and Mr. McNamee Go to Washington
6 Congress Gets Serious
7 The Facebook Way
8 Facebook Digs in Its Heels
9 The Pollster
10 Cambridge Analytica Changes Everything
11 Days of Reckoning
12 Success?
13 The Future of Society
14 The Future of You
Epilogue
Acknowledgments
Appendix 1: Memo to Zuck and Sheryl: Draft Op-Ed for Recode
Appendix 2: George Soros’s Davos Remarks: “The Current Moment in History”
Bibliographic Essay
List of Searchable Terms
Other Books By
About the Publisher
Prologue
Technology is a useful servant but a dangerous master. —CHRISTIAN LOUS LANGE
November 9, 2016
“The Russians used Facebook to tip the election!”
So began my side of a conversation the day after the presidential election. I was speaking with Dan Rose, the head of media partnerships at Facebook. If Rose was taken aback by how furious I was, he hid it well.
Let me back up. I am a longtime tech investor and evangelist. Tech had been my career and my passion, but by 2016, I was backing away from full-time professional investing and contemplating retirement. I had been an early advisor to Facebook founder Mark Zuckerberg—Zuck, to many colleagues and friends—and an early investor in Facebook. I had been a true believer for a decade. Even at this writing, I still own shares in Facebook. In terms of my own narrow self-interest, I had no reason to bite Facebook’s hand. It would never have occurred to me to be an anti-Facebook activist. I was more like Jimmy Stewart in Hitchcock’s Rear Window. He is minding his own business, checking out the view from his living room, when he sees what looks like a crime in progress, and then he has to ask himself what he should do. In my case, I had spent a career trying to draw smart conclusions from incomplete information, and one day early in 2016 I started to see things happening on Facebook that did not look right. I started pulling on that thread and uncovered a catastrophe. In the beginning, I assumed that Facebook was a victim and I just wanted to warn my friends. What I learned in the months that followed shocked and disappointed me. I learned that my trust in Facebook had been misplaced.
This book is the story of why I became convinced, in spite of myself, that even though Facebook provided a compelling experience for most of its users, it was terrible for America and needed to change or be changed, and what I have tried to do about it. My hope is that the narrative of my own conversion experience will help others understand the threat. Along the way, I will share what I know about the technology that enables internet platforms like Facebook to manipulate attention. I will explain how bad actors exploit the design of Facebook and other platforms to harm and even kill innocent people. How democracy has been undermined because of design choices and business decisions by internet platforms that deny responsibility for the consequences of their actions. How the culture of these companies causes employees to be indifferent to the negative side effects of their success. At this writing, there is nothing to prevent more of the same.
This is a story about trust. Technology platforms, including Facebook and Google, are the beneficiaries of trust and goodwill accumulated over fifty years by earlier generations of technology companies. They have taken advantage of our trust, using sophisticated techniques to prey on the weakest aspects of human psychology, to gather and exploit private data, and to craft business models that do not protect users from harm. Users must now learn to be skeptical about products they love, to change their online behavior, insist that platforms accept responsibility for the impact of their choices, and push policy makers to regulate the platforms to protect the public interest.
This is a story about privilege. It reveals how hypersuccessful people can be so focused on their own goals that they forget that others also have rights and privileges. How it is possible for otherwise brilliant people to lose sight of the fact that their users are entitled to self-determination. How success can breed overconfidence to the point of resistance to constructive feedback from friends, much less criticism. How some of the hardest working, most productive people on earth can be so blind to the consequences of their actions that they are willing to put democracy at risk to protect their privilege.
This is also a story about power. It describes how even the best of ideas, in the hands of people with good intentions, can still go terribly wrong. Imagine a stew of unregulated capitalism, addictive technology, and authoritarian values, combined with Silicon Valley’s relentlessness and hubris, unleashed on billions of unsuspecting users. I think the day will come, sooner than I could have imagined just two years ago, when the world will recognize that the value users receive from the Facebook-dominated social media/attention economy revolution masked an unmitigated disaster for our democracy, for public health, for personal privacy, and for the economy. It did not have to be that way. It will take a concerted effort to fix it.
When historians finish with this corner of history, I suspect that they will cut Facebook some slack about the poor choices that Zuck, Sheryl Sandberg, and their team made as the company grew. I do. Making mistakes is part of life, and growing a startup to global scale is immensely challenging. Where I fault Facebook—and where I believe history will, as well—is for the company’s response to criticism and evidence. They had an opportunity to be the hero in their own story by taking responsibility for their choices and the catastrophic outcomes those choices produced. Instead, Zuck and Sheryl chose another path.
This story is still unfolding. I have written this book now to serve as a warning. My goals are to make readers aware of a crisis, help them understand how and why it happened, and suggest a path forward. If I achieve only one thing, I hope it will be to make the reader appreciate that he or she has a role to play in the solution. I hope every reader will embrace the opportunity.
It is possible that the worst damage from Facebook and the other internet platforms is behind us, but that is not where the smart money will place its bet. The most likely case is that the technology and business model of Facebook and others will continue to undermine democracy, public health, privacy, and innovation until a countervailing power, in the form of government intervention or user protest, forces change.
Ten days before the November 2016 election, I had reached out formally to Mark Zuckerberg and Facebook chief operating officer Sheryl Sandberg, two people I considered friends, to share my fear that bad actors were exploiting Facebook’s architecture and business model to inflict harm on innocent people, and that the company was not living up to its potential as a force for good in society. In a two-page memo, I had cited a number of instances of harm, none actually committed by Facebook employees but all enabled by the company’s algorithms, advertising model, automation, culture, and value system. I also cited examples of harm to employees and users that resulted from the company’s culture and priorities. I have included the memo in the appendix.
Zuck created Facebook to bring the world together. What I did not know when I met him but would eventually discover was that his idealism was unbuffered by realism or empathy. He seems to have assumed that everyone would view and use Facebook the way he did, not imagining how easily the platform could be exploited to cause harm. He did not believe in data privacy and did everything he could to maximize disclosure and sharing. He operated the company as if every problem could be solved with more or better code. He embraced invasive surveillance, careless sharing of private data, and behavior modification in pursuit of unprecedented scale and influence. Surveillance, the sharing of user data, and behavioral modification are the foundation of Facebook’s success. Users are fuel for Facebook’s growth and, in some cases, the victims of it.
When I reached out to Zuck and Sheryl, all I had was a hypothesis that bad actors were using Facebook to cause harm. I suspected that the examples I saw reflected systemic flaws in the platform’s design and the company’s culture. I did not emphasize the threat to the presidential election, because at that time I could not imagine that the exploitation of Facebook would affect the outcome, and I did not want the company to dismiss my concerns if Hillary Clinton won, as was widely anticipated. I warned that Facebook needed to fix the flaws or risk its brand and the trust of users. While it had not inflicted harm directly, Facebook was being used as a weapon, and users had a right to expect the company to protect them.
The memo was a draft of an op-ed that I had written at the invitation of the technology blog Recode. My concerns had been building throughout 2016 and reached a peak with the news that the Russians were attempting to interfere in the presidential election. I was increasingly freaked out by what I had seen, and the tone of the op-ed reflected that. My wife, Ann, wisely encouraged me to send the op-ed to Zuck and Sheryl first, before publication. I had been one of Zuck’s many advisors in Facebook’s early days, and I played a role in Sheryl’s joining the company as chief operating officer. I had not been involved with the company since 2009, but I remained a huge fan. My small contribution to the success of one of the greatest companies ever to come out of Silicon Valley was one of the true highlights of my thirty-four-year career. Ann pointed out that communicating through an op-ed might cause the wrong kind of press reaction, making it harder for Facebook to accept my concerns. My goal was to fix the problems at Facebook, not embarrass anyone. I did not imagine that Zuck and Sheryl had done anything wrong intentionally. It seemed more like a case of unintended consequences of well-intended strategies. Other than a handful of email exchanges, I had not spoken to Zuck in seven years, but I had interacted with Sheryl from time to time. At one point, I had provided them with significant value, so it was not crazy to imagine that they would take my concerns seriously. My goal was to persuade Zuck and Sheryl to investigate and take appropriate action. The publication of the op-ed could wait a few days.
Zuck and Sheryl each responded to my email within a matter of hours. Their replies were polite but not encouraging. They suggested that the problems I cited were anomalies that the company had already addressed, but they offered to connect me with a senior executive to hear me out. The man they chose was Dan Rose, a member of their inner circle with whom I was friendly. I spoke with Dan at least twice before the election. Each time, he listened patiently and repeated what Zuck and Sheryl had said, with one important addition: he asserted that Facebook was technically a platform, not a media company, which meant it was not responsible for the actions of third parties. He said it like that should have been enough to settle the matter.
Dan Rose is a very smart man, but he does not make policy at Facebook. That is Zuck’s role. Dan’s role is to carry out Zuck’s orders. It would have been better to speak with Zuck, but that was not an option, so I took what I could get. Quite understandably, Facebook did not want me to go public with my concerns, and I thought that by keeping the conversation private, I was far more likely to persuade them to investigate the issues that concerned me. When I spoke to Dan the day after the election, it was obvious to me that he was not truly open to my perspective; he seemed to be treating the issue as a public relations problem. His job was to calm me down and make my concerns go away. He did not succeed at that, but he could claim one victory: I never published the op-ed. Ever the optimist, I hoped that if I persisted with private conversations, Facebook would eventually take the issue seriously.
I continued to call and email Dan, hoping to persuade Facebook to launch an internal investigation. At the time, Facebook had 1.7 billion active users. Facebook’s success depended on user trust. If users decided that the company was responsible for the damage caused by third parties, no legal safe harbor would protect it from brand damage. The company was risking everything. I suggested that Facebook had a window of opportunity. It could follow the example of Johnson & Johnson when someone put poison in a few bottles of Tylenol on retail shelves in Chicago in 1982. J&J immediately withdrew every bottle of Tylenol from every retail location and did not reintroduce the product until it had perfected tamperproof packaging. The company absorbed a short-term hit to earnings but was rewarded with a huge increase in consumer trust. J&J had not put the poison in those bottles. It might have chosen to dismiss the problem as the work of a madman. Instead, it accepted responsibility for protecting its customers and took the safest possible course of action. I thought Facebook could convert a potential disaster into a victory by doing the same thing.
One problem I faced was that at this point I did not have data for making my case. What I had was a spidey sense, honed during a long career as a professional investor in technology.
I had first become seriously concerned about Facebook in February 2016, in the run-up to the first US presidential primary. As a political junkie, I was spending a few hours a day reading the news and also spending a fair amount of time on Facebook. I noticed a surge on Facebook of disturbing images, shared by friends, that originated on Facebook Groups ostensibly associated with the Bernie Sanders campaign. The images were deeply misogynistic depictions of Hillary Clinton. It was impossible for me to imagine that Bernie’s campaign would allow them. More disturbing, the images were spreading virally. Lots of my friends were sharing them. And there were new images every day.
I knew a great deal about how messages spread on Facebook. For one thing, I have a second career as a musician in a band called Moonalice, and I had long been managing the band’s Facebook page, which enjoyed high engagement with fans. The rapid spread of images from these Sanders-associated pages did not appear to be organic. How did the pages find my friends? How did my friends find the pages? Groups on Facebook do not emerge full grown overnight. I hypothesized that somebody had to be spending money on advertising to get the people I knew to join the Facebook Groups that were spreading the images. Who would do that? I had no answer. The flood of inappropriate images continued, and it gnawed at me.
More troubling phenomena caught my attention. In March 2016, for example, I saw a news report about a group that exploited a programming tool on Facebook to gather data on users expressing an interest in Black Lives Matter, data that they then sold to police departments, which struck me as evil. Facebook banned the group, but not until after irreparable harm had been done. Here again, a bad actor had used Facebook tools to harm innocent victims.
In June 2016, the United Kingdom voted to exit the European Union. The outcome of the Brexit vote came as a total shock. Polling had suggested that “Remain” would triumph over “Leave” by about four points, but precisely the opposite happened. No one could explain the huge swing. A possible explanation occurred to me. What if Leave had benefited from Facebook’s architecture? The Remain campaign was expected to win because the UK had a sweet deal with the European Union: it enjoyed all the benefits of membership, while retaining its own currency. London was Europe’s undisputed financial hub, and UK citizens could trade and travel freely across the open borders of the continent. Remain’s “stay the course” message was based on smart economics but lacked emotion. Leave based its campaign on two intensely emotional appeals. It appealed to ethnic nationalism by blaming immigrants for the country’s problems, both real and imaginary. It also promised that Brexit would generate huge savings that would be used to improve the National Health Service, an idea that allowed voters to put an altruistic shine on an otherwise xenophobic proposal.
The stunning outcome of Brexit triggered a hypothesis: in an election context, Facebook may confer advantages to campaign messages based on fear or anger over those based on neutral or positive emotions. It does this because Facebook’s advertising business model depends on engagement, which can best be triggered through appeals to our most basic emotions. What I did not know at the time is that while joy also works, which is why puppy and cat videos and photos of babies are so popular, not everyone reacts the same way to happy content. Some people get jealous, for example. “Lizard brain” emotions such as fear and anger produce a more uniform reaction and are more viral in a mass audience. When users are riled up, they consume and share more content. Dispassionate users have relatively little value to Facebook, which does everything in its power to activate the lizard brain. Facebook has used surveillance to build giant profiles on every user and provides each user with a customized Truman Show, similar to the Jim Carrey film about a person who lives his entire life as the star of his own television show. It starts out giving users “what they want,” but the algorithms are trained to nudge user attention in directions that Facebook wants. The algorithms choose posts calculated to press emotional buttons because scaring users or pissing them off increases time on site. When users pay attention, Facebook calls it engagement, but the goal is behavior modification that makes advertising more valuable. I wish I had understood this in 2016. At this writing, Facebook is the fourth most valuable company in America, despite being only fifteen years old, and its value stems from its mastery of surveillance and behavioral modification.
When new technology first comes into our lives, it surprises and astonishes us, like a magic trick. We give it a special place, treating it like the product equivalent of a new baby. The most successful tech products gradually integrate themselves into our lives. Before long, we forget what life was like before them. Most of us have that relationship today with smartphones and internet platforms like Facebook and Google. Their benefits are so obvious we can’t imagine forgoing them. Not so obvious are the ways that technology products change us. The process has repeated itself in every generation since the telephone, including radio, television, and personal computers. On the plus side, technology has opened up the world, providing access to knowledge that was inaccessible in prior generations. It has enabled us to create and do remarkable things. But all that value has a cost. Beginning with television, technology has changed the way we engage with society, substituting passive consumption of content and ideas for civic engagement, digital communication for conversation. Subtly and persistently, it has contributed to our conversion from citizens to consumers. Being a citizen is an active state; being a consumer is passive. A transformation that crept along for fifty years accelerated dramatically with the introduction of internet platforms. We were prepared to enjoy the benefits but unprepared for the dark side. Unfortunately, the same can be said for the Silicon Valley leaders whose innovations made the transformation possible.
If you are a fan of democracy, as I am, this should scare you. Facebook has become a powerful source of news in most democratic countries. To a remarkable degree it has made itself the public square in which countries share ideas, form opinions, and debate issues outside the voting booth. But Facebook is more than just a forum. It is a profit-maximizing business controlled by one person. It is a massive artificial intelligence that influences every aspect of user activity, whether political or otherwise. Even the smallest decisions at Facebook reverberate through the public square the company has created with implications for every person it touches. The fact that users are not conscious of Facebook’s influence magnifies the effect. If Facebook favors inflammatory campaigns, democracy suffers.
August 2016 brought a new wave of stunning revelations. Press reports confirmed that Russians had been behind the hacks of servers at the Democratic National Committee (DNC) and Democratic Congressional Campaign Committee (DCCC). Emails stolen in the DNC hack were distributed by WikiLeaks, causing significant damage to the Clinton campaign. The chairman of the DCCC pleaded with Republicans not to use the stolen data in congressional campaigns. I wondered if it were possible that Russians had played a role in the Facebook issues that had been troubling me earlier.
Just before I wrote the op-ed, ProPublica revealed that Facebook’s advertising tools enabled property owners to discriminate based on race, in violation of the Fair Housing Act. The Department of Housing and Urban Development opened an investigation that was later closed, but reopened in April 2018. Here again, Facebook’s architecture and business model enabled bad actors to harm innocent people.
Like Jimmy Stewart in the movie, I did not have enough data or insight to understand everything I had seen, so I sought to learn more. As I did so, in the days and weeks after the election, Dan Rose exhibited incredible patience with me. He encouraged me to send more examples of harm, which I did. Nothing changed. Dan never budged. In February 2017, more than three months after the election, I finally concluded that I would not succeed in convincing Dan and his colleagues; I needed a different strategy. Facebook remained a clear and present danger to democracy. The very same tools that made Facebook a compelling platform for advertisers could also be exploited to inflict harm. Facebook was getting more powerful by the day. Its artificial intelligence engine learned more about every user. Its algorithms got better at pressing users’ emotional buttons. Its tools for advertisers improved constantly. In the wrong hands, Facebook was an ever-more-powerful weapon. And the next US election—the 2018 midterms—was fast approaching.
Yet no one in power seemed to recognize the threat. The early months of 2017 revealed extensive relationships between officials of the Trump campaign and people associated with the Russian government. Details emerged about a June 2016 meeting in Trump Tower between inner-circle members of the campaign and Russians suspected of intelligence affiliations. Congress spun up Intelligence Committee investigations that focused on that meeting.
But still there was no official concern about the role that social media platforms, especially Facebook, had played in the 2016 election. Every day that passed without an investigation increased the likelihood that the interference would continue. If someone did not act quickly, our democratic processes could be overwhelmed by outside forces; the 2018 midterm election would likely be subject to interference, possibly greater than we had seen in 2016. Our Constitution anticipated many problems, but not the possibility that a foreign country could interfere in our elections without consequences. I could not sit back and watch. I needed some help, and I needed a plan, not necessarily in that order.
1
The Strangest Meeting Ever
New technology is not good or evil in and of itself. It’s all about how people choose to use it. —DAVID WONG
I should probably tell the story of how I intersected with Facebook in the first place. In the middle of 2006, Facebook’s chief privacy officer, Chris Kelly, sent me an email stating that his boss was facing an existential crisis and required advice from an unbiased person. Would I be willing to meet with Mark Zuckerberg?
Facebook was two years old, Zuck was twenty-two, and I was fifty. The platform was limited to college students, graduates with an alumni email address, and high school students. News Feed, the heart of Facebook’s user experience, was not yet available. The company had only nine million dollars in revenue in the prior year. But Facebook had huge potential—that was already obvious—and I leapt at the opportunity to meet its founder.
Zuck showed up at my Elevation Partners office on Sand Hill Road in Menlo Park, California, dressed casually, with a messenger bag over his shoulder. U2 singer Bono and I had formed Elevation in 2004, along with former Apple CFO Fred Anderson, former Electronic Arts president John Riccitiello, and two career investors, Bret Pearlman and Marc Bodnick. We had configured one of our conference rooms as a living room, complete with a large arcade video game system, and that is where Zuck and I met. We closed the door and sat down on comfy chairs about three feet apart. No one else was in the room.
Since this was our first meeting, I wanted to say something before Zuck told me about the existential crisis.
“If it has not already happened, Mark, either Microsoft or Yahoo is going to offer one billion dollars for Facebook. Your parents, your board of directors, your management team, and your employees are going to tell you to take the offer. They will tell you that with your share of the proceeds—six hundred and fifty million dollars—you will be able to change the world. Your lead venture investor will promise to back your next company so that you can do it again.
“It’s your company, but I don’t think you should sell. A big company will screw up Facebook. I believe you are building the most important company since Google and that before long you will be bigger than Google is today. You have two huge advantages over previous social media platforms: you insist on real identity and give consumers control over their privacy settings.
“In the long run, I believe Facebook will be far more valuable to parents and grandparents than to college students and recent grads. People who don’t have much time will love Facebook, especially when families have the opportunity to share photos of kids and grandkids.
“Your board of directors, management team, and employees signed up for your vision. If you still believe in your vision, you need to keep Facebook independent. Everyone will eventually be glad you did.”
This little speech took about two minutes to deliver. What followed was the longest silence I have ever endured in a one-on-one meeting. It probably lasted four or five minutes, but it seemed like forever. Zuck was lost in thought, pantomiming a range of Thinker poses. I have never seen anything like it before or since. It was painful. I felt my fingers involuntarily digging into the upholstered arms of my chair, knuckles white, tension rising to a boiling point. At the three-minute mark, I was ready to scream. Zuck paid me no mind. I imagined thought bubbles over his head, with reams of text rolling past. How long would he go on like this? He was obviously trying to decide if he could trust me. How long would it take? How long could I sit there?
Eventually, Zuck relaxed and looked at me. He said, “You won’t believe this.”
I replied, “Try me.”
“One of the two companies you mentioned wants to buy Facebook for one billion dollars. Pretty much everyone has reacted the way you predicted. They think I should take the deal. How did you know?”
“I didn’t know. But after twenty-four years, I know how Silicon Valley works. I know your lead venture investor. I know Yahoo and Microsoft. This is how things go around here.”
I continued, “Do you want to sell the company?”
He replied, “I don’t want to disappoint everyone.”
“I understand, but that is not the issue. Everyone signed up to follow your vision for Facebook. If you believe in your vision, you need to keep Facebook independent. Yahoo and Microsoft will wreck it. They won’t mean to, but that is what will happen. What do you want to do?”
“I want to stay independent.”
I asked Zuck to explain Facebook’s shareholder voting rules. It turned out he had a “golden vote,” which meant that the company would always do whatever he decided. It took only a couple of minutes to figure that out. The entire meeting took no more than half an hour.
Zuck left my office and soon thereafter told Yahoo that Facebook was not for sale. There would be other offers for Facebook, including a second offer from Yahoo, and he would turn them down, too.
So began a mentorship that lasted three years. In a success story with at least a thousand fathers, I played a tiny role, but I contributed on two occasions that mattered to Facebook’s early success: the Yahoo deal and the hiring of Sheryl. Zuck had other mentors, but he called on me when he thought I could help, which happened often enough that for a few years I was a regular visitor to Facebook’s headquarters. Ours was a purely business relationship. Zuck was so amazingly talented at such a young age, and he leveraged me effectively. It began when Facebook was a little startup with big dreams and boundless energy. Zuck had an idealistic vision of connecting people and bringing them together. The vision inspired me, but the magic was Zuck himself. Obviously brilliant, Zuck possessed a range of characteristics that distinguished him from the typical Silicon Valley entrepreneur: a desire to learn, a willingness to listen, and, above all, a quiet confidence. Many tech founders swagger through life, but the best ones—including the founders of Google and Amazon—are reserved, thoughtful, serious. To me, Facebook seemed like the Next Big Thing that would make the world better through technology. I could see a clear path to one hundred million users, which would have been a giant success. It never occurred to me that success would lead to anything but happiness.
The only skin in the game for me at that time was emotional. I had been a Silicon Valley insider for more than twenty years. My fingerprints were on dozens of great companies, and I hoped that one day Facebook would be another. For me, it was a no-brainer. I did not realize then that the technology of Silicon Valley had evolved into uncharted territory, that I should no longer take for granted that it would always make the world a better place. I am pretty certain that Zuck was in the same boat; I had no doubt then of Zuck’s idealism.
Silicon Valley had had its share of bad people, but the limits of the technology itself had generally prevented widespread damage. Facebook came along at a time when it was possible for the first time to create tech businesses so pervasive that no country would be immune to their influence. No one I knew ever considered that success could have a downside. From its earliest days, Facebook was a company of people with good intentions. In the years I knew them best, the Facebook team focused on attracting the largest possible audience, not on monetization. Persuasive technology and manipulation never came up. It was all babies and puppies and sharing with friends.
I am not certain when Facebook first applied persuasive technology to its design, but I can imagine that the decision was not controversial. Advertisers and media companies had been using similar techniques for decades. Despite complaints about television from educators and psychologists, few people objected strenuously to the persuasive techniques employed by networks and advertisers. Policy makers and the public viewed them as legitimate business tools. On PCs, those tools were no more harmful than on television. Then came smartphones, which changed everything. User count and usage exploded, as did the impact of persuasive technologies, enabling widespread addiction. That is when Facebook ran afoul of the law of unintended consequences. Zuck and his team did not anticipate that the design choices that made Facebook so compelling for users would also enable a wide range of undesirable behaviors. When those behaviors became obvious after the 2016 presidential election, Facebook first denied their existence, then denied responsibility for them. Perhaps it was a reflexive corporate reaction. In any case, Zuck, Sheryl, the team at Facebook, and the board of directors missed an opportunity to rebuild trust with users and policy makers. Those of us who had advised Zuck and profited from Facebook’s success also bear some responsibility for what later transpired. We suffered from a failure of imagination. The notion that massive success by a tech startup could undermine society and democracy did not occur to me or, so far as I know, to anyone in our community. Now the whole world is paying for it.
In the second year of our relationship, Zuck gave Elevation an opportunity to invest. I pitched the idea to my partners, emphasizing my hope that Facebook would become a company in Google’s class. The challenge was that Zuck’s offer would have us invest in Facebook indirectly, through a complicated, virtual security. Three of our partners were uncomfortable with the structure of the investment for Elevation, but they encouraged the rest of us to make personal investments. So Bono, Marc Bodnick, and I invested. Two years later, an opportunity arose for Elevation to buy stock in Facebook, and my partners jumped on it.
When Chris Kelly contacted me, he knew me only by reputation. I had been investing in technology since the summer of 1982. Let me share a little bit of my own history for context, to explain where my mind was when I first entered Zuck’s orbit.
I grew up in Albany, New York, the second youngest in a large and loving family. My parents had six children of their own and adopted three of my first cousins after their parents had a health crisis. One of my sisters died suddenly at two and a half while I was in the womb, an event that had a profound impact on my mother. At age two, I developed a very serious digestive disorder, and doctors told my parents I could not eat grains of any kind. I eventually grew out of it, but until I was ten, I could not eat a cookie, cake, or piece of bread without a terrible reaction. It required self-discipline, which turned out to be great preparation for the life I chose.
My parents were very active in politics and civil rights. The people they taught me to look up to were Franklin Roosevelt and Jackie Robinson. They put me to work on my first political campaign at age four, handing out leaflets for JFK. My father was the president of the Urban League in our home town, which was a big deal in the mid-sixties, when President Johnson pushed the Civil Rights Act and Voting Rights Act through Congress. My mother took me to a civil rights meeting around the time I turned nine so that I could meet my hero, Jackie Robinson.
The year that I turned ten, my parents sent me to summer camp. During the final week, I had a terrible fall during a scavenger hunt. The camp people put me in the infirmary, but I was unable to keep down any food or water for three days, after which I had a raging fever. They took me to a nearby community hospital, where a former military surgeon performed an emergency operation that saved my life. My intestine had been totally blocked by a blood clot. It took six months to recover, costing me half of fourth grade. This turned out to have a profound impact on me. Surviving a near-death experience gave me courage. The recovery reinforced my ability to be happy outside the mainstream. Both characteristics proved valuable in the investment business.
My father worked incredibly hard to support our large family, and he did it well. We lived an upper-middle-class life, but my parents had to watch every penny. My older siblings went off to college when I was in elementary school, so finances were tight some of those years. Being the second youngest in a huge family, I was most comfortable observing the big kids. Health issues reinforced my quiet, observant nature. My mother used me as her personal Find My iPhone whenever she mislaid her glasses, keys, or anything. For some reason, I always knew where everything was.
I was not an ambitious child. Team sports did not play much of a role in my life. It was the sixties, so I immersed myself in the anti-war and civil rights movements from about age twelve. I took piano lessons and sang in a church choir, but my passion for music did not begin until I took up the guitar in my late teens. My parents encouraged me but never pushed. They were role models who prioritized education and good citizenship, but they did not interfere. They expected my siblings and me to make good choices. Through my teenage years, I approached everything but politics with caution, which could easily be confused with reluctance. If you had met me then, you might well have concluded that I would never get around to doing anything.
My high school years were challenging in a different way. I was a good student, but not a great one. I liked school, but my interests were totally different from my classmates’. Instead of sports, I devoted my free time to politics. The Vietnam War remained the biggest issue in the country, and one of my older brothers had already been drafted into the army. It seemed possible that I would reach draft age before the war ended. As I saw it, the rational thing to do was to work to end the war. I volunteered for the McGovern for President campaign in October 1971 and was in the campaign office in either New Hampshire or upstate New York nearly every day from October 1971, the beginning of my tenth-grade year, through the general election thirteen months later. That was the period when I fell in love with the hippie music of San Francisco: the Grateful Dead, Jefferson Airplane, Quicksilver Messenger Service, Big Brother and the Holding Company, and Santana.
I did not like my school, so once the McGovern campaign ended, I applied to School Year Abroad in Rennes, France, for my senior year. It was an amazing experience. Not only did I become fluent in French, I went to school with a group of people who were more like me than any set of classmates before them. The experience transformed me. I applied to Yale University and, to my astonishment, got in.
After my freshman year at Yale, I was awarded an internship with my local congressman, who offered me a permanent job as his legislative assistant a few weeks later. The promotion came with an increase in pay and all the benefits of a full-time job. I said no—I thought the congressman was crazy to promote me at nineteen—but I really liked him and returned for two more summers.
A year later, in the summer of 1976, I took a year off to go to San Francisco with my girlfriend. In my dreams, I was going to the city of the Summer of Love. By the time I got there, though, it was the city of Dirty Harry, more noir than flower power. Almost immediately, my father was diagnosed with inoperable prostate cancer. Trained as a lawyer, my father had started a brokerage firm that grew to a dozen offices. It was an undersized company in an industry that was undergoing massive change. He died in the fall of 1977, at a particularly difficult time for his business, leaving my mother with a house and little else. There was no money for me to return to college. I was on my own, with no college degree. I had my guitar, though, and practiced for many hours every day.
When I first arrived in San Francisco, I had four hundred dollars in my pocket. My dream of being a reporter in the mold of Woodward and Bernstein lasted for about half a day. Three phone calls were all it took to discover that there were no reporter jobs available for a college dropout like me, but every paper needed people in advertising sales. I was way too introverted for traditional sales, but that did not stop me. I discovered a biweekly French-language newspaper where I would be the entire advertising department, which meant not only selling ads but also collecting receivables from advertisers. When you only get paid based on what you collect, you learn to judge the people you sell to. If the ads didn’t work, they wouldn’t pay. I discovered that by focusing on multi-issue advertising commitments from big accounts, such as car dealerships, airlines, and the phone company, I could leverage my time and earn a lot more money per issue. I had no social life, but I started to build savings. In the two and a half years I was in San Francisco, I earned enough money to go back to Yale, which cost no more than 10 percent of what it costs today.
Every weekday morning in San Francisco I watched a locally produced stock market show hosted by Stuart Varney, who went on to a long career in broadcasting at CNN and Fox Business Network. After watching the show for six months and reading Barron’s and stacks of annual reports, I finally summoned the courage to buy one hundred shares of Beech Aircraft. It went up 30 percent in the first week. I was hooked. I discovered that investing was a game, like Monopoly, but with real money. The battle of wits appealed to me. I never imagined then that investing would be my career. In the fall of 1978, I reapplied to Yale. They accepted me again, just weeks before two heartbreaking events chased me from San Francisco: the mass suicide of hundreds of San Franciscans at Jonestown and the murders of San Francisco’s mayor, George Moscone, and supervisor Harvey Milk by a former member of the city’s board of supervisors.
Celebrating my first Christmas at home since 1975, I received a gift that would change my life. My older brother George, ten years my senior, gave me a Texas Instruments Speak & Spell. Introduced just months earlier, the Speak & Spell combined a keyboard, a one-line alphanumeric display, a voice processor, and some memory to teach elementary school children to pronounce and spell words. But to my brother, it was the future of computing. “This means that in a few years, it will be possible to create a handheld device that holds all your personal information,” he said.
He told me this in 1978. The Apple II had been introduced only a year earlier. The IBM PC was nearly three years in the future. The PalmPilot was more than eighteen years away. But my brother saw the future, and I took it to heart. I went back to college as a history major but was determined to take enough electrical engineering courses that I could design the first personal organizer. I soon discovered that electrical engineering requires calculus, and I had never taken calculus. I persuaded the professor to let me take the entry-level course anyway. He said if I did everything right except the math, he would give me a B (“for bravery”). I accepted. He tutored me every week. I took a second, easier engineering survey course, in which I learned concepts related to acoustics and mechanical engineering. I got catalogues and manuals and tried to design an oversized proof of concept. I could not make it work.
A real highlight of my second swing through Yale was playing in a band called Guff. Three guys in my dorm had started the band, but they needed a guitar player. Guff wrote its own songs and occupied a musical space somewhere near the intersection of the Grateful Dead, Frank Zappa, and punk rock. We played a ton of gigs, but college ended before the band was sufficiently established to justify making a career of it.
The band got paid a little money, but I needed to earn tuition-scale money. Selling ads paid far better than most student jobs, so I persuaded the Yale Law School Film Society to let me create a magazine-style program for their film series. I created a program for both semesters of senior year and earned almost enough money to pay for a year of graduate school.
But before that, in the fall of my senior year, I enrolled in Introduction to Music Theory, a brutal two-semester course for music majors. I was convinced that a basic knowledge of music theory would enable me to write better songs for my band. They randomly assigned me to one of a dozen sections, each with fifteen students, all taught by graduate students. The first class session was the best hour of classroom time I had ever experienced, so I told my roommate to switch from his section to mine. Apparently many others did the same thing, as forty people showed up the second day. That class was my favorite at Yale. The grad student who taught the class, Ann Kosakowski, did not teach the second semester, but early in the new semester, I ran into her as she exited the gymnasium, across the street from my dorm. She was disappointed because she had narrowly lost a squash match in the fifth game to the chair of the music department, so I volunteered to play her the next day. We played squash three days in a row, and I did not win a single point. Not one. But it didn’t matter. I had never played squash and did not care about the score. Ann was amazing. I wanted to get to know her. I invited her on a date to see the Jerry Garcia Band right after Valentine’s Day. A PhD candidate in music theory, Ann asked, “What instrument does Mr. Garcia play?” thinking perhaps it might be the cello. Ann and I are about to celebrate the thirty-ninth anniversary of that first date.
Ann and I graduated together, she a very young PhD, me an old undergraduate. She received a coveted tenure-track position at Swarthmore College, outside of Philadelphia. I could not find a job in Philadelphia, so I enrolled at the Tuck School of Business at Dartmouth, in Hanover, New Hampshire. So began a twenty-one-year interstate commute.
My first job after business school was at T. Rowe Price, in Baltimore, Maryland. It was a lot closer to Philadelphia than Hanover, but still too far to commute every day. That’s when I got hit by two game-changing pieces of good luck: my start date and my coverage group. My career began on the first day of the bull market of 1982, and they asked me to analyze technology stocks. In those days, there were no tech-only funds. T. Rowe Price was the leader in the emerging growth category of mutual funds, which meant they focused on technology more than anyone. I might not be able to make the first personal organizer, I reasoned, but I would be able to invest in it when it came along.
In investing, they say that timing is everything. By assigning me to cover tech on the first day of an epic bull market, T. Rowe Price basically put me in a position where I had a tailwind for my entire career. I can’t be certain that every good thing in my career resulted from that starting condition, but I can’t rule it out either. It was a bull market, so most stocks were going up. In the early days, I just had to produce reports that gave the portfolio managers confidence in my judgment. I did not have a standard pedigree for an analyst, so I decided to see if I could adapt the job to leverage my strengths.
I became an analyst by training, a nerd who gets paid to understand the technology industry. When my career started, most analysts focused primarily on financial statements, but I changed the formula. I have been successful due to an ability to understand products, financial statements, and trends, as well as to judge people. I think of it as real-time anthropology, the study of how humans and technology evolve and interact. I spend most of my time trying to understand the present so I can imagine what might happen in the future. From any position on the chessboard, there are only a limited number of moves. If you understand that in advance and study the possibilities, you will be better prepared to make good choices each time something happens. Despite what people tell you, the technology world does not actually change that much. It follows relatively predictable patterns. Major waves of technology last at least a decade, so the important thing is to recognize when an old cycle is ending and when a new one is starting. As my partner John Powell likes to say, sometimes you can see which body is tied to the railroad tracks before you can see who is driving the train.
The personal computer business started to take off in 1985, and I noticed two things: everyone was my age, and they convened at least monthly in a different city for a conference or trade show. I persuaded my boss to let me join the caravan. Almost immediately I had a stroke of good luck. I was at a conference in Florida when I noticed two guys unloading guitars and amps from the back of a Ford Taurus. Since all guests at the hotel were part of the conference, I asked if there was a jam session I could join. There was. It turns out that the leaders of the PC industry didn’t go out to bars. They rented instruments and played music. When I got to my first jam session, I discovered I had an indispensable skill. Thanks to many years of gigs in bands and bars, I knew a couple hundred songs from beginning to end. No one else knew more than a handful. This really mattered because the other players included the CEO of a major software company, the head of R&D from Apple, and several other industry big shots. Microsoft cofounder Paul Allen played with us from time to time, but only on songs written by Jimi Hendrix. He could shred. Suddenly, I was part of the industry’s social fabric. It is hard to imagine this happening in any other industry, but I was carving my own path.
My next key innovation related to earnings models. Traditional analysts used spreadsheets to forecast earnings, but spreadsheets tend to smooth everything. In tech, where success is binary, hot products always beat the forecast, and products that are not hot always fall short. I didn’t need to worry about earnings models. I just needed to figure out which products were going to be hot. Forecasting products was not easy, but I did not need to be perfect. As with the two guys being chased by a bear, I only needed to do it better than the other guy.
I got my first chance to manage a portfolio in late 1985. I was asked to run the technology sector of one of the firm’s flagship funds; tech represented about 40 percent of the fund. It was the largest tech portfolio in the country at the time, so it was a big promotion and an amazing opportunity. I had been watching portfolio managers for three years, but that did not really prepare me. Portfolio management is a game played with real money. Everyone makes mistakes. What differentiates great portfolio managers is their ability to recognize mistakes early and correct them. Portfolio managers learn by trial and error, with lots of errors. The key is to have more money invested in your good ideas than your bad ones.
T. Rowe launched a pure-play Science & Technology Fund, managed by two of my peers, on September 30, 1987. Nineteen days later, the stock market crashed. Every mutual fund got crushed, and Science & Tech was down 31 percent after only a month in business. While the number was terrible, it was actually better than its competitors’ because the portfolio managers had invested only half their capital when the market collapsed. In the middle of 1988, with the viability of the fund in doubt, the firm reassigned the two managers and asked me to take over. I agreed to do so on one condition: I would run the fund my way. I told my bosses that I intended to be aggressive.
Another piece of amazing luck hit me when T. Rowe Price decided to create a growth-stage venture fund. I was already paying attention to private companies, because in those days, the competition in tech came from startups, not established companies. Over the next few years, I led three key growth-stage venture investments: Electronic Arts, Sybase, and Radius. The lead venture investor in all three companies was Kleiner Perkins Caufield & Byers, one of the leading venture capital firms in Silicon Valley. All three went public relatively quickly, making me popular both at T. Rowe Price and Kleiner Perkins. My primary contact at Kleiner Perkins was a young venture capitalist named John Doerr, whose biggest successes to that point had been Sun Microsystems, Compaq Computer, and Lotus Development. Later, John would be the lead investor in Netscape, Amazon, and Google.
My strategy with the Science & Technology Fund was to focus entirely on emerging companies in the personal computer, semiconductor, and database software industries. I ignored all the established companies, a decision that gave the fund a gigantic advantage. From its launch through the middle of 1991, a period that included the 1987 crash and a second mini-crash in the summer of 1990, the fund achieved a 17 percent per annum return, against 9 percent for the S&P 500 and 6 percent for the technology index. That was when I left T. Rowe Price with John Powell to launch Integral Capital Partners, the first institutional fund to combine public market investments with growth-stage venture capital. We created the fund in partnership with Kleiner Perkins—with John Doerr as our venture capitalist—and Morgan Stanley. Our investors were the people who knew us best, the founders and executives of the leading tech companies of that era.
Integral had a charmed run. Being inside the offices of Kleiner Perkins during the nineties meant we were at ground zero for the internet revolution. I was there the day that Marc Andreessen made his presentation for the company that became Netscape, when Jeff Bezos did the same for Amazon, and when Larry Page and Sergey Brin pitched Google. I did not imagine then how big the internet would become, but it did not take long to grasp its transformational nature. The internet would democratize access to information, with benefits to all. Idealism ruled. In 1997, Martha Stewart came in with her home-decorating business, which, thanks to an investment by Kleiner Perkins, soon went public as an internet stock. That seemed insane to me. I was convinced that a mania had begun for dot-coms, embodied in the Pets.com sock puppet and the slapping of a little “e” on the front of a company’s name or a “.com” at the end. I knew that when the bubble burst, there would be a crash that would kill Integral if we did not do something radical.
I took my concerns to our other partner, Morgan Stanley, and they gave me some money to figure out the Next Big Thing in tech investing, a fund that could survive a bear market. It took two years, but Integral launched Silver Lake Partners, the first private equity fund focused on technology. Our investors shared our concerns and committed one billion dollars to the new fund.
Silver Lake planned to invest in mature technology companies. Once a tech company matured in those days, it became vulnerable to competition from startups. Mature companies tend to focus on the needs of their existing customers, which often blinds them to new business opportunities or new technologies. In addition, as growth slows, so too does the opportunity for employees to benefit from stock options, which startups exploit to recruit the best and brightest from established companies. My vision for Silver Lake was to reenergize mature companies by recapitalizing them to enable investment in new opportunities, while also replicating the stock compensation opportunities of a startup. The first Silver Lake fund had extraordinary results, thanks to three investments: Seagate Technology, Datek, and Gartner Group.
During the Silver Lake years, I got a call from the business manager of the Grateful Dead, asking for help. The band’s leader, Jerry Garcia, had died a few years before, leaving the band with no tour to support a staff of roughly sixty people. Luckily, one of the band’s roadies had created a website and sold merchandise directly to fans. The site had become a huge success, and by the time I showed up, it was generating almost as much profit as the band had made in its touring days. Unfortunately, the technology was out of date, but there was an opportunity to upgrade the site, federate it to other bands, and prosper as never before. One of the bands that showed an interest was U2. They found me through a friend of Bono’s at the Department of the Treasury, a woman named Sheryl Sandberg. I met Bono and the Edge at Morgan Stanley’s offices in Los Angeles on the morning after the band had won a Grammy for the song “Beautiful Day.” I could not have named a U2 song, but I was blown away by the intelligence and business sophistication of the two Irishmen. They invited me to Dublin to meet their management. I made two trips during the spring of 2001.
On my way home from that second trip, I suffered a stroke. I didn’t realize it at the time, and I tried to soldier on. Shortly thereafter, following some more disturbing symptoms, I found myself at the Mayo Clinic, where I learned that I had in fact suffered two ischemic strokes, in addition to something called a transient ischemic attack in my brain stem. It was a miracle I had survived the strokes and suffered no permanent impairment.
The diagnosis came as a huge shock. I had a reasonably good diet, a vigorous exercise regime, and a good metabolism, yet I had had two strokes. It turned out that I had a birth defect in my heart, a “patent foramen ovale,” basically the mother of all heart murmurs. I had two choices: I could take large doses of blood thinner and live a quiet life, or I could have open-heart surgery and eliminate the risk forever. I chose surgery.
I had successful surgery in early July 2001, but my recovery was very slow. It took me nearly a year to recover fully. During that time, Apple shipped the first iPod. I thought it was a sign of good things to come and reached out to Steve Jobs to see if he would be interested in recapitalizing Apple. At the time, Apple’s share price was about twelve dollars per share, which, thanks to stock splits, is equivalent to a bit more than one dollar per share today. The company had more than twelve dollars in cash per share, which meant investors were attributing zero value to Apple’s business. Most of the management options had been issued at forty dollars per share, so they were effectively worthless. If Silver Lake did a recapitalization, we could reset the options and align interests between management and shareholders. Apple had lost most of its market share in PCs, but thanks to the iPod and iMac computers, Apple had an opportunity to reinvent itself in the consumer market. The risk/reward of investing struck me as especially favorable. We had several conversations before Steve told me he had a better idea. He wanted me to buy up to 18 percent of Apple shares in the public market and take a board seat.
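The valuation argument in that paragraph reduces to simple arithmetic. The sketch below is an illustration only: the per-share figures are the ones cited in the text, and the framing is a standard back-of-the-envelope calculation, not Silver Lake's actual analysis.

```python
# Back-of-the-envelope sketch of the Apple valuation argument above.
# Figures are the ones cited in the text (circa 2002); the framing is a
# generic per-share calculation, not Silver Lake's actual model.

share_price = 12.0     # market price, dollars per share
cash_per_share = 12.0  # "more than twelve dollars in cash per share"
option_strike = 40.0   # strike price on most management options

# Value the market assigned to the operating business, per share:
business_value = share_price - cash_per_share
assert business_value <= 0  # investors attributed zero value to the business

# Options struck at $40 on a $12 stock were deeply underwater,
# hence "effectively worthless" until reset in a recapitalization:
assert option_strike > share_price
```

With the business priced at zero, any recovery in Apple's consumer franchise was, in effect, free upside, which is why the risk/reward looked so favorable.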
After a detailed analysis, I proposed an investment to my partners in the early fall of 2002, but they rejected it out of hand. The decision would cost Silver Lake’s investors the opportunity to earn more than one hundred billion dollars in profits.
In early 2003, Bono called up with an opportunity. He wanted to buy Universal Music Group, the world’s largest music label. It was a complicated transaction and took many months of analysis. A team of us did the work and presented it to my other three partners in Silver Lake in September. They agreed to do the deal with Bono, but they stipulated one condition: I would not be part of the deal team. They explained their intention for Silver Lake to go forward as a trio, rather than as a quartet. There had been signals along the way, but I had missed them. I had partnered with deal guys—people who use power when they have it to gain advantages where they can get them—and had not protected myself.
I have never believed in staying where I’m not wanted, so I quit. If I had been motivated by money, I would have hung in there, as there was no way they could force me out. I had conceived the fund, incubated it, brought in the first billion dollars of assets, and played a decisive role on the three most successful investments. But I’m not wired to fight over money. I just quit and walked out. I happened to be in New York and called Bono. He asked me to come to his apartment. When I got there, he said, “Screw them. We’ll start our own fund.” Elevation Partners was born.
In the long term, my departure from Silver Lake worked out for everyone. The second Silver Lake fund got off to a rocky start, as my cofounders struggled with stock picking, but they figured it out and built the firm into an institution that has delivered good investment returns to its investors.
2
Silicon Valley Before Facebook
I think technology really increased human ability. But technology cannot produce compassion. —DALAI LAMA
The technology industry that gave birth to Facebook in 2004 bore little resemblance to the one that had existed only half a dozen years earlier. Before Facebook, startups populated by people just out of college were uncommon, and few succeeded. For the fifty years before 2000, Silicon Valley operated in a world of tight engineering constraints. Engineers never had enough processing power, memory, storage, or bandwidth to do what customers wanted, so they had to make trade-offs. Engineering and software programming in that era rewarded skill and experience. The best engineers and programmers were artists. Just as Facebook came along, however, processing power, memory, storage, and bandwidth went from being engineering limits to turbochargers of growth. The technology industry changed dramatically in less than a decade, but in ways few people recognized. What happened with Facebook and the other internet platforms could not have happened in prior generations of technology. The path the tech industry took from its founding to that change helps to explain both Facebook’s success and how it could do so much damage before the world woke up.
The history of Silicon Valley can be summed up in two “laws.” Moore’s Law, coined by a cofounder of Intel, stated that the number of transistors on an integrated circuit doubles every year. It was later revised to a more useful formulation: the performance of an integrated circuit doubles every eighteen to twenty-four months. Metcalfe’s Law, named for a founder of 3Com, said that the value of any network would increase as the square of the number of nodes. Bigger networks are geometrically more valuable than small ones. Moore’s Law and Metcalfe’s Law reinforced each other. As the price of computers fell, the benefits of connecting them rose. It took fifty years, but we eventually connected every computer. The result was the internet we know today, a global network that connects billions of devices and made Facebook and all other internet platforms possible.
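The two laws can be put in toy numbers. This sketch is an illustration, not from the book; it assumes the revised eighteen-month doubling period for Moore's Law and the simple n-squared form of Metcalfe's Law.

```python
# Toy illustration of how the two "laws" of Silicon Valley compound.
# Assumptions: performance doubles every 18 months (revised Moore's Law);
# network value grows as the square of the node count (Metcalfe's Law).

def moores_law(perf0: float, months: int, doubling_months: int = 18) -> float:
    """Performance after `months`, doubling every `doubling_months`."""
    return perf0 * 2 ** (months / doubling_months)

def metcalfes_law(nodes: int) -> int:
    """Relative value of a network connecting `nodes` devices."""
    return nodes ** 2

# Doubling a network's size quadruples its value...
assert metcalfes_law(200) == 4 * metcalfes_law(100)

# ...while a single decade of Moore's Law multiplies performance ~100x,
# which is why cheaper computers and bigger networks fed on each other.
print(round(moores_law(1.0, months=120)))  # 120/18 ≈ 6.7 doublings → 102
```

Run over the fifty years the paragraph describes, those two curves compound into the global network that made the internet platforms possible.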
Beginning in the fifties, the technology industry went through several eras. During the Cold War, the most important customer was the government. Mainframe computers, giant machines that were housed in special air-conditioned rooms, supervised by a priesthood of technicians in white lab coats, enabled unprecedented automation of computation. The technicians communicated with mainframes via punch cards connected by the most primitive of networks. In comparison to today’s technology, mainframes could not do much, but they automated large-scale data processing, replacing human calculators and bookkeepers with machines. Any customer who wanted to use a computer in that era had to accept a product designed to meet the needs of government, which invested billions to solve complex problems like moon trajectories for NASA and missile targeting for the Department of Defense. IBM was the dominant player in the mainframe era and made all the components for the machines it sold, as well as most of the software. That business model was called vertical integration. The era of government lasted about thirty years. Data networks as we think of them today did not yet exist. Even so, brilliant people imagined a world where small computers optimized for productivity would be connected on powerful networks. In the sixties, J. C. R. Licklider conceived the network that would become the internet, and he persuaded the government to finance its development. At the same time, Douglas Engelbart invented the field of human-computer interaction, which led him to create the first computer mouse and to conceive the first graphical interface. It would take nearly two decades before Moore’s Law and Metcalfe’s Law could deliver enough performance to enable their vision of personal computing and an additional decade before the internet took off.
Beginning in the seventies, the focus of the tech industry began to shift toward the needs of business. The era began with a concept called time sharing, which enabled many users to share the use of a single computer, reducing the cost to everyone. Time sharing gave rise to minicomputers, which were smaller than mainframes but still staggeringly expensive by today’s standards. Data networking began but was very slow and generally revolved around a single minicomputer. Punch cards gave way to terminals, keyboards attached to the primitive network, eliminating the need for a priesthood of technicians in white lab coats. Digital Equipment, Data General, Prime, and Wang led in minicomputers, which were useful for accounting and business applications but were far too complicated and costly for personal use. Although they were a big step forward relative to mainframes, even minicomputers barely scratched the surface of customer needs. Like IBM, the minicomputer vendors were vertically integrated, making most of the components for their products. Some minicomputers—Wang word processors, for example—addressed productivity applications that would be replaced by PCs. Other applications survived longer, but in the end, the minicomputer business would be subsumed by personal computer technology, if not by PCs themselves. Mainframes have survived to the present day, thanks in large part to giant, custom applications like accounting systems, which were created for the government and corporations and are cheaper to maintain on old systems than to re-create on new ones. (Massive server farms based on PC technology now attract any new application that needs mainframe-class processing; it is a much cheaper solution because you can use commodity hardware instead of proprietary mainframes.)
ARPANET, the predecessor to today’s internet, began as a Department of Defense research project in 1969 under the leadership of Bob Taylor, a computer scientist who continued to influence the design of systems and networks until the late nineties. Douglas Engelbart’s lab was one of the first nodes on ARPANET. The goal was to create a nationwide network to protect the country’s command and control infrastructure in the event of a nuclear attack.
The first application of computer technology to the consumer market came in 1972, when Al Alcorn created the game Pong as a training exercise for his boss at Atari, Nolan Bushnell. Bushnell’s impact on Silicon Valley went far beyond the games produced by Atari. He introduced the hippie culture to tech. White shirts with pocket protectors gave way to jeans and T-shirts. Nine to five went away in favor of the crazy but flexible hours that prevail even today.
In the late seventies, microprocessors made by Motorola, Intel, and others were relatively cheap and had enough performance to allow Altair, Apple, and others to make the first personal computers. PCs like the Apple II took advantage of the growing supply of inexpensive components, produced by a wide range of independent vendors, to deliver products that captured the imagination first of hobbyists, then of consumers and some businesses. In 1979, Dan Bricklin and Bob Frankston introduced VisiCalc, the first spreadsheet for personal computers. It is hard to overstate the significance of VisiCalc. It was an engineering marvel. A work of art. Spreadsheets on Apple IIs transformed the productivity of bankers, accountants, and financial analysts.
Unlike the vertical integration of mainframes and minicomputers, which limited product improvement to the rate of change of the slowest evolving part in the system, the horizontal integration of PCs allowed innovation at the pace of the most rapidly improving parts in the system. Because there were multiple, competing vendors for each component, systems could evolve far more rapidly than equivalent products subject to vertical integration. The downside was that PCs assembled this way lacked the tight integration of mainframes and minicomputers. This created a downstream cost in terms of training and maintenance, but that was not reflected in the purchase price and did not trouble customers. Even IBM took notice.
When IBM decided to enter the PC market, it abandoned vertical integration and partnered with a range of third-party vendors, including Microsoft for the operating system and Intel for the microprocessor. The first IBM PC shipped in 1981, signaling a fundamental change in the tech industry that only became obvious a couple of years later, when Microsoft’s and Intel’s other customers started to compete with IBM. Eventually, Compaq, Hewlett-Packard, Dell, and others left IBM in the dust. In the long run, though, most of the profits in the PC industry went to Microsoft and Intel, whose control of the brains and heart of the device and willingness to cooperate forced the rest of the industry into a commodity business.
ARPANET had evolved to become a backbone for regional networks of universities and the military. PCs continued the trend of smaller, cheaper computers, but it took nearly a decade after the introduction of the Apple II before technology emerged to leverage the potential of clusters of PCs. Local area networks (LANs) got their start in the late eighties as a way to share expensive laser printers. Once installed, LANs attracted developers, leading to new applications, such as electronic mail. Business productivity and engineering applications created incentives to interconnect LANs within buildings and then tie them all together over proprietary wide area networks (WANs) and then the internet. The benefits of connectivity overwhelmed the frustration of incredibly slow networks, setting the stage for steady improvement. It also created a virtuous cycle, as PC technology could be used to design and build better components, increasing the performance of new PCs that could be used to design and build even better components.
Consumers who wanted a PC in the eighties and early nineties had to buy one created to meet the needs of business. For consumers, PCs were relatively expensive and hard to use, but millions bought and learned to operate them. They put up with character-mode interfaces until Macintosh and then Windows finally delivered graphical interfaces that did not, well, totally suck. In the early nineties, consumer-centric PCs optimized for video games came to market.
The virtuous cycle of Moore’s Law for computers and Metcalfe’s Law for networks reached a new level in the late eighties, but the open internet did not take off right away. It required enhancements. The English researcher Tim Berners-Lee delivered the goods when he invented the World Wide Web in 1989 and the first web browser in 1991, but even those innovations were not enough to push the internet into the mainstream. That happened when a computer science student by the name of Marc Andreessen created the Mosaic browser in 1993. Within a year, startups like Yahoo and Amazon had come along, followed in 1995 by eBay, and the web that we now know had come to life.
By the mid-nineties, wireless networks had evolved to a point that enabled widespread adoption of cell phones and alphanumeric pagers. The big applications were phone calls and email, then text messaging. The consumer era had begun. The business era had lasted nearly twenty years—from 1975 to 1995—but no business complained when it ended. Technology aimed at consumers was cheaper and somewhat easier to use, exactly what businesses preferred. It also rewarded a dimension that had not mattered to business: style. It took a few years for any vendor to get the formula right.
The World Wide Web in the mid-nineties was a beautiful thing. Idealism and utopian dreams pervaded the industry. The prevailing view was that the internet and World Wide Web would make the world more democratic, more fair, and more free. One of the web’s best features was an architecture that inherently delivered net neutrality: every site was equal. In that first generation, everything on the web revolved around pages, every one of which had the same privileges and opportunities. Unfortunately, the pioneers of the internet made omissions that would later haunt us all. The one that mattered most was the choice not to require real identity. They never imagined that anonymity would lead to problems as the web grew.
Time would expose the naïveté of the utopian view of the internet, but at the time, most participants bought into that dream. Journalist Jenna Wortham described it this way: “The web’s earliest architects and pioneers fought for their vision of freedom on the Internet at a time when it was still small forums for conversation and text-based gaming. They thought the web could be adequately governed by its users without their need to empower anyone to police it.” They ignored early signs of trouble, such as toxic interchanges on message boards and in comments sections, which they interpreted as growing pains, because the potential for good appeared to be unlimited. No company had to pay the cost of creating the internet, which in theory enabled anyone to have a website. But most people needed tools for building websites, application servers, and the like. Into the breach stepped the “open source” community, a distributed network of programmers who collaborated on projects that created the infrastructure of the internet. Andreessen came out of that community. Open source had great advantages, most notably that its products delivered excellent functionality, evolved rapidly, and were free. Unfortunately, there was one serious problem with the web and open source products: the tools were not convenient or easy to use. The volunteers of the open source community had one motivation: to build the open web. Their focus was on performance and functionality, not convenience or ease of use. That worked well for the infrastructure at the heart of the internet, but not so much for consumer-facing applications.
The World Wide Web took off in 1994, driven by the Mosaic/Netscape browser and sites like Amazon, Yahoo, and eBay. Businesses embraced the web, recognizing its potential as a better way to communicate with other businesses and consumers. This change made the World Wide Web geometrically more valuable, just as Metcalfe’s Law predicted. The web dominated culture in the late nineties, enabling a stock market bubble and ensuring near-universal adoption. The dot-com crash that began in early 2000 left deep scars, but the web continued to grow. In this second phase of the web, Google emerged as the most important player, organizing and displaying what appeared to be all the world’s information. Apple broke the code on tech style—their products were a personal statement—and rode the consumer wave to a second life. Products like the iMac and iPod, and later the iPhone and iPad, restored Apple to its former glory and then some. At this writing, Apple is the most valuable company in the world. (Fortunately, Apple is also the industry leader in protecting user privacy, but I will get to that later.)
In the early years of the new millennium, a game-changing model challenged the page-centric architecture of the World Wide Web. Called Web 2.0, the new architecture revolved around people. The pioneers of Web 2.0 included people like Mark Pincus, who later founded Zynga; Reid Hoffman, the founder of LinkedIn; and Sean Parker, who had cofounded the music file-sharing company Napster. After Napster, Parker launched a startup called Plaxo, which put address books in the cloud. It grew by spamming every name in every address book to generate new users, an idea that would be copied widely by social media platforms that launched thereafter. In the same period, Google had a brilliant insight: it saw a way to take control of a huge slice of the open internet. No one owned open source tools, so there was no financial incentive to make them attractive for consumers. They were designed by engineers, for engineers, which could be frustrating to non-engineers.
Google saw an opportunity to exploit the frustration of consumers and some business users. Google made a list of the most important things people did on the web, including searches, browsing, and email. In those days, most users were forced to employ a mix of open source and proprietary tools from a range of vendors. Most of the products did not work together particularly well, creating a friction Google could exploit. Beginning with Gmail in 2004, Google created or acquired compelling products in maps, photos, videos, and productivity applications. Everything was free, so there were no barriers to customer adoption. Everything worked together. Every app gathered data that Google could exploit. Customers loved the Google apps. Collectively, the Google family of apps replaced a huge portion of the open World Wide Web. It was as though Google had unilaterally put a fence around half of a public park and then started commercializing it.
The steady march of technology in the half century prior to 2000 produced so much value—and so many delightful surprises—that the industry and customers began to take positive outcomes for granted. Technology optimism was not equivalent to the law of gravity, but engineers, entrepreneurs, and investors believed that everything they did made the world a better place. Most participants bought into some form of the internet utopia. What we did not realize at the time was that the limits imposed by not having enough processing power, memory, storage, and network bandwidth had acted as a governor, limiting the damage from mistakes to a relatively small number of customers. Because the industry had done so much good in the past, we all believed that everything it would create in the future would also be good. It was not a crazy assumption, but it was a lazy one that would breed hubris.
When Zuck launched Facebook in early 2004, the tech industry had begun to emerge from the downturn caused by the dot-com meltdown. Web 2.0 was in its early stages, with no clear winners. For Silicon Valley, it was a time of transformation, with major change taking place in four arenas: startups, philosophy, economics, and culture. Collectively, these changes triggered unprecedented growth and wealth creation. Once the gravy train started, no one wanted to get off. When fortunes can be made overnight, few people pause to ask questions or consider side effects.
The first big Silicon Valley change related to the economics of startups. Hurdles that had long plagued new companies evaporated. Engineers could build world-class products quickly, thanks to the trove of complementary software components, like the Apache server and the Mozilla browser, from the open source community. With open source stacks as a foundation, engineers could focus all their effort on the valuable functionality of their app, rather than building infrastructure from the ground up. This saved time and money. In parallel, a new concept emerged—the cloud—and the industry embraced the notion of centralization of shared resources. The cloud is like Uber for data—customers don’t need to own their own data center or storage if a service provides it seamlessly from the cloud. Today’s leader in cloud services, Amazon Web Services (AWS), leveraged Amazon.com’s retail business to create a massive cloud infrastructure that it offered on a turnkey basis to startups and corporate customers. By enabling companies to outsource their hardware and network infrastructure, paying a monthly fee instead of the purchase price of an entire system, services like AWS lowered the cost of creating new businesses and shortened the time to market. Startups could mix and match free open source applications to create their software infrastructure. Updates were made once, in the cloud, and then downloaded by users, eliminating what had previously been a very costly and time-consuming process of upgrading individual PCs and servers. This freed startups to focus on their real value added, the application that sat on top of the stack. Netflix, Box, Dropbox, Slack, and many other businesses were built on this model.
Thus began the “lean startup” model. Without the huge expense and operational burden of creating a full tech infrastructure, new companies did not have to aim for perfection when they launched a new product, which had been Silicon Valley’s primary model to that point. For a fraction of the cost, they could create a minimum viable product (MVP), launch it, and see what happened. The lean startup model could work anywhere, but it worked best with cloud software, which could be updated as often as necessary. The first major industry created with the new model was social media, the Web 2.0 startups that were building networks of people rather than pages. Every day after launch, founders would study the data and tweak the product in response to customer feedback. In the lean startup philosophy, the product is never finished. It can always be improved. No matter how rapidly a startup grew, AWS could handle the load, as it demonstrated in supporting the phenomenal growth of Netflix. What in earlier generations would have required an army of experienced engineers could now be accomplished by relatively inexperienced engineers with an email to AWS. Infrastructure that used to require a huge capital investment could now be leased on a monthly basis. If the product did not take off, the cost of failure was negligible, particularly in comparison to the years before 2000. If the product found a market, the founders had alternatives. They could raise venture capital on favorable terms, hire a bigger team, improve the product, and spend to acquire more users. Or they could do what the founders of Instagram and WhatsApp would eventually do: sell out for billions with only a handful of employees.
Facebook’s motto—“Move fast and break things”—embodies the lean startup philosophy. Forget strategy. Pull together a few friends, make a product you like, and try it in the market. Make mistakes, fix them, repeat. For venture investors, the lean startup model was a godsend. It allowed venture capitalists to identify losers and kill them before they burned through much cash. Winners were so valuable that a fund needed only one to provide a great return.
When hardware and networks act as limiters, software must be elegant. Engineers sacrifice frills to maximize performance. The no-frills design of Google’s search bar made a huge difference in the early days, providing a competitive advantage relative to Excite, AltaVista, and Yahoo. A decade earlier, Microsoft’s early versions of Windows failed in part because hardware in that era could not handle the processing demands imposed by the design. By 2004, every PC had processing power to spare. Wired networks could handle video. Facebook’s design outperformed MySpace in almost every dimension, providing a relative advantage, but the company did not face the fundamental challenges that had prevailed even a decade earlier. Engineers had enough processing power, storage, and network bandwidth to change the world, at least on PCs. Programming still rewarded genius and creativity, but an entrepreneur like Zuck did not need a team of experienced engineers with systems expertise to execute a business plan. For a founder in his early twenties, this was a lucky break. Zuck could build a team of people his own age and mold them. Unlike Google, Facebook was reluctant to hire people with experience. Inexperience went from being a barrier to being an advantage, as it kept labor costs low and made it possible for a young man in his twenties to be an effective CEO. The people in Zuck’s inner circle bought into his vision without reservation, and they conveyed that vision to the rank-and-file engineers. On its own terms, Facebook’s human resources strategy worked exceptionally well. The company exceeded its goals year after year, creating massive wealth for its shareholders, but especially for Zuck. The success of Facebook’s strategy had a profound impact on the human resources culture of Silicon Valley startups.
In the early days of Silicon Valley, software engineers generally came from the computer science and electrical engineering programs at MIT, Caltech, and Carnegie Mellon. By the late seventies, Berkeley and Stanford had joined the top tier. They were followed in the mid-nineties by the University of Illinois at Urbana-Champaign, the alma mater of Marc Andreessen, and other universities with strong computer science programs. After 2000, programmers were coming from just about every university in America, including Harvard.
When faced with a surplus for the first time, engineers had new and exciting options. The wave of startups launched after 2003 could have applied surplus processing, memory, storage, and bandwidth to improve users’ well-being and happiness, for example. A few people tried, which is what led to the creation of the Siri personal assistant, among other things. The most successful entrepreneurs took a different path. They recognized that the penetration of broadband might enable them to build global consumer technology brands very quickly, so they opted for maximum scale. To grow as fast as possible, they did everything they could to eliminate friction like purchase prices, criticism, and regulation. Products were free, criticism and privacy norms ignored. Faced with the choice between asking permission or begging forgiveness, entrepreneurs embraced the latter. For some startups, challenging authority was central to their culture. To maximize both engagement and revenues, Web 2.0 startups focused their technology on the weakest elements of human psychology. They set out to create habits, evolved habits into addictions, and laid the groundwork for giant fortunes.
The second important change was philosophical. American business philosophy was becoming more and more proudly libertarian, nowhere more so than in Silicon Valley. The United States had beaten the Depression and won World War II through collective action. As a country, we subordinated the individual to the collective good, and it worked really well. When the Second World War ended, the US economy prospered by rebuilding the rest of the world. Among the many peacetime benefits was the emergence of a prosperous middle class. Tax rates were high, but few people complained. Collective action enabled the country to build the best public education system in the world, as well as the interstate highway system, and to send men to the moon. The average American enjoyed an exceptionally high standard of living.
Then came the 1973 oil crisis, when the Organization of Petroleum Exporting Countries initiated an embargo against countries that supported Israel in the Yom Kippur War. The oil embargo exposed a flaw in the US economy: it was built on cheap oil. The country had lived beyond its means for most of the sixties, borrowing aggressively to pay for the war in Vietnam and the Great Society social programs, which made it vulnerable. When rising oil prices triggered inflation and economic stagnation, the country transitioned into a new philosophical regime.
The winner was libertarianism, which prioritized the individual over the collective good. It might be framed as “you are responsible only for yourself.” As the opposite of collectivism, libertarianism is a philosophy that can trace its roots to the frontier years of the American West. In the modern context, it is closely tied to the belief that markets are always the best way to allocate resources. Under libertarianism, no one needs to feel guilty about ambition or greed. Disruption can be a strategy, not just a consequence. You can imagine how attractive a philosophy that absolves practitioners of responsibility for the impact of their actions on others would be to entrepreneurs and investors in Silicon Valley. They embraced it. You could be a hacker, a rebel against authority, and people would reward you for it. Unstated was the leverage the philosophy conferred on those who started with advantages. The well-born and lucky could attribute their success to hard work and talent, while blaming the less advantaged for not working hard enough or being untalented. Many libertarian entrepreneurs brag about the “meritocracy” inside their companies. Meritocracy sounds like a great thing, but in practice there are serious issues with Silicon Valley’s version of it. If contributions to corporate success define merit when a company is small and has a homogeneous employee base, then meritocracy will encourage the hiring of people with similar backgrounds and experience. If the company is not careful, this will lead to a homogeneous workforce as the company grows. For internet platforms, this means an employee base consisting overwhelmingly of white and Asian males in their twenties and thirties. This can have an impact on product design. For example, Google’s facial-recognition software had problems recognizing people of color, possibly reflecting a lack of diversity in the development team. 
Homogeneity narrows the range of acceptable ideas and, in the case of Facebook, may have contributed to a work environment that emphasizes conformity. The extraordinary lack of diversity in Silicon Valley may reflect the pervasive embrace of libertarian philosophy. Zuck’s early investor and mentor Peter Thiel is an outspoken advocate for libertarian values.
The third big change was economic, and it was a natural extension of libertarian philosophy. Neoliberalism stipulated that markets should replace government as the rule setter for economic activity. President Ronald Reagan framed neoliberalism with his assertion that “government is not the solution to our problem; government is the problem.” Beginning in 1981, the Reagan administration began removing regulations on business. Reagan restored confidence, which unleashed a big increase in investment and economic activity. By 1982, Wall Street had bought into the idea, and stocks began to rise. Reagan called it Morning in America. The problems—stagnant wages, income inequality, and a decline in startup activity outside of tech—did not emerge until the late nineties.
Deregulation generally favored incumbents at the expense of startups. New company formation, which had peaked in 1977, has been in decline ever since. The exception was Silicon Valley, where large companies struggled to keep up with rapidly evolving technologies, creating opportunities for startups. The startup economy in the early eighties was tiny but vibrant. It grew with the PC industry, exploded in the nineties, and peaked in 2000 at $120 billion, before declining by 87 percent over two years. The lean startup model collapsed the cost of startups, such that the number of new companies rebounded very quickly. According to the National Venture Capital Association, venture funding recovered to seventy-nine billion dollars in 2015 on 10,463 deals, more than twice the number funded in 2008. The market power of Facebook, Google, Amazon, and Apple has altered the behavior of investors and entrepreneurs, forcing startups to sell out early to one of the giants or crowd into smaller and less attractive opportunities.
Under Reagan, the country also revised its view of corporate power. The Founding Fathers associated monopoly with monarchy and took steps to ensure that economic power would be widely distributed. There were ebbs and flows as the country adjusted to the industrial revolution, mechanization, technology, world wars, and globalization, but until 1981, the prevailing view was that there should be limits to the concentration of economic power and wealth. The Reagan Revolution embraced the notion that the concentration of economic power was not a problem so long as it did not lead to higher prices for consumers. Again, Silicon Valley profited from laissez-faire economics.
Technology markets are not monopolies by nature. That said, every generation has had dominant players: IBM in mainframes, Digital Equipment in minicomputers, Microsoft and Intel in PCs, Cisco in data networking, Oracle in enterprise software, and Google on the internet. The argument against monopolies in technology is that major innovations almost always come from new players. If you stifle the rise of new companies, innovation may suffer.
Before the internet, the dominant tech companies sold foundational technologies for the architecture of their period. With the exception of Digital Equipment, all of the tech market leaders of the past still exist today, though none could prevent their markets from maturing, peaking, and losing ground to subsequent generations. In two cases, IBM and Microsoft, the business practices that led to success eventually caught the eye of antitrust regulators, resulting in regulatory actions that restored competitive balance. Without the IBM antitrust case, there likely would have been no Microsoft. Without the Microsoft case, it is hard to imagine Google succeeding as it did. Beginning with Google, the most successful technology companies sat on top of stacks created by others, which allowed them to move faster than any market leaders before them. Google, Facebook, and others also broke the mold by adopting advertising business models, which meant their products were free to use, eliminating another form of friction and protecting them from antitrust regulation. They rode the wave of wired broadband adoption and then 4G mobile to achieve global scale in what seemed like the blink of an eye. Their products enjoyed network effects, which occur when the value of a product increases as you add users to the network. Network effects were supposed to benefit users. In the cases of Facebook and Google, that was true for a time, but eventually the value increase shifted decisively to the benefit of owners of the network, creating insurmountable barriers to entry. Facebook and Google, as well as Amazon, quickly amassed economic power on a scale not seen since the days of Standard Oil one hundred years earlier. In an essay on Medium, the venture capitalist James Currier pointed out that the key to success in the internet platform business is network effects and Facebook enjoyed more of them than any other company in history. 
He said, “To date, we’ve actually identified that Facebook has built no less than six of the thirteen known network effects to create defensibility and value, like a castle with six concentric layers of walls. Facebook’s walls grow higher all the time, and on top of them Facebook has fortified itself with all three of the other known defensibilities in the internet age: brand, scale, and embedding.”
By 2004, the United States was more than a generation into an era dominated by a hands-off, laissez-faire approach to regulation, a time period long enough that hardly anyone in Silicon Valley knew there had once been a different way of doing things. This is one reason why few people in tech today are calling for regulation of Facebook, Google, and Amazon, antitrust or otherwise.
One other factor made the environment of 2004 different from earlier times in Silicon Valley: angel investors. Venture capitalists had served as the primary gatekeepers of the startup economy since the late seventies, but they spent a few years retrenching after the dot-com bubble burst. Into the void stepped angel investors—individuals, mostly former entrepreneurs and executives—who guided startups during their earliest stages. Angel investors were perfectly matched to the lean startup model, gaining leverage from relatively small investments. One angel, Ron Conway, built a huge brand, but the team that had started PayPal proved to have much greater impact. Peter Thiel, Elon Musk, Reid Hoffman, Max Levchin, Jeremy Stoppelman, and their colleagues were collectively known as the PayPal Mafia, and their impact transformed Silicon Valley. Not only did they launch Tesla, SpaceX, LinkedIn, and Yelp, they provided early funding to Facebook and many other successful players. More important than the money, though, were the vision, value system, and connections of the PayPal Mafia, which came to dominate the social media generation. Validation by the PayPal Mafia was decisive for many startups during the early days of social media. Their management techniques enabled startups to grow at rates never before experienced in Silicon Valley. The value system of the PayPal Mafia helped their investments create massive wealth, but may have contributed to the blindness of internet platforms to harms that resulted from their success. In short, we can trace both the good and the bad of social media to the influence of the PayPal Mafia.
Thanks to lucky timing, Facebook benefitted not only from lower barriers for startups and changes in philosophy and economics but also from a new social environment. Silicon Valley had prospered in the suburbs south of San Francisco, mostly between Palo Alto and San Jose. Engineering nerds did not have a problem with life in the sleepy suburbs because many had families with children, and the ones who did not have kids did not expect to have the option of living in the city. Beginning with the dot-com bubble of the late nineties, however, the startup culture began to attract kids fresh out of school, who were not so happy with suburban life as their predecessors. In a world where experience had declining economic value, the new generation favored San Francisco as a place to live. The transition was bumpy, as most of the San Francisco–based dot-coms went up in flames in 2000, but after the start of the new millennium, the tech population in San Francisco grew steadily. While Facebook originally based itself in Palo Alto—the heart of Silicon Valley, not far from Google, Hewlett-Packard, and Apple—a meaningful percentage of its employees chose to live in the big city. Had Facebook come along during the era of scarcity, when experienced engineers ruled the Valley, it would have had a profoundly different culture. Faced with the engineering constraints of earlier eras, however, the Facebook platform would not have worked well enough to succeed. Facebook came along at the perfect time.
San Francisco is hip, with diverse neighborhoods, decent public transportation, access to recreation, and lots of nightlife. It attracted a different kind of person than Sunnyvale or Mountain View, including two related types previously unseen in Silicon Valley: hipsters and bros. Hipsters had burst onto the public consciousness as if from a base in Brooklyn, New York, heavy on guys with beards, plaid shirts, and earrings. They seemed to be descendants of San Francisco’s bohemian past, a modern take on the Beats. The bros were different, though perhaps more in terms of style than substance. Ambitious, aggressive, and exceptionally self-confident, they embodied libertarian values. Symptoms included a lack of empathy or concern for consequences to others. The hipster and bro cultures were decidedly male. There were women in tech, too, more than in past generations of Silicon Valley, but the culture continued to be dominated by men who failed to appreciate the obvious benefits of treating women as peers. Too many in Silicon Valley missed the lesson that treating others as equals is what good people do. For them, I make a simple economic case: women are 51 percent of the US population; they account for 85 percent of consumer purchases; they control 60 percent of all personal wealth. They know what they want better than men do, yet in Silicon Valley, which invests billions in consumer-facing startups, men hold most of the leadership positions. Women who succeed often do so by beating the boys at their own game, something that Silicon Valley women do with ever greater frequency. Bloomberg journalist Emily Chang described this culture brilliantly in her book, Brotopia.
With the biggest influx of young people since the Summer of Love, the tech migration after 2000 had a visible impact on the city, precipitating a backlash that began quietly but grew steadily. The new kids boosted the economy with tea shops and co-working spaces that sprang up like mushrooms after a summer rain. But they seemed not to appreciate that their lifestyle might disturb the quiet equilibrium that had preceded their arrival. With a range of new services catering to their needs, delivered by startups of their peers, the hipsters and bros eventually provoked a reaction. Tangible manifestations of their presence, like the luxury buses that took them to jobs at Google, Facebook, Apple, and other companies down in Silicon Valley, drew protests from peeved locals. An explosion of Uber and Lyft vehicles jammed the city’s streets, dramatically increasing commute times. Insensitive blog posts, inappropriate business behavior, and higher housing costs ensured that locals would neither forgive nor forget.
Zuck enjoyed the kind of privileged childhood one would expect for a white male whose parents were medical professionals living in a beautiful suburb. As a student at Harvard, he had the idea for Facebook. Thanks to great focus and enthusiasm, Zuck would almost certainly have found success in Silicon Valley in any era, but he was particularly suited to his times. Plus, as previously noted, he had an advantage not available to earlier generations of entrepreneurs: he could build a team of people his age—many of whom had never before had a full-time job—and mold them. This allowed Facebook to accomplish things that had never been done before.
For Zuck and the senior management of Facebook, the goal of connecting the world was self-evidently admirable. The philosophy of “move fast and break things” allowed for lots of mistakes, and Facebook embraced the process, made adjustments, and continued forward. The company maintained a laser focus on Zuck’s priorities, never considering the possibility that there might be flaws in this approach, even when the evidence of such flaws became overwhelming. From all appearances, Zuck and his executive team did not anticipate that people would use Facebook differently than Zuck had envisioned, that putting more than two billion people on the same network would lead to tribalism, that Facebook Groups would amplify that tribalism, that bad actors would take advantage to harm innocent people. They failed to imagine unintended consequences from an advertising business based on behavior modification. They ignored critics. They missed the opportunity to take responsibility when the reputational cost would have been low. When called to task, they protected their business model and prerogatives, making only small changes to their business practices. This trajectory is worth understanding in greater depth.
3
Move Fast and Break Things
Try not to become a man of success, but rather try to become a man of value. —ALBERT EINSTEIN
During Mark Zuckerberg’s sophomore year at Harvard, he created a program called Facemash that allowed users to compare photos of two students and choose which was “hotter.” The photos were taken from the online directories of nine Harvard dormitories. According to an article in Fast Company magazine, the application had twenty-two thousand photo views in the first four hours and spread rapidly on campus before being shut down within a week by the authorities. Harvard threatened to expel Zuckerberg for security, copyright, and privacy violations. The charges were later dropped. The incident caught the attention of three Harvard seniors, Cameron Winklevoss, Tyler Winklevoss, and Divya Narendra, who invited Zuck to consult on their social network project, HarvardConnection.com.
In an interview with the campus newspaper, Zuck complained that the university would be slow to implement a universal student directory and that he could do it much faster. He started in January 2004 and launched TheFacebook.com on February 4. Six days later, the trio of seniors accused Zuck of pretending to help on their project and then stealing their ideas for TheFacebook. (The Winklevoss twins and Narendra ultimately filed suit and settled in 2008 for 1.2 million shares of Facebook stock.) Within a month, more than half of the Harvard student body had registered on Zuck’s site. Three of Zuck’s friends joined the team, and a month later they launched TheFacebook at Columbia, Stanford, and Yale. It spread rapidly to other college campuses. By June, the company relocated from Cambridge, Massachusetts, to Palo Alto, California, brought in Napster cofounder Sean Parker as president, and took its first venture capital from Peter Thiel.
TheFacebook delivered exactly what its name described: each page provided a photo with personal details and contact information. There was no News Feed and no frills, but the color scheme and fonts would be recognizable to any present-day user. While many features were missing, the thing that stands out is the effectiveness of the first user interface. There were no mistakes that would have to be undone.
The following year, Zuck and team paid two hundred thousand dollars to buy the “facebook.com” domain and changed the company’s name. Accel Partners, one of the leading Silicon Valley venture funds, invested $12.7 million, and the company expanded access to high school students and employees of some technology firms. The functionality of the original Facebook was the same as TheFacebook, but the user interface evolved. Some of the changes were subtle, such as the multitone blue color scheme, but others, such as the display of thumbnail photos of friends, remain central to the current look. Again, Facebook made improvements that would endure. Sometimes users complained about new features and products—this generally occurred when Zuck and his team pushed users too hard to disclose and share more information—but Facebook recovered quickly each time. The company never looked back.
Facebook was not the first social network. SixDegrees.com started in 1997 and Makeoutclub in 1999, but neither really got off the ground. Friendster, which started in 2002, was the first to reach one million users. Friendster was the model for Facebook. It got off to a fantastic start, attracted investors and users, but then fell victim to performance problems that crippled the business. Friendster got slower and slower, until users gave up and left the platform. Started in 2003, MySpace figured out how to scale better than Friendster, but it, too, eventually had issues. Allowing users to customize pages made the system slow, but in the end, it was the ability of users to remain anonymous that probably did the most damage to MySpace. Anonymity encouraged the posting of pornography, the elimination of which drained MySpace’s resources, and enabled adults to pose as children, which led to massive problems.
The genius of Zuck and his original team was in reconceptualizing the problem. They recognized that success depended on building a network that could scale without friction. Sean Parker described the solution this way in Adam Fisher’s Valley of Genius: “The ‘social graph’ is a math concept from graph theory, but it was a way of trying to explain to people who were kind of academic and mathematically inclined that what we were building was not a product so much as it was a network composed of nodes with a lot of information flowing between those nodes. That’s graph theory. Therefore we’re building a social graph. It was never meant to be talked about publicly.” Perhaps not, but it was brilliant. The notion that a small team in their early twenties with little or no work experience figured it out on the first try is remarkable. The founders also had the great insight that real identity would simplify the social graph, reducing each user to a single address. These two ideas would not only help Facebook overcome the performance problems that sank Friendster and MySpace, they would remain core to the company’s success as it grew past two billion users.
When I first met Zuck in 2006, I was very familiar with Friendster and MySpace and had a clear sense that Facebook’s design, its insistence on real identity, and user control of privacy would enable the company to succeed where others had failed. Later on, Facebook would relax its policies on identity and privacy to enable faster growth. Facebook’s terms of service still require real identity, but enforcement is lax, consistent with the company’s commitment to minimize friction, and happens only when other users complain. By the end of the decade, user privacy would become a pawn to be traded to accelerate growth.
In 2006, it was not obvious how big the social networking market would be, but I was already convinced that Facebook had an approach that might both define the category and make it economically successful. Facebook was a hit with college students, but I thought the bigger opportunity would be with adults, whose busy schedules were tailor-made for the platform. To me, that suggested a market opportunity of at least one hundred million users in English-speaking countries. In those days, one hundred million users would have justified a valuation of at least ten billion dollars, or ten times the number Yahoo had offered. It never occurred to me then that Facebook would fly past two billion monthly users, though I do remember the first time Zuck told me his target was a billion users. It happened some time in 2009, when Facebook was racing from two hundred to three hundred million users. I thought it was a mistake to maximize user count. The top 20 percent of users would deliver most of the value. I worried that the pursuit of one billion users would force Zuck to do business in places or on terms that should make him uncomfortable. As it turned out, there were no visible compromises when Facebook passed a billion monthly users in September 2012. The compromises were very well hidden.
The company had plenty of capital when I first met Zuck, so there was no immediate opportunity for me to invest, but as I’ve said, the notion of helping the twenty-two-year-old founder of a game-changing startup deal with an existential crisis really appealed to me. As a longtime technology investor, I received many requests for free help, and I loved doing it. Good advice can be the first step in a lasting relationship and had ultimately led to many of my best investments. The strategy required patience—and a willingness to help lots of companies that might not work out—but it made my work life fresh and fun.
My first impression of Zuck was that he was a classic Silicon Valley nerd. In my book, being a nerd is a good thing, especially for a technology entrepreneur. Nerds are my people. I didn’t know much about Zuck as a person and knew nothing about the episode that nearly led to his expulsion from Harvard until much later. What I saw before me was a particularly intense twenty-two-year-old who took all the time he needed to think before he acted. As painful as that five minutes of silence was for me, it signaled caution, which I took as a positive. The long silence also signaled weak social skills, but that would not have been unusual in a technology founder. But in that first meeting, I was able to help Zuck resolve a serious problem. Not only did he leave my office with the answer he needed, he had a framework for justifying it to the people in his life who wanted their share of one billion dollars. At the time, Zuck was very appreciative. A few days later, he invited me to his office, which was in the heart of Palo Alto, just down the street from the Stanford University campus. The interior walls were covered with graffiti. Professional graffiti. In Zuck’s conference room, we talked about the importance of having a cohesive management team where everyone shared the same goals. Those conversations continued several times a month for three years. Thanks to the Yahoo offer, Zuck understood that he could no longer count on everyone on his team. Some executives had pushed hard to sell the company. Zuck asked for my perspective on team building, which I was able to provide in the course of our conversations. A year later, he upgraded several positions, most notably his chief operating officer and his chief financial officer.
Toward the end of 2006, Zuck learned that a magazine for Harvard alumni was planning a story about the Winklevoss brothers and again turned to me for help. I introduced him to a crisis-management public relations firm and helped him minimize the fallout from the story.
I trust my instincts about people. My instincts are far from perfect, but they have been good enough to enable a long career. Intensity of the kind I saw in Zuck is a huge positive in an entrepreneur. Another critical issue for me is a person’s value system. In my interactions with him, Zuck was consistently mature and responsible. He seemed remarkably grown-up for his age. He was idealistic, convinced that Facebook could bring people together. He was comfortable working with women, which is not common among Silicon Valley entrepreneurs. My meetings with Zuck almost always occurred in his office, generally just the two of us, so I had an incomplete picture of the man, but he was always straight with me. I liked Zuck. I liked his team. I was a fan of Facebook.
This is a roundabout way of saying that my relationship with Zuck was all business. I was one of the people he would call on when confronted with new or challenging issues. Mentoring is fun for me, and Zuck could not have been a better mentee. We talked about stuff that was important to Zuck, where I had useful experience. More often than not, he acted on my counsel.
Zuck had other mentors, several of whom played a much larger role than I did. He spoke to me about Peter Thiel, who was an early investor and board member. I don’t know how often Zuck spoke with Thiel, but I know he took Peter’s advice very seriously. Philosophically, Thiel and I are polar opposites, and I respected Zuck for being able to work with both of us. Washington Post CEO Don Graham had started advising Zuck at least a year before me. As one of the best-connected people in our nation’s capital, Don would have been a tremendous asset to Zuck as Facebook grew to global scale. Marc Andreessen, the Netscape founder turned venture capitalist, played a very important role in Zuck’s orbit, as he was a hard-core technologist who had once been a very young entrepreneur. Presumably, Zuck also leaned on Jim Breyer, the partner from Accel who made the first institutional investment in Facebook, but Zuck did not talk about Breyer the way he did about Thiel.
In researching this book for key moments in the history of Facebook, one that stands out occurred months before I got involved. In the fall of 2005, Facebook gave users the ability to upload photographs. They did it with a new wrinkle—tagging the people in the photo—that helped to define Facebook’s approach to engagement. Tagging proved to be a technology with persuasive power, as users felt obligated to react or reciprocate when informed they had been tagged. A few months after my first meeting with Zuck, Facebook made two huge changes: it launched News Feed, and it opened itself up to anyone over the age of thirteen with a valid email address. News Feed is the heart of the Facebook user experience, and it is hard today to imagine that the site did well for a couple of years without it. Then, in January 2007, Facebook introduced a mobile web product to leverage the widespread adoption of smartphones. The desktop interface also made a big leap.
In the summer of 2007, Zuck called to offer me an opportunity to invest. He actually offered me a choice: invest or join the board. Given my profession and our relationship, the choice was easy. I did not need to be on the board to advise Zuck. The investment itself was complicated. One of Facebook’s early employees needed to sell a piece of his stake, but under the company’s equity-incentive plan there was no easy way to do this. We worked with Facebook to create a structure that balanced both our needs and those of the seller. When the deal was done, there was no way to sell our shares until after an initial public offering. Bono, Marc, and I were committed for the long haul.
Later that year, Microsoft bought 1.6 percent of Facebook for $240 million, a transaction that valued the company at $15 billion. The transaction was tied to a deal where Microsoft would sell advertising for Facebook. Microsoft paid a huge premium to the price we paid, reflecting its status as a software giant with no ability to compete in social. Facebook understood that it had leverage over Microsoft and priced the shares accordingly. As investors, we knew the Microsoft valuation did not reflect the actual worth of Facebook. It was a “strategic investment” designed to give Microsoft a leg up over Google and other giants.
Soon thereafter, Facebook launched Beacon, a system that gathered data about user activity on external websites to improve Facebook ad targeting and to enable users to share news about their purchases. When a Facebook user interacted with a Beacon partner website, the data would be sent to Facebook and reflected in the user’s News Feed. Beacon was designed to make Facebook advertising much more valuable, and Facebook hoped that users would be happy to share their interests and purchase activities with friends. Unfortunately, Facebook did not give users any warning and did not give them any ability to control Beacon. A user’s activities on the web would appear in their Facebook feed even when they were not on Facebook. Imagine having “Just looked at sex toys on Amazon.com” show up in your feed. Users thought Beacon was creepy. Most users did not know what Facebook was doing with Beacon. When they found out, they were not happy. Zuck’s cavalier attitude toward user privacy, evident from the first day of Facemash back at Harvard, had blown up in his face. MoveOn organized a protest campaign, arguing that Facebook should not publish user activity off the site without explicit permission. Users filed class action lawsuits. Beacon was withdrawn less than a year after launch.
In the fall of 2007, Zuck told me he wanted to hire someone to build Facebook’s monetization. I asked if he was willing to bring in a strong number two, someone who could be a chief operating officer or president. He said yes. I did not say anything, but a name sprang to mind immediately: Sheryl Sandberg. Sheryl had been chief of staff to Secretary of the Treasury Larry Summers during Bill Clinton’s second term. In that job, she had partnered with Bono on the singer’s successful campaign to spur the world’s leading economies to forgive billions in debt owed by countries in the developing world. Together, Bono and Sheryl helped many emerging countries to reenergize their economies, which turned out to be a good deal for everyone involved. Sheryl introduced Bono to me, which eventually led the two of us to collaborate on Elevation Partners. Sheryl came to Silicon Valley in early 2001 and hung out in my office for a few weeks. We talked to Sheryl about joining Integral, but my partner John Powell had a better idea. John and I were both convinced that Sheryl would be hugely successful in Silicon Valley, but John pointed out that there were much bigger opportunities than Integral. He thought the right place for Sheryl was Google and shared that view with John Doerr, who was a member of Google’s board of directors. Sheryl took a job at Google to help build AdWords, the product that links ads to search results.
AdWords is arguably the most successful advertising product in history, and Sheryl was one of the people who made that happen. Based on what I knew about Sheryl, her success came as no surprise. One day in 2007, Sheryl came by to tell me she had been offered a leadership position at The Washington Post. She asked me what I thought. I suggested that she consider Facebook instead. Thanks to Watergate and the Pentagon Papers, the Post was iconic, but as a newspaper it had no workable plan to protect its business model from the internet. Facebook seemed like a much better match for Sheryl than the Post, and she seemed like the best possible partner for Zuck and Facebook. Sheryl told me she had once met Zuck at a party, but did not know him and worried that they might not be a good fit. I encouraged Sheryl to get to know Zuck and see where things went. After my first conversation with Sheryl, I called Zuck and told him I thought Sheryl would be the best person to build Facebook’s advertising business. Zuck worried that advertising on Facebook would not look like Google’s AdWords—which was true—but I countered that building AdWords might be the best preparation for creating a scalable advertising model on Facebook. It took several separate conversations with Zuck and Sheryl to get them to meet, but once they got together, they immediately found common ground. Sheryl joined the company in March 2008. Looking at a March 2008 Wall Street Journal article on Sheryl’s hire and Zuck’s other efforts to stabilize the company by accepting help from more experienced peers, I’m reminded that Facebook’s current status as a multibillion-dollar company seemed far from inevitable in those days. The article highlighted the company’s image problems and mentioned Zuck complaining to me about the difficulties of being a CEO. Still, growth accelerated.
The underlying technology of the disastrous Beacon project resurfaced in late 2008 as Facebook Connect, a product that allowed users to sign into third-party sites with their Facebook credentials. News of hacks and identity theft had created pressure for stronger passwords, which users struggled to manage. The value of Connect was that it enabled people to memorize a single, strong Facebook password for access to thousands of sites. Users loved Connect for its convenience, but it is not obvious that they understood that it enabled Facebook to track them in many places around the web. With the benefit of hindsight, we can see the costs that accompanied the convenience of Connect. I tried Connect on a few news sites, but soon abandoned it when I realized what it meant for privacy.
The data that Facebook collected through Connect led to huge improvements in targeting and would ultimately magnify catastrophes like the Russian interference in the 2016 election. Other users must have noticed that Facebook knew surprising things about them, but may have told themselves the convenience of Connect justified the loss of privacy. With Connect, Facebook addressed a real need. Maintaining secure credentials is inconvenient, but the world would have been better off had users adopted a solution that did not exploit their private data. Convenience, it turns out, was the sweetener that led users to swallow a lot of poison.
Facebook’s user count reached one hundred million in the third quarter of 2008. This was astonishing for a company that was only four and a half years old, but Facebook was just getting started. Only seven months later, the user count hit two hundred million, aided by the launch of the Like button. The Like button soon defined the Facebook experience. “Getting Likes” became a social phenomenon. It gave users an incentive to spend more time on the site and joined photo tagging as a trigger for addiction to Facebook. To make its advertising valuable, Facebook needs to gain and hold user attention, which it does with behavior modification techniques that promote addiction, according to a growing body of evidence. Behavior modification and addiction would play a giant role in the Facebook story, but were not visible during my time as a mentor to Zuck and would remain unknown to me until 2017.
It turns out everyone wants to be liked, and the Like button provided a yardstick of social validation and social reciprocity—packaged as a variable reward—that transformed social networking. It seemed that every Facebook user wanted to know how many Likes they received for each post, and that tempted many users to return to the platform several times a day. Facebook amplified the signal with notifications, teasing users constantly. The Like button helped boost the user count to 305 million by the end of September 2009. Like buttons spread like wildfire to sites across the web, and along with Connect enabled Facebook to track its users wherever they browsed.
The acquisition of FriendFeed in August 2009 gave Facebook an application for aggregating feeds from a wide range of apps and blogs. It also provided technology and a team that would protect Facebook’s flank from the new kid on the block, Twitter. Over the following year, Facebook acquisitions would enable photo sharing and the importing of contacts. Such acquisitions made Facebook more valuable to users, but that was nothing compared to the value they created for Facebook’s advertising. On every metric, Facebook prospered. Revenue grew rapidly. Facebook’s secret sauce was its ability to imitate and improve upon the ideas of others, and then scale them. The company demonstrated an exceptional aptitude for managing hypergrowth, a skill that is as rare as it is valuable. In September 2009, the company announced that it had turned cash flow positive. This is not the same as turning profitable, but it was actually a more important milestone. It meant that Facebook generated enough revenue to cover all its cash expenses. It would not need more venture capital to survive. The company was only five and a half years old.
With Sheryl on board as chief operating officer in charge of delivering revenues, Facebook quickly developed its infrastructure to enable rapid growth. This simplified Zuck’s life so he could focus on strategic issues. Facebook had transitioned from startup to serious business. This coming-of-age had implications for me, too. Effectively, Zuck had graduated. With Sheryl as his partner, I did not think Zuck would need mentoring from me any longer. My domain expertise in mobile made me valuable as a strategy advisor, but even that would be a temporary gig. Like most successful entrepreneurs and executives, Zuck is brilliant (and ruthless) about upgrading his closest advisors as he goes along. In the earliest days of Facebook, Sean Parker played an essential role as president, but his skills stopped matching the company’s needs, so Zuck moved on from him. He also dropped the chief operating officer who followed Parker and replaced him with Sheryl. The process is Darwinian in every sense. It is natural and necessary. I have encountered it so many times that I can usually anticipate the right moment to step back. I never give it a moment’s thought.
Knowing that we had accomplished everything we could have hoped for at the time I began mentoring him, I sent Zuck a message saying that my job was done. He was appreciative and said we would always be friends. At this point, I stopped being an insider, but I remained a true believer in Facebook. While failures like Beacon had foreshadowed problems to come, all I could see was the potential of Facebook as a force for good. The Arab Spring was still a year away, but the analyst in me could see how Facebook might be used by grassroots campaigns. What I did not grasp was that Zuck’s ambition had no limit. I did not appreciate that his focus on code as the solution to every problem would blind him to the human cost of Facebook’s outsized success. And I never imagined that Zuck would craft a culture in which criticism and disagreement apparently had no place.
The following year, 2010, was big for Facebook in surprising ways. By July, Facebook had five hundred million users, half of whom visited the site every day. Average daily usage was thirty-four minutes. Users who joined Facebook to stay in touch with family soon found new functions to enjoy. They spent more time on the site, shared more posts, and saw more ads.
October saw the release of The Social Network, a feature film about the early days of Facebook. The film was a critical and commercial success, winning three Academy Awards and four Golden Globes. The plot focused on Zuck’s relationship with the Winklevoss twins and the lawsuit that resulted from it. The portrayal of Zuck was unflattering. Zuck complained that the film did not accurately tell the story, but hardly anyone besides him seemed to care. I chose not to watch the film, preferring the Zuck I knew to a version crafted in Hollywood.
Just before the end of 2010, Facebook improved its user interface again, edging closer to the look and feel we know today. The company finished 2010 with 608 million monthly users. The rate of user growth remained exceptionally high, and minutes of use per user per day continued to rise. Early in 2011, Facebook received an investment of five hundred million dollars for 1 percent of the company, pushing the valuation up to fifty billion dollars. Unlike the Microsoft deal, this transaction reflected a financial investor’s assessment of Facebook’s value. At this point, even Microsoft was making money on its investment. Facebook was not only the most exciting company since Google, it showed every indication that it would become one of the greatest tech companies of all time. New investors were clamoring to buy shares. By June 2011, DoubleClick announced that Facebook was the most visited site on the web, with more than one trillion visits. Nielsen disagreed, saying Facebook still trailed Google, but it appeared to be only a matter of time before the two companies would agree that Facebook was #1.
In March 2011, I saw a presentation that introduced the first seed of doubt into my rosy view of Facebook. The occasion was the annual TED Conference in Long Beach, the global launch pad for TED Talks. The eighteen-minute Talks are thematically organized over four days, providing brain candy to millions far beyond the conference. That year, the highlight for me was a nine-minute talk by Eli Pariser, the board president of MoveOn.org. Eli had an insight that his Facebook and Google feeds had stopped being neutral. Even though his Facebook friend list included a balance of liberals and conservatives, his tendency to click more often on liberal links had led the algorithms to prioritize such content, eventually crowding out conservative content entirely. He worked with friends to demonstrate that the change was universal on both Facebook and Google. The platforms were pretending to be neutral, but they were filtering content in ways that were invisible to users. Having argued that the open web offered an improvement on the biases of traditional content editors, the platforms were surreptitiously implementing algorithmic filters that lacked the value system of human editors. Algorithms would not act in a socially responsible way on their own. Users would think they were seeing a balance of content when in fact they were trapped in what Eli called a “filter bubble” created and enforced by algorithms. He hypothesized that giving algorithms gatekeeping power without also requiring civic responsibility would lead to unexpected, negative consequences. Other publishers were jumping on board the personalization bandwagon. There might be no way for users to escape from filter bubbles.
Eli’s conclusion? If platforms are going to be gatekeepers, they need to program a sense of civic responsibility into their algorithms. They need to be transparent about the rules that determine what gets through the filter. And they need to give users control of their bubble.
I was gobsmacked. It was one of the most insightful talks I had ever heard. Its import was obvious. When Eli finished, I jumped out of my seat and made a beeline to the stage door so that I could introduce myself. If you view the talk on TED.com today, you will immediately appreciate its importance. At the time I did not see a way for me to act on Eli’s insight at Facebook. I no longer had regular contact with Zuck, much less inside information. I was not up to speed on the engineering priorities that had created filter bubbles or about plans for monetizing them. But Eli’s talk percolated in my mind. There was no good way to spin filter bubbles. All I could do was hope that Zuck and Sheryl would have the sense not to use them in ways that would harm users. (You can listen to Eli Pariser’s “Beware Online ‘Filter Bubbles’” talk for yourself on TED.com.)
Meanwhile, Facebook marched on. Google introduced its own social network, Google+, in June 2011, with considerable fanfare. By the time Google+ came to market, Google had become a gatekeeper between content vendors and users, forcing content vendors who wanted to reach their own audience to accept Google’s business terms. Facebook took a different path to a similar place. Where most of Google’s products delivered a single function that gained power from being bundled, Facebook had created an integrated platform, what is known in the industry as a walled garden, that delivered many forms of value. Some of the functions on the platform had so much value that Facebook spun them off as stand-alone products. One example: Messenger.
Thanks to its near monopoly of search and the AdWords advertising platform that monetized it, Google knew more about purchase intentions than any other company on earth. A user looking to buy a hammer would begin with a search on Google, getting a set of results along with three AdWords ads from vendors looking to sell hammers. The search took milliseconds. The user bought a hammer, the advertiser sold one, and Google got paid for the ad. Everyone got what they wanted. But Google was not satisfied. It did not know the consumer’s identity. Google realized that its data set of purchase intent would have greater value if it could be tied to customer identity. I call this McNamee’s 7th Law: data sets become geometrically more valuable when you combine them. That is where Gmail changed the game. Users got value in the form of a good email system, but Google received something far more valuable. By tying purchase intent to identity, Google laid the foundation for new business opportunities. It then created Google Maps, enabling it to tie location to purchase intent and identity. The integrated data set rivaled Amazon’s, but without warehouses and inventory it generated much greater profits for Google. Best of all, combined data sets often reveal insights and business opportunities that could not have been imagined previously. The new products were free to use, but each one contributed data that transformed the value of Google’s advertising products. Facebook did something analogous with each function it added to the platform. Photo tagging expanded the social graph. News Feed enriched it further. The Like button delivered data on emotional triggers. Connect tracked users as they went around the web. The value is not really in the photos and links posted by users. 
The real value resides in metadata—data about data—which is what we call the data that describes where the user was when he or she posted, what they were doing, with whom they were doing it, alternatives they considered, and more. Broadcast media like television, radio, and newspapers lack the real-time interactivity necessary to create valuable metadata. Thanks to metadata, Facebook and Google create a picture of the user that can be monetized more effectively than traditional media. When collected on the scale of Google and Facebook, metadata has unimaginable value. When people say, “In advertising businesses, users are not the customer; they are the product,” this is what they are talking about. But in the process, Facebook in particular changed the nature of advertising. Traditional advertising seeks to persuade, but in a one-size-fits-most kind of way. The metadata that Facebook and others collected enabled them to find unexpected patterns, such as “four men who collect baseball cards, like novels by Charles Dickens, and check Facebook after midnight bought a certain model of Toyota,” creating an opportunity to package male night owls who collect baseball cards and like Dickens for car ads. Facebook allows advertisers to identify each user’s biases and appeal to them individually. Insights gathered this way changed the nature of ad targeting. More important, though, all that data goes into Facebook’s (or Google’s) artificial intelligence and can be used by advertisers to exploit the emotions of users in ways that increase the likelihood that they purchase a specific model of car or vote in a certain way. As the technology futurist Jaron Lanier has noted, advertising on social media platforms has evolved into a form of manipulation.
Google+ was Google’s fourth foray into social networking. Why did Google try so many times? Why did it keep failing? By 2011, it must have been obvious to Google that Facebook had the key to a new and especially valuable online advertising business. Unlike traditional media or even search, social networking provided signals about each user’s emotional state and triggers. Relative to the monochrome of search, social network advertising offered Technicolor, the equivalent of Oz vs. Kansas in The Wizard of Oz. If you are trying to sell a commodity product like a hammer, search advertising is fine, but for branded products like perfume or cars or clothing, social networking’s data on emotions has huge incremental value. Google wanted a piece of that action. Google+ might have added a new dimension to Google’s advertising business, but Facebook had a prohibitive lead when Google+ came to market, and the product’s flaws prevented it from gaining much traction with people outside of Google. All it offered was interesting features, and Facebook imitated the good parts quickly.
Facebook took no chances with Google+. The company went to battle stations and devoted every resource to stopping Google on the beach of social networking. The company cranked up its development efforts, dramatically increasing the size limits for posts, partnering with Skype, introducing the Messenger texting product, and adding a slew of new tools for creating applications on the platform. As 2012 began, Facebook was poised for a breakout year. The company had a new advertising product—Open Graph—that leveraged its Social Graph, the tool to capture everything it knew from both inside Facebook and around the web. Initially, Facebook gave advertisers access only to data captured inside the platform. Facebook also enabled advertisements in the News Feed for the first time. News Feed ads really leveraged Facebook’s user experience. Ads blended in with posts from friends, which meant more people saw them, but there was also a downside: it was very hard to get an ad to stand out the way it would on radio or TV or in print.
The big news early in 2012 came when Facebook filed for an initial public offering (IPO) and then acquired Instagram for one billion dollars. The Facebook IPO, which took place on May 17, raised sixteen billion dollars, making it the third largest in US history. The total valuation of $104 billion was the highest ever for a newly public company. Facebook had revenues of nearly four billion dollars and net income of one billion dollars in the year prior to the IPO and found itself in the Fortune 500 list of companies from day one.
As impressive as all those numbers are, the IPO itself was something of a train wreck. Trading glitches occurred during the first day, preventing some trades from going through, and the stock struggled to stay above the IPO price. The deal set a record for trading volume on the first day after an IPO: 460 million shares.
The months leading up to the IPO saw weakness in Facebook’s advertising sales that triggered reductions in the company’s revenue forecast. When a company is preparing for an IPO, forecast reductions can be disastrous, as public investors have no incentive to buy into uncertainty. In Facebook’s case, investors’ extreme enthusiasm for the company—based primarily on user growth and Facebook’s increasing impact on society—meant the IPO could survive the reduction in forecast, but Zuck’s dream of a record-setting offering might be at risk. As described by former Facebook advertising targeting manager Antonio García Martínez in his book Chaos Monkeys, “The narratives the company had woven about the new magic of social-media marketing were in deep reruns with advertisers, many of whom were beginning to openly question the fortunes they had spent on Facebook thus far, often with little to show for it.” For all its success with users, Facebook had not yet created an advertising product that provided the targeting necessary to provide appropriate results for advertisers. Martínez went on to say, “A colossal yearlong bet the company had made on a product called Open Graph, and its accompanying monetization spin-off, Sponsored Stories, had been an absolute failure in the market.” Advertisers had paid a lot of money to Facebook, believing the company’s promises about ad results, but did not get the value they felt they deserved. For Facebook, this was a moment of truth. By pushing the IPO valuation to record levels, Facebook set itself up for a rocky start as a public company.
The newly public stock sold off almost immediately and went into free fall after Yahoo Finance reported that the investment banks that had underwritten the IPO had reduced their earnings forecasts just before the offering. In the heat of the deal, had those forecast changes been effectively communicated to buyers of the stock? The situation was sufficiently disturbing that regulatory authorities initiated a review. Lawsuits followed, alleging a range of violations with respect to the trading glitches and the actions of one underwriter. A subsequent set of lawsuits named the underwriters, Zuck and Facebook’s board, and Nasdaq. The Wall Street Journal characterized the IPO as a “fiasco.”
For Facebook’s business, though, the IPO was an undisputed blessing. The company received a staggering amount of free publicity before the deal, essentially all of it good. That turbocharged user growth, news of which enabled Facebook to survive the IPO issues with relatively little damage. Investors trusted that a company with such impressive user growth would eventually figure out monetization. Once again, Facebook pushed the envelope, stumbled, and got away with it. Then they did something really aggressive.
The data from inside Facebook alone did not deliver enough value for advertisers. Thanks to Connect and the ubiquitous Like and Share buttons, Facebook had gathered staggering amounts of data about user behavior from around the web. The company had chosen not to use the off-site data for commercial purposes, a self-imposed rule that it decided to discard when the business slowed down. No one knew yet how valuable the external data would be, but they decided to find out. As Martínez describes it, Zuck and Sheryl began cautiously, fearful of alienating users.
Thanks to the IPO, Facebook enjoyed a tsunami of user growth. Within a few months, user growth restored investor confidence. It also overwhelmed the complaints from advertisers, who had to go where their customers were, even if the ad vehicles on Facebook were disappointing. The pressure to integrate user data from activities away from Facebook into the ad products lessened a bit, but the fundamental issues with targeting and the value of ads remained. As a result, the decision to integrate user data from outside Facebook would not be reversed.