Smarter Than You Think: How Technology is Changing Our Minds for the Better
Clive Thompson


A brilliant examination of how the internet is profoundly changing the way we think.

In this groundbreaking book, Wired writer Clive Thompson argues that the internet is boosting our brainpower, encouraging new ways of thinking, and making us more, not less, intelligent, contrary to what is so often claimed. Our lives have been changed utterly and irrevocably by the rise of the internet, and it is only now that we can begin to analyse this extraordinary phenomenon. The author argues that as we rely more and more on machines to help us think, our thinking itself is becoming richer and more complex. We’re able to learn more, retain it longer, write in curious new forms, and even think entirely new types of thoughts.

Smarter Than You Think is filled with stories of people who are living through these profound technological changes. In a series of postcards from the near future, we meet characters such as Gordon Bell, an ageing millionaire who is saving a digital copy of everything that happens to him, and Eric Horvitz, one of the world’s leading artificial-intelligence researchers, who is creating software designed to let your computer sense your mood and predict when you’re going to be most productive at work.

Lucidly written and argued, Smarter Than You Think is a breathtakingly original look at our Brave New World.

Copyright


William Collins

An imprint of HarperCollinsPublishers Ltd

77–85 Fulham Palace Road,

Hammersmith, London W6 8JB

WilliamCollinsBooks.com

First published in Great Britain by William Collins in 2013

Copyright © Clive Thompson 2013

Clive Thompson asserts the moral right to be identified as the author of this work

A catalogue record for this book is available from the British Library

All rights reserved under International and Pan-American Copyright Conventions. By payment of the required fees, you have been granted the non-exclusive, non-transferable right to access and read the text of this e-book on-screen. No part of this text may be reproduced, transmitted, downloaded, decompiled, reverse engineered, or stored in or introduced into any information storage and retrieval system, in any form or by any means, whether electronic or mechanical, now known or hereinafter invented, without the express written permission of HarperCollins.

Source ISBN: 9780007427796

Ebook Edition © September 2013 ISBN: 9780007427789

Version: 2014-09-06




From the reviews of Smarter Than You Think:


‘We should be grateful to have such a clear-eyed and lucid interpreter of our changing technological culture as Clive Thompson. Smarter Than You Think is an important, insightful book about who we are, and who we are becoming’

Joshua Foer, New York Times bestselling author of Moonwalking with Einstein

‘Almost without noticing it, the internet has become our intellectual exoskeleton. Rather than just observing this evolution, Clive Thompson takes us to the people, places and technologies driving it, bringing deep reporting, storytelling and analysis to one of the most profound shifts in human history’

Chris Anderson, author of The Long Tail

‘There’s good news in this dazzling book: technology is not the enemy. Smarter Than You Think reports on how the digital world has helped individuals harness a powerful, collaborative intelligence – becoming better problem-solvers and more creative human beings’

Jane McGonigal, author of Reality is Broken

‘Thompson has started an important debate in this lively and accessible book’

Scotsman




Dedication


To Emily, Gabriel, and Zev


Contents

Cover

Title Page

Copyright

Praise

Dedication

The Rise of the Centaurs

We, the Memorious

Public Thinking

The New Literacies

The Art of Finding

The Puzzle-Hungry World

Digital School

Ambient Awareness

The Connected Society

Epilogue

Notes

Index

Acknowledgments

About the Author

About the Publisher




The Rise of the Centaurs


Who’s better at chess—computers or humans?

The question has long fascinated observers, perhaps because chess seems like the ultimate display of human thought: the players sit like Rodin’s Thinker, silent, brows furrowed, making lightning-fast calculations. It’s the quintessential cognitive activity, logic as an extreme sport.

So the idea of a machine outplaying a human has always provoked both excitement and dread. In the eighteenth century, Wolfgang von Kempelen caused a stir with his clockwork Mechanical Turk—an automaton that played an eerily good game of chess, even beating Napoleon Bonaparte. The spectacle was so unsettling that onlookers cried out in astonishment when the Turk’s gears first clicked into motion. But the gears, and the machine, were fake; in reality, the automaton was controlled by a chess savant cunningly tucked inside the wooden cabinet. In 1915, a Spanish inventor unveiled a genuine, honest-to-goodness robot that could actually play chess—a simple endgame involving only three pieces, anyway. A writer for Scientific American fretted that the inventor “Would Substitute Machinery for the Human Mind.”

Eighty years later, in 1997, this intellectual standoff clanked to a dismal conclusion when world champion Garry Kasparov was defeated by IBM’s Deep Blue supercomputer in a tournament of six games. Faced with a machine that could calculate two hundred million positions a second, even Kasparov’s notoriously aggressive and nimble style broke down. In its final game, Deep Blue used such a clever ploy—tricking Kasparov into letting the computer sacrifice a knight—that it trounced him in nineteen moves. “I lost my fighting spirit,” Kasparov said afterward, pronouncing himself “emptied completely.”

Riveted, the journalists announced a winner. The cover of Newsweek proclaimed the event “The Brain’s Last Stand.” Doomsayers predicted that chess itself was over. If machines could outthink even Kasparov, why would the game remain interesting? Why would anyone bother playing? What’s the challenge?

Then Kasparov did something unexpected.

The truth is, Kasparov wasn’t completely surprised by Deep Blue’s victory. Chess grand masters had predicted for years that computers would eventually beat humans, because they understood the different ways humans and computers play. Human chess players learn by spending years studying the world’s best opening moves and endgames; they play thousands of games, slowly amassing a capacious, in-brain library of which strategies triumphed and which flopped. They analyze their opponents’ strengths and weaknesses, as well as their moods. When they look at the board, that knowledge manifests as intuition—a eureka moment when they suddenly spy the best possible move.

In contrast, a chess-playing computer has no intuition at all. It analyzes the game using brute force; it inspects the pieces currently on the board, then calculates all options. It prunes away moves that lead to losing positions, then takes the promising ones and runs the calculations again. After doing this a few times—and looking five or seven moves out—it arrives at a few powerful plays. The machine’s way of “thinking” is fundamentally unhuman. Humans don’t sit around crunching every possible move, because our brains can’t hold that much information at once. If you go eight moves out in a game of chess, there are more possible games than there are stars in our galaxy. If you total up every game possible? It outnumbers the atoms in the known universe. Ask chess grand masters, “How many moves can you see out?” and they’ll likely deliver the answer attributed to the Cuban grand master José Raúl Capablanca: “One, the best one.”
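The arithmetic behind that claim is easy to check: at roughly 35 legal moves per position, looking eight moves ahead gives 35^8, about 2.3 trillion lines of play, comfortably more than the few hundred billion stars in the Milky Way. The depth-limited brute-force search itself can be sketched in a few lines. What follows is a minimal illustration of the idea, not Deep Blue’s actual algorithm; it assumes the open-source python-chess library, and the material-counting evaluation is a deliberately crude stand-in for a real engine’s scoring.

# A toy sketch of the brute-force search described above: inspect the
# pieces, calculate all options, look a fixed number of moves out.
# Assumes python-chess (pip install python-chess).
import chess

PIECE_VALUES = {chess.PAWN: 1, chess.KNIGHT: 3, chess.BISHOP: 3,
                chess.ROOK: 5, chess.QUEEN: 9, chess.KING: 0}

def evaluate(board: chess.Board) -> int:
    """Static score of the position: material balance, White-positive."""
    score = 0
    for piece in board.piece_map().values():
        value = PIECE_VALUES[piece.piece_type]
        score += value if piece.color == chess.WHITE else -value
    return score

def minimax(board: chess.Board, depth: int) -> int:
    """Exhaustively score the position `depth` half-moves ahead."""
    if depth == 0 or board.is_game_over():
        return evaluate(board)
    maximizing = board.turn == chess.WHITE
    best = -float("inf") if maximizing else float("inf")
    for move in list(board.legal_moves):
        board.push(move)                 # try the move...
        score = minimax(board, depth - 1)
        board.pop()                      # ...then take it back
        best = max(best, score) if maximizing else min(best, score)
    return best

def best_move(board: chess.Board, depth: int = 4) -> chess.Move:
    """Pick the move whose subtree scores best for the side to move."""
    maximizing = board.turn == chess.WHITE
    scored = []
    for move in list(board.legal_moves):
        board.push(move)
        scored.append((minimax(board, depth - 1), move))
        board.pop()
    choose = max if maximizing else min
    return choose(scored, key=lambda sm: sm[0])[1]

Real engines add exactly the refinement the passage mentions, pruning away branches that provably cannot matter, plus far richer evaluation functions; the skeleton, though, is the same: no intuition, just exhaustive calculation.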




The fight between computers and humans in chess was, as Kasparov knew, ultimately about speed. Once computers could see all games roughly seven moves out, they would wear humans down. A person might make a mistake; the computer wouldn’t. Brute force wins. As he pondered Deep Blue, Kasparov mused on these different cognitive approaches.

It gave him an audacious idea. What would happen if, instead of competing against one another, humans and computers collaborated? What if they played on teams together—one computer and a human facing off against another human and a computer? That way, he theorized, each might benefit from the other’s peculiar powers. The computer would bring the lightning-fast—if uncreative—ability to analyze zillions of moves, while the human would bring intuition and insight, the ability to read opponents and psych them out. Together, they would form what chess players later called a centaur: a hybrid beast endowed with the strengths of each.

In June 1998, Kasparov played the first public game of human-computer collaborative chess, which he dubbed “advanced chess,” against Veselin Topalov, a top-rated grand master. Each used a regular computer with off-the-shelf chess software and databases of hundreds of thousands of chess games, including some of the best ever played. They considered what moves the computer recommended; they examined historical databases to see if anyone had ever been in a situation like theirs before. Then they used that information to help plan. Each game was limited to sixty minutes, so they didn’t have infinite time to consult the machines; they had to work swiftly.

Kasparov found the experience “as disturbing as it was exciting.” Freed from the need to rely exclusively on his memory, he was able to focus more on the creative texture of his play. It was, he realized, like learning to be a race-car driver: He had to learn how to drive the computer, as it were—developing a split-second sense of which strategy to enter into the computer for assessment, when to stop an unpromising line of inquiry, and when to accept or ignore the computer’s advice. “Just as a good Formula One driver really knows his own car, so did we have to learn the way the computer program worked,” he later wrote. Topalov, as it turns out, appeared to be an even better Formula One “thinker” than Kasparov. On purely human terms, Kasparov was a stronger player; a month before, he’d trounced Topalov 4–0. But the centaur play evened the odds. This time, Topalov fought Kasparov to a 3–3 draw.




In 2005, there was a “freestyle” chess tournament in which a team could consist of any number of humans or computers, in any combination. Many teams consisted of chess grand masters who’d won plenty of regular, human-only tournaments, achieving chess ratings of 2,500 (out of 3,000). But the winning team didn’t include any grand masters at all. It consisted of two young New England men, Steven Cramton and Zackary Stephen (who were comparative amateurs, with chess ratings down around 1,400 to 1,700), and their computers.

Why could these relative amateurs beat chess players with far more experience and raw talent? Because Cramton and Stephen were expert at collaborating with computers. They knew when to rely on human smarts and when to rely on the machine’s advice. Working at rapid speed—these games, too, were limited to sixty minutes—they would brainstorm moves, then check to see what the computer thought, while also scouring databases to see if the strategy had occurred in previous games. They used three different computers simultaneously, running five different pieces of software; that way they could cross-check whether different programs agreed on the same move. But they wouldn’t simply accept what the machine suggested, nor would they merely mimic old games. They selected moves that were low-rated by the computer if they thought they would rattle their opponents psychologically.
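That cross-checking habit translates almost directly into code. Below is a hedged sketch of the workflow using python-chess’s UCI engine interface; the engine paths are hypothetical placeholders, and this illustrates the idea rather than the actual setup Cramton and Stephen used.

import chess
import chess.engine

# Hypothetical paths: substitute whatever UCI engines you have installed.
ENGINE_PATHS = ["/usr/bin/stockfish", "/usr/local/bin/komodo"]

def consensus(board: chess.Board, seconds: float = 1.0) -> dict:
    """Ask each engine for its preferred move; tally the votes."""
    votes: dict[chess.Move, int] = {}
    for path in ENGINE_PATHS:
        with chess.engine.SimpleEngine.popen_uci(path) as engine:
            result = engine.play(board, chess.engine.Limit(time=seconds))
            votes[result.move] = votes.get(result.move, 0) + 1
    return votes

# When every engine agrees, the move is probably tactically safe; when
# they split, the human half of the centaur earns its keep.

Nothing in such a script replaces judgment, of course; as the tournament showed, knowing when to overrule the tally was precisely the amateurs’ edge.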

In essence, a new form of chess intelligence was emerging. You could rank the teams like this: (1) a chess grand master was good; (2) a chess grand master playing with a laptop was better. But even that laptop-equipped grand master could be beaten by (3) relative newbies, if the amateurs were extremely skilled at integrating machine assistance. “Human strategic guidance combined with the tactical acuity of a computer,” Kasparov concluded, “was overwhelming.”

Better yet, it turned out these smart amateurs could even outplay a supercomputer on the level of Deep Blue. One of the entrants that Cramton and Stephen trounced in the freestyle chess tournament was a version of Hydra, the most powerful chess computer in existence at the time; indeed, it was probably faster and stronger than Deep Blue itself. Hydra’s owners let it play entirely by itself, using raw logic and speed to fight its opponents. A few days after the advanced chess event, Hydra destroyed the world’s seventh-ranked grand master in a man-versus-machine chess tournament.

But Cramton and Stephen beat Hydra. They did it using their own talents and regular Dell and Hewlett-Packard computers, of the type you probably had sitting on your desk in 2005, with software you could buy for sixty dollars.


All of which brings us back to our original question here: Which is smarter at chess—humans or computers?

Neither.

It’s the two together, working side by side.

We’re all playing advanced chess these days. We just haven’t learned to appreciate it.

Our tools are everywhere, linked with our minds, working in tandem. Search engines answer our most obscure questions; status updates give us an ESP-like awareness of those around us; online collaboration lets far-flung teams tackle problems too tangled for any individual. We’re becoming less like Rodin’s Thinker and more like Kasparov’s centaurs. This transformation is rippling through every part of our cognition—how we learn, how we remember, and how we act upon that knowledge emotionally, intellectually, and politically. As with Cramton and Stephen, these tools can make even the amateurs among us radically smarter than we’d be on our own, assuming (and this is a big assumption) we understand how they work. At their best, today’s digital tools help us see more, retain more, communicate more. At their worst, they leave us prey to the manipulation of the toolmakers. But on balance, I’d argue, what is happening is deeply positive. This book is about that transformation.

In a sense, this is an ancient story. The “extended mind” theory of cognition argues that the reason humans are so intellectually dominant is that we’ve always outsourced bits of cognition, using tools to scaffold our thinking into ever-more-rarefied realms. Printed books amplified our memory. Inexpensive paper and reliable pens made it possible to externalize our thoughts quickly. Studies show that our eyes zip around the page while performing long division on paper, using the handwritten digits as a form of prosthetic short-term memory. “These resources enable us to pursue manipulations and juxtapositions of ideas and data that would quickly baffle the un-augmented brain,” as Andy Clark, a philosopher of the extended mind, writes.

Granted, it can be unsettling to realize how much thinking already happens outside our skulls. Culturally, we revere the Rodin ideal—the belief that genius breakthroughs come from our gray matter alone. The physicist Richard Feynman once got into an argument about this with the historian Charles Weiner. Feynman understood the extended mind; he knew that writing his equations and ideas on paper was crucial to his thought. But when Weiner looked over a pile of Feynman’s notebooks, he called them a wonderful “record of his day-to-day work.” No, no, Feynman replied testily. They weren’t a record of his thinking process. They were his thinking process:

“I actually did the work on the paper,” he said.

“Well,” Weiner said, “the work was done in your head, but the record of it is still here.”

“No, it’s not a record, not really. It’s working. You have to work on paper and this is the paper. Okay?”

Every new tool shapes the way we think, as well as what we think about. The printed word helped make our cognition linear and abstract, along with vastly enlarging our stores of knowledge. Newspapers shrank the world; then the telegraph shrank it even more dramatically. With every innovation, cultural prophets bickered over whether we were facing a technological apocalypse or a utopia. Depending on which Victorian-age pundit you asked, the telegraph was either going to usher in an era of world peace (“It is impossible that old prejudices and hostilities should longer exist,” as Charles F. Briggs and Augustus Maverick intoned) or drown us in a Sargasso of idiotic trivia (“We are eager to tunnel under the Atlantic … but perchance the first news that will leak through into the broad, flapping American ear will be that the Princess Adelaide has the whooping cough,” as Thoreau opined). Neither prediction was quite right, of course, yet neither was quite wrong. The one thing that both apocalyptics and utopians understand and agree upon is that every new technology pushes us toward new forms of behavior while nudging us away from older, familiar ones. Harold Innis—the lesser-known but arguably more interesting intellectual midwife of Marshall McLuhan—called this the bias of a new tool. Living with new technologies means understanding how they bias everyday life.

What are the central biases of today’s digital tools? There are many, but I see three big ones that have a huge impact on our cognition. First, they allow for prodigious external memory: smartphones, hard drives, cameras, and sensors routinely record more information than any tool before them. We’re shifting from a stance of rarely recording our ideas and the events of our lives to doing it habitually. Second, today’s tools make it easier for us to find connections—between ideas, pictures, people, bits of news—that were previously invisible. Third, they encourage a superfluity of communication and publishing. This last feature has many surprising effects that are often ill understood. Any economist can tell you that when you suddenly increase the availability of a resource, people do more things with it, which also means they do increasingly unpredictable things. As electricity became cheap and ubiquitous in the West, its role expanded from things you’d expect—like nighttime lighting—to the unexpected and seemingly trivial: battery-driven toy trains, electric blenders, vibrators. The superfluity of communication today has produced everything from a rise in crowd-organized projects like Wikipedia to curious new forms of expression: television-show recaps, map-based storytelling, discussion threads that spin out of a photo posted to a smartphone app, Amazon product-review threads wittily hijacked for political satire. Now, none of these three digital biases is immutable, because they’re the product of software and hardware, and can easily be altered or ended if the architects of today’s tools (often corporate and governmental) decide to regulate the tools or find they’re not profitable enough. But right now, these big effects dominate our current and near-term landscape.

In one sense, these three shifts—infinite memory, dot connecting, explosive publishing—are screamingly obvious to anyone who’s ever used a computer. Yet they also somehow constantly surprise us by producing ever-new “tools for thought” (to use the writer Howard Rheingold’s lovely phrase) that upend our mental habits in ways we never expected and often don’t apprehend even as they take hold. Indeed, these phenomena have already woven themselves so deeply into the lives of people around the globe that it’s difficult to stand back and take account of how much things have changed and why. While this book maps out what I call the future of thought, it’s also frankly rooted in the present, because many parts of our future have already arrived, even if they are only dimly understood. As the sci-fi author William Gibson famously quipped: “The future is already here—it’s just not very evenly distributed.” This is an attempt to understand what’s happening to us right now, the better to see where our augmented thought is headed. Rather than dwell in abstractions, like so many marketers and pundits—not to mention the creators of technology, who are often remarkably poor at predicting how people will use their tools—I focus more on the actual experiences of real people.

To provide a concrete example of what I’m talking about, let’s take a look at something simple and immediate: my activities while writing the pages you’ve just read.

As I was working, I often realized I couldn’t quite remember a detail and discovered that my notes were incomplete. So I’d zip over to a search engine. (Which chess piece did Deep Blue sacrifice when it beat Kasparov? The knight!) I also pushed some of my thinking out into the open: I blogged admiringly about the Spanish chess-playing robot from 1915, and within minutes commenters offered smart critiques. (One pointed out that the chess robot wasn’t that impressive because it was playing an endgame that was almost impossible to lose: the robot started with a rook and a king, while the human opponent had only a mere king.) While reading Kasparov’s book How Life Imitates Chess on my Kindle, I idly clicked on “popular highlights” to see what passages other readers had found interesting—and wound up becoming fascinated by a section on chess strategy I’d only lightly skimmed myself. To understand centaur play better, I read long, nuanced threads on chess-player discussion groups, effectively eavesdropping on conversations of people who know chess far better than I ever will. (Chess players who follow the new form of play seem divided—some think advanced chess is a grim sign of machines’ taking over the game, and others think it shows that the human mind is much more valuable than computer software.) I got into a long instant-messaging session with my wife, during which I realized that I’d explained the gist of advanced chess better than I had in my original draft, so I cut and pasted that explanation into my notes. As for the act of writing itself? Like most writers, I constantly have to fight the procrastinator’s urge to meander online, idly checking Twitter links and Wikipedia entries in a dreamy but pointless haze—until I look up in horror and realize I’ve lost two hours of work, a missing-time experience redolent of a UFO abduction. So I’d switch my word processor into full-screen mode, fading my computer desktop to black so I could see nothing but the page, giving me temporary mental peace.

In this book I explore each of these trends. First off, there’s the emergence of omnipresent computer storage, which is upending the way we remember, both as individuals and as a culture. Then there’s the advent of “public thinking”: the ability to broadcast our ideas and the catalytic effect that has both inside and outside our minds. We’re becoming more conversational thinkers—a shift that has been rocky, not least because everyday public thought uncorks the incivility and prejudices that are commonly repressed in face-to-face life. But at its best (which, I’d argue, is surprisingly often), it’s a thrilling development, reigniting ancient traditions of dialogue and debate. At the same time, there’s been an explosion of new forms of expression that were previously too expensive for everyday thought—like video, mapping, or data crunching. Our social awareness is shifting, too, as we develop ESP-like “ambient awareness,” a persistent sense of what others are doing and thinking. On a social level, this expands our ability to understand the people we care about. On a civic level, it helps dispel traditional political problems like “pluralistic ignorance,” catalyzing political action, as in the Arab Spring.

Are these changes good or bad for us? If you asked me twenty years ago, when I first started writing about technology, I’d have said “bad.” In the early 1990s, I believed that as people migrated online, society’s worst urges might be uncorked: pseudonymity would poison online conversation, gossip and trivia would dominate, and cultural standards would collapse. Certainly some of those predictions have come true, as anyone who’s wandered into an angry political forum knows. But the truth is, while I predicted the bad stuff, I didn’t foresee the good stuff. And what a torrent we have: Wikipedia, a global forest of eloquent bloggers, citizen journalism, political fact-checking—or even the way status-update tools like Twitter have produced a renaissance in witty, aphoristic, haiku-esque expression. If this book accentuates the positive, that’s in part because we’ve been so flooded with apocalyptic warnings of late. We need a new way to talk clearly about the rewards and pleasures of our digital experiences—one that’s rooted in our lived experience and also detangled from the hype of Silicon Valley.

The other thing that makes me optimistic about our cognitive future is how much it resembles our cognitive past. In the sixteenth century, humanity faced a printed-paper wave of information overload—with the explosion of books that began with the codex and went into overdrive with Gutenberg’s movable type. As the historian Ann Blair notes, scholars were alarmed: How would they be able to keep on top of the flood of human expression? Who would separate the junk from what was worth keeping? The mathematician Gottfried Wilhelm Leibniz bemoaned “that horrible mass of books which keeps on growing,” which would doom the quality writers to “the danger of general oblivion” and produce “a return to barbarism.”

Thankfully, he was wrong. Scholars quickly set about organizing the new mental environment by clipping their favorite passages from books and assembling them into huge tomes—florilegia, bouquets of text—so that readers could sample the best parts. They were basically blogging, going through some of the same arguments modern bloggers go through. (Is it enough to clip a passage, or do you also have to verify that what the author wrote was true? It was debated back then, as it is today.) The past turns out to be oddly reassuring, because a pattern emerges. Each time we’re faced with bewildering new thinking tools, we panic—then quickly set about deducing how they can be used to help us work, meditate, and create.

History also shows that we generally improve and refine our tools to make them better. Books, for example, weren’t always as well designed as they are now. In fact, the earliest ones were, by modern standards, practically unusable—often devoid of the navigational aids we now take for granted, such as indexes, paragraph breaks, or page numbers. It took decades—centuries, even—for the book to be redesigned into a more flexible cognitive tool, as suitable for quick reference as it is for deep reading. This is the same path we’ll need to tread with our digital tools. It’s why we need to understand not just the new abilities our tools give us today, but where they’re still deficient and how they ought to improve.

I have one caveat to offer. If you were hoping to read about the neuroscience of our brains and how technology is “rewiring” them, this volume will disappoint you.

This goes against the grain of modern discourse, I realize. In recent years, people interested in how we think have become obsessed with our brain chemistry. We’ve marveled at the ability of brain scanning—picturing our brain’s electrical activity or blood flow—to provide new clues as to what parts of the brain are linked to our behaviors. Some people panic that our brains are being deformed on a physiological level by today’s technology: spend too much time flipping between windows and skimming text instead of reading a book, or interrupting your conversations to read text messages, and pretty soon you won’t be able to concentrate on anything—and if you can’t concentrate on it, you can’t understand it either. In his book The Shallows, Nicholas Carr eloquently raised this alarm, arguing that the quality of our thought, as a species, rose in tandem with the ascendance of slow-moving, linear print and began declining with the arrival of the zingy, flighty Internet. “I’m not thinking the way I used to think,” he worried.

I’m certain that many of these fears are warranted. It has always been difficult for us to maintain mental habits of concentration and deep thought; that’s precisely why societies have engineered massive social institutions (everything from universities to book clubs and temples of worship) to encourage us to keep it up. It’s part of why only a relatively small subset of people become regular, immersive readers, and part of why an even smaller subset go on to higher education. Today’s multitasking tools really do make it harder than before to stay focused during long acts of reading and contemplation. They require a high level of “mindfulness”—paying attention to your own attention. While I don’t dwell on the perils of distraction in this book, the importance of being mindful resonates throughout these pages. One of the great challenges of today’s digital thinking tools is knowing when not to use them, when to rely on the powers of older and slower technologies, like paper and books.

That said, today’s confident talk by pundits and journalists about our “rewired” brains has one big problem: it is very premature. Serious neuroscientists agree that we don’t really know how our brains are wired to begin with. Brain chemistry is particularly mysterious when it comes to complex thought, like memory, creativity, and insight. “There will eventually be neuroscientific explanations for much of what we do; but those explanations will turn out to be incredibly complicated,” as the neuroscientist Gary Marcus pointed out when critiquing the popular fascination with brain scanning. “For now, our ability to understand how all those parts relate is quite limited, sort of like trying to understand the political dynamics of Ohio from an airplane window above Cleveland.” I’m not dismissing brain scanning; indeed, I’m confident it’ll be crucial in unlocking these mysteries in the decades to come. But right now the field is so new that it is rash to draw conclusions, either apocalyptic or utopian, about how the Internet is changing our brains. Even Carr, the most diligent explorer in this area, cited only a single brain-scanning study that specifically probed how people’s brains respond to using the Web, and those results were ambiguous.

The truth is that many healthy daily activities, if you scanned the brains of people participating in them, might appear outright dangerous to cognition. Over recent years, professor of psychiatry James Swain and teams of Yale and University of Michigan scientists scanned the brains of new mothers and fathers as they listened to recordings of their babies’ cries. They found brain circuit activity similar to that in people suffering from obsessive-compulsive disorder. Now, these parents did not actually have OCD. They were just being temporarily vigilant about their newborns. But since the experiments appeared to show the brains of new parents being altered at a neural level, you could write a pretty scary headline if you wanted: BECOMING A PARENT ERODES YOUR BRAIN FUNCTION! In reality, as Swain tells me, it’s much more benign. Being extra fretful and cautious around a newborn is a good thing for most parents: Babies are fragile. It’s worth the tradeoff. Similarly, living in cities—with their cramped dwellings and pounding noise—stresses us out on a straightforwardly physiological level and floods our system with cortisol, as I discovered while researching stress in New York City several years ago. But the very urban density that frazzles us mentally also makes us 50 percent more productive, and more creative, too, as Edward Glaeser argues in Triumph of the City, because of all those connections between people. This is “the city’s edge in producing ideas.” The upside of creativity is tied to the downside of living in a sardine tin, or, as Glaeser puts it, “Density has costs as well as benefits.” Our digital environments likely offer a similar push and pull. We tolerate their cognitive hassles and distractions for the enormous upside of being connected, in new ways, to other people.

I want to examine how technology changes our mental habits, but for now, we’ll be on firmer ground if we stick to what’s observably happening in the world around us: our cognitive behavior, the quality of our cultural production, and the social science that tries to measure what we do in everyday life. In any case, I won’t be talking about how your brain is being “rewired.” Almost everything rewires it, including this book.

The brain you had before you read this paragraph? You don’t get that brain back. I’m hoping the trade-off is worth it.

The rise of advanced chess didn’t end the debate about man versus machine, of course. In fact, the centaur phenomenon only complicated things further for the chess world—raising questions about how reliant players were on computers and how their presence affected the game itself. Some worried that if humans got too used to consulting machines, they wouldn’t be able to play without them. Indeed, in June 2011, chess master Christoph Natsidis was caught illicitly using a mobile phone during a regular human-to-human match. During tense moments, he kept vanishing for long bathroom visits; the referee, suspicious, discovered Natsidis entering moves into a piece of chess software on his smartphone. Chess had entered a phase similar to the doping scandals that have plagued baseball and cycling, except in this case the drug was software and its effect cognitive.

This is a nice metaphor for a fear that can nag at us in our everyday lives, too, as we use machines for thinking more and more. Are we losing some of our humanity? What happens if the Internet goes down: Do our brains collapse, too? Or is the question naive and irrelevant—as quaint as worrying about whether we’re “dumb” because we can’t compute long division without a piece of paper and a pencil?

Certainly, if we’re intellectually lazy or prone to cheating and shortcuts, or if we simply don’t pay much attention to how our tools affect the way we work, then yes—we can become, like Natsidis, overreliant. But the story of computers and chess offers a much more optimistic ending, too. Because it turns out that when chess players were genuinely passionate about learning and being creative in their game, computers didn’t degrade their human abilities. Quite the opposite: the machines helped them internalize the game much more profoundly and advance to new levels of human excellence.

Before computers came along, back when Kasparov was a young boy in the 1970s in the Soviet Union, learning grand-master-level chess was a slow, arduous affair. If you showed promise and you were very lucky, you could find a local grand master to teach you. If you were one of the tiny handful who showed world-class promise, Soviet leaders would fly you to Moscow and give you access to their elite chess library, which contained laboriously transcribed paper records of the world’s top games. Retrieving records was a painstaking affair; you’d contemplate a possible opening, use the catalog to locate games that began with that move, and then the librarians would retrieve records from thin files, pulling them out using long sticks resembling knitting needles. Books of chess games were rare and incomplete. By gaining access to the Soviet elite library, Kasparov and his peers developed an enormous advantage over their global rivals. That library was their cognitive augmentation.

But beginning in the 1980s, computers took over the library’s role and bested it. Young chess enthusiasts could buy CD-ROMs filled with hundreds of thousands of chess games. Chess-playing software could show you how an artificial opponent would respond to any move. This dramatically increased the pace at which young chess players built up intuition. If you were sitting at lunch and had an idea for a bold new opening move, you could instantly find out which historic players had tried it, then war-game it yourself by playing against software. The iterative process of thought experiments—“If I did this, then what would happen?”—sped up exponentially.

Chess itself began to evolve. “Players became more creative and daring,” as Frederic Friedel, the publisher of the first popular chess databases and software, tells me. Before computers, grand masters would stick to lines of attack they’d long studied and honed. Since it took weeks or months for them to research and mentally explore the ramifications of a new move, they stuck with what they knew. But as the next generation of players emerged, Friedel was astonished by their unusual gambits, particularly in their opening moves. Chess players today, Kasparov has written, “are almost as free of dogma as the machines with which they train. Increasingly, a move isn’t good or bad because it looks that way or because it hasn’t been done that way before. It’s simply good if it works and bad if it doesn’t.”

Most remarkably, it is producing players who reach grand master status younger. Before computers, it was extremely rare for teenagers to become grand masters. In 1958, Bobby Fischer stunned the world by achieving that status at fifteen. The feat was so unusual it was over three decades before the record was broken, in 1991. But by then computers had emerged, and in the years since, the record has been broken twenty times, as more and more young players became grand masters. In 2002, the Ukrainian Sergey Karjakin became one at the tender age of twelve.




So yes, when we’re augmenting ourselves, we can be smarter. We’re becoming centaurs. But our digital tools can also leave us smarter even when we’re not actively using them.

Let’s turn to a profound area where our thinking is being augmented: the world of infinite memory.



