The Science of Storytelling

The Science of Storytelling
Will Storr


Humans have been telling stories ever since we came down from the trees. But do we really understand why? And if we did, would we be able to tell them better?

We would be nothing without story. Story moulds who we are, from our character to our cultural identity. Story compels us to act out our dreams and ambitions, and shapes our politics and beliefs. We use story to construct our relationships, to keep order in our law courts and governments, to make sense of the world in our newspapers and social media. Even when we sleep, we dream in story. Storytelling is an essential part of what makes us human.

There have been many attempts to understand what makes a good story – from Joseph Campbell’s well-worn theories about myth and archetype to recent attempts to crack the ‘Bestseller Code’. But few have used a scientific approach. This is curious, for if we are to truly understand the machinations of storytelling, we must first come to understand the ultimate storyteller – the human brain.

In this original and surprising book, Will Storr takes a scalpel to story. Leading us on a journey from the Hebrew scriptures to Mr Men, from Booker prize-winning literature to box set TV, he demonstrates how master storytellers manipulate and compel us using a dazzling display of psychological research and cutting-edge neuroscience. With the help of world leading story-analysts and brain experts, he shows how we can use this science to tell better stories – and reveals the benefits this can have on everything from our creative endeavours and careers to our happiness and wellbeing.























Copyright


William Collins

An imprint of HarperCollinsPublishers

1 London Bridge Street

London SE1 9GF

www.WilliamCollinsBooks.com

This eBook first published in Great Britain by 4th Estate in 2019

Copyright © William Storr 2019

Cover design by Jack Smythe

William Storr asserts the moral right to be identified as the author of this work

A catalogue record for this book is available from the British Library

All rights reserved under International and Pan-American Copyright Conventions. By payment of the required fees, you have been granted the non-exclusive, non-transferable right to access and read the text of this e-book on-screen. No part of this text may be reproduced, transmitted, down-loaded, decompiled, reverse engineered, or stored in or introduced into any information storage and retrieval system, in any form or by any means, whether electronic or mechanical, now known or hereinafter invented, without the express written permission of HarperCollins

Source ISBN: 9780008276935

Ebook Edition © April 2019 ISBN: 9780008276959

Version: 2019-04-08




Dedication


For my firstborn, Parker




Epigraph


‘Ah, but a man’s reach should exceed his grasp,

Or what’s a heaven for?’

Robert Browning (1812–1889)


Contents

COVER

TITLE PAGE

COPYRIGHT

DEDICATION

EPIGRAPH

INTRODUCTION

CHAPTER ONE: CREATING A WORLD

1.0 Where does a story begin?

1.1 Moments of change; the control-seeking brain

1.2 Curiosity

1.3 The model-making brain; how we read; grammar; filmic word order; simplicity; active versus passive language; specific detail; show-not-tell

1.4 World-making in fantasy and science fiction

1.5 The domesticated brain; theory of mind in animism and religion; how theory-of-mind mistakes create drama

1.6 Salience; creating tension with detail

1.7 Neural models; poetry; metaphor

1.8 Cause and effect; literary versus mass-market storytelling

1.9 Change is not enough

CHAPTER TWO: THE FLAWED SELF

2.0 The flawed self; the theory of control

2.1 Personality and plot

2.2 Personality and setting

2.3 Personality and point of view

2.4 Culture and character; Western versus Eastern story

2.5 Anatomy of a flawed self; the ignition point

2.6 Fictional memories; moral delusions; antagonists and moral idealism; antagonists and toxic self-esteem; the hero-maker narrative

2.7 David and Goliath

2.8 How flawed characters create meaning

CHAPTER THREE: THE DRAMATIC QUESTION

3.0 Confabulation and the deluded character; the dramatic question

3.1 Multiple selves; the three-dimensional character

3.2 The two levels of story; how subconscious character struggle creates plot

3.3 Modernist stories

3.4 Wanting and needing

3.5 Dialogue

3.6 The roots of the dramatic question; social emotions; heroes and villains; moral outrage

3.7 Status play

3.8 King Lear; humiliation

3.9 Stories as tribal propaganda

3.10 Antiheroes; empathy

3.11 Origin damage

CHAPTER FOUR: PLOTS, ENDINGS AND MEANING

4.0 Goal directedness; video games; personal projects; eudaemonia; plots

4.1 Plot as recipe versus plot as symphony of change

4.2 The final battle

4.3 Endings; control; the God moment

4.4 Story as a simulacrum of consciousness; transportation

4.5 The power of story

4.6 The lesson of story

4.7 The consolation of story

APPENDIX: THE SACRED FLAW APPROACH

A NOTE ON THE TEXT

ACKNOWLEDGMENTS

NOTES AND SOURCES

INDEX

ABOUT THE AUTHOR

ALSO BY WILL STORR

ABOUT THE PUBLISHER




INTRODUCTION


We know how this ends. You’re going to die and so will everyone you love. And then there will be heat death. All the change in the universe will cease, the stars will die, and there’ll be nothing left of anything but infinite, dead, freezing void. Human life, in all its noise and hubris, will be rendered meaningless for eternity.

But that’s not how we live our lives. Humans might be in unique possession of the knowledge that our existence is essentially meaningless, but we carry on as if in ignorance of it. We beetle away happily, into our minutes, hours and days, with the fact of the void hovering over us. To look directly into it, and respond with an entirely rational descent into despair, is to be diagnosed with a mental-health condition, categorised as somehow faulty.

The cure for the horror is story. Our brains distract us from this terrible truth by filling our lives with hopeful goals and encouraging us to strive for them. What we want, and the ups and downs of our struggle to get it, is the story of us all. It gives our existence the illusion of meaning and turns our gaze from the dread. There’s simply no way to understand the human world without stories. They fill our newspapers, our law courts, our sporting arenas, our government debating chambers, our school playgrounds, our computer games, the lyrics to our songs, our private thoughts and public conversations and our waking and sleeping dreams. Stories are everywhere. Stories are us.

It’s story that makes us human. Recent research suggests language evolved principally to swap ‘social information’ back when we were living in Stone Age tribes. In other words, we’d gossip. We’d tell tales about the moral rights and wrongs of other people, punish the bad behaviour, reward the good, and thereby keep everyone cooperating and the tribe in check. Stories about people being heroic or villainous, and the emotions of joy and outrage they triggered, were crucial to human survival. We’re wired to enjoy them.

Some researchers believe grandparents came to perform a vital role in such tribes: elders told different kinds of stories – about ancestor heroes, exciting quests and spirits and magic – that helped children to navigate their physical, spiritual and moral worlds. It’s from these stories that complex human culture emerged. When we started farming and rearing livestock, and our tribes settled down and slowly merged into states, these grandparental campfire tales morphed into great religions that had the power to hold large numbers of humans together. Still, today, modern nations are principally defined by the stories we tell about our collective selves: our victories and defeats; our heroes and foes; our distinctive values and ways of being, all of which are encoded in the tales we tell and enjoy.

We experience our day-to-day lives in story mode. The brain creates a world for us to live in and populates it with allies and villains. It turns the chaos and bleakness of reality into a simple, hopeful tale, and at the centre it places its star – wonderful, precious me – who it sets on a series of goals that become the plots of our lives. Story is what the brain does. It is a ‘story processor’, writes the psychologist Professor Jonathan Haidt, ‘not a logic processor’. Story emerges from human minds as naturally as breath emerges from between human lips. You don’t have to be a genius to master it. You’re already doing it. Becoming better at telling stories is simply a matter of peering inwards, at the mind itself, and asking how it does it.

This book has an unusual genesis in that it’s based on a storytelling course that is, in turn, based on research I’ve carried out for various books. My interest in the science of storytelling began about a decade ago when I was working on my second book, The Heretics, which was an investigation into the psychology of belief. I wanted to find out how intelligent people end up believing crazy things. The answer I found was that, if we’re psychologically healthy, our brain makes us feel as if we’re the moral heroes at the centre of the unfolding plots of our lives. Any ‘facts’ it comes across tend to be subordinate to that story. If these ‘facts’ flatter our heroic sense of ourselves, we’re likely to credulously accept them, no matter how smart we think we are. If they don’t, our minds will tend to find some crafty way of rejecting them. The Heretics was my introduction to the idea of the brain as a storyteller. It not only changed the way I saw myself, it changed the way I saw the world.

It also changed the way I thought about my writing. As I was researching The Heretics, I also happened to be working on my first novel. Having struggled with fiction for years I’d finally buckled and bought a selection of traditional ‘how-to’ guides. Reading through them, I noticed something odd. Some of the things the story theorists were saying about narrative were strikingly similar to what the psychologists and neuroscientists I’d been interviewing had been telling me about brain and mind. The storytellers and the scientists had started off in completely different places and had ended up discovering the same things.

As I continued my research, for subsequent books, I continued making these connections. I started to wonder if it might be possible to join the two fields up and thereby improve my own storytelling. That ultimately led to my starting a science-based course for writers which turned out to be unexpectedly successful. Being faced regularly with roomfuls of extremely smart authors, journalists and screenwriters pushed me to deepen my investigations. Soon, I realised I had about enough stuff to fill a short book.

My hope is that what follows will be of interest to anyone curious about the science of the human condition, even if they have little practical interest in storytelling. But it’s also for the storytellers. The challenge any of us faces is that of grabbing and keeping the attention of other people’s brains. I’m convinced we can all become better at what we do by finding out a bit about how they work.

This is an approach that stands in contrast to more traditional attempts at decoding story. These typically involve scholars comparing successful stories or traditional myths from around the world and working out what they have in common. From such techniques come predefined plots that put narrative events in a sequence, like a recipe. The most influential of these is undoubtedly Joseph Campbell’s ‘Monomyth’, which, in its full form, has seventeen parts that track the phases of a hero’s journey from their initial ‘call to adventure’ onwards.

Such plot structures have been hugely successful. They’ve drawn crowds of millions and dollars by the billions. They’ve led to an industrial revolution in yarn-spinning that’s especially evident in cinema and long-form television. Some examples, such as the Campbell-inspired Star Wars: A New Hope, are wonderful. But too many more are Mars Bar stories, delicious and moreish but ultimately cold, corporate and cooked up by committee.

For me, the problem with the traditional approach is that it’s led to a preoccupation with structure. It’s easy to see why this has happened. Often the search has been for the One True Story – the ultimate, perfect plot structure by which every tale can be judged. And how are you going to describe that if not by dissecting it into its various movements?

I suspect it’s this emphasis on structure that’s responsible for the clinical feel from which many modern stories suffer. I believe the focus on plot should be shifted onto character. It’s people, not events, that we’re naturally interested in. It’s the plight of specific, flawed and fascinating individuals that makes us cheer, weep and ram our heads into the sofa cushion. The surface events of the plot are crucial, of course, and structure ought to be present, functional and disciplined. But it’s only there to support its cast.

While there are general structural principles, and a clutch of basic story shapes which are useful to understand, trying to dictate obligatory dos and don’ts that go beyond these extremely broad outlines is probably a mistake. A journey into the science of storytelling reveals that there are many things that attract and hold the attention of brains. Storytellers engage a number of neural processes that evolved for a variety of reasons and are waiting to be played like instruments in an orchestra: moral outrage, unexpected change, status play, specificity, curiosity, and so on. By understanding them, we can more easily create stories that are gripping, profound, emotional and original.

This, I hope, is an approach that will prove more creatively freeing. One benefit of understanding the science of storytelling is that it illuminates the ‘whys’ behind the ‘rules’ we’re commonly given. Such knowledge should be empowering. Knowing why the rules are the rules means we know how to break them intelligently and successfully.

But none of this is to say we should disregard what story theorists such as Campbell have discovered. On the contrary. Many popular storytelling books contain brilliant insights about narrative and human nature that science has only recently caught up with. I quote a number of their authors in these pages. I’m not even arguing that we should ignore their valuable plot designs – they can easily be used to complement this book. It’s really just a question of emphasis. I believe that compelling and unique plots are more likely to emerge from character than from a bullet-pointed list. And the best way to create characters that are rich and true and full of narrative surprise is to find out how characters operate in real life – and that means turning to science.

I’ve tried to write the storytelling book I wish I’d had, back when I was working on my novel. I’ve tried to balance The Science of Storytelling in such a way that it’s of practical use without killing the creative spirit by issuing lists of ‘You Musts’. I agree with the novelist and teacher of creative writing John Gardner, who argues that ‘most supposed aesthetic absolutes prove relative under pressure’. If you’re embarking on a storytelling project, I’d suggest you view what follows not as a series of obligations, but as weapons you can choose if and how to deploy. I’ve also outlined a practice that’s proved successful in my classes over the years. The ‘Sacred Flaw Approach’ is a character-first process, an attempt to create a story that mimics the various ways a brain creates a life, and which therefore feels true and fresh, and comes pre-loaded with potential drama.

This book is divided into four chapters, each of which explores a different layer of storytelling. To begin, we’ll examine how storytellers and brains create the vivid worlds they exist within. Next, we’ll encounter the flawed protagonist at the centre of that world. Then we’ll dive into that person’s subconscious, revealing the hidden struggles and wills that make human life so strange and difficult, and the stories we tell about it so profound, compelling, unexpected and emotional. Finally, we’ll be looking at the meaning and purpose of story and taking a fresh look at plots and endings.

What follows is an attempt to make sense of some of what generations of brilliant story theorists have discovered in the face of what equally brilliant women and men in the sciences have come to know. I am infinitely indebted to them all.

Will Storr

September, 2018




CHAPTER ONE:

CREATING A WORLD











1.0


Where does a story begin? Well, where does anything begin? At the beginning, of course. Alright then: Charles Foster Kane was born in Little Salem, Colorado, USA, in 1862. His mother was Mary Kane, his father was Thomas Kane. Mary Kane ran a boarding house ….

It’s not working. A birth may be the beginning of a life and, if the brain was a data processor, that’s surely where our tale would start. But raw biographical data have little meaning to the storytelling brain. What it desires – what it insists upon, in exchange for the rare gift of its attention – is something else.




1.1


Many stories begin with a moment of unexpected change. And that’s how they continue too. Whether it’s a sixty-word tabloid piece about a TV star’s tiara falling off or a 350,000-word epic such as Anna Karenina, every story you’ll ever hear amounts to ‘something changed’. Change is endlessly fascinating to brains. ‘Almost all perception is based on the detection of change,’ says the neuroscientist Professor Sophie Scott. ‘Our perceptual systems basically don’t work unless there are changes to detect.’ In a stable environment, the brain is relatively calm. But when it detects change, that event is immediately registered as a surge of neural activity.

It’s from such neural activity that your experience of life emerges. Everything you’ve ever seen and thought; everyone you’ve loved and hated; every secret you’ve kept, every dream you’ve pursued, every sunset, every dawn, every pain, bliss, taste and longing – it’s all a creative product of storms of information that loop and flow around your brain’s distant territories. That 1.2-kg lump of pink computational jelly you keep between your ears might fit comfortably in two cupped hands but, taken on its own scale, it’s vast beyond comprehension. You have 86 billion brain cells or ‘neurons’ and every one of them is as complex as a city. Signals flow between them at speeds of up to 120 metres per second. They travel along 150,000 to 180,000 km of synaptic wiring, enough to wrap around the planet four times.

But what’s all this neural power for? Evolutionary theory tells us our purpose is to survive and reproduce. These are complex aims, not least reproduction, which, for humans, means manipulating what potential mates think of us. Convincing a member of the opposite sex that we’re a desirable mate is a challenge that requires a deep understanding of social concepts such as attraction, status, reputation and rituals of courting. Ultimately, then, we could say the mission of the brain is this: control. Brains have to perceive the physical environment and the people within it in order to control them. It’s by learning how to control the world that they get what they want.

Control is why brains are on constant alert for the unexpected. Unexpected change is a portal through which danger arrives to swipe at our throats. Paradoxically, however, change is also an opportunity. It’s the crack in the universe through which the future arrives. Change is hope. Change is promise. It’s our winding path to a more successful tomorrow. When unexpected change strikes we want to know, what does it mean? Is this change for the good or the bad? Unexpected change makes us curious, and curious is how we should feel in the opening movements of an effective story.

Now think of your face, not as a face, but as a machine that’s been formed by millions of years of evolution for the detection of change. There’s barely a space on it that isn’t somehow dedicated to the job. You’re walking down the street, thinking about nothing in particular, and there’s unexpected change – there’s a bang; someone calls your name. You stop. Your internal monologue ceases. Your powers of attention switch on. You turn that amazing change-detecting machine in its direction to answer the question, ‘What’s happening?’

This is what storytellers do. They create moments of unexpected change that seize the attention of their protagonists and, by extension, their readers and viewers. Those who’ve tried to unravel the secrets of story have long known about the significance of change. Aristotle argued that ‘peripeteia’, a dramatic turning point, is one of the most powerful moments in drama, whilst the story theorist and celebrated commissioner of screen drama John Yorke has written that ‘the image every TV director in fact or fiction always looks for is the close-up of the human face as it registers change.’

These changeful moments are so important, they’re often packed into a story’s first sentences:

That Spot! He hasn’t eaten his supper. Where can he be?

(Eric Hill, Where’s Spot?)

Where’s Papa going with that ax?

(E. B. White, Charlotte’s Web)

When I wake up, the other side of the bed is cold.

(Suzanne Collins, The Hunger Games)

These openers create curiosity by describing specific moments of change. But they also hint darkly at troubling change to come. Could Spot be under a bus? Where is that man going with that axe? The threat of change is also a highly effective technique for arousing curiosity. The director Alfred Hitchcock, who was a master at alarming brains by threatening that unexpected change was looming, went as far as to say, ‘There’s no terror in the bang, only in the anticipation of it.’

But threatening change doesn’t have to be as overt as a psycho’s knife behind a shower curtain.

Mr and Mrs Dursley, of number four Privet Drive, were proud to say that they were perfectly normal, thank you very much.

(J. K. Rowling, Harry Potter and the Philosopher’s Stone)

Rowling’s line is wonderfully pregnant with the threat of change. Experienced readers know something is about to pop the rather self-satisfied world of the Dursleys. This opener uses the same technique Jane Austen employs in Emma, which famously begins:

Emma Woodhouse, handsome, clever and rich, with a comfortable home and a happy disposition, seemed to unite some of the best blessings of existence; and had lived nearly twenty-one years in the world with very little to distress or vex her.

As Austen’s line suggests, using moments of change or the threat of change in opening sentences isn’t some hack trick for children’s authors. Here’s the start of Hanif Kureishi’s literary novel Intimacy:

It is the saddest night, for I am leaving and not coming back.

Here’s how Donna Tartt’s The Secret History begins:

The snow in the mountains was melting and Bunny had been dead for several weeks before we came to understand the gravity of our situation.

Here’s Albert Camus starting The Outsider:

Mother died today. Or yesterday. I don’t know.

And here’s Jonathan Franzen, opening his literary masterpiece The Corrections in precisely the same way that Eric Hill opened Where’s Spot?

The madness of an autumn prairie cold front coming through. You could feel it: something terrible was going to happen.

Neither is it limited to modern story:

Rage! Sing, Goddess, [of] Achilles’ rage, black and murderous, that cost the Greeks incalculable pain, pitched countless souls of heroes into Hades’ dark, and left their bodies to rot as feasts for dogs and birds, as Zeus’ will was done. Begin with the clash between Agamemnon, the Greek warlord, and godlike Achilles.

(Homer, The Iliad)

Or fiction:

A spectre is haunting Europe – the spectre of communism.

(Karl Marx, The Communist Manifesto)

And even when a story starts without much apparent change …

All happy families are alike; each unhappy family is unhappy in its own way.

(Leo Tolstoy, Anna Karenina – first sentence.)

… if it’s going to earn the attention of masses of brains, you can bet change is on the way:

All was confusion in the Oblonskys’ house. The wife had found out that the husband was having an affair with their former French governess and had announced to the husband that she could not live in the same house with him.

(Leo Tolstoy, Anna Karenina – sentences two and three.)

In life, most of the unexpected changes we react to will turn out to be of no importance: the bang was just a lorry door; it wasn’t your name, it was a mother calling for her child. So you slip back into reverie and the world, once more, becomes a smear of motion and noise. But, every now and then, that change matters. It forces us to act. This is when story begins.




1.2


Unexpected change isn’t the only way to arouse curiosity. As part of their mission to control the world, brains need to properly understand it. This makes humans insatiably inquisitive: between the ages of two and five, it’s thought that we ask around 40,000 ‘explanatory’ questions of our caregivers. Humans have an extraordinary thirst for knowing how things work and why. Storytellers excite these instincts by creating worlds but stopping short of telling readers everything about them.

The secrets of human curiosity have been explored by psychologists, perhaps most famously by Professor George Loewenstein. He writes of a test in which participants were confronted by a grid of squares on a computer screen. They were asked to click five of them. Some participants found that, with each click, another picture of an animal appeared. But a second group saw small component parts of a single animal. With each square they clicked, another part of a greater picture was revealed. This second group were much more likely to keep on clicking squares after the required five, and then keep going until enough of them had been turned that the mystery of the animal’s identity had been solved. Brains, concluded the researchers, seem to become spontaneously curious when presented with an ‘information set’ they realise is incomplete. ‘There is a natural inclination to resolve information gaps,’ wrote Loewenstein, ‘even for questions of no importance.’

In another study, participants were shown three photographs of parts of someone’s body: hands, feet and torso. A second group saw two parts, a third saw one, while another group still saw none. Researchers found that the more photos of the person’s body parts the participants saw, the greater was their desire to see a complete picture of the person. There is, concluded Loewenstein, a ‘positive relationship between curiosity and knowledge’. The more context we learn about a mystery, the more anxious we become to solve it. As the stories reveal more of themselves, we increasingly want to know, Where is Spot? Who is ‘Bunny’ and how did he die and how is the narrator implicated in his death?

Curiosity is shaped like a lowercase n. It’s at its weakest when people have no idea about the answer to a question and also when entirely convinced they do. The place of maximum curiosity – the zone in which storytellers play – is when people think they have some idea but aren’t quite sure. Brain scans reveal that curiosity begins as a little kick in the brain’s reward system: we crave to know the answer, or what happens next in the story, in the way we might crave drugs or sex or chocolate. This pleasantly unpleasant state, which causes us to squirm with tantalised discomfort at the delicious promise of an answer, is undeniably powerful. During one experiment, psychologists noted archly that their participants’ ‘compulsion to know the answer was so great that they were willing to pay for the information, even though curiosity could have been sated for free after the session.’

In his paper ‘The Psychology of Curiosity’, Loewenstein breaks down four ways of involuntarily inducing curiosity in humans: (1) the ‘posing of a question or presentation of a puzzle’; (2) ‘exposure to a sequence of events with an anticipated but unknown resolution’; (3) ‘the violation of expectations that triggers a search for an explanation’; (4) knowledge of ‘possession of information by someone else’.

Storytellers have long known these principles, having discovered them by practice and instinct. Information gaps create gnawing levels of curiosity in the readers of Agatha Christie and the viewers of Prime Suspect, stories in which they’re (1) posed a puzzle; (2) exposed to a sequence of events with an anticipated but unknown resolution; (3) surprised by red herrings, and (4) tantalised by the fact that someone knows whodunnit, and how, but we don’t. Without realising it, deep in the detail of his dry, academic paper, Loewenstein has written a perfect description of police-procedural drama.

It’s not just detective stories that rely on information gaps. John Patrick Shanley’s Pulitzer Prize-winning stage play Doubt toyed brilliantly with its audience’s desire to know whether its protagonist, the avuncular and rebellious Catholic priest Father Flynn, was, in fact, a paedophile. The long-form journalist Malcolm Gladwell is a master at building curiosity about Loewensteinian ‘questions of no importance’ and manages the feat nowhere more effectively than in his story ‘The Ketchup Conundrum’, in which he becomes a detective trying to solve the mystery of why it’s so hard to make a sauce to rival Heinz.

Some of our most successful mass-market storytellers also rely on information gaps. J. J. Abrams is co-creator of the long-form television series Lost, which followed characters who mysteriously manage to survive an airline crash on a South Pacific island. There they discover mysterious polar bears; a mysterious band of ancient beings known as ‘the Others’; a mysterious French woman; a mysterious ‘smoke monster’ and a mysterious metal door in the ground. Fifteen million viewers in the US alone were drawn to watch that first series, in which a world was created then filled until psychedelic with information gaps. Abrams has described his controlling theory of storytelling as consisting of the opening of ‘mystery boxes’. Mystery, he’s said, ‘is the catalyst for imagination … what are stories but mystery boxes?’




1.3


In order to tell the story of your life, your brain needs to conjure up a world for you to live inside, with all its colours and movements and objects and sounds. Just as characters in fiction exist in a reality that’s been actively created, so do we. But that’s not how it feels to be a living, conscious human. It feels as if we’re looking out of our skulls, observing reality directly and without impediment. But this is not the case. The world we experience as ‘out there’ is actually a reconstruction of reality that is built inside our heads. It’s an act of creation by the storytelling brain.

This is how it works. You walk into a room. Your brain predicts what the scene should look and sound and feel like, then it generates a hallucination based on these predictions. It’s this hallucination that you experience as the world around you. It’s this hallucination you exist at the centre of, every minute of every day. You’ll never experience actual reality because you have no direct access to it. ‘Consider that whole beautiful world around you, with all its colours and sounds and smells and textures,’ writes the neuroscientist and fiction writer Professor David Eagleman. ‘Your brain is not directly experiencing any of that. Instead, your brain is locked in a vault of silence and darkness inside your skull.’

This hallucinated reconstruction of reality is sometimes referred to as the brain’s ‘model’ of the world. Of course, this model of what’s actually out there needs to be somewhat accurate, otherwise we’d be walking into walls and ramming forks into our necks. For accuracy, we have our senses. Our senses seem incredibly powerful: our eyes are crystalline windows through which we observe the world in all its colour and detail; our ears are open tubes into which the noises of life freely tumble. But this is not the case. They actually deliver only limited and partial information to the brain.

Take the eye, our dominant sense organ. If you hold out your arm and look at your thumbnail, that’s all you can see in high definition and full colour at once. Colour ends 20 to 30 degrees outside that core and the rest of your sight is fuzzy. You have two lemon-sized blind spots and blink fifteen to twenty times a minute, which blinds you for fully 10 per cent of your waking life. You don’t even see in three dimensions.

How is it, then, that we experience vision as being so perfect? Part of the answer lies in the brain’s obsession with change. That large fuzzy area of your vision is sensitive to changes in pattern and texture as well as movement. As soon as it detects unexpected change, your eye sends its tiny high-definition core – which is a 1.5-millimetre depression in the centre of your retina – to inspect it. This movement – known as a ‘saccade’ – is the fastest in the human body. We make four to five saccades every second, over 250,000 in a single day. Modern filmmakers mimic saccadic behaviour when editing. Psychologists examining the so-called ‘Hollywood style’ find the camera makes ‘match action cuts’ to new salient details just as a saccade might, and is drawn to similar events, such as bodily movement.

The job of all the senses is to pick up clues from the outside world in various forms: lightwaves, changes in air pressure, chemical signals. That information is translated into millions of tiny electrical pulses. Your brain reads these electrical pulses, in effect, like a computer reads code. It uses that code to actively construct your reality, fooling you into believing this controlled hallucination is real. It then uses its senses as fact-checkers, rapidly tweaking what it’s showing you whenever it detects something unexpected.

It’s because of this process that we sometimes ‘see’ things that aren’t actually there. Say it’s dusk and you think you’ve seen a strange, stooping man with a top hat and a cane loitering by a gate, but you soon realise it’s just a tree stump and a bramble. You say to your companion, ‘I thought I saw a weird guy over there.’ You did see that weird guy over there. Your brain thought he was there so it put him there. Then when you approached and new, more accurate, information was detected, it rapidly redrew the scene, and your hallucination was updated.

Similarly, we often don’t see things that are actually there. A series of iconic experiments had participants watch a video of people throwing a ball around. They had to count the number of times the ball was passed. Half didn’t spot a man in a gorilla suit walk directly into the middle of the screen, bang his chest three times, and leave after fully nine seconds. Other tests have confirmed we can also be ‘blind’ to auditory information (the sound of someone saying ‘I am a gorilla’ for nineteen seconds) as well as touch and smell information. There’s a surprising limit to how much our brains can actually process. Pass that limit and the object is simply edited out. It’s not included in our hallucinated reality. It literally becomes invisible to us. These findings have dire potential consequences. In a test of a simulated vehicle stop, 58 per cent of police trainees and 33 per cent of experienced officers ‘failed to notice a gun positioned in full view on the passenger dashboard’.

Things naturally become worse when our fact-checking senses become damaged. When people’s eyesight develops sudden flaws, their hallucinatory model of reality can begin to flicker and fail. They sometimes see clowns, circus animals and cartoon characters in the areas that have gone dark. Religious people have apparent visitations. These individuals are not ‘mad’ and neither are they rare. The condition affects millions. Dr Todd Feinberg writes of a patient, Lizzy, who suffered strokes in her occipital lobes. As can happen in such cases, her brain didn’t immediately process the fact she’d gone ‘suddenly and totally’ blind, so it continued projecting its hallucinated model of the world. Visiting her hospital bed, Feinberg enquired if she was having trouble with her vision in any way. ‘No,’ she said. When he asked her to take a look around and tell him what she saw, she moved her head accordingly.

‘It’s good to see friends and family, you know,’ she said. ‘It makes me feel like I’m in good hands.’

But there was nobody else there.

‘Tell me their names,’ said Feinberg.

‘I don’t know everybody. They’re my brother’s friends.’

‘Look at me. What am I wearing?’

‘A casual outfit. You know, a jacket and pants. Mostly navy blue and maroon.’

Feinberg was in his hospital whites. Lizzy continued their chat smiling and acting ‘as if she had not a care in the world’.

These relatively recent findings by neuroscientists demand a spooky question. If our senses are so limited, how do we know what’s actually happening outside the dark vault of our skulls? Disturbingly, we don’t know for sure. Like an old television that can only pick up black and white, our biological technology simply can’t process most of what’s actually going on in the great oceans of electromagnetic radiation that surround us. Human eyes are able to read less than one ten-trillionth of the light spectrum. ‘Evolution shaped us with perceptions that allow us to survive,’ the cognitive scientist Professor Donald Hoffman has said. ‘But part of that involves hiding from us the stuff we don’t need to know. And that’s pretty much all of reality, whatever reality might be.’

We do know that actual reality is radically different from the model of it that we experience in our heads. For instance, there’s no sound out there. If a tree falls in a forest and there’s no one around to hear it, it creates changes in air pressure and vibrations in the ground. The crash is an effect that happens in the brain. When you stub your toe and feel pain throbbing out of it, that, too, is an illusion. That pain is not in your toe, but in your brain.

There’s no colour out there either. Atoms are colourless. All the colours we do ‘see’ are a blend of three cones that sit in the eye: red, green and blue. This makes us Homo sapiens relatively impoverished members of the animal kingdom: some birds have six cones; mantis shrimp have sixteen; bees’ eyes are able to see the electromagnetic structure of the sky. The colourful worlds they experience beggar human imagination. Even the colours we do ‘see’ are mediated by culture. Russians are raised to see two types of blue and, as a result, see eight-striped rainbows. Colour is a lie. It’s set-dressing, worked up by the brain. One theory has it that we began painting colours onto objects millions of years ago in order to identify ripe fruit. Colour helps us interact with the external world and thereby better control it.

The only things we’ll ever really know are those electrical pulses that are sent up by our senses. Our storytelling brain uses those pulses to create the colourful set in which to play out our lives. It populates that set with a cast of actors with goals and personalities, and finds plots for us to follow. Even sleep is no barrier to the brain’s story-making processes. Dreams feel real because they’re made of the same hallucinated neural models we live inside when awake. The sights are the same, the smells are the same, objects feel the same to the touch. Craziness happens partly because the fact-checking senses are offline, and partly because the brain has to make sense of chaotic bursts of neural activity that are the result of our state of temporary paralysis. It explains this confusion as it explains everything: by roughing together a model of the world and magicking it into a cause-and-effect story.

One common dream has us falling off a building or tumbling down steps, a brain story that’s typically triggered to explain a ‘myoclonic jerk’, a sudden, jarring contraction of the muscles. Indeed, just like the stories we tell each other for fun, dream narratives often centre on dramatic, unexpected change. Researchers find the majority of dreams feature at least one event of threatening and unexpected change, with most of us experiencing up to five such events every night. Wherever studies have been done, from East to West, from city to tribe, dream plots reflect this. ‘The most common is being chased or attacked,’ writes story psychologist Professor Jonathan Gottschall. ‘Other universal themes include falling from a great height, drowning, being lost or trapped, being naked in public, getting injured, getting sick or dying, and being caught in a natural or manmade disaster.’

So now we’ve discovered how reading works. Brains take information from the outside world – in whatever form they can – and turn it into models. When our eyes scan over letters in a book, the information they contain is converted into electrical pulses. The brain reads these electrical pulses and builds a model of whatever information those letters provided. So if the words on the page describe a barn door hanging on one hinge, the reader’s brain will model a barn door hanging on one hinge. They’ll ‘see’ it in their heads. Likewise, if the words describe a ten-foot wizard with his knees on back to front, the brain will model a ten-foot wizard with his knees on back to front. Our brain rebuilds the model world that was originally imagined by the author of the story. This is the reality of Leo Tolstoy’s brilliant assertion that ‘a real work of art destroys, in the consciousness of the receiver, the separation between himself and the artist.’

A clever scientific study examining this process seems to have caught people in the act of ‘watching’ the models of stories that their brains were busily building. Participants wore glasses that tracked their saccades. When they heard stories in which lots of events happened above the line of the horizon, their eyes kept making micro-movements upwards, as if they were actively scanning the models their brains were generating of its scenes. When they heard ‘downward’ stories, that’s where their eyes went too.

The revelation that we experience the stories we read by building hallucinated models of them in our heads makes sense of many of the rules of grammar we were taught at school. For the neuroscientist Professor Benjamin Bergen, grammar acts like a film director, telling the brain what to model and when. He writes that grammar ‘appears to modulate what part of an evoked simulation someone is invited to focus on, the grain of detail with which the simulation is performed, or what perspective to perform that simulation from’.

According to Bergen, we start modelling words as soon as we start reading them. We don’t wait until we get to the end of the sentence. This means the order in which writers place their words matters. This is perhaps why transitive construction – Jane gave a kitten to her Dad – is more effective than the ditransitive – Jane gave her Dad a kitten. Picturing Jane, then the kitten, then her Dad mimics the real-world action that we, as readers, should be modelling. It means we’re mentally experiencing the scene in the correct sequence. Because writers are, in effect, generating neural movies in the minds of their readers, they should privilege word order that’s filmic, imagining how their reader’s neural camera will alight upon each component of a sentence.

For the same reason, active sentence construction – Jane kissed her Dad – is more effective than passive – Dad was kissed by Jane. Witnessing this in real life, Jane’s initial movement would draw our attention and then we’d watch the kiss play out. We wouldn’t be dumbly staring at Dad, waiting for something to happen. Active grammar means readers model the scene on the page in the same way that they’d model it if it happened in front of them. It makes for easier and more immersive reading.

A further powerful tool for the model-creating storyteller is the use of specific detail. If writers want their readers to properly model their story-worlds they should take the trouble to describe them as precisely as possible. Precise and specific description makes for precise and specific models. One study concluded that, to make vivid scenes, three specific qualities of an object should be described, with the researcher’s examples including ‘a dark blue carpet’ and ‘an orange striped pencil.’

The findings Bergen describes also suggest the reason writers are continually encouraged to ‘show not tell’. As C. S. Lewis implored a young writer in 1956, ‘instead of telling us a thing was “terrible”, describe it so that we’ll be terrified. Don’t say it was “delightful”; make us say “delightful” when we’ve read the description.’ The abstract information contained in adjectives such as ‘terrible’ and ‘delightful’ is thin gruel for the model-building brain. In order to experience a character’s terror or delight or rage or panic or sorrow, it has to make a model of it. By building its model of the scene, in all its vivid and specific detail, it experiences what’s happening on the page almost as if it’s actually happening. Only that way will the scene truly rouse our emotions.

Mary Shelley may have been a teenager writing more than 170 years before the discovery of our model-making processes, but when she introduces us to Frankenstein’s monster she displays an impressive instinct for its ramifications: filmic word order; specificity and show-not-tell.

It was already one in the morning; the rain pattered dismally against the panes, and my candle was nearly burned out, when, by the glimmer of the half-extinguished light, I saw the dull yellow eye of the creature open; it breathed hard, and a convulsive motion agitated its limbs. How can I describe my emotions at this catastrophe, or how delineate the wretch whom with such infinite care and pains I had endeavoured to form? His limbs were in proportion, and I had selected his features as beautiful. Beautiful! Great God! His yellow skin scarcely covered the work of muscles and arteries beneath; his hair was of a lustrous black, and flowing; his teeth of a pearly whiteness; but these luxuriances only formed a more horrid contrast with his watery eyes, that seemed almost of the same colour as the dun-white sockets in which they were set, his shrivelled complexion and straight black lips.

Immersive model worlds can also be summoned by the evocation of the senses. Touches, tastes, scents and sounds can be recreated in the brains of readers as the neural networks associated with these sensations become activated when they see the right words. All it takes is deployment of specific detail, with the sensory information (‘a cabbagey’) paired to visual information (‘brown sock’). This simple technique is used to magical effect in Patrick Süskind’s novel Perfume. It tells of an orphan with an awesome sense of smell who’s born in a malodorous fish market. He takes us into his world of eighteenth-century Paris by conjuring a kingdom of scent:

the streets stank of manure, the courtyards of urine, the stairwells stank of mouldering wood and rat droppings, the kitchens of spoiled cabbage and mutton fat; the unaired parlours stank of stale dust, the bedrooms of greasy sheets, damp featherbeds and the pungently sweet aroma of chamber-pots. The stench of sulphur rose from the chimneys, the stench of caustic lyes from the tanneries, and from the slaughterhouses came the stench of congealed blood. People stank of sweat and unwashed clothes; from their mouths came the stench of rotting teeth, from their bellies that of onions, and from their bodies, if they were no longer very young, came the stench of rancid cheese and sour milk and tumorous disease … [the heat of day squeezed] its putrefying vapour, a blend of rotting melon and the fetid odour of burned animal horn, out into the nearby alleys.




1.4


The brain’s propensity for automatic model-making is exploited with superb effect by tellers of fantasy and science-fiction stories. Simply naming a planet, ancient war or obscure technical detail seems to trigger the neural process of building it, as if it actually exists. One of the first books I fell in love with as a boy was J.R.R. Tolkien’s The Hobbit. My best friend Oliver and I obsessed over the maps it contained – ‘Mount Gundabad’; ‘Desolation of Smaug’; ‘West lies Mirkwood the Great – there are spiders.’ When his father made photocopies of them for us, these maps became the focus of a summer of blissful play. The places Tolkien sketched out, on those maps, felt as real to us as the sweet shop in Silverdale Road.

In Star Wars, when Han Solo boasts that his ship the Millennium Falcon ‘made the Kessel Run in less than twelve parsecs’ we have the strange experience of knowing it’s an actor doing gibberish whilst simultaneously somehow feeling as if it’s real. The line works because of its absolute specificity and its adherence to what sounds like truth (the ‘Kessel Run’ really could be a race while ‘parsecs’ are a genuine measurement of distance, equivalent to 3.26 light years). As ridiculous as some of this language actually is, rather than taking us out of the storyteller’s fictional hallucination, it manages to give it even more density.

By merest suggestion, the Kessel Run becomes real. We can imagine the dusty planet on which the race begins, hear the whine and blast of the engines, smell the alien piss around the back of the mechanics’ wind-flapping encampments. This is just what happens in Blade Runner’s most famous scene, in which the replicant Roy Batty, on the edge of death, tells Rick Deckard, ‘I’ve seen things you people wouldn’t believe. Attack ships on fire off the shoulder of Orion. I watched C-beams glitter in the dark near the Tannhäuser Gate.’

Those C-beams! That gate! Their wonder lies in the fact that they’re merely suggested. Like monsters in the most frightening horror stories, they feel all the more real for being the creations, not of the writer, but of our own incessant model-making imaginations.




1.5


The hallucinated world our brain creates for us is specialised. It’s honed towards our particular survival needs. Like all animals, our species can only detect the narrow band of reality that’s necessary for us to get by. Dogs live principally in a world of smell, moles in touch and knife-fish in a realm of electricity. The human world is predominantly that of people. Our hyper-social brains are designed to control an environment of other selves.

Humans have an extraordinary gift for reading and understanding the minds of other people. In order to control our environment of humans, we have to be able to predict what they’re going to do. The importance and complexity of human behaviour means we have an insatiable curiosity about it. Storytellers exploit both these mechanisms and this curiosity; the stories they tell are a deep investigation into the ever-fascinating whys of what people do.

We’ve been a social species, whose survival has depended upon human cooperation, for hundreds of thousands of years. But over the last 1,000 generations it’s been argued that these social instincts have been rapidly honed and strengthened. This ‘sharp acceleration’ of selection for social traits, writes developmental psychologist Professor Bruce Hood, has left us with brains that are ‘exquisitely engineered to interact with other brains’.

For earlier humans that roamed hostile environments, aggression and physicality had been critical. But the more cooperative we became, the less useful these traits proved. When we started living in settled communities, they grew especially troublesome. There, it would’ve been the people who were better at getting along with others, rather than the physically dominant, who’d have been more successful.

This success in the community would’ve meant greater reproductive success, which would’ve gradually led to the emergence of a new strain of human. These humans had thinner and weaker bones than their ancestors and greatly reduced muscle mass, their physical strength as much as halving. They also had the kind of brain chemistry and hormones that predisposed them to behaviour specialised for settled communal living. They’d have been less interpersonally aggressive, but more adept at the kind of psychological manipulation necessary for negotiating, trading and diplomacy. They’d become expert at controlling their environment of other human minds.

You might compare it to the difference between a wolf and a dog. A wolf survives by cooperating as well as fighting for dominance and killing prey. A dog does so by manipulating its human owner such that they’d do anything for them. The power my beloved labradoodle Parker has over my own brain is frankly embarrassing. (I’ve dedicated this bloody book to her.) In fact, this might be more than a mere analogy. Researchers such as Hood argue that modern humans, just like dogs, have gone through a process of domestication. Support for the idea comes partly from the fact that, over the last 20,000 years, our brains have shrunk by between ten and fifteen per cent, the same reduction that’s been observed in all the thirty or so other animals that humans have domesticated. Just as with those creatures, our domestication means we’re tamer than our ancestors, better at reading social signals and more dependent on others. But, writes Hood, ‘no other animal has taken domestication to the extent that we have.’ Our brains may have initially evolved to ‘cope with a potentially threatening world of predators, limited food and adverse weather, but we now rely on it to navigate an equally unpredictable social landscape.’

Unpredictable humans. This is the stuff of story.

For modern humans, controlling the world means controlling other people, and that means understanding them. We’re wired to be fascinated by others and get valuable information from their faces. This fascination begins almost immediately. Whereas ape and monkey parents spend almost no time looking at their babies’ faces, we’re helplessly drawn to them. Newborns are attracted to human faces more than to any other object and, one hour from birth, begin imitating them. By two, they’ve learned to control their social worlds by smiling. By the time they’re adults, they’ve become so adept at reading people that they’re making calculations about status and character automatically, in one tenth of a second. The evolution of our strange, extremely other-obsessed brains has brought with it weird side-effects. Human obsession with faces is so fierce we see them almost anywhere: in fire; in clouds; down spooky corridors; in toast.

We sense minds everywhere too. Just as the brain models the outside world it also builds models of minds. This skill, which is an essential weapon in our social armoury, is known as ‘theory of mind’. It enables us to imagine what others are thinking, feeling and plotting, even when they’re not present. We can experience the world from another’s perspective. For the psychologist Professor Nicholas Epley this capacity, which is obviously essential for storytelling, gave us incredible power. ‘Our species has conquered the Earth because of our ability to understand the minds of others,’ he writes, ‘not because of our opposable thumbs or handiness with tools.’ We develop this skill at around the age of four. It’s then that we become story-ready; equipped to understand the logic of narrative.

The human ability to populate our minds with imagined other minds is the start of religion. Shamans in hunter-gatherer tribes would enter trance states and interact with spirits, using these interactions to try to control the world. Religions were also typically animistic: our storytelling brains would project human-like minds into trees, rocks, mountains and animals, imagining they were possessed by gods who were responsible for changeful events, and required controlling with ritual and sacrifice.

Childhood stories reflect our natural tendency for such hyperactive mind-detecting. In fairytales, human-like minds are everywhere: mirrors talk, pigs eat breakfast, frogs turn into princes. Youngsters naturally treat their dolls and teddies as if they’re inhabited by selves. I remember feeling terrible guilt for preferring my pink bear, handmade by my grandmother, to my shop-bought brown bear. I knew they both knew how I felt, and that left me distracted and sad.

We never really grow out of our inherent animism. Which one of us hasn’t kicked a door that’s slammed on our fingers believing, in that disorientating flash of pain, that it attacked us out of spite? Who among us hasn’t told a self-assembly wardrobe to fuck off? Whose storytelling brain doesn’t commit its own literary-style pathetic fallacy, allowing the sun to make them optimistic about the coming day or the brooding clouds pessimistic? Studies indicate that those who attribute a human personality to their cars show less interest in trading them in. Bankers project human moods onto the movements of the markets and place their trades accordingly.

When we’re reading, hearing or watching a story we deploy our theory-of-mind skills by automatically making hallucinatory models of the minds of its characters. Some authors model the minds of their own characters with such force that they hear them talk. Charles Dickens, William Blake and Joseph Conrad all spoke of (#litres_trial_promo) such extraordinary experiences. The novelist and psychologist Professor Charles Fernyhough (#litres_trial_promo) has led research in which 19 per cent of ordinary readers reported hearing the voices of fictional characters even after they’d put their books down. Some reported a kind of literary possession, with the character influencing the tone and nature of their thoughts.

But much as humans excel at such feats of theory of mind, we also tend to dramatically overestimate our abilities. Although there’s an admitted absurdity in claiming to be able to quantify human behaviour with such absolute numerical precision, some research suggests strangers read another’s thoughts and feelings with an accuracy of just 20 per cent. Friends and lovers? A mere 35 per cent. Our errors about what others are thinking are a major cause of human drama. As we move through life, wrongly predicting what people are thinking and how they’ll react when we try to control them, we haplessly trigger feuds and fights and misunderstandings that fire devastating spirals of unexpected change into our social worlds.

Comedy, whether by William Shakespeare or John Cleese and Connie Booth, is often built on such mistakes. But whatever the mode of storytelling, well-imagined characters always have theories about the minds of other characters and – because this is drama – those theories will often be wrong. This wrongness will lead to unexpected consequences and yet more drama. The influential post-war director Alexander Mackendrick writes, ‘I start by asking: What does A think B is thinking about A? It sounds complicated (and it is) but this is the very essence of giving some density to a character and, in turn, a scene.’

The author Richard Yates uses a theory-of-mind mistake to create a pivotal moment of drama in his classic Revolutionary Road. The novel charts the dissolving marriage of Frank and April Wheeler. When they were young, and newly in love, Frank and April dreamed of bohemian lives in Paris. But, when we meet them, middle-aged reality has struck. Frank and April have two children, with a third on the way, and have moved into a cookie-cutter suburb. Frank’s secured a job at his father’s old company and has found himself rather settling into a life of boozy lunches and housewife-at-home ease. But April isn’t happy. She still dreams of Paris. They argue, bitterly. Sex is withheld. Frank sleeps with a girl at work. And then he makes his theory-of-mind mistake.

In order to break the impasse with his wife, Frank decides to confess his infidelity. His theory of April’s mind appears to be that she’ll be thrown into a state of catharsis that will jolt her back into reality. There’ll be tears to mop up, sure, but those tears will just remind the ol’ gal why she loves him.

This is not what happens. When he confesses, April asks, Why? Not why he slept with the girl, but why is he bothering to tell her? She doesn’t care about his fling. This isn’t what Frank was expecting at all. He wants her to care! ‘I know you do,’ April tells him. ‘And I suppose I would, if I loved you; but you see I don’t. I don’t love you and I never really have and I never really figured it out until this week.’




1.6


As the eye darts about, building up its story world for you to live inside, the brain’s choosy about where it tells the eye to look. We’re attracted to change, of course, but also to other salient details. Scientists used to believe attention was drawn simply to objects that stood out, but recent research suggests we’re more likely to attend to that which we find meaningful. Unfortunately, it’s not yet known precisely what ‘meaningful’ means, in this context, but tests that tracked saccades found, for example, that an untidy shelf attracted more attention than a sun-splashed wall. For me, that untidy shelf hints of human change; of a life in detail; of trouble insinuating itself in a place designed for order. It’s no surprise test-brains were drawn to it. It’s story-stuff, whilst the sun is just a shrug.

Storytellers also choose carefully what meaningful details to show and when. In Revolutionary Road, just after Frank makes his changeful theory-of-mind mistake that throws his life in a new and unexpected direction, the author draws our attention to one brilliant detail. It’s an urgent voice on the radio: ‘And listen to this. Now, during the Fall Clearance, you’ll find Robert Hall’s entire stock of men’s walk shorts and sport jeans drastically reduced!’

Both believable and crushing, it serves to intensify our feelings, at exactly the right moment, of the suffocating and dreary housewifey corner that April has found herself backed into. Its timing also implicitly defines and condemns what Frank has become. He used to think he was bohemian – a thinker! – and now he’s just Bargain Shorts Man. This is an advert for him.

The director Steven Spielberg is famous for his use of salient detail to create drama. In Jurassic Park, during a scene that builds to our first sighting of Tyrannosaurus rex, we see two cups of water on a car dashboard, deep rumbles from the ground sending rings over their liquid surface. We cut between the faces of the passengers, each slowly registering change. Then we see the rear-view mirror vibrating with the stomping of the beast. Extra details like this add even more tension by mimicking the way brains process peak moments of stress. When we realise our car is about to crash, say, the brain needs to temporarily increase its ability to control the world. Its processing power surges and we become aware of more features in our environment, which has the effect of making time seem to slow down. In exactly this way, storytellers stretch time, and thereby build suspense, by packing in extra saccadic moments and detail.




1.7


There’s a park bench, in my hometown, that I don’t like to walk past because it’s haunted by a breakup with my first love. I see ghosts on that bench that are invisible to anyone else except, perhaps, her. And I feel them too. Just as human worlds are haunted with minds and faces, they’re haunted with memories. We think of the act of ‘seeing’ as the simple detection of colour, movement and shape. But we see with our pasts.

That hallucinatory neural model of the world we live inside is made up of smaller, individual models – we have neural models of park benches, dinosaurs, ISIS, ice cream, models of everything – and each of those is packed with associations from our own personal histories. We see both the thing itself and all that we associate with it. We feel it too. Everything our attention rests upon triggers a sensation, most of which are minutely subtle and experienced beneath the level of conscious awareness. These feelings flicker and die so rapidly that they precede conscious thought, and thereby influence it. All these feelings reduce to just two impulses: advance and withdraw. As you scan any scene, then, you’re in a storm of feeling; positive and negative sensations from the objects you see fall over you like fine drops of rain. This understanding is the beginning of creating a compelling and original character on the page. A character in fiction, like a character in life, inhabits their own unique hallucinated world in which everything they see and touch comes with its own unique personal meaning.

These worlds of feeling are a result of the way our brains encode the environment. The models we have of everything are stored in the form of neural networks. When our attention rests upon a glass of red wine, say, a large number of neurons in different parts of the brain are simultaneously activated. We don’t have a specific ‘glass of wine’ area that lights up; what we have are responses to ‘liquid’, ‘red’, ‘shiny surface’, ‘transparent surface’, and so on. When enough of these are triggered, the brain understands what’s in front of it and constructs the glass of wine for us to ‘see’.

But these neural activations aren’t limited to mere descriptions of appearance. When we detect the glass of wine, other associations also flash into being: bitter-sweet flavours; vineyards; grapes; French culture; dark marks on white carpets; your road-trip to the Barossa Valley; the last time you got drunk and made a fool of yourself; the first time you got drunk and made a fool of yourself; the breath of the woman who attacked you. These associations have powerful effects on our perception. Research shows that when we drink wine our beliefs about its quality and price change our actual experience of its taste. The way food is described has a similar effect.

It’s just such associative thinking that gives poetry its power. A successful poem plays on our associative networks as a harpist plays on strings. By the meticulous placing of a few simple words, they brush gently against deeply buried memories, emotions, joys and traumas, which are stored in the form of neural networks that light up as we read. In this way, poets ring out rich chords of meaning that resonate so profoundly we struggle to fully explain why they’re moving us so.

Alice Walker’s ‘Burial’ describes the poet bringing her child to the cemetery in Eatonton, Georgia, in which several generations of her family are interred. She describes her grandmother resting

undisturbed

beneath the Georgia sun,

above her the neatstepping hooves

of cattle

and graves that ‘drop open without warning’ and

cover themselves with wild ivy

blackberries. Bittersweet and sage.

No one knows why. No one asks.

When I read ‘Burial’ for the first time, the lines at the end of this stanza made little logical sense to me, and yet I immediately found them beautiful, memorable and sad:

Forgetful of geographic resolutions as birds

the far-flung young fly South to bury

the old dead.

It’s these same associative processes that allow us to think metaphorically. Analyses of language reveal the extraordinary fact that we use around one metaphor for every ten seconds of speech (#litres_trial_promo) or written word. If that sounds like too much, it’s because you’re so used to thinking metaphorically – to speaking of ideas that are ‘conceived’ or rain that is ‘driving’ or rage that is ‘burning’ or people who are ‘dicks’. Our models are not only haunted by ourselves, then, but also by properties of other things. In her 1930 essay ‘Street Haunting’ Virginia Woolf employs several subtle metaphors over the course of a single gorgeous sentence:

How beautiful a London street is then, with its islands of lights, and its long groves of darkness, and on the side of it perhaps some tree-sprinkled, grass-grown space where night is folding herself to sleep naturally and, as one passes the iron railing, one hears those little cracklings and stirrings of leaf and twig which seem to suppose the silence of fields all around them, an owl hooting, and far away the rattle of the train in the valley.

Neuroscientists are building a powerful case that metaphor is far more important to human cognition than has ever been imagined. Many argue it’s the fundamental way that brains understand abstract concepts, such as love, joy, society and economy. It’s simply not possible to comprehend these ideas in any useful sense, then, without attaching them to concepts that have physical properties: things that bloom and warm and stretch and shrink.

Metaphor (and its close sibling, the simile) tends to work on the page in one of two ways. Take this example, from Michael Cunningham’s A Home at the End of the World: ‘She washed old plastic bags and hung them on the line to dry, a string of thrifty tame jellyfish floating in the sun.’ This metaphor works principally by opening an information gap. It asks the brain a question: how can a plastic bag be a jellyfish? To find the answer, we imagine the scene. Cunningham has nudged us into more vividly modelling his story.

In Gone with the Wind, Margaret Mitchell uses metaphor to make not a visual point, but a conceptual one: ‘The very mystery of him excited her curiosity like a door that had neither lock nor key.’

In The Big Sleep, metaphor enables Raymond Chandler to pack a tonne of meaning into just seven words: ‘Dead men are heavier than broken hearts.’

Brain scans illustrate the second, more powerful, use of metaphor. When participants in one study read the words ‘he had a rough day’, their neural regions involved in feeling textures became more activated, compared with those who read ‘he had a bad day’. In another, those who read ‘she shouldered the burden’ had neural regions associated with bodily movement activated more than when they read ‘she carried the burden’. This is prose writing that deploys the weapons of poetry. It works because it activates extra neural models that give the language additional meaning and sensation. We feel the heft and strain of the shouldering, we touch the abrasiveness of the day.

Such an effect is exploited by Graham Greene in The Quiet American. Here, a protagonist with a broken leg is receiving unwanted help from his antagonist: ‘I tried to move away from him and take my own weight, but the pain came roaring back like a train in a tunnel.’ This finely judged metaphor is enough to make you wince. You can almost feel the neural networks firing up and borrowing greedily from each other: the tender limb; the snapped bone; the pain in all its velocity and unstoppableness and thunder, roaring up the tunnel of the leg.

In The God of Small Things, Arundhati Roy uses metaphorical language to sensual effect when describing a love scene between the characters Ammu and Velutha: ‘She could feel herself through him. Her skin. The way her body existed only where he touched her. The rest of her was smoke.’

And here the eighteenth-century writer and critic Denis Diderot uses a one-two of perfectly contrasting similes to smack his point home: ‘Libertines are hideous spiders, that often catch pretty butterflies.’

Metaphor and simile can be used to create mood. In Karl Ove Knausgaard’s A Death in the Family, the narrator describes stepping outside for a cigarette break, in the midst of clearing out the house of his recently deceased father. There he sees, ‘plastic bottles lying on their sides on the brick floor dotted with raindrops. The bottlenecks reminded me of muzzles, as if they were small cannons with their barrels pointing in all directions.’ Knausgaard’s choice of language adds to the general deathly, angry aura of the passage by flicking unexpectedly at the reader’s models of guns.

Descriptive masters such as Charles Dickens manage to hit our associative models again and again, creating wonderful crescendos of meaning, with the use of extended metaphors. Here he is, at the peak of his powers, introducing us to Ebenezer Scrooge in A Christmas Carol.

The cold within him froze his old features, nipped his pointed nose, shrivelled his cheek, stiffened his gait; made his eyes red, his thin lips blue; and spoke out shrewdly in his grating voice. A frosty rime was on his head, and on his eyebrows, and his wiry chin. He carried his own low temperature always about with him; he iced his office in the dog-days; and didn’t thaw it one degree at Christmas. External heat and cold had little influence on Scrooge. No warmth could warm, nor wintry weather chill him. No wind that blew was bitterer than he, no falling snow was more intent upon its purpose, no pelting rain less open to entreaty.

The author and journalist George Orwell knew the recipe for a potent metaphor. In the totalitarian milieu of his novel Nineteen Eighty-Four, he describes the small room in which the protagonist Winston and his partner Julia could be themselves without the state spying on them as ‘a world, a pocket of the past where extinct animals could walk.’

It won’t come as much of a surprise to discover (#litres_trial_promo) the interminably correct Orwell was even right when he wrote about writing. ‘A newly invented metaphor assists thought by evoking a visual image,’ he suggested, in 1946, before warning against the use of that ‘huge dump of worn-out metaphors which have lost all evocative power and are merely used because they save people the trouble of inventing phrases for themselves.’

Researchers recently tested this idea that clichéd metaphors become ‘worn-out’ by overuse. They scanned people reading sentences that included action-based metaphors (‘they grasped the idea’), some of which were well-worn and others fresh. ‘The more familiar the expression, the less it activated the motor system,’ writes the neuroscientist Professor Benjamin Bergen. ‘In other words, over their careers, metaphorical expressions come to be less and less vivid, less vibrant, at least as measured by how much they drive metaphorical simulations.’




1.8


In a classic 1932 experiment, the psychologist Frederic Bartlett read a traditional Native American story to participants and asked them to retell it, from memory, at various intervals. The War of the Ghosts was a brief, 330-word tale about a boy who was reluctantly compelled to join a war party. During the battle, a warrior warned the boy that he had been shot. But, looking down, the boy couldn’t see any wounds on his body. The boy concluded that all the warriors were actually just ghosts. The next morning the boy’s face contorted, something black came out of his mouth, and he dropped down dead.

The War of the Ghosts had various characteristics that were unusual, at least for the study’s English participants. When they recalled the tale over time, Bartlett found their brains did something interesting. They simplified and formalised the story, making it more familiar by altering much of its ‘surprising, jerky and inconsequential’ qualities. They removed bits, added other bits and reordered still more. ‘Whenever anything appeared incomprehensible, it was either omitted or explained,’ in much the same way that an editor might fix a confusing story.

Turning the confusing and random into a comprehensible story is an essential function of the storytelling brain. We’re surrounded by a tumult of often chaotic information. In order to help us feel in control, brains radically simplify the world with narrative. Estimates vary, but it’s believed the brain processes around 11 million bits of information at any given moment, yet makes us consciously aware of no more than forty. The brain sorts through an abundance of information and decides what salient information to include in its stream of consciousness.

There’s a chance you’ve been made aware of these processes when, in a crowded room, you’ve suddenly heard someone in a distant corner speaking your name. This experience suggests the brain’s been monitoring myriad conversations and has decided to alert you to the one that might prove salient to your wellbeing. It’s constructing your story for you: sifting through the confusion of information that surrounds you, and showing you only what counts. This use of narrative to simplify the complex is also true of memory. Human memory is ‘episodic’ (we tend to experience our messy pasts as a highly simplified sequences of causes and effects) and ‘autobiographical’ (those connected episodes are imbued with personal and moral meaning).

There’s no single part of the brain that’s responsible for such story making. While most areas have specialisms, brain activity is far more dispersed than scientists once thought. That said, we wouldn’t be the storytellers we are if it wasn’t for its most recently evolved region, the neocortex. It’s a thin layer, about the depth of a shirt collar, folded in such a way that fully three feet of it is packed into a layer beneath your forehead. One of its critical jobs is keeping track of our social worlds. It helps interpret physical gestures, facial expressions and supports theory of mind.

But the neocortex is more than just a people-processor. It’s also responsible for complex thought, including planning, reasoning and making lateral connections. When the psychologist Professor Timothy Wilson writes that one of the main differences between us and other animals is that we have a brain that’s expert at constructing ‘elaborate theories and explanations about what is happening in the world and why,’ he’s talking principally about the neocortex.

These theories and explanations often take the form of stories. One of the earliest we know of tells of a bear being chased by three hunters. The bear is hit. It bleeds over the leaves on the forest floor, leaving behind it all the colours of autumn, then manages to escape by climbing up a mountain and leaping into the sky, where it becomes the constellation Ursa Major. Versions of the ‘Cosmic Hunt’ myth have been found in Ancient Greece, northern Europe, Siberia, and in the Americas, where this particular one was told by the Iroquois Indians. Because of this pattern of spread, it’s believed it was being told when there was a land bridge between what’s now Alaska and Russia. That dates it to between 13,000 and 28,000 BC.

The Cosmic Hunt myth reads like a classic piece of human bullshit. Perhaps it originated in a dream or shamanistic vision. But, just as likely, it started when someone, at some point, asked someone else, ‘Hey, why do those stars look like a bear?’ And that person gave a sage-like sigh, leaned on a branch and said, ‘Well, it’s funny you should ask …’ And here we are, 20,000 years later, still telling it.

When posed with even the deepest questions about reality, human brains tend towards story. What is a modern religion if not an elaborate neocortical ‘theory and explanation about what’s happening in the world and why’? Religion doesn’t merely seek to explain the origins of life, it’s our answer to the most profound questions of all: What is good? What is evil? What do I do about all my love, guilt, hate, lust, envy, fear, mourning and rage? Does anybody love me? What happens when I die? The answers don’t naturally emerge as data or an equation. Rather, they typically have a beginning, a middle and an end and feature characters with wills, some of them heroic, some villainous, all co-starring in a dramatic, changeful plot built from unexpected events that have meaning.

To understand how the brain turns the superabundance of information that surrounds it into a simplified story is to understand a critical rule of storytelling. Brain stories have a basic structure of cause and effect. Whether it’s dealing with memory, religion or the War of the Ghosts, the brain rebuilds the confusion of reality into simplified theories of how one thing causes another. Cause and effect is fundamental to how we understand the world. The brain can’t help but make cause and effect connections. It’s automatic. We can test it now. BANANAS. VOMIT. Here’s the psychologist Professor Daniel Kahneman describing what just happened in your brain: ‘There was no particular reason to do so, but your mind automatically assumed a temporal sequence and a causal connection between the words bananas and vomit, forming a sketchy scenario in which bananas caused the sickness.’

As Kahneman’s test shows, the brain makes cause and effect connections even where there are none. The power of this cause and effect story-making was explored in the early twentieth century by the Soviet filmmakers (#litres_trial_promo) Vsevolod Pudovkin and Lev Kuleshov, who juxtaposed film of a famous actor’s expressionless face with stock footage of a bowl of soup, a dead woman in a coffin and a girl playing with a toy bear. They then showed each juxtaposition to an audience. ‘The result was terrific,’ recalled Pudovkin. ‘The public raved about the acting of the artist. They pointed out the heavy pensiveness of his mood over the forgotten soup, were touched and moved by the deep sorrow with which he looked on the dead woman, and admired the light, happy smile with which he surveyed the girl at play. But we knew that in all three cases the face was exactly the same.’

Subsequent experiments confirmed the filmmakers’ findings. When shown cartoons of simple moving shapes, viewers helplessly inferred animism and built cause-and-effect narratives about what was happening: this ball is bullying that one; this triangle is attacking this line, and so on. When presented with discs moving randomly on a screen, viewers imputed chase sequences where there were none.

Cause and effect is the natural language of the brain. It’s how it understands and explains the world. Compelling stories are structured as chains of causes and effects. A secret of bestselling page-turners and blockbusting scripts is their relentless adherence to forward motion, one thing leading directly to another. In 2005, the Pulitzer Prize-winning playwright David Mamet was running a TV drama called The Unit. After becoming frustrated with his writers producing scenes with no cause and effect – that were, for instance, simply there to deliver expository information – he sent out an angry ALL CAPS memo, which leaked online (I’ve de-capped what follows to save your ears): ‘Any scene which does not both advance the plot and stand alone (that is, dramatically, by itself, on its own merits) is either superfluous or incorrectly written,’ he wrote. ‘Start, every time, with this inviolable rule: the scene must be dramatic. It must start because the hero has a problem, and it must culminate with the hero finding him or herself either thwarted or educated that another way exists.’

The issue isn’t simply that scenes without cause and effect tend to be boring. Plots that play too loose with cause and effect risk becoming confusing, because they’re not speaking in the brain’s language. This is what the screenwriter of The Devil Wears Prada, Aline Brosh McKenna, suggested when she said, ‘You want all your scenes to have a “because” between (#litres_trial_promo) them, and not an “and then”.’ Brains struggle with ‘and then’. When one thing happens over here, and then we’re with a woman in a car park who’s just witnessed a stabbing, and then there’s a rat in Mothercare in 1977, and then there’s an old man singing sea shanties in a haunted pear orchard, the writer is asking a lot of people.

But sometimes this is on purpose. An essential difference between commercial and literary storytelling is the use of cause and effect. Change in mass-market story is quick and clear and easily understandable, while in high literature it’s often slow and ambiguous and demands plenty of work from the reader, who has to ponder and decode the connections for themselves. Novels such as Marcel Proust’s Swann’s Way are famously meandering and include, for example, a description of hawthorn blossom that lasts for well over a thousand words. (‘You are fond of hawthorns,’ one character remarks to the narrator, halfway through.) The art-house films of David Lynch are frequently referred to as ‘dreamlike’ because, like dreams, there’s often a dearth of logic to their cause and effect.

Those who enjoy such stories are more likely to be expert readers, those lucky enough to have been born with the right kinds of minds, and raised in learning environments that nurtured the skill of picking up the relatively sparse clues to meaning left by such storytellers. I also suspect they tend to be higher than average in the personality trait ‘openness to experience’, which strongly predicts an interest in poetry and the arts (and also ‘contact with psychiatric services’). Expert readers understand that the patterns of change they’ll encounter in art-house films and literary or experimental fiction will be enigmatic and subtle, the causes and effects so ambiguous that they become a wonderful puzzle that stays with them months and even years after reading, ultimately becoming the source of meditation, re-analysis and debate with other readers and viewers – why did characters behave as they did? What was the filmmaker really saying?

But all storytellers, no matter who their intended audience, should beware of over-tightening their narratives. While it’s dangerous to leave readers feeling confused and abandoned, it’s just as risky to over-explain. Causes and effects should be shown rather than told; suggested rather than explained. Readers should be free to anticipate what’s coming next and able to insert their own feelings and interpretations into why that just happened and what it all means. These gaps in explanation are the places in story in which readers insert themselves: their preconceptions; their values; their memories; their connections; their emotions – all become an active part of the story. No writer can ever transplant their neural world perfectly into a reader’s mind. Rather, their two worlds mesh. Only by the reader insinuating themselves into a work can it create a resonance that has the power to shake them as only art can.




1.9


So our mystery is solved. We’ve discovered where a story begins: with a moment of unexpected change, or with the opening of an information gap, or likely both. As it happens to a protagonist, it happens to the reader or viewer. Our powers of attention switch on. We typically follow the consequences of the dramatic change as they ripple out from the start of the story in a pattern of causes and effects whose logic will be just ambiguous enough to keep us curious and engaged. But while this is technically true, it’s actually only the shallowest of answers. There’s obviously more to storytelling than this rather mechanical process.

A similar observation is made by a story-maker near the start of Herman J. Mankiewicz and Orson Welles’s 1941 cinema classic Citizen Kane. The film opens with change and an information gap: the death of the mogul Charles Foster Kane, who drops a glass globe that contains a little snow-covered house and utters a single, mysterious word: rosebud. We’re then presented with a newsreel that documents the raw facts of his seventy years of life: Kane was a well-known yet controversial figure who was extraordinarily wealthy and once owned and edited the New York Daily Inquirer. His mother ran a boarding house and the family fortune came after a defaulting tenant left her a gold mine, the Colorado Lode, which had been assumed worthless. Kane was twice married, twice divorced, lost a son and made an unsuccessful attempt at entering politics, before dying a lonely death in his vast, unfinished and decaying palace that, we’re told, was, ‘since the pyramids, the costliest monument a man has built to himself’.

With the newsreel over, we meet its creators – a team of cigarette-smoking newsmen who, it turns out, have just finished their film and are showing it to their boss Rawlston for his editorial comments. And Rawlston is not satisfied. ‘It isn’t enough to tell us what a man did,’ he tells his team. ‘You’ve got to tell us who he was … How is he different from Ford? Or Hearst, for that matter? Or John Doe?’

That newsreel editor was right (as editors are with maddening regularity). We’re a hyper-social species with domesticated brains that have been engineered specifically to control an environment of humans. We’re insatiably inquisitive, beginning with our tens of thousands of childhood questions about how one thing causes another. Being a domesticated species, we’re most interested of all in the cause and effect of other people. We’re endlessly curious about them. What are they thinking? What are they plotting? Who do they love? Who do they hate? What are their secrets? What matters to them? Why does it matter? Are they an ally? Are they a threat? Why did they do that irrational, unpredictable, dangerous, incredible thing? What drove them to build ‘the world’s largest pleasure ground’ on top of a manmade ‘private mountain’ that contained the most populous zoo ‘since Noah’ and a ‘collection of everything so big it can never be catalogued’? Who is the person really? How did they become who they are?

Good stories are explorations of the human condition; thrilling voyages into foreign minds. They’re not so much about events that take place on the surface of the drama as they are about the characters that have to battle them. Those characters, when we meet them on page one, are never perfect. What arouses our curiosity about them, and provides them with a dramatic battle to fight, is not their achievements or their winning smile. It’s their flaws.




CHAPTER TWO:

THE FLAWED SELF











2.0


There’s something you should know about Mr B. He’s being watched by the FBI. They film him constantly and in secret, then cut the footage together and broadcast it to millions as ‘The Mr B Show’. This makes life rather awkward for Mr B. He showers in swimming trunks and dresses beneath bedsheets. He hates talking to others, as he knows they’re actors hired by the FBI to create drama. How can he trust them? He can’t trust anyone. No matter how many people explain why he’s wrong, he just can’t see it. He finds a way to dismiss each argument they present to him. He knows it’s true. He feels it’s true. He sees evidence for it everywhere.

There’s something else you should know about Mr B. He’s psychotic. One healthy part of his brain, writes the neuroscientist Professor Michael Gazzaniga (#litres_trial_promo), ‘is trying to make sense out of some abnormalities going on in another’. The malfunctioning part is causing ‘a conscious experience with very different contents than would normally be there, yet those contents are what constitute Mr B’s reality and provide experiences that his cognition must make sense of.’

Because it’s being warped by faulty signals being sent out by the unhealthy section of his brain, the story Mr B is telling about the world, and his place within it, is badly mistaken. It’s so mistaken he’s no longer able to adequately control his environment, so doctors and care staff have to do it on his behalf, in a psychiatric institution.

As unwell as he is, we’re all a bit like Mr B. The controlled hallucination inside the silent, black vault of our skulls that we experience as reality is warped by faulty information. But because this distorted reality is the only reality we know, we just can’t see where it’s gone wrong. When people plead with us that we’re mistaken or cruel and acting irrationally, we feel driven to find a way to dismiss each argument they present to us. We know we’re right. We feel we’re right. We see evidence for it everywhere.

These distortions in our cognition make us flawed. Everyone is flawed in their own interesting and individual ways. Our flaws make us who we are, helping to define our character. But our flaws also impair our ability to control the world. They harm us.

At the start of a story, we’ll often meet a protagonist who is flawed in some closely defined way. The mistakes they’re making about the world will help us empathise with them. We’ll warm to their vulnerability. We’ll become emotionally engaged in their struggle. When the dramatic events of the plot coax them to change we’ll root for them.

The problem is, in fiction and in life, changing who we are is hard. The insights we’ve learned from neuroscience and psychology begin to show us exactly why it’s hard. Our flaws – especially the mistakes we make about the human world and how to live successfully within it – are not simply ideas about this and that which we can identify easily and choose to shrug off. They’re built right into our hallucinated models. Our flaws form part of our perception, our experience of reality. This makes them largely invisible to us.

Correcting our flaws means, first of all, managing the task of actually seeing them. When challenged, we often respond by refusing to accept our flaws exist at all. People accuse us of being ‘in denial’. Of course we are: we literally can’t see them. When we can see them, they all too often appear not as flaws at all, but as virtues. The mythologist Joseph Campbell identified a common plot moment in which protagonists ‘refuse the call’ of the story. This is often why.

Identifying and accepting our flaws, and then changing who we are, means breaking down the very structure of our reality before rebuilding it in a new and improved form. This is not easy. It’s painful and disturbing. We’ll often fight with all we have to resist this kind of profound change. This is why we call those who manage it ‘heroes’.

There are various routes by which characters and selves become unique and uniquely flawed, and a basic understanding of them can be of great value to storytellers. One major route involves those moments of change. The brain constructs its hallucinated model of the world by observing millions of instances of cause and effect, then constructing its own theories and assumptions about how one thing caused the other. These micro-narratives of cause and effect – more commonly known as ‘beliefs’ – are the building blocks of our neural realm. The beliefs it’s built from help make up the world that we inhabit and our understanding of who we are. They feel personal to us because they are us.

But many of them will be wrong. Of course the controlled hallucination we live inside is not as distorted as the one that Mr B lives inside. Nobody, however, is right about everything. Nevertheless, the storytelling brain wants to sell us the illusion that we are. Think about the people closest to you. There won’t be a soul among them with whom you’ve never disagreed. You know she’s slightly wrong about that, and he’s got that wrong, and don’t get her started on that. The further you travel from those you admire, the more wrong people become until the only conclusion you’re left with is that entire tranches of the human population are stupid, evil or insane. Which leaves you, the single living human who’s right about everything – the perfect point of light, clarity and genius who burns with godlike luminescence at the centre of the universe.

Hang on, that can’t be right. You must be wrong about something. So you go on a hunt. You count off your most precious beliefs – the ones that really matter to you – one by one. You’re not wrong about that and you’re not wrong about that and you’re certainly not wrong about that or that or that or that. The insidious thing about your biases, errors and prejudices is that they appear as real to you as Mr B’s delusions appear to him. It feels as if everyone else is ‘biased’ and it’s only you that sees reality as it actually is. Psychologists call this ‘naive realism’. Because reality seems clear and obvious and self-evident to you, those who claim to see it differently must be idiots or lying or morally derelict. The characters we meet at the start of story are, like most of us, living just like this – in a state of profound naivety about how partial and warped their hallucination of reality has become. They’re wrong. They don’t know they’re wrong. But they’re about to find out …

If we’re all a bit like Mr B then Mr B is, in turn, like the protagonist in Andrew Niccol’s screenplay, The Truman Show. It tells of thirty-year-old Truman Burbank, who’s come to believe his whole life is staged and controlled. But, unlike Mr B, he’s right. The Truman Show is not only real, it’s being broadcast, twenty-four hours a day, to millions. At one point, the show’s executive producer is asked why he thinks it’s taken Truman so long to become suspicious of the true nature of his world. ‘We accept the reality of the world with which we’re presented,’ he answers. ‘It’s as simple as that.’



