Are humans wired to deal with the kinds of existential threats facing society today?
This is not just an intellectually-interesting metaphysical question. The answer may well determine whether our species makes it to the next century or not.
As PeakProsperity.com readers know well, humans’ relentless pursuit of ‘ever more’ growth is finally slamming into the limits of a finite planet.
The energy fuels and natural resources necessary to continue to expand the global economy are becoming more scarce and more expensive to extract. And yet the global population continues to grow, now expected to hit 9.7 billion (or higher) by 2050.
Evolutionarily wired for immediate, visible threats (like a snarling lion ready to pounce), can we realistically expect our species to proactively pull together to deal with long-term, faceless emergencies like overindebtedness, Peak Oil, climate change, and overpopulation?
And if it’s possible we can, how can we help others wake up to these threats? How can we break through their existing belief system that “everything is fine”?
In this week’s podcast, Chris Martenson poses these important questions to Dr. Tali Sharot, professor of cognitive neuroscience in the Department of Experimental Psychology at University College London. Dr. Sharot is known for her research on the neural basis of emotion, decision making, and optimism. Her 2012 Ted Talk on the optimism bias received over 2.3 million views.
Optimism is good for many reasons. But it can be quite negative, too, if you’re underestimating risk.
If we underestimate risk, then we're less likely to take precautionary actions. So if you think, 'Well, I'm going to be so healthy,' then you don't even go to medical screenings or buy insurance when you actually should. That's a problem. You don't prepare for the worst-case scenario. That's a problem.
We have an avoid/approach instinct where we approach the good things, whether it’s chocolate cake or money or love. We move forward, we approach. If you see a photo with a smiling person you approach.
When we see the bad stuff, we try to go back. We avoid it, whether it’s poison or someone frowning or any kind of danger — we just stay away.
So in order to get to action on big threats, we need to reframe the problem, not as: we're going into extinction, we're going into a catastrophe, we need to do something now to avoid it. But more like: what can we do to make our planet as good as it's ever been?
What can we do to protect it and make it a place that is flourishing?
We need to insert a positive message rather than a negative in order for people to want to approach and take action and to be involved. Otherwise people say ‘I don’t want to think about this bad stuff because it’s not something nice to think about’.
And I think that’s something that hasn’t been done so much. The focus has mostly been on trying to scare and trying to cause fear, rather than trying to enhance the sense that we can create something that’s great for future generations.
Click the play button below to listen to Chris’ interview with Tali Sharot (50m:21s).
Chris Martenson: Welcome, everyone, to peakprosperity.com featured voices podcast. It is July 16, 2019. I am your host Dr. Chris Martenson.
In a recent piece titled Do You Have Free Will? I wrote about all of the data piling up showing how we tend to behave as a function of both nature and nurture. Understanding how we're wired to respond to various stimuli or data or information is especially important today. This is a vital question, of course, to explore for those of us interested in introducing ideas about reality that might be troubling in nature, let's say.
So we need to know, are humans, on average, wired to receive the kind of information that we think needs to be out in the world, or are we wired against it? And if so, is there anything we can do about it?
Well, today's guest is an absolute center of the bowling alley strike on this one. Dr. Tali Sharot is a professor of cognitive neuroscience in the Department of Experimental Psychology at University College London. She received her PhD in Psychology and Neuroscience from New York University. Dr. Sharot is known for her research on the neural basis of emotion, decision making, and optimism.
Her book, The Influential Mind: What the Brain Reveals About the Power to Change Others, highlights the critical role of emotion in influence and the weakness of data. A perfect guest for today. I'm especially interested in that topic, of course, as someone who is constantly trying to reach people, oftentimes using data.
Her 2012 Ted Talk on the optimism bias has received over 2.3 million views. Welcome to the program, Dr. Sharot.
Tali Sharot: Thank you for having me. Glad to join you.
Chris Martenson: Dr. Sharot, tell us just a little background. How did you get interested in science and then gravitate towards your current area of expertise?
Tali Sharot: I was really interested in people, which I think is quite natural. I was just interested in why people do what they do, and why do I do what I do? And because every kind of action or thought or feeling that anyone has really is triggered by the brain. It's generated by the brain, and so to understand human behavior, understanding the brain seemed the path to go. So that was kind of the reasoning or the motivation.
Chris Martenson: Right up front. What are the chances that I'm a rational individual operating with complete free will?
Tali Sharot: Zero.
Chris Martenson: [Laughs] You seem so confident. I want to talk about this because I think there is a concept out there among a lot of people that we're rational beings. Of course, economics is founded, maybe in a flawed fashion, on the idea that we are rational actors. But everything that I've done and studied, and the more I've gone down this path of looking at how humans are actually wired up, we come with a lot of preconditioning, a lot of wiring that nature's bestowed upon us. And the understanding seems to pile up that maybe we're not as rational and as full of free will as we thought.
Let's go into that a bit. What is your expertise – sorry, I jumped ahead a question. What is it in human wiring, Dr. Sharot, that pushes us away from being rational?
Tali Sharot: So really, you know, I have to kind of step back for a second. When I said you're not rational, I was kind of jumping ahead and gave a simple answer. It's not that simple. It really depends on what you mean or what a person means when they say rational. Because, you know, in economics when people say people are not rational, it usually means that they are making decisions that are not necessarily going to gain them the most monetary rewards.
But, in fact, all these kind of incidences of biases and heuristics and decisions that seem irrational, if you look at them from a different angle, they could be rational.
So there are two things to consider. First of all, what is it that you're trying to maximize? Because economists usually think, well, we're trying to maximize material outcomes. And if you look at it from that angle then, yes, we're not rational.
But from the point of view of a psychologist, if what we're trying to maximize is, for example, affective wellbeing, emotional wellbeing, then many times these decisions that seem irrational are actually rational, because we're trying to minimize anxiety, for example, or maximize anticipation. And the cost may be material, but that's fine. That's a decision that we make.
And the other thing to keep in mind is that those behaviors that seem irrational arise because we're using rules that are very good rules for updating our beliefs or making decisions, but they're not, of course, going to give us the best decision or the best outcome 100 percent of the time. They're good rules to use because they're beneficial 80 percent of the time, but 20 percent of the time they aren't the best thing to do. And we, as behavioral economists, focus on that 20 percent.
So, I have to step back and say our brain and our minds are actually quite extraordinary, and it's quite amazing that we can make so many decisions, you know, in just a single day and get along in this world relatively well. And we do it because the brain is set up in a way that usually gets you where you're going and gets you to the best decisions.
So we really shouldn't look at ourselves as, oh, we're just making stupid decisions all the time. We should look at our behavior and say, "Well, why are we making these decisions that, from our angle, don't seem the best ones?" Are we making them because we don't really understand what we really want? Maybe we're making them because we think we want one thing, but really, it's another thing that we're trying to get? Is it because we're using a rule that's quite good, but it's not good for this instance? And that's okay.
I didn’t answer your question, but what was the question now?
Chris Martenson: I want to get at this idea of how we're wired up and so that we can understand - I truly believe that if we can become conscious and knowledgeable about the things that are actually driving us, that we have opportunities then to be aware of those, and sometimes they're very helpful and sometimes they're pitfalls for us. And so knowing when is which would be a good place to start.
One of my sayings is humans are not rational, we're rationalizers. And I do this all the time. If I become emotionally attached to a car and I want the car, I'll come up with all the rational reasons. Look at its safety. Look at its mileage ratings. Look at all this stuff. But the truth is, the decision got made somewhere deeper in my limbic system, I think. It was made emotionally.
And so that's the part I'm trying to tease apart is where are we rational and where are we making decisions driven through some other processes that are a little bit more subterranean, not invalid – I'm not here to say good, bad, or put any judgements on. I just want to understand how people make decisions and with the idea that maybe we could influence those decisions.
Tali Sharot: We certainly can influence those decisions in many ways. But first, the idea that we have two systems in our brain is a good metaphor, but it's not actually how the brain works.
And so what we call the emotional system and what we call our rational brain are not two separate systems. These regions interact with each other all the time. They actually create subsystems together. And each of these so-called systems – like the emotional system – can in fact do very sophisticated calculations, and vice versa.
The system that neuroscience may call the rational system, the frontal cortex, can actually produce lots of biases. So we shouldn't look at it as two separate systems, with one more biased than the other. All regions contribute to our decisions, and these different regions, or collections of regions, can do both very sophisticated things and use very superficial rules to make decisions.
And our emotions are there for very, very, very good reasons. Emotions are very important for our decisions because emotions convey a lot of information that we can and should use for making decisions.
So if you're making decisions based on what you might call emotion, that's not necessarily a bad thing, because why would our brain have been given this thing we call emotion? It's given emotion in order to help us survive, because emotion tells us what's good, what's bad, how urgent an action should be. So it's not something that should be frowned upon and treated as something we shouldn't include in our decision-making process.
In fact, people who can't experience emotion or have impairments in those regions of the brain have a very hard time making decisions, and they don't make optimal decisions.
And again, I think I didn’t quite answer the specific question which was – can you repeat it for the last time?
Chris Martenson: It's perfectly fine. We're going in the right direction. I'm trying to build up to this idea of how we're wired and how it is that we make decisions. And again, I'm not here to say that emotional decisions are bad and rational are good. Far from it. However, I am interested in the idea that as a collective human species, when we're presented with some really big challenges, that getting that information across can be really challenging.
And so your work just is absolutely fascinating because it's revealing the ways in which we actually go about making those decisions. And again, not to say good ways, bad ways, or any of that. But if we understand how we go about making decisions, then we have a chance to influence those in ways both for our personal lives and then maybe more collectively.
So you started with this idea that these aren't necessarily separate brains. I'm aware of the idea of the triune brain, that we have stacked sort of different brain functions through evolution. Is that still current knowledge? Is that how we look at the brain now, that it consists of various regions of evolutionary design and that they basically interact with each other that way?
Tali Sharot: It is true that as you go deeper, those brain regions are evolutionarily older. And of course, the newest ones are up there, our frontal cortex, and they make us uniquely human – what makes us different from other animals. So, a greater ability to plan ahead, a greater ability to do calculations and analysis and language and things like that. So that is true, and all these systems are connected.
There are a few functions that are very much specialized. We know that we have specific regions for face perception. We know that there are specific regions that are extremely important for language. If those are impaired, we're going to have a problem.
But when it comes to decision making and assessing risk and so on, you can't really pinpoint it to one region. It's really processes that happen across quite a few regions of the brain in order to come up with a decision, whether you want to choose A or choose B. And certainly the sort of financial decisions and personal decisions that we make will involve a whole host of regions working together like an orchestra to come up with the action and the choice that you're going to make.
Chris Martenson: Let's talk then about – I'm really fascinated by your Ted Talk 2012. What is the optimism bias?
Tali Sharot: The optimism bias is people's tendency to expect the future to be better than the past and the present – our tendency to overestimate the likelihood of positive events in our lives, like a successful marriage or having talented kids or succeeding in our profession, and to underestimate the likelihood of negative events, like the likelihood that you would get divorced or have cancer or be in an accident. And we find that about 80 percent of the population has this optimism bias.
And there's a few things to remember about the optimism bias. First of all, it's very much about your personal future, your future, perhaps the future of your family and very close friends and kids. But it's not about the future of the world or society or the country. In fact, we often find that there's something called private optimism but public despair where people are quite optimistic about their own future but not necessarily about the future of humanity or the future of citizens in their country in general.
One reason for that is a sense of control. People have a sense that they have control over their own life, and therefore they can steer the wheel in the right direction. And if we have control over our own life we think, well, we're going to steer the wheel towards the good future, and that makes us optimistic. When we feel we have control we're more likely to be optimistic because we say, "Well, I'm going to be healthy because I'm going to do the right things to make me healthy."
But we don’t necessarily have a sense that we have control over the future of our country or the world and so on. And so in those cases, we don’t necessarily have as much optimism as we do about our own future, which is why you often see people being very pessimistic about a host of different things, such as politics and so on.
Chris Martenson: And that comes from, if I got that right, this sense of agency or control. And how does that feed into, that sense of agency and control, how does that feed into the optimism bias?
Tali Sharot: As I mentioned, if you believe that you have control over your future – you start a company and you think, well, whether the company succeeds or doesn't succeed has a lot to do with my own actions. If you believe that – you basically believe you have control over the destiny of your company – then you will be optimistic, because you think, "I can do what is needed to succeed."
But let's say you are someone who is starting a company and you think, "I don’t have any control over where the company will go." And therefore there's less reason for you to be optimistic, yeah? So that's why the two things are related.
Chris Martenson: And let's talk about – say, I know a guy, a serial entrepreneur, who's managed to not be successful at every company he's started. He's very optimistic. So let's talk about how data or information feeds back into the optimism bias.
Tali Sharot: Right. So far we've talked about why a sense of control can make you optimistic, but there's other mechanisms that generate optimism. And one of the mechanisms is one that we discovered back in 2011 which is that you learn a little bit better from information that suggests a positive future versus information that suggests negative things about the future.
For example, you might think your likelihood of succeeding with this company is 70 percent, and I would look at all the data and say, "Listen, according to what I'm looking at, maybe it's 50 percent." So I'm giving you bad news. I'm telling you it's less likely that you would succeed than you thought.
What we see is that people update their belief a little bit, but not a lot. They might say, "Maybe my likelihood is 65 percent." So you thought it was 70, but maybe it's 65 percent now.
However, suppose I looked at your data and said, "Look, I think your company is not 70 percent likely to succeed, it's 90 percent likely to succeed." At that point, I'm giving you good news, and what we find is that people who get good news really change their beliefs quite quickly. You would say, "Well, okay, maybe it's 87 percent." So you're much more likely to alter your belief when I'm giving you unexpected information that's good about the future versus information that's bad.
And when we looked at how the brain responds to that kind of good and bad news, we found that the brain, especially parts of the frontal cortex, encodes information that suggests good news better than information that suggests bad news. So we encode both good and bad news, but most of us, on average, encode the good news a little bit better than the bad news. And so we use it to alter our beliefs.
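The asymmetric updating Dr. Sharot describes can be sketched as a simple learning-rate model. This is an illustrative sketch only: the two rates below were chosen to reproduce the 70 → 65 and 70 → 87 examples in the conversation, and are not parameters from her studies.

```python
def update_belief(prior, evidence, lr_good=0.85, lr_bad=0.25):
    """Move a belief (a percentage) toward new evidence.

    Good news (evidence > prior) gets a larger learning rate than
    bad news, so beliefs shift further toward positive information.
    The rates here are illustrative, not fitted to real data.
    """
    lr = lr_good if evidence > prior else lr_bad
    return prior + lr * (evidence - prior)

# Bad news: you believed 70% but the data say 50% -> small shift, to about 65%
after_bad = update_belief(70, 50)

# Good news: you believed 70% but the data say 90% -> large shift, to about 87%
after_good = update_belief(70, 90)
```

In this framing, interfering with the region that encodes good news would correspond to lowering `lr_good` toward `lr_bad`, shrinking or eliminating the bias.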
And, in fact, we could change how people process information by interfering with these brain regions that encode information. We could use a method where we pass a little magnetic pulse through the scalp of the participant into a certain region of the frontal cortex, and then we interfere with that brain region.
And by doing that, we can actually interfere with the way that people process information, and we can do it where we only interfere with how you process the good news or mainly how you process the good news. And we can also do it in a way that we interfere mainly with how you process the bad news.
Chris Martenson: Well, that's fascinating. So what happens when you interfere with the region where somebody's processing good news?
Tali Sharot: Right. They would process good news a little bit less. And when we interfered with the part of the brain that, in this specific instance, was processing bad news, they would encode that a little bit less.
Chris Martenson: And so that bias you talked about, where they would adjust their percentages, they wouldn’t adjust them quite as far. Would that be the…?
Tali Sharot: Yeah. You can play around with the bias. If your typical person learns more from good news than bad news, and then you interfere with the part of the brain that was encoding good news, then of course, the amount that they would learn from good news goes down. And therefore, now, you don’t have as much of a bias or a bias at all.
Chris Martenson: Now, is this a very specific region? Like you would say like facial recognition or the ability to speak or something like that? This is a part of the brain – is it dedicated to this task?
Tali Sharot: Yeah. I mean, in our case, we knew exactly what brain region was encoding the type of information that we were giving a person in a specific task, and so we could interfere with that region. It's not necessarily the only region that's doing it. And again, it's part of a system, but it's one that our brain imaging studies suggest is very important, and so we can interfere with that specific one.
But I wouldn't call it a good news region or a bad news region. Which specific region is doing that job depends on things like how I'm presenting the information – whether I presented numbers, or verbally told you something, or what you had to do. So we can identify specific regions that are really important in specific contexts and situations, but I wouldn't then go ahead and say this region is the good news region of the brain or that region is the bad news region of the brain.
Chris Martenson: Fascinating. But evolution has decided this is a thing worth giving us, and there must be a lot of pros to the optimism bias. What are those?
Tali Sharot: Right. If you think the future is bright, then it reduces your anxiety and that's really good for both your mental health and your physical health. In fact, we find that people that don’t have an optimism bias, a lot of those people tend to have depression. And people have also found that optimists are more likely to succeed in different professions, whether it's in business or sports or politics. Because if you are an optimist and you think, well, I'm likely to succeed then what you do is you put more effort into it. If you think, well, my company is definitely going to succeed or is likely to succeed, then it motivates you to work hard.
But if you think, well, nothing I do is going to help, this is a lost cause, the company is not going to succeed, then you're less likely to put effort into it, and it becomes a self-fulfilling prophecy. To be clear, it doesn't become a self-fulfilling prophecy just because we had a thought; but your thoughts change your actions, and your actions can, of course, change outcomes in the world. That's why optimism is related to success – because it enhances motivation.
Chris Martenson: So health and wellbeing. Any downsides?
Tali Sharot: Yeah. If we underestimate risk, then we're less likely to take precautionary actions. So if you think, 'Well, I'm going to be so healthy,' then you don't even go to medical screenings or buy insurance when you actually should. That's a problem. You don't prepare for the worst-case scenario. That's a problem.
So there are two things to say here. One is that most people are mildly optimistic. They don't necessarily think, oh, this is never going to happen to me. It's just that they underestimate the likelihood. That's the first thing.
And the second thing to say is that we found something interesting quite recently. We published this last year. The problem that we started talking about here is that, okay, optimism is good for many reasons, but then it could be quite negative, right. You're underestimating risk.
In fact, for example, people have said that the financial collapse of 2008 was partially because of the optimism bias: people overestimating how the economy would do, people overestimating their ability to pay mortgages, and so on and so forth. The optimism bias of a lot of people was one of the reasons for the downfall. So there are negatives here.
And so it was commonly thought by psychologists and philosophers that, okay, there are advantages to the optimism bias and there are disadvantages, but if you look at everything together, the advantages probably simply outweigh the disadvantages. And that's why we have the optimism bias.
But, what we found recently is that the optimism bias can disappear quite quickly in environments where it should disappear, where it's probably advantageous not to have it, and then it comes back in other environments.
So, in relatively safe environments, like the one that you and I are in today, on average the optimism bias is probably quite helpful. It keeps your mind at rest; it enhances your motivation. But if I put you in a really threatening environment – I put you out where there are lions around and so on – you really don't want to underestimate your risk at all. And so in those environments, it might be better not to have an optimism bias at all. And we found that is likely what happens.
And so we thought, well, we're going to take people – and of course, we can't put them in a frightening environment with hungry lions around, but we will put them in a threatening environment where they feel quite stressed, and we're going to test whether under those environments the optimism goes away. And so what we did is we told people that they're going to have to give a speech in front of everyone else on a surprise topic that we were going to give them, no time to prepare, we would videotape them, we would put it on YouTube. The idea was just to create a threatening environment where people are stressed.
And the idea was: would this stress change the way you process information and therefore make the optimism bias go away? And that's exactly what we found – that under threat, under stress, people started learning from unexpected negative information much better than they did before. So if someone thought the likelihood of their company succeeding was 70 percent and we told them, oh, it's only about 50, they would learn a lot from that under stress. They would change their belief quite a lot.
But if you just put them in a relaxed environment, it's back to the optimism bias: learning more from positive than negative.
We did the same with firefighters in the state of Colorado. The idea is that firefighters have quite diverse days. Some days they're just sitting in the station – it's quite relaxed and quite safe. But some days they actually have to go out to life-threatening events. We had the firefighters do our experiments, and we found that when they were under stress, the optimism bias went away. They learned quite well from negative information, equally well as from positive. But when they were relaxed, they learned less well from negative than from positive, and the bias was there.
So this means that the optimism bias is not just adaptive because it has more advantages than disadvantages on average; it's adaptive because it can come and go in different environments in a way that makes it adaptive.
Chris Martenson: So this raises a couple of things. First, we've noticed this bias ourselves. For instance, the worst time to try and buy a generator is when a hurricane is coming, and the best time is about a month after it's passed. It takes about 30 to 60 days, and people forget, and they don't need these things anymore, and they don't want to be prepared for the next one. So it goes away very quickly.
But the important point is what happens during times of calm, and this gets to the heart of whether this is evolutionarily helpful or not. Humans have come through an extraordinary arc to where we're at 7.6, 7.8 billion people now. There are no fresh continents. We've kind of run through things. We, and the animals we eat, are 96 percent of the biomass of animals on the surface of the planet. So we're now at the edge of our cradle, as it were, and we need to start making some really big, collective decisions around things like climate change, or the eventual dwindling of fossil fuels, or soils, or ecology – things like that.
Given the fact that we have the optimism bias, and given that there are people out there like myself who think we need to begin motivating toward some fairly large, long-distant thing which we might not feel a lot of agency around – to combine a few things we've talked about – what would be the dos and don'ts of beginning conversations where you're interested, for example, in alerting people to climate change in a way that would actually lead to action?
Tali Sharot: So climate change is an extremely problematic issue, because the only way for us to understand it is to think about things that are not in front of us. Yes, we can see a little bit hotter summer or more snow than usual, but our environment feels quite safe. It feels quite pleasant, and so there's not an immediate danger in front of us that we can actually see and feel, that our brain can actually process as stimuli coming in. And so we feel like we are in a safe environment. We're quite relaxed.
And so the only way for us to understand the dangers is to try to grasp them in this abstract way, to try to think about these numbers of what could happen in the future. And that's really problematic, because those numbers, on their own, are not necessarily going to change the way that we process information, stress us out, and cause us to change our actions. It's something that we have to imagine in our mind – and imagine something that we've never seen before. This is extremely difficult.
The other difficulty is that most of the real, real negative effects are not going to happen to us in our lifetime – maybe to our kids. Especially depending on how old you are, you think, "Well, it's not going to affect me directly." You're asking people to make decisions on things that are going to affect future generations. That's another problem.
And finally, in order to think about climate change, you are really asking people to think about the bad stuff. We're asking them to think about the dangers and all the catastrophes that could happen in the future. People don't like to go towards the bad stuff. We have this kind of avoid/approach instinct where we approach the good things, whether it's chocolate cake or money or love. We move forward; we approach. If you see a photo of a smiling person, you approach.
When we see the bad stuff, we try to go back, avoid it, and not get close to it, whether it's poison or someone frowning or any kind of danger. We just stay away. And, in fact, it's been shown, for example, that if you try to get people to donate money, let's say on GoFundMe or sites of that sort, and you have someone who is smiling and showing positive valence, people are more likely to put up money for that individual than for someone who looks sad or distressed, even if the money is to help someone who's sick.

Showing a person with positive valence is more likely to get people to contribute than negative. So that's another problem with climate change.
And finally, we have the problem of control because we don’t really feel that we have much control over climate change as an individual. It's really hard to convince people that they have control.
So those are all kind of the problems. And that is ignoring for a second the fact that some people don't believe in it at all, for different reasons, as well.
So I think in order to get to action, we need to reframe the problem, not as "we're going into extinction, we're heading into a catastrophe, we need to do something now to avoid it," but more like: what can we do to make our planet as good as it's been? What can we do to protect it and make it a place that is flourishing? The idea is to insert a positive message rather than a negative one, so that people want to approach, take action and be involved, rather than saying, "I don't want to think about this bad stuff, because it's not something nice to think about."
And I think that's something that hasn't been done so much. The focus has mostly been on trying to scare and cause fear rather than trying to enhance the sense that we can create something great for future generations. It's something that is at least worth examining, whether that kind of message…
Chris Martenson: That's a great point. I learned this from George Lakoff. He wrote Don't Think of an Elephant, among other pieces. We were discussing something along these lines, and he uses a great metaphor. He said, "Look, when Obama released the idea of the Affordable Care Act, he goofed because he acted as the COO rather than the CEO. He came out and said we need to control rising healthcare costs." That's a tactic, and it's operational.
Instead, if he had reframed it as the CEO, in terms of vision, and put it in moral terms, saying, "People deserve access to healthcare when they need it," then it's harder for the sides to get out their machine guns and go at it.
But as soon as you put a tactic out there, like saying we have to control the temperature and it needs to stay under two degrees, I don't know what that means to me personally, and I've studied this. So what would your research say? What's happening when we frame something in terms of data versus at the moral level, if I could use that term?
Tali Sharot: Well, so the moral level is more about highlighting our motivations and goals, right? You don't even have to think about it as moral, just: what goal are we going towards? How do we want things to be? That's something that very much motivates people. Yes, it's still problematic when the goal is way in the future and perhaps not in our lifetime, but you can have some goals that are in our lifetime. A lot of them are. So if action were taken now, we would see consequences in the near future.
So goals are definitely something that motivates people. It's like playing one of those online games: there's a goal, and even though the goal doesn't really matter for anything, people get so involved in it and want to reach it. So that's definitely a great way to motivate people.
Another great way is to highlight the immediate rewards. What I talked about before is framing it as something that is trying to do good in the world, something better for everyone and for future generations. But we can also think about it this way: what are the immediate rewards that we can get by taking action now? How would it improve your life now, or in the next few weeks or months or a year?
And I'm sure there are some rewards people could get if they take action now. Those rewards are also things you can make up. So, for example, I know that in our university, departments that are more green get little extra points, and at the end one wins best green department or whatever, and they get some kind of prize. It seems a little silly, but it's hugely motivating. It gets a lot of people in the department, the whole administrative group and so on, to take action, putting in more recycling bins and so forth, because we are humans and we're designed in a certain way to pursue goals, especially ones that are closer in time.
So that's another thing to think about – how will it be advantageous for us, not only for future generations, because I think that really is another motivator.
Chris Martenson: All right. Well, that all makes sense. Now, the question that springs to mind: I was quite taken with the whole idea of the backfire effect. It seems to be pretty robust, and I think I've observed it in action quite a bit. So, is the backfire effect a real thing as I've come to understand it, that when presented with information that counters a cherished belief, people will often use that countervailing information to actually strengthen their positions rather than weaken them?
Tali Sharot: Yeah. One thing to say before we talk about it: our behavior is an outcome of a lot of rules operating in the brain. But we don't have one strong rule like gravity. In physics, gravity is something that is true in all these different contexts. In psychology, things are a little more subtle, and there are a lot of different pushes and pulls on our brain and our behavior.
And so the backfire effect is something that you can definitely observe a lot of the time, but just to say that it doesn’t mean that every single time for every single person under every single situation you would see that.
So that being said, yes, there are a lot of demonstrations of it, and as you said, people can see it in their own lives when interacting with other people. If someone has a very strong position or a very strong belief, and you come at them with something new and say, "You're wrong, and here's the reason why," our automatic reaction is to try to protect our belief, especially if that belief is very much tied to our identity, but even if it's just a choice that we made.
We have a study that we did quite recently where we wanted to see what happens in the brain when you encounter an opinion that doesn't conform to your own. And we wanted those opinions to really not be part of your identity. We wanted it to be something super simple. So we had people make decisions together about the value of real estate. We had them look at different properties and evaluate them. And while they were doing that, we scanned their brains in two brain-imaging scanners, but they could interact over wi-fi: they could see the opinions of the other person and respond using button boxes and so on.
And what we found is that when a person saw that the other person was agreeing with them on the price of real estate, each person's brain really encoded the information coming from the agreeing partner quite well. Using statistics, we could see that the activity in the brain suggests you're really taking in what the other person is saying when they agree with you. And people's confidence in their own decision went up, which makes sense, because if someone agrees with me I become more confident.
But then, what was interesting was when two people disagreed: it basically looked like the brain was shutting down, and it wasn't even encoding the information coming from the disagreeing partner. You thought the value of the real estate was very high and the other person says no, it's low, giving you extra information. At that point you're not even encoding it anymore. You're saying, "They're just wrong, so I'm not even going to encode this information." Not necessarily consciously, but that's what happened.
And what happened to people's confidence when someone disagreed with them? It didn't change much. There was a tiny reduction, but not even a significant one. So that really shows us that coming in with reasons to defend your own position and suggest the other person is wrong may just be ignored altogether; I might just not listen to you at all. Or I may, even while you're talking, come up with other reasons to strengthen my own position, my choice, my assessment.
Chris Martenson: Well, that's fascinating. Those were lightly held beliefs. What if we went to something fairly strongly held, say religion-based, or climate change, strongly for or against, either way? If we took somebody with a really strongly held belief, would you guess that the encoding function would disappear there as well? And would it go further and be actively resisted?
Tali Sharot: Exactly. We purposely chose something that wasn't a strong, emotional belief, just some kind of assessment about real estate. Definitely, if we went for something that is ideologically important to people, I think we would see much stronger effects as well.
Chris Martenson: Interesting. Tell me the evolutionary advantage of this.
Tali Sharot: This goes back to what's known as the confirmation bias. And the confirmation bias is our tendency to take in information that confirms what we believe and to be a bit critical of information that doesn’t confirm what we believe.
On average, this is exactly the right way to alter your beliefs, because if you have strong beliefs, those beliefs are likely to be correct. If I come and tell you now that I saw pink elephants flying in the sky, you would just ignore me altogether, think I'm delusional or lying, as you should, because you have a strong belief that elephants don't fly in the sky.
And when you decide whether you're going to update your beliefs or not, it depends on the belief you already hold and your confidence in that belief. It would be a very bad idea if you went around the world and, every time you saw a piece of information or an opinion counter to what you believe, you changed your belief. That would be a very bad way to go about life, and it would create chaos.
And so our brain is what's called Bayesian. Whether we update our beliefs or not depends on what we already believe, and that's a good way to go. In most cases, it gets us to the right place. And again, it doesn't mean that we don't update at all; it's just that we update much less when the information runs counter to what we believe.
But, of course, the problem is those situations when our belief is very strong but it is false. And in those situations, it's really hard to change people's false beliefs because the same mechanism that we use to update our beliefs about everything in the world is the one that we're going to use for these false beliefs, as well.
And I think those are the cases that psychologists or economists often concentrate on, those instances where it seems very suboptimal, but 80 percent of the time we're doing it right.
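[Editor's note: The Bayesian updating Dr. Sharot describes can be made concrete with a short sketch. This is not from the interview or her studies; it is a minimal illustration of Bayes' rule, with hypothetical numbers, showing why a confident prior barely moves when a single piece of contrary evidence arrives, while a weakly held belief shifts substantially.]

```python
def update(prior, likelihood_if_true, likelihood_if_false):
    """Bayes' rule: posterior probability the belief is true, given how
    likely the new (contrary) evidence is under each hypothesis."""
    numerator = prior * likelihood_if_true
    denominator = numerator + (1 - prior) * likelihood_if_false
    return numerator / denominator

# Contrary evidence: twice as likely if the belief is false (0.8)
# as if it is true (0.2). All numbers here are hypothetical.

# Weakly held belief: the evidence shifts it a lot.
weak_posterior = update(0.55, 0.2, 0.8)     # 0.55 -> ~0.23

# Strongly held belief: the same evidence barely moves it.
strong_posterior = update(0.99, 0.2, 0.8)   # 0.99 -> ~0.96

print(f"weak belief:   0.55 -> {weak_posterior:.2f}")
print(f"strong belief: 0.99 -> {strong_posterior:.2f}")
```

The same update rule produces both outcomes; what differs is only the prior, which matches her point that the mechanism is usually right and only becomes a problem when a strong belief happens to be false.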
Chris Martenson: Interesting. So the question that emerges for me then: let's assume there are some people out there, maybe myself, who completely understand that you can't just go around revising your beliefs with every new piece of information, but who want a more flexible approach, or want these things to be more at the surface. With practices like meditation, or some of the really fascinating work that's emerged on the ability of psilocybin to reshape people's belief structures, what are some of the ways people can develop a more flexible or balanced ability to process and take in information?
Tali Sharot: I think, first of all, just being aware is helpful to some degree, being a bit critical of how you are updating and so on. There are two points where the confirmation bias comes about.
First is what information we even see: which information do we even encounter, regardless of whether we believe it or not? What information are you actually evaluating? We have a bias in that we are more likely to seek information that confirms what we already believe. And that means the information in front of you is already biased, because you're more likely to ask the opinions of people you know are likely to agree with you, and more likely to run a Google search that generates information you already agree with.
And the bias in the information in front of you is a little easier to alter, because you can do things like follow people on social media who you know are on the other side of a lot of your beliefs, people you respect but who are not necessarily going to agree with you. Have people on your team who you know hold different views and theories relevant to whatever the profession is. Disable the Google features that are more likely to give you what you already believe, and so on.
So that's the first step. The first step is to try to at least have diversity of information in front of you as a first step.
And then, of course, the second step is which information you're going to use to alter your beliefs. I think it's very difficult to do this on your own, that is, to make yourself open to information that doesn't confirm what you believe.
But I think what is a little bit easier is to present information to people who disagree with you in a way that makes them more likely to consider it. And I think the way to do that, even when you disagree with someone, whether it's a political argument or a personal disagreement with a spouse, is first to figure out what you actually have in common. What beliefs do you already share? What motivations do you have in common? Start with that.
Because if you present yourself as an agreeing partner, our study suggests the other side is going to be more likely to open up, listen and encode. So if you come in saying, well, you know, I am also a parent, I also want the best for our kid, or something like that, then you are starting off with things you have in common, and the other side is more likely to listen when you then give information that doesn't agree as much with beliefs they hold. So I think that is something we can use to get our message across in a way that is less likely to be disregarded.
Chris Martenson: So, if I can synthesize what I've heard, pertaining all the way from climate change to here: because of the confirmation biases and optimism biases that are built into us, it's important for our messages to be positively framed, to speak to the goal involved, to avoid fear, to avoid the images of sickness in the case of the GoFundMe page. People instinctively respond to those positive messages much more rapidly, and possibly better, than they would to a negative message, even though you're trying to get to the same place with both sets of messages.
Tali Sharot: Yeah. And just to be clear, because a lot of times people misunderstand what I mean when I say positive messages: it doesn't mean saying everything's fine or everything's going to be okay. Not at all. It doesn't mean sugarcoating anything. It simply means saying what needs to be done to get better, rather than saying if we don't do this, things will get worse.
So, for example, instead of telling someone, "If you take route A you will lose time and money," you would say, "If you take route B you will gain time and money." Highlight to people what has to be done to get to the good place you want to get to, rather than focusing on the disaster of, "Oh, this is the bad place we're in."
Or take a kid. Instead of telling the kid, "If you smoke you'll get cancer," and they think, "Well, cancer, that's way in the future, and I'm probably less likely to get it anyway," you might say, "Well, if you don't smoke, you're more likely to get on the basketball team."
So highlight how we get to the goals we want, which is actually our intention. When we try to scare people, we're doing it because we want them to get to the good place. But for some reason, we're not highlighting what this good place is.
Chris Martenson: Fascinating. That raises a bunch of questions; however, that is all our time for today. So Dr. Tali Sharot, thank you so much for your time and your fascinating insights. Please, if you would, tell people how they can follow your work and find your books if they want to dig deeper.
Tali Sharot: Sure. The books are on Amazon or anywhere else books are sold: The Influential Mind and The Optimism Bias. And our website has both scientific articles and articles for a general audience; it's affectivebrain.com.
Chris Martenson: Affective with an A, dot com. Thanks again. Thank you so much for your time today. And for everybody else listening, this brings us to the end of this Featured Voices podcast. Thank you all for listening.
Tali Sharot: Thanks for having me.