JOSH: Hello, and welcome to Clearer Thinking with Spencer Greenberg, the podcast about ideas that matter. I'm Josh Castle, the producer of the podcast, and I'm so glad you've joined us today. In this episode, Spencer speaks with Cate Hall about human nature, political polarization, mania and psychosis.

SPENCER: Cate, welcome.

CATE: Hey, Spencer, thanks so much for having me. I'm excited about this.

SPENCER: Me, too. The first topic I want to talk to you about is one that touches on something that's been in the news a lot, which is misinformation. And it also relates to what rationalists like to think about a lot, which is why do people come to false beliefs? How do we come to truer beliefs? But I think you have a really interesting take on what's going on with so many false beliefs spreading in society. Do you want to jump in there?

CATE: Yeah, that sounds great. Basically, my theory of a lot of phenomena can be pretty neatly summed up as: I think that people will adopt whatever beliefs allow them to be the hero of their own story. That narrative dictates people's beliefs, which dictate their versions of the facts — and not the other way around, which is the model of mind I typically see assumed.

SPENCER: So you're saying that normally people think of it as…someone has a bunch of facts, and they're going to derive their theory of the way the world works from those facts or from that information. And you're saying that they're starting with, "Well, I must be the hero," and then emanating from that is what they believe is true about the world?

CATE: Yeah, exactly. People have this sense of identity that is dictated by the narratives about their lives that are available to them, that make them the protagonist in an important story. And that version of identity dictates people's group affiliations, which in turn dictates their beliefs about the world, which in turn dictates the version of facts that they accept. And so it can be kind of mystifying to try to understand why people have these wildly incorrect perceptions about facts and seem not to care. But I think that that's really based on a mistaken model about the role that facts serve in most people's lives.

SPENCER: So would you be talking about things like QAnon, for example?

CATE: Yeah, I think that QAnon is a great example of it. I do think that this is a phenomenon you see on all sides of the political spectrum; it's a fundamental truth about how people engage in politics and why. It just so happens that people on the left tend to derive their personal sense of heroism from values that more closely align with truth — supporting science, having a sophisticated understanding of the world. But I think that the same factors can lead to mistaken opinions and factual beliefs about the world on both sides.

SPENCER: So one thing I'm trying to understand about this is the connection between identity and being the hero of the story. Could you elaborate on that? How does being the hero of the story connect to what group we end up identifying with?

CATE: I think that people have a fundamental need to believe that they matter and their choices matter. And I think that that is an inescapable fact about the human condition. I think that the identity people build for themselves has a lot to do with what narratives are available to them that preserve that role in their own lives. To give an example, if you are a poor White person living in the Midwest who is having trouble finding work, and who feels constantly condescended to and attacked by people on the left, the identity that allows you to maintain your sense of dignity is one where leftists are irredeemable elitists — one where they have basic facts about the world wrong. Taking the example of QAnon, I think what happens there is that, for people who have a general absence of meaning in their lives, one of the last refuges for maintaining this self-story about heroism is to think, "Well, at least I figured something out that other people haven't. I've grasped some truth, and that's what makes me special in my own eyes."

SPENCER: So generalizing this idea a little bit, there's this idea I've been working on; I call it anchor beliefs. An anchor belief, the way I define it, is something that you're totally unwilling to stop believing, or where you implicitly know that the cost of not believing it is too high to be willing to stop. And therefore, when you have one of these beliefs, you must warp all the evidence, data, and information you get to keep that belief being true. You can't give up the belief by definition, so when you get evidence against it, you have to find some other way to interpret that evidence, and this can create a lot of strange ripples in people's belief systems. I'm wondering, is the claim essentially that we have this anchor belief that we are sort of the hero of the story?

CATE: Yeah, I think that that resonates a lot with the way that I think about this. There are more specific beliefs that become so central to one's overall narrative about the world, that it is really difficult to even question those beliefs without shaking the foundations of one's overall worldview in a way that most people are not willing to.

SPENCER: Yeah, so some beliefs like this (I think are pretty common) are religious beliefs, right? Like, if you're born into a certain religion, you probably grow up thinking of it as something you're not willing to let go of. Of course, some people do convert or switch religions, but for a lot of people, that belief will just persist for the rest of their life. Another example, I think, is that the thing that we spend our life devoted to is good. It's kind of related to heroism, to some extent, but I don't think it has to go as far as heroism. Let's say, someone works in the finance industry their whole life; it might be an anchor belief that finance is not evil (or something like this), that they didn't devote their life to something that actually harmed people.

CATE: Yeah, both sound like great examples to me. I should clarify that maybe I use the term 'hero' a little bit less strongly than it might sound. It's more like being the protagonist, being someone whose choices matter to the overall story, and who is fundamentally trying to do the right thing. But it doesn't imply an exalted sense of self.

SPENCER: Got it. So how do you think about, for example, depression in this context, where it seems like depressed people can believe that they're sort of fundamentally worthless, or believe they're bad, even though they're actually good people?

CATE: What I want to say, I think, is that depression potentially arises from the inability to find compelling narratives about oneself in this context.

SPENCER: So is the idea that a lack of these narratives could manifest as being depressed?

CATE: Yeah, I think that's right. I view this urge to find good narratives about oneself as an alternative to feelings of despair and lack of meaning. I think that there is, in general, a real crisis of meaning in America. Even prior to COVID, life expectancy in the US declined for three straight years, which is completely insane. And most of that was driven by overdose deaths, deaths from suicide, and alcohol-related disease. I don't think it's entirely coincidental that, at the same time as life is getting really bleak for people, we see a rise in people's willingness to adhere to systems of belief that seem sort of crazy from the outside, because those systems do offer a narrative about why one actually matters, and they emphasize the dignity of people who feel like they're being left behind.

SPENCER: One thing I find interesting about the narratives right now is that everyone seems to think everyone else is crazy. The right often has videos or articles about how the left has gone crazy — like with critical race theory, or with various other viewpoints and philosophies on the left — whereas people on the left think people are crazy for voting for Trump. They think people are crazy for supporting policies that they believe are against people's own interests, and so on. I wonder if that is part of what you're describing?

CATE: Yeah, I think definitely, that's part of what I'm describing. As I said, I see this as a phenomenon that happens on both sides of the political spectrum; it just feels more apparent to us, coming at it from one particular viewpoint. I think that the general social response to the dynamic of polarization tends to be a self-reinforcing cycle in a way that's really dangerous. Because, as you were saying, you have to have these anchor beliefs. People on the right need to believe that people on the left improperly view everything through the lens of race, and see as evil people who aren't. And you need to interpret everything you see happening through that lens in order to maintain self-consistency. And the version that people have in their heads of people on the other side of the political spectrum becomes more and more extreme as a result.

SPENCER: I listen to some podcasts where people will interview people that they strongly disagree with. For example, they'll bring on people that believe in Flat Earth Theory and interview them, or bring on people that are part of cults and interview them. And sometimes I find these interviews really interesting, and sometimes I find them really cringey. And I was trying to figure out what the difference was. Why do I sometimes find them really cringey? And I think it's because sometimes the interviewer is not adopting the perspective of the person they're interviewing. So they're kind of imposing their worldview during the interview, and they're contradicting the person in a way that is for their audience, but doesn't make sense from the internal narrative of the person being interviewed. Whereas when they're talking to the person from the internal perspective of the person being interviewed, it feels to me like just a much better conversation and much more productive. Just as an example, let's say someone came from a cult; if you're gonna say things like, "Well, clearly, this can't be true," about something they believe, that's going to be extremely unproductive. Whereas if you say, "Oh, could you explain to me, how did you come to believe this thing?" that seems to be much more productive, because you're kind of operating within their worldview in the conversation.

CATE: Yeah, I think that that's a really important dynamic for having productive conversations. I'm reminded of a conversation that I had recently with a friend who has become sort of a COVID skeptic/vaccine skeptic, doubts the seriousness of the virus, and a variety of other beliefs that are pretty foreign to me. And we had a good conversation about that, that resulted in him being more willing to engage with evidence that refutes his worldview, I think, because I was willing to meet him halfway and really listen to what he had to say, listen to his sources of evidence, and treat it as a conversation rather than a lecture. And I think that that's a really important dynamic that has become really de-emphasized in political discourse, where interactions with people on the other side of the spectrum are primarily derisive, mocking, intended to bring shame on to other people. I think that that activates the psychological immune system in a way that makes coming to better understanding pretty impossible.

SPENCER: If you're right that people have an anchor belief that they're the hero of the story, or at least on the good team, then it implies something about when you're having conversations with people and your goal is actually to change their mind — not just to feel like you won, but to actually change their mind — that you should be operating in that conversation as though they believe they're the hero of the story. I know that you're an expert in poker and so it reminds me of that a little bit. You don't need to model just what cards you have; you have to also model the mind of the other person that you're playing against.

CATE: Yeah, I think that that's exactly right. And I think that this is a mistake that I see in most political discourse. People want to try to persuade other people — or just mock other people — on the basis of facts. And I don't think that that really ever succeeds in convincing people of your argument. I think, in order to be successful at changing somebody's mind, you have to offer them an alternative narrative, where they see how your beliefs about the world and your understanding of facts about the world are actually in line with their story about themselves.

SPENCER: Reminds me of people who have worked to try to rephrase values from one side in terms of values of the other, like talking about how you would pitch environmentalism to a conservative, or how you would pitch family values to a liberal. And then you could start to see that there are ways that environmentalism could play to conservative values like, "Well, the world has been this way for a really long time. Do we really want to mess with it? Isn't that not a very conservative thing to do, to destroy the environment we've always lived in?" So it's kind of interesting in terms of reframing things in different value sets.

CATE: Yeah, that makes total sense to me. I suspect that that is true of a lot of politicized beliefs, that people could find reasons to feel otherwise, if they had different stories, and that (for most things) you can find a story that will be conducive to somebody aligning with your values, but also preserving their sense of self.

SPENCER: So we see all of this polarization occurring in societies today. There are these interesting charts that show, in the US, how much more divided the left and right have become over the years. And I think it might be at its peak now (I'm actually not sure), but it has at least reached a peak in the last five years or so. I'm wondering, how do we take these ideas and make progress on combating false narratives, or on reducing polarization broadly?

CATE: It's an interesting question, and one that I confess I don't have a good answer to. I think the first step is developing a better understanding of the psychological factors that motivate people to hold certain beliefs: why they hold those beliefs, how they come to hold them. And despite there being a lot of research devoted to describing the problem of polarization, I find that there is a really surprising lack of interest in understanding why it happens, why people come to believe in conspiracy theories. And I think it would be really useful to devote more resources to that understanding, because I think that the solutions have to come from that space.

SPENCER: One thing I'll point to is, it seems to me that people trust institutions of power a lot less now than they used to in the past. And maybe that helps these kinds of strange narratives breed, because if you have the sense that the institutions of power are not trustworthy or are lying to you, you're going to look for alternative narratives, and then those might end up being what just some random person on the internet posts on a message board or something like that. So if that's true, it suggests that one intervention is around the institutions of power. How do we make them more credible? How do we make them actually be trustworthy, to lie to people less, to get the right answer more often? And if we were able to restore more of their credibility, maybe that would actually pull down on some of these false narratives and let them spread less quickly and less widely.

CATE: That makes sense to me. I definitely agree with you that part of the problem is distrust of political institutions. And I think that the institutions themselves are responsible for that to a large extent. I remember the conspiracy theory that arose alongside other theories about COVID's origins — that COVID had basically been created by people working for Anthony Fauci — which was treated as some totally insane right-wing conspiracy theory. The story that has emerged in recent months is that it is relatively likely that COVID either came from a lab or came from field work aimed at identifying and sampling coronaviruses in the wild. And it turns out that both types of work — collection, and manipulation of related viruses in the lab — were sponsored by the NIH through organizations working with labs in Wuhan. It is unclear exactly what Fauci knew about the work being done, but I think it is fair to say that he knew of gain-of-function work involving coronaviruses happening in that region of China, and that the NIH sponsored a lot of that work. That connection was downplayed and waved off by Anthony Fauci in congressional testimony and many other forums for a long time, before it slowly emerged that the NIH really had a role in some of the research that may have contributed to the release of COVID into human populations.

SPENCER: It didn't help that, early on, the media on the left basically referred to this as a nutty conspiracy theory. (It isn't a good look right now.) Now personally, I haven't followed the lab leak theory very closely. I think early on, I assigned like a 20% chance to a lab leak; now I'm more like 50/50 but I don't have all the newest information. How confident are you that the lab leak theory's gonna turn out to be true?

CATE: I think I am higher than 50%, probably less than 75%. There are a couple of different scenarios. I think that it could be the case that this actually escaped from a lab where research was being done on the virus directly. Or it could be the case that the virus made its way into human populations through the sampling efforts of the EcoHealth Alliance. I would sort of put those two things — even though the mechanisms are different — into the same bucket of, it happened through human intervention. And I think that one of those explanations being true is pretty likely. I guess I would say, somewhere north of 75%.

SPENCER: If you combine them rather than separate them, you mean?

CATE: Correct, yeah. And the fact that it is not higher than that, I think, mostly comes from the fact that there are respected epidemiologists on the other side of the issue, and the lack of consensus in the scientific community.

SPENCER: So just epistemic humbleness around it?

CATE: Yeah, exactly. I think that all of the evidence I've seen seems to indicate a human-intervention type of origin. And the inability to find an intermediary species that would have allowed the virus to cross over into humans — with every month that goes by without identifying one — makes the lab or human-intervention hypothesis grow stronger.

[promo]

SPENCER: One thing I think about is that, if people want to lie successfully, it's much easier to do that by saying something that's 98% true than by saying something that is just totally made up. And you mentioned how the seeds of the lab leak hypothesis — which now looks much more likely than people initially claimed — got kind of wrapped up in conspiracy theories, right? I think another interesting example of this is molestation of children, because you have a case like Jeffrey Epstein, where clearly there was molestation going on on an incredibly large scale. And it's clearly different from what QAnon claims, right? QAnon claims that there's this pedophile ring where they take the (I don't know) adrenochrome from babies or something and use it as a drug, which is 100% false. But you can see how something like Jeffrey Epstein existing — it's not completely different, and it can back the view that there really are nefarious powers molesting children — kind of lends support to this idea, even though QAnon itself is totally full of shit.

CATE: Yeah, I think that's exactly right. I think that conspiracy theories are most likely to take hold when there is some form of evidence that people can use as a hook to sort of hang their beliefs on, even if the beliefs that are promulgated from that are not themselves supported by evidence. There's an overall story that they are able to tell, that is consistent with select evidence that they see in the world. That overall story implies a bunch of other beliefs that are not supported.

SPENCER: I think one of my biggest beefs with the way a lot of institutions of power have behaved in the last five or ten years is that they want to tell us what's true in a way where they say, "Ah, this is the truth. If you don't believe this, you're stupid or crazy." But then it turns out they didn't really have enough evidence to believe that thing. Sometimes they might be right, but sometimes they're just wrong. And sometimes it even turns out that they had evidence against the thing being true, or that the way they communicated the truth itself contained falsehoods. I think we've seen a bunch of examples of this with COVID in particular; there was a big one around masks. Some large organizations were telling people not to buy masks because they don't work, which we now know is probably not even what they believed at the time. Really, what they were probably trying to say was something like, "Don't buy them because we want healthcare workers to have them." I just think that has done so much destruction to their credibility, because people notice when an organization lies to them, and then they don't believe it anymore. And once they stop believing it, okay, where can they get information from? From their social network, from websites, whatever.

CATE: Yeah, I think that certain decisions by public health officials during the pandemic have done a tremendous amount to erode people's faith in institutions as sources of truth. As you say, you can point to a number of examples where there was some story being offered by public health officials that they either knew was false, or had sufficient evidence to think might not be true. And that was justified pretty explicitly on the basis of, "Well, if people believe this, it will have X effects that we want to encourage," and I think that that's really short-sighted thinking. And it's probably going to do lasting damage to people's faith in science generally in this country. And you see now a push by the right not only to downplay COVID, resist vaccines, but to repeal requirements of other types of vaccines, which is just potentially a huge step backward for the country and humanity. And I think that, if public trust in institutions were not where it is currently, that would not be happening.

SPENCER: It seems to me that part of the problem is one of attitude. I mean, there's literally a clip of Anthony Fauci on TV saying that he lied to us — not in an "I'm so sorry" apology kind of way; he basically just says he didn't tell the truth about the percentage of people that would need to be immune to COVID in order to have herd immunity. And he basically says that the reason he didn't tell the truth about this is that he thought people weren't ready to hear it. So he just kept upping the number as he felt people were more ready to hear the truth. And it's like, "What? That is the attitude you have? That your job is to lie in order to trick people into believing things that you think are beneficial for them?" I just think it's such a toxic attitude for people in power to have, and it clearly isn't working; it's completely backfiring. And yeah, I think we're gonna be seeing the repercussions of this kind of attitude for a long time.

CATE: Yeah, I think one of the prime examples I see coming out of the pandemic is the herd immunity issue — the representations around herd immunity — that you were just discussing. And there's a sense in which I am sympathetic to somebody like Anthony Fauci, who was in a very difficult position for the first half of the pandemic, working with an administration that was really digging in its heels, opposed to public safety measures. And I can see where the temptation comes from, to manipulate truth to serve particular ends. And those ends seem very important because of the scope of illness and suffering the pandemic has unleashed. But I think that this is a really good illustration of why that strategy doesn't work, why it backfires and causes more harm in the long term than simply being honest about uncomfortable truths.

SPENCER: So my understanding is that you think politics is more important than a lot of people in or adjacent to the Effective Altruism and rationalist communities think it is. Do you want to talk about that — how you think about politics?

CATE: Yeah, I do think it's true that I consider politics a lot more important than the average person in the EA community, or maybe in your audience, does. I sort of have this fear of being 100 years in the future, looking back on the smoldering ruins of civilization, and thinking that, in retrospect, it was so obvious this was likely to happen, and the best minds of our generation dismissed politics — which drives outcomes in the real world — as just a sideshow that didn't need to be addressed.

SPENCER: Do you think that people write it off just because of sort of the ickiness of it, where it's like, "Well, so much of politics is just marketing and people saying what they need to say to get elected." A lot of the people voting haven't really researched the topics and don't really understand what they're voting on that well, and they're being manipulated and so on?

CATE: Yeah, I think that there's a sense both that the problems are intractable, and that they're not neglected overall. With respect to neglectedness, I generally agree, but I think that there are (as I've said) subtopics within politics that are not being addressed anywhere near adequately, and that's a real blind spot for us.

SPENCER: Well, you mentioned neglectedness and intractability but I was actually referring to something slightly different, which is more just, it seems like a very non-rationalist undertaking, right? So little about politics is about making the best argument.

CATE: Yeah, I think that that's true.

SPENCER: It's like an aesthetic aversion, I mean.

CATE: Yeah, I think it's an aesthetic aversion. It's also sort of a fundamental lack of understanding, I think, of the role that politics plays in people's lives, and why people believe what they do. Within the rationalist community, it's very easy to build up a view of human nature where facts dictate beliefs, which dictate ideas about how the world should be run. And it can be really confusing, if you're coming from that perspective, to try to figure out why people behave the way that they do. So it just seems a little bit chaotic and very emotionally driven and icky, as you say.

SPENCER: So what would you like to see people bringing a kind of effectiveness mindset to? What sort of sub-areas of politics? Or how would you want to see them behaving?

CATE: I think that I would like to see more resources devoted to understanding the mechanisms of polarization. I think that this is especially important because of emerging technologies that are likely to make it much easier to manipulate people effectively on a mass scale. I think that there's ample evidence that the world is not prepared for that, because even haphazard, non-directed manipulation has had incredibly destabilizing effects on our society. And it can get much worse than that.

SPENCER: One thing I've been thinking about lately is that we really just don't have an algorithm for truth. All of these platforms that want to block false information and that kind of thing — what they end up being forced to do is fall back on some kind of authority figure, like, "Oh, well, this disagrees with the CDC, so we're gonna block it, or we're gonna put a note next to it saying it's been disputed" or whatever, because they don't have a way of algorithmically deciding, "Oh, this is likely to be correct." Whereas it's a lot easier to algorithmically get the sentiment of something, to say, "Oh, this has a negative sentiment" or something like that. But we don't have that equivalent for truth.

CATE: I think, though, that I would add that, to a large extent, platforms' efforts to indicate what is truthful by relying on authorities have been a bit of a disaster. I think that the deplatforming of people for theories about COVID that turn out to (very likely) have been true, is a sign of how that kind of moderation can go poorly, and how it can end up corroding the very values that it's intended to support.

SPENCER: So I downloaded the Parler app out of curiosity, because a lot of people on the right moved to that app after they felt Twitter had turned against the right. So I was like, "Okay, I want to see what's going on here. This is interesting sociologically." The very first ad that I saw on Parler (just in the stream) was for this thing called the Unmask. Can you guess what the Unmask is?

CATE: Oh, I have no idea. I'm super curious.

SPENCER: So it's supposed to look exactly like a mask but has no functionality. It provides you and others with no protection.

CATE: Wow. [laugh]

SPENCER: So the purpose is, I think, to own the Libs: if you're forced to wear a mask somewhere, you can just pretend to wear one while it provides no protection. I get why, if people are spreading misinformation, people want to say, "Oh, ban it," right? But because we don't have an algorithm for truth, we end up falling back on authorities, and authorities are not that trustworthy. And then we kick people off the platform, and they create their own echo chambers where things are even more intensified; the voices actually are much more uniform, and much more likely to radicalize each other, I think. I'm not sure. Take an example where I think almost everyone can agree it's really bad, like neo-Nazism. Actually, I'm pretty conflicted on the question, because if you kick neo-Nazis off of the major platforms, they probably just talk to each other in their own channels that are just neo-Nazis. Whereas if they're on the major platforms, at least they get regularly criticized, and they're kind of mixed in with other views. And I definitely see the instinct to kick them off — and maybe that is the better thing to do — but I actually feel pretty conflicted about whether that ultimately is better for society. Where do you fall on that?

CATE: It's something that I don't think there's a clear answer to, and I wish there were, because it seems incredibly important. By default, and for a long time, I was a proponent of…sort of a free speech absolutist. I actually wrote my law school thesis on how to condition Section 230 on the provision of certain information that would allow sites to publish everything freely while also providing some protection against misinformation. And I was very much in the camp of "sunlight is the best disinfectant." I think that belief has really been challenged for me in the last few years, because it does seem like there are really insidious elements in society that have flourished under the conditions of the internet, where fringe theories can find a community and build on one another. And that wasn't really possible before we had decentralized technology the way that we do now. So I honestly don't know. I think that there is an obvious danger in allowing anything goes, but there are also obvious dangers in not allowing that. And one thing that I always think about in these contexts is that, right now, the gestalt (I guess) of social media favors liberal political beliefs. There is nothing inherent about that. And I worry about any sort of movement toward restricting access to technology, because I think about…well, in the future, maybe this will be applied to beliefs that I think are really important, and I don't want to embrace principles that I think are dangerous when applied to certain groups, just because I would like to see them applied to other groups.

SPENCER: I think it is a really important thought experiment. If you think a tech platform should be able to do X, or you think the government should be able to do Y, would you feel the same way if your group is not in power? Let's say, if your outgroup is in power, the large group that you most oppose. I always try to do this thought experiment, because I think we have to make these decisions assuming that the powerful group you least want to be in power is eventually in power, right? And I really think that changes the equation. Thinking that way pushes me towards thinking things like, "Okay, there should be a certain set of things that are unacceptable to say, and will cause someone to get banned or kicked off the platform. But they should be things that both political tribes agree are unacceptable, or that most people in most political tribes agree on." Like you shouldn't be able to call for someone else to be killed; clearly, I think everyone of every stripe can agree that that's just unacceptable, right? But then, by and large, there should be a lot of leeway to say things. And it's just because I don't trust people to moderate the truth; I don't trust the institutions to say what's true. Time and time again, they've proven that they're not that reliable, so why would we want them to do it? And that's especially true if the institutions are run by groups that you inherently don't trust, or by groups that are in your outgroup.

CATE: Yeah, that seems totally right.

SPENCER: But I do want to add though that there are costs to that; there absolutely are costs. People spread misinformation and people spread harmful ideas and that is a real cost. And then I think we can talk about, "Okay, how big is the list of unacceptable things?" And that gets into some hard trade-offs.

CATE: Yeah, I think that that's right. As I was saying, I think that emerging technologies have the potential to really exacerbate the harmfulness of some uses of social media. And I think that we need to take that risk of harm really seriously as a result. Because it could be the case that social media becomes like a very good vector of misinformation and the harm from that is just overwhelming in scale. And so you'd be more inclined to accept some trade-offs in terms of free speech. It's an interesting thing to think about, because I feel like there should be a principled answer to this — there should be a principle that I apply and get an answer from — and I just don't see that. I see really conflicting concerns on both sides, and there's no firm ground on which to stand as a result.

SPENCER: Yeah, I think one interesting thing to think about with regard to politics is that there are people who are trying to weaponize persuasion. There are people who are trying their hardest to persuade you of all different sorts of things — sometimes because they just want you to vote for them, sometimes because it's in their own selfish interests (like monetarily), sometimes just because they're true believers, and they think everyone should believe — so that's a given. And then in addition to that, there are memes that are out of control, that nobody's even trying to convince anyone of, but they kind of spread through the social networks. They're so clickbait-y and so viral that people share them and so on. So it kind of happens in spite of anyone wanting it to happen. And so if we live in this world where these two things are happening, what does it look like to do, let's say, ethical persuasion on the other side, where, without engaging in unethical behavior, you're trying to point people to the good ideas, or the things that are actually going to be in people's self-interest, or the things that are actually gonna keep people grounded in truth? It seems like you can't be completely free of persuasive methods unless you're okay with just losing against the much more persuasive side. So it seems like some elements of persuasion have to be brought in. But to keep it ethical, you have to very much limit the scope of the persuasion; there are many tools that are out of bounds if you're gonna stay within an ethical realm.

CATE: Yeah, I think you're totally right that this is a serious problem for any efforts to combat the weaponization of persuasive technology: the ability to create memes that spread very effectively, and ways of presenting information that are very effective at manipulating people. It is really difficult to see a way to combat that without also engaging in those methods yourself. And that's something that I wonder about: what is the right line to draw there? The logic of 'the ends justify the means' can feel really persuasive. But as we've discussed in general, I think that that's a way of reasoning about these situations that is fraught with a lot of hazard.
