Tracking Political Manipulation Through Social Media - Samantha Bradshaw

by Y Combinator, 1/23/2019

Samantha Bradshaw is a researcher at the Computational Propaganda Project and a doctoral candidate at the Oxford Internet Institute. She’s been tracking the phenomenon of political manipulation through social media.

You can find Samantha on Twitter at @sbradshaww.


Topics

0:30 – What is a bot?

2:30 – When computational propaganda began

3:30 – Changes in bot tactics since 2016

5:30 – Using bots for content creation

7:05 – WhatsApp and the upcoming Indian election

9:00 – Trends in computational propaganda

10:30 – How bots integrate into platforms

13:00 – Responsibilities of platforms to remove fake accounts

14:30 – The role of governments in media manipulation

17:55 – Fake news and selecting news that aligns with your beliefs

19:30 – Are platforms getting better or worse?

21:10 – Samantha’s personal internet habits

22:40 – Sentiment around tracking in the UK vs the US

24:00 – The Mueller report and US midterms

28:45 – Canadian elections

29:55 – 2020 US elections

30:30 – Deepfakes

31:25 – Optimistic thoughts for the future

32:45 – How to help against computational propaganda



Subscribe

iTunes
Breaker
Google Play
Spotify
Stitcher
SoundCloud
RSS


Transcript

Craig Cannon [00:00] – Hey, how’s it going? This is Craig Cannon, and you’re listening to Y Combinator’s podcast. Today’s episode is with Samantha Bradshaw. Samantha is a researcher at the Computational Propaganda Project and a doctoral candidate at the Oxford Internet Institute. She’s been tracking the phenomenon of political manipulation through social media. You can find Samantha on Twitter @sbradshaww. All right, here we go. I want to talk about two separate things. One is the responsibilities of these platforms and the responsibilities of governments. Then the other thing is bots, because a lot of people hear the word bots but don’t necessarily know what that means. Maybe we should start there, and you can contextualize what bots are actually doing right now, and then we can talk about how these corporations are interacting with their users and with governments. How would you define a bot?

Samantha Bradshaw [00:52] – A bot is essentially just a script or a piece of code. Bots can do a whole bunch of different kinds of things. Let me start here. A bot, for example, could be a web scraper. Google’s search engine is essentially a bot: it goes out and crawls the internet in an automated way and then stores information about different pages. That’s a really good bot that we need for the internet, because if Google wasn’t able to scrape and crawl pages in an automated way, it wouldn’t be able to index them, and we wouldn’t have Google Search. Other bots, the ones being talked about a lot in the media around all these questions about Russian foreign interference in elections and manipulating the media, are bots that people design to mimic human behavior in an automated way. They might plug into Twitter or into Facebook, and then they might like, share, retweet

Samantha Bradshaw [01:57] – a whole bunch of different stories very quickly to give something a sense of popularity, like, “Ooh, lots of people are really engaging with this content.” They might follow people just to, again, sort of amplify that popularity, maybe around a particular person rather than an idea. Some of the more sophisticated bots might actually interact with real people, so they blend in chatbot technology. We’ve all been to those customer service pages where the thing pops up at the bottom and says, “Hi, can I help you today?” That’s not actually a real person, that’s another kind of bot, but it uses a lot of natural language to interact with people. That kind of technology can be plugged into some of the bots we’re seeing on social media platforms to actually respond to users in comment threads or post a little comment about a story that’s being shared.
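
A quick aside for readers who want to see what “a script or a piece of code” looks like in practice: below is a minimal sketch in Python of the benign crawler-style bot Samantha describes, which fetches pages, stores what it finds, and follows links in an automated loop. The seed URL, page limit, and storage are placeholders for illustration; this is not code from the Computational Propaganda Project.

```python
# Minimal sketch of a crawler-style bot: fetch a page, store it, follow links.
# Illustrative only -- the seed URL and limits below are placeholders.
from collections import deque
from html.parser import HTMLParser
from urllib.parse import urljoin
from urllib.request import urlopen


class LinkExtractor(HTMLParser):
    """Collects href targets from <a> tags on a page."""
    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href" and value:
                    self.links.append(value)


def crawl(seed_url, max_pages=10):
    """Breadth-first crawl from seed_url; returns a toy 'index' of url -> HTML."""
    index = {}
    queue = deque([seed_url])
    seen = {seed_url}
    while queue and len(index) < max_pages:
        url = queue.popleft()
        try:
            html = urlopen(url, timeout=5).read().decode("utf-8", "ignore")
        except Exception:
            continue  # skip pages that fail to load
        index[url] = html
        parser = LinkExtractor()
        parser.feed(html)
        for link in parser.links:
            absolute = urljoin(url, link)
            if absolute.startswith("http") and absolute not in seen:
                seen.add(absolute)
                queue.append(absolute)
    return index


if __name__ == "__main__":
    pages = crawl("https://example.com")  # placeholder seed URL
    print(f"Indexed {len(pages)} pages")
```

The manipulation bots she goes on to describe run the same kind of automated loop, but point it at liking, sharing, and following on a social platform instead of indexing pages.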

Craig Cannon [02:54] – When did it become clear that governmental agencies or non-governmental agencies were using bots to manipulate thoughts around politics?

Samantha Bradshaw [03:04] – This phenomenon really came into media and public attention during the 2016 election. From some other research that I’ve done here at the OII and on the Computational Propaganda Project, we know that these techniques have been experimented with by governments for a long time. We have evidence going back to even the early days of social media platforms of governments using these kinds of tools to shape discussions, mainly at home and in more authoritarian contexts, as another tool of social control.

Craig Cannon [03:43] – Now that you’ve been studying this for two years, so you came on in 2016? Then pre-election, you started doing this research, and now, there’s a lot more attention paid to it. How have bot tactics changed?

Samantha Bradshaw [03:59] – It’s been fascinating to study this from the inception of attention to this issue. Even when I started this research agenda, the whole focus was on Russian interference in the U.S. election, but I just had this inkling that it was a much broader phenomenon, that all kinds of these digital techniques were being used that make use of automation and algorithms, trying to game them to manipulate public opinion. In terms of how I’ve seen this change over time, we’re definitely seeing a lot of new entrants starting to experiment with this kind of technology.

Samantha Bradshaw [04:48] – In particular around elections, usually these new entrants might experiment with some of the more crude bots, so things that might just like, share, and retweet certain kinds of stories or follow a politician, so they don’t really engage with users very much.

Craig Cannon [05:06] – That’s okay.

Samantha Bradshaw [05:06] – Usually pretty easy to tell that these are bot accounts.

Craig Cannon [05:10] – Yeah, so it’s trying to give the algorithm signal. To show that something might be good.

Samantha Bradshaw [05:14] – Exactly.

Craig Cannon [05:14] – A person or a piece of content.

Samantha Bradshaw [05:15] – Exactly, exactly. But they’re not very sophisticated with them yet, so we’re seeing a lot of that evolve over time. We’re seeing a lot more gaming of the algorithms, as well, and more of these kind of sophisticated techniques, so not just liking and sharing, but using specific keywords to get content trending to try to get things at the top of Google’s search algorithm, trying to get things at the top of YouTube to be recommended next, and trying to get some more of this organic reach and what the search engine optimizers have been doing for decades.

Samantha Bradshaw [05:55] – We’re just seeing these tools now being applied to politics.

Craig Cannon [05:59] – Is it reaching into full-length content creation at this point?

Samantha Bradshaw [06:03] – We’re definitely seeing content creation, especially in more of the sophisticated operations. A lot of the stuff that’s come out about Russia’s actions in the U.S. has shown a lot of content creation. It seems to be throwing stuff at a wall and seeing what sticks, but they’re putting a lot of resources into finding out exactly what the buttons are to push in society, and then creating content around those issues.

Craig Cannon [06:31] – Is that what’s happening with WhatsApp, or is it individuals who are just creating content and then posting it to groups?

Samantha Bradshaw [06:39] – It’s a little bit of both. On the one side, you do definitely have organized state actors who are involved in creating messages, figuring out what sticks, and then spreading them. In other cases, it’s just individuals who might have a particular political ideology.

Samantha Bradshaw [06:58] – They just want to get messages to spread because that’s what they believe in. Other times, there’s a whole economic incentive behind getting content to go viral, which is kind of what I talked about at the beginning. If you can get people to look at your content on a website, you can generate advertising revenue; the more people that click through, the more you make.

Craig Cannon [07:20] – That’s kind of the more pure form of using bots. Can you contextualize how WhatsApp is being manipulated? Because this is a very non-U.S. trend, specifically with the Indian election coming up.

Samantha Bradshaw [07:34] – This is another interesting point about disinformation: it’s different everywhere you go, because the people who want to manipulate public opinion are going to go where the people are, to the platforms that people are using. In the U.S., we tend to see a lot of these campaigns on Twitter and Facebook because those are the platforms that the majority of people use. In contexts like India, more people are on WhatsApp, and they use that more than other platforms. That’s where we’re seeing a lot of these campaigns. In terms of how WhatsApp is actually being used, there’s still not very much known about it from an academic standpoint. Just because of the nature of the platform, it’s really hard to study; it’s essentially a closed platform. You can’t actively monitor a lot of the communications

Samantha Bradshaw [08:23] – that happen on WhatsApp. There are certainly public groups that you can join and follow, but that doesn’t give you a very good look at what’s happening on the platform, and sometimes it can be hard to find the groups that you need to join in order to study them. We looked at WhatsApp in Brazil, for example. In that study, we joined a bunch of different groups and looked at the kinds of images and the kinds of conversations that were being shared. There were a lot of memes being used to push disinformation to the people in those groups. In that way, it’s kind of similar to Facebook and Twitter.

Craig Cannon [09:06] – It is kind of like the Reddit Pepe sort of stuff we saw in 2016.

Samantha Bradshaw [09:10] – Exactly.

Craig Cannon [09:10] – All right.

Samantha Bradshaw [09:12] – A lot of mobilizing images to get people riled up and getting them to feel certain kinds of emotions.

Craig Cannon [09:19] – What are other creative forms that you’ve seen? Because in your recap of what you saw in 2017, you said you found this being used in something like 48 countries and in 10 languages or something like that. How else is it being done?

Samantha Bradshaw [09:36] – We’re definitely seeing a lot more memes, a lot more bots on platforms, search engine optimization tactics. We’re also seeing more videos on YouTube and more pictures on Instagram, a lot of platforms that haven’t necessarily been focused on in the media but that are very powerful because of the way they deliver content. Images and video can have a much more powerful effect on our psyche and how we, I guess, digest information. What we see tends to stick with us longer than what we read. If someone sees a piece of fake news, they’ll remember it more actively than if they had read it.

Samantha Bradshaw [10:24] – We’re seeing a lot more disinformation on these kinds of platforms, and it’s not very well-studied yet, but certainly needs to be.

Craig Cannon [10:32] – Gotcha. In terms of how bots actually work, could you break down how they actually get into these platforms? Many people are curious: you hear about bots, but then you think, “Well, when I sign up for Facebook, I have to give all this personal information.” How are these systems engineered to infiltrate platforms at such scale, and further, do they even need that degree of scale to be effective?

Samantha Bradshaw [10:59] – Right. When we’re talking about the platforms and how bots integrate, there’s quite a big difference between even Facebook and Twitter. Starting with Twitter, for example, it’s relatively easy to plug into the API to scrape information. To create an account on Twitter, you also don’t need a real name. You can use any kind of fake identity to create an account, which is why Twitter tends to have a lot more of these fake accounts that use some kind of automation. There are also a bunch of tools that allow you to automate your activity on Twitter, things like Hootsuite and whatnot. You can set timers on tweets and things like that.

Craig Cannon [11:47] – That was a signal you guys used, right? It was like over 50 posts a day, therefore a bot, or more likely to be a bot?

Samantha Bradshaw [11:53] – Exactly. That was when we first started looking at this phenomenon back in 2015. We just set a very crude measure of, you know, over 50 tweets a day, you’re probably an automated account because most people don’t write that many tweets during a day. Maybe if they’re at a conference–

Craig Cannon [12:11] – Teenagers.

Samantha Bradshaw [12:12] – Or have nothing else to do, but that’s still quite a large number of tweets. So that’s Twitter, whereas Facebook, you still need to have a real name to create an account. Sometimes, Facebook actually verifies your identity. I know this because I actually set up a fake Facebook account once, and this is a while ago, and I just wanted to log into it recently, and I tried, but they wanted me to send them a piece of ID, like a passport–

Craig Cannon [12:38] – Oh.

Samantha Bradshaw [12:40] – or driver’s license, things like that, to verify that this account was real. I was like, “Oh, I don’t have anything with a fake name on it, so whatever, I’ll just leave my fake account alone.

Craig Cannon [12:50] – That’s so funny.

Samantha Bradshaw [12:50] – It can die.” But that also means that because it’s so hard to create fake accounts on Facebook, the accounts that are fake might actually be a little bit more powerful, because people don’t expect there to be as many fake people on Facebook as they do on Twitter. They might actually have more of an impact in the circles and communities that they’ve managed to infiltrate.
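
For readers curious how the crude “50 tweets a day” signal mentioned above could be applied in practice, here is a minimal sketch in Python that flags accounts whose posting rate exceeds a daily threshold. The field names, timestamp format, and toy data are assumptions for illustration, not the project’s actual detection pipeline.

```python
# Minimal sketch of a frequency heuristic for spotting likely automated accounts.
# Assumes each tweet record has a "user" and an ISO-8601 "created_at" field.
from collections import Counter
from datetime import datetime

TWEETS_PER_DAY_THRESHOLD = 50  # the rough cut-off discussed in the interview


def flag_high_frequency_accounts(tweets, threshold=TWEETS_PER_DAY_THRESHOLD):
    """Return the set of users who exceed `threshold` posts on any single day."""
    per_user_day = Counter()
    for tweet in tweets:
        day = datetime.fromisoformat(tweet["created_at"]).date()
        per_user_day[(tweet["user"], day)] += 1
    return {user for (user, _day), count in per_user_day.items() if count > threshold}


# Toy usage: acct_a posts 60 times in one day and gets flagged; acct_b posts once.
sample = [
    {"user": "acct_a", "created_at": f"2018-11-06T09:{m:02d}:00"} for m in range(60)
] + [
    {"user": "acct_b", "created_at": "2018-11-06T10:00:00"},
]
print(flag_high_frequency_accounts(sample))  # {'acct_a'}
```

As she says, a cut-off like this is deliberately crude: it catches obvious amplifiers while missing more sophisticated accounts that post at human-like rates.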

Craig Cannon [13:18] – When it comes to the responsibility of a platform to get rid of fake accounts, where do you think it lies? Should they do anything, or I think the consensus is they should, but what do you think?

Samantha Bradshaw [13:33] – I definitely think they should. For a long time there hasn’t been an incentive for them to, because the more active accounts there are on these platforms, the more they’re valued on the market, right? All of a sudden, there’s this huge user base of people that these platforms can advertise to and sell that advertising space against. But as soon as you start saying, “Oh, millions of these people are not actually real,” they become devalued, right? For a long time, there wasn’t this incentive to actually go through and delete all the fake accounts.

Craig Cannon [14:16] – Basically, the public market thinks more users equals more money, therefore, good, keep going, but then on the advertiser’s side, if those views are from bots, you’re also not getting what you want.

Samantha Bradshaw [14:30] – Exactly. This is where it has sort of flipped now because advertisers are starting to realize, actually, these aren’t real people, so why am I paying this much money to advertise to a piece of code that’s not going to buy my product?

Craig Cannon [14:46] – Now that these Senate hearings are happening, and they’re happening all over the world, you’re starting to see governments get involved. I know it happened in Germany, around a certain degree of censorship. What do you think is the right course of action?

Samantha Bradshaw [15:04] – This is a really complicated problem that has so many dimensions to it. It’s not just fake news,

Craig Cannon [15:13] – Sure.

Samantha Bradshaw [15:14] – or that kind of disinformation. We talked about the accounts and how those are problematic. We talked about foreign interference and these really coordinated campaigns against governments. There are so many different issues that are kind of connected in this media manipulation, social media manipulation bucket. When governments go after the content of what’s being shared, that’s a mistake. I don’t think that’s getting to the underlying problem that’s fueling the fake content or the disinformation to spread or to go viral in the first place. Take NetzDG, for example. When it was first introduced in Germany in, I think, early 2017, someone from the AfD had posted some horribly racist comment online, and of course, it got taken down immediately, because NetzDG essentially says Facebook or any platform has to remove content that breaks German law within 24 hours or else face, like, a 50 million euro fine. Because this broke the hate speech law, Facebook removed it immediately. They also removed all of the content that was created around that tweet, all of the people calling this person out for making such a racist comment, all of the criticism of it. All of a sudden, you start to lose the vibrancy of this online political sphere.

Samantha Bradshaw [16:51] – I don’t think going after the content is necessarily a good idea. We’re also already seeing authoritarian governments adopt this law into their own legal systems to silence dissent and to go after journalists who are publishing so-called fake news. It has a lot of unintended consequences, a lot of negative consequences, things like collateral censorship. In terms of what could be done, enforcing more transparency around the platforms and their operations and their algorithms is really important. Right now, all of these platforms are just black boxes, and we don’t understand anything about how these algorithms work and how they’re tailored to deliver certain kinds of content. Complete transparency is also not great, because if you perfectly understand what goes viral on Google, everyone will then know how to break it, right? But at least understanding the intentions of the designers, tracing those kinds of processes and those meetings, and having those kinds of principles be more out in the public would be one way of starting to get insight into what’s happening with these algorithms, in these more closed black boxes.

Craig Cannon [18:17] – What about a world where, say, we could get rid of all fake news? How do we avoid the problem of only wanting to see what we agree with?

Samantha Bradshaw [18:26] – This is also part of the problem, because fake news is not just a digital problem, it’s one of human nature. There’s a lot of research out there in academia that shows people do select news and information that adheres to their own beliefs, right? It’s the selection effect. That’s where things like education come in. Education is often used as the–

Samantha Bradshaw [18:54] – go-to solution to the problem, but it does have a really important role here in teaching us how to be good citizens and reminding us why democracy is important, and why it’s important to look at different sources and to be able to negotiate that public consensus with one another. The system’s not perfect, but we’ve been focusing on how much it’s been broken for so long that the positives have sort of been lost in a lot of the conversations.

Craig Cannon [19:27] – We were talking about this before, but the lack of optimism in online communication misses the real optimism that exists in day-to-day life. There’s not really been a place for that yet, or at least it doesn’t do that well unless it’s incredibly cheesy, kind of like inspirational life hacks, that sort of thing. In terms of these platforms in the long run, based on your research, do you think things are getting better, or do you think they just combust with enough bots involved?

Samantha Bradshaw [20:04] – We’re already starting to see combustion happen,

Craig Cannon [20:05] – Right.

Samantha Bradshaw [20:08] – especially with Facebook. A lot of my friends this year are starting to get off these platforms and to de-platform, which might open up the market to new kinds of models that aren’t based on advertising and aren’t completely monetized by digital advertising. Maybe we’ll start to see different kinds of business models pop up and maybe create a little bit more space in the market. Right now, it’s just so consolidated, too, and–

Craig Cannon [20:40] – Is that the breaking up of Facebook, WhatsApp, Instagram, is that a topic of conversation here at the OII, as well?

Samantha Bradshaw [20:51] – It’s something that I think we all tangentially are thinking about, and what that would look like and how that would actually be done. I’m not an economist or an expert on any of this, but I do think the consolidation of these platforms and the fact that there’s no space for competition has kind of locked us into these systems, which has then made the problem so much worse.

Craig Cannon [21:20] – Right.

Samantha Bradshaw [21:20] – Because now, all of a sudden, it’s everyone that is being affected, as opposed to, you know, just a smaller–

Craig Cannon [21:26] – You have to be 100% in or 100% out, which is tricky.

Samantha Bradshaw [21:30] – Exactly.

Craig Cannon [21:30] – What have you been doing in terms of your personal internet habits? You haven’t checked out completely, obviously, I saw you have a Twitter account. What do you do?

Samantha Bradshaw [21:43] – I keep using the excuse that, well, I study this, so I need to be on it to see, you know, what’s happening. I might pick up on some change that I wouldn’t have been able to actually see or understand if I wasn’t on the platform. I’m just a little bit more conscious around the kinds of information that I’m putting out on these platforms. I’m trying to be a little bit more conscious around, like, apps on my mobile phone and making sure that I’m restricting access to certain kinds of things. Just practicing a lot more digital privacy and better habits–

Craig Cannon [22:22] – Right.

Samantha Bradshaw [22:23] – around that.

Craig Cannon [22:23] – Okay, anything weird and ultra-fringe, or just kind of basic good practice?

Samantha Bradshaw [22:29] – I’m pretty basic —

Craig Cannon [22:32] – All right, yeah.

Samantha Bradshaw [22:32] – When it comes to this stuff, yeah. I haven’t totally got the tin foil hat on and disconnected everything in my life because I’m worried about these issues, not to that extent. But I do think these issues around privacy and data collection, how that relates to disinformation, and even targeting messages based on the data about me, those are huge. That’s another huge bucket of problems.

Craig Cannon [23:02] – Do you feel it’s the same as it is in the States? Because the UK’s had CCTV everywhere for quite a while now. Are people more comfortable with that tracking, or less comfortable, or the same?

Samantha Bradshaw [23:18] – Probably the same. A lot of my British friends are still outraged when they find out that private companies have been doing X, Y, and Z with their data. There’s a difference between what private companies are doing and what governments do around surveillance. Everyone kind of knows what the government does; there are all kinds of laws that have been passed and debated, so that’s a little bit more transparent in terms of what they’re collecting, or at least they give the impression of transparency around that, and there’s more of a public discussion. With companies, there’s still a shock factor around the fact that Facebook was storing private phone calls and private messages and selling that data to other marketers without the users knowing. I think the shock factor still exists in that space.

Craig Cannon [24:16] – To go back to the States just a little bit. I know this is happening everywhere, but I can be a little America-centric, or U.S.-centric. What are your thoughts on what will come of the Mueller report?

Samantha Bradshaw [24:29] – I don’t know, to be honest. I’m happy that the Mueller report at least brought to public attention a lot of the detail around what the IRA, the Internet Research Agency, was doing in the U.S. That was a very credible source of information around the wide range of techniques, not just the digital, but even the real-world activities that were going on. Whether or not it will lead to any outcome, I’m not sure, but the fact that it has at least made this information public, that’s already a win for me. Whatever comes out in the future, at least we have more information and more knowledge around the investigation.

Craig Cannon [25:24] – In terms of the midterms, have you guys crunched those numbers yet?

Samantha Bradshaw [25:32] – When we looked at the midterms, actually, let me go back, because we studied the 2016 elections and we studied the midterms. In both of these studies, we were looking at what people were sharing as political information and news online, on both Twitter and Facebook. In 2016, we found that Americans, on average, were sharing about a one-to-one ratio of junk news to professionally produced information.

Craig Cannon [26:01] – Junk news being fake or just not high quality?

Samantha Bradshaw [26:04] – Not high quality. We have, like, a five-point definition that we tend to check off, and you need to have, like, four out of the five things. Things like counterfeit: is it mimicking a real, legitimate news source? Is it using the Washington Post font or the BBC colors to give it more credibility? Do they adhere to any kind of journalistic standards? Do they publish corrections? The kind of language that they use, you know, are they using a lot of F-bombs and really hyperbolic language? Things like that. It’s not just fake news, but all of this other kind of low-quality information that isn’t necessarily helping our democracy. In 2016, users were sharing junk news at a one-to-one ratio to professionally produced information. In 2018, when we redid this analysis during the midterms, the ratio of junk news actually went up. It went to, like, 1.2 or 1.3 to one.
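
To make the “four out of the five things” checklist concrete, here is a minimal sketch in Python of how a source could be scored against criteria like the ones Samantha lists. The criterion names and the example values are illustrative stand-ins, not the project’s published coding scheme.

```python
# Minimal sketch of a "junk news" checklist: flag a source that trips at least
# four of five criteria. Criterion names here are illustrative stand-ins.
from dataclasses import dataclass


@dataclass
class SourceAssessment:
    counterfeit: bool                 # mimics a legitimate outlet's branding (fonts, colors)
    no_journalistic_standards: bool   # no sourcing, bylines, or editorial standards
    no_corrections: bool              # never publishes corrections or retractions
    hyperbolic_language: bool         # heavy profanity and emotionally charged wording
    ideologically_one_sided: bool


def is_junk_news(assessment: SourceAssessment, threshold: int = 4) -> bool:
    """Label a source as junk news if it meets `threshold` or more of the criteria."""
    flags = [
        assessment.counterfeit,
        assessment.no_journalistic_standards,
        assessment.no_corrections,
        assessment.hyperbolic_language,
        assessment.ideologically_one_sided,
    ]
    return sum(flags) >= threshold


# Example: a site that copies a mainstream outlet's look, has no standards or
# corrections, and leans on hyperbole trips four of the five criteria.
example = SourceAssessment(True, True, True, True, False)
print(is_junk_news(example))  # True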

Craig Cannon [27:12] – With the same number of users, or–

Samantha Bradshaw [27:15] – We controlled for the number of users. In 2016, we also did a study on the swing states, and in actual swing states, people were sharing more junk news compared to uncontested ones. We haven’t done that same analysis for the 2018 midterms, but I think it would be interesting to see if it’s also geographically distributed– in terms of who’s sharing what.

Craig Cannon [27:43] – The assumption would be that these people are more heavily targeted in terms of bot nets, right?

Samantha Bradshaw [27:48] – Exactly. It’s hard to say for sure who or what is sharing that information, but we can see that, by the numbers, it’s more concentrated in certain places than in others.

Craig Cannon [28:01] – Yeah, because I’ve been curious to see whether the net numbers just go into decline. You can still find instances of fake news being shared, but more people become more skeptical and aren’t checking it at all. Did you find that in the midterm study, or is that just not part of it?

Samantha Bradshaw [28:19] – We didn’t go into people’s emotions or how they felt about sharing this kind of information. What we would have expected to see, though, was that it’s declining, because the platforms have been saying over and over that they’ve been taking steps to reduce the kind of disinformation and misinformation being spread.

Craig Cannon [28:38] – And it’s not.

Samantha Bradshaw [28:38] – And it’s not. We’ve done this study in other countries, as well, during the UK elections, and in Germany, France, Sweden, Mexico. All of those countries have much lower ratios of junk news to professionally produced news being shared than the U.S. The U.S. is definitely a traumatic case here, but it’s still interesting, nonetheless.

Craig Cannon [29:03] – And what about Canada? You said there’s something coming up this year?

Samantha Bradshaw [29:06] – Well, we’d love to study the Canadian elections in 2019. I’m Canadian myself, so I’d be really intrigued, just out of personal interest, to see what’s going on. Canada has always prided itself on being a very inclusive country, and a lot of the junk news that we see in the U.S. uses a lot of anti-immigration rhetoric and things like that. I’m worried. I don’t think Canada’s necessarily immune from those kinds of conversations, and I’m already starting to see some of those kinds of populist narratives appear in my own newsfeed and in my own communities of friends, so it will be really interesting–

Samantha Bradshaw [29:55] – to study.

Craig Cannon [29:56] – Okay, because I imagine you see it here, now, with Brexit, too, right?

Samantha Bradshaw [29:59] – Definitely.

Craig Cannon [30:01] – Despite the vote already having happened, it’s still common, right?

Samantha Bradshaw [30:05] – It’s still a thing. It will continue to be part of the political rhetoric in the UK for years to come.

Craig Cannon [30:16] – With the U.S. in 2020, all signs point to this increasing?

Samantha Bradshaw [30:20] – I would think so. Given that we saw an increase already in 2018, I don’t think the platforms are going to be able to get their act together in time for 2020. The U.S. election is where we see a lot of new innovation in these manipulation techniques because millions and millions of dollars go into these campaign media strategies, so there’s a lot of money to play around with, to experiment, to innovate. It’s going to be interesting.

Craig Cannon [30:53] – This is terrifying. Do you think 2020 or 2024 will be the year of Deepfakes?

Samantha Bradshaw [31:01] – I don’t know. There’s a lot of hype about Deepfakes right now. I don’t know how real it’s actually going to be, and we’re already seeing a lot of the research agencies, like DARPA and things like that, work on being able to detect when photos and videos have been manipulated. I’m a little bit more optimistic that Deepfakes will not become a thing. Maybe in environments with low media literacy they might have more of an impact, but I like to remain optimistic.

Craig Cannon [31:42] – Closing out, what are your optimistic thoughts for the future?

Samantha Bradshaw [31:48] – My optimistic thoughts for the future? I don’t know if I have any today. I’ve just been ranting about all of the problems and reminding myself, after the Christmas break, why I really care about these things.

Craig Cannon [32:04] – Are there any signs of things improving?

Samantha Bradshaw [32:06] – I like to think that they are. A lot of governments are seriously thinking about this problem. There are a couple of people who are really educating themselves around the issues at the intersection of technology and politics and society. A lot of the time, policymakers make laws without necessarily understanding the technology, and there’s a big gap there. It does make me feel more optimistic when I see Senator Warren, or here in the UK we have Damian Collins. They’re really on top of their game, taking this in and thinking really seriously and deeply about what good regulation could look like that’s not going to just break the technology. The fact that there is more energy around government regulation, and proper government regulation, makes me feel a little bit more optimistic.

Craig Cannon [33:05] – If someone wanted to study what you’re studying or try and help this cause, what would you tell them to do?

Samantha Bradshaw [33:11] – I would just say, be nice to each other on the internet, honestly.

Craig Cannon [33:15] – Wow.

Samantha Bradshaw [33:16] – There’s just so much anger in society right now. You know, we’re seeing more and more polarization especially in the U.S. That gap has been widening and widening for the past twenty years, and we just need to remember that we’re all humans at the end of the day. We might have different beliefs, but that doesn’t make us, you know, evil or wrong or terrible people. We need to just be nice to each other and learn to talk to each other again.

Craig Cannon [33:42] – That’s a great point, yes. Checking out isn’t necessarily a net positive for the community.

Samantha Bradshaw [33:47] – Exactly.

Craig Cannon [33:47] – Yeah. All right, thanks so much, Sam.

Samantha Bradshaw [33:50] – Thank you.

Craig Cannon – All right, thanks for listening. As always, you can find the transcript and video at blog.ycombinator.com, and if you have a second, it would be awesome to give us a rating and review wherever you find your podcasts. See you next time.
