Regulate, Rinse, Repeat

Released Friday, 18th October 2024

Episode Transcript

0:00

Mike, I'm in the midst of some house renovation,

0:02

right? And I'm a complete

0:04

novice at DIY. So I've been enlisting the help

0:06

of some of those apps like

0:08

TaskRabbit. And I've

0:10

gone to TaskRabbit and it asked me this, this

0:13

question, right? So I want to, I'm going to pose it to you.

0:15

All right.

0:16

the app prompts me with, I need

0:18

help with

0:21

Oh, that's a long, that's a long

0:23

list, Ben. That is

0:26

a big list. Uh,

0:28

Give me one.

0:29

I think what I will say is that,

0:31

um, right before we started recording,

0:33

we did some prep and then we took a little break and I

0:36

started my laundry and I think we're

0:38

going to be discussing laundry in a few different

0:40

contexts today. So, uh,

0:42

I will say I will need help with folding

0:44

my laundry after this podcast is done.

0:47

Nice.

0:48

How about you? How about you? What do you need help with?

0:50

Well, likewise, apart from,

0:53

you know, understanding how to

0:55

paint and to tile and all the rest of it,

0:57

I need help with understanding when a block is

0:59

not a block. And

1:02

when does Elon get to decide? That's

1:04

what I need help with. and

1:13

welcome to Control Alt Speech. Your weekly

1:15

roundup of the major stories about online

1:17

speech, content moderation, and

1:20

internet regulation. This week's episode

1:22

is brought to you with financial support from the Future

1:24

of Online Trust and Safety Fund and sponsored

1:26

by Modulate, the prosocial voice

1:28

technology company, making online spaces

1:31

safer and more inclusive. We've

1:33

got a really exciting chat with Mike Pappas,

1:35

Modulate's CEO later on today.

1:37

He's going to be talking about the interesting

1:40

question of whether to build or buy trust and safety

1:42

tools, which is a growing topic of conversation

1:44

in the space. but right now

1:46

I'm here with Mike Masnick, the

1:49

other Mike, the original Mike. And

1:52

old Mike.

1:54

I'm not making any, uh, jokes about your age.

1:57

I know how that goes down. Um, how are

1:59

you doing? How's, how's your week been?

2:01

Oh, it's been good. Uh, it's been, um,

2:03

super busy, lots of stuff going on

2:05

as always. and,

2:08

uh, yeah, it's just, just, you

2:10

know, everything's been, kind of great. Crazy

2:13

and nonstop.

2:15

Yeah, we were kind of saying before we

2:17

started recording that we could have recorded two podcasts

2:20

about the number of stories we had this week. There's

2:22

an awful lot, even more than usual,

2:24

I'd say, and lots of really quality reporting and

2:26

interesting stories to talk about.

2:28

Yeah. Yeah. So we were saying at some

2:30

point, you know, we're going to have to add the

2:32

midweek special extra episode.

2:35

But for now, I do think we

2:37

have plenty of good stuff to cover in

2:39

the meantime. But oh,

2:42

and we should mention, by the way, I think

2:44

it's important to note, next week, we

2:46

are going to be off. Yeah. We

2:48

have been able to do a podcast every single

2:50

week, even the week we had planned to

2:52

take off in July, uh, for

2:54

July 4th, because the Supreme Court forced

2:57

us or forced me into a motel

2:59

room to record a quick

3:01

special episode. But, uh, next

3:03

week we'll be off, but we'll be back, the week after.

3:06

Yeah, right. And this is confirmed. We won't be doing

3:08

a midweek podcast. I'm worried that listeners will think

3:10

they've got to catch up on even, even more content

3:12

moderation news, which isn't the case, but there's

3:15

a lot going on. Um, We've also

3:18

had some really, nice reviews again

3:20

this week, Mike, and we've had a bunch of people get in touch, in

3:22

response to my request for info

3:24

about where they listen to Control Alt Speech

3:27

and what they're doing while they're listening,

3:30

laundry comes up.

3:31

it keeps coming up, right? It keeps coming up. And

3:34

I think it's probably because podcasts

3:36

generally are an accompaniment to

3:38

boring tasks like laundry. I don't think it's just Control Alt

3:41

Speech, but there's a few other ones as well.

3:43

Um, we've got some nice responses from folks

3:45

saying they, they went on, long walks

3:47

while listening to the podcast. And, my

3:50

favorite one was a listener who said that they were

3:52

deboning a turkey carcass in preparation

3:54

for Thanksgiving. Um, they were making

3:56

a soup. So I thought

3:58

it was a kind of nice, nice selection of responses.

4:01

Yeah. Can you imagine listening to us while

4:03

deboning a turkey?

4:05

do you think that the person imagines us

4:08

as, as the turkey,

4:11

I don't know if, I don't know if I want to go there.

4:14

I also got some really nice

4:16

responses from folks about the

4:18

pedestrian flags, so I'd never seen these

4:20

pedestrian flags and we had a few listeners. Thanks.

4:23

Thanks, John, for sending in photos of some pedestrian

4:25

flags from Seattle. They actually exist.

4:27

You weren't pulling my leg. Okay.

4:29

I was being entirely honest with you. I

4:31

have seen them in multiple places and,

4:33

yeah, they exist. And now you've seen pictures

4:36

of them at least, but at some point I,

4:38

I have to imagine that they must exist

4:40

somewhere in the UK also. Um,

4:43

I don't know. I will make a pledge

4:45

to, if there are pedestrian flags in the UK,

4:47

I will find them and I will, I

4:49

we'll get we'll get a picture of you standing waving

4:52

a flag. I think that's that's where we need

4:54

to go.

4:54

Okay. Okay. great stuff. So

4:57

yeah, if people want to share where

4:59

and how they listen to Control Alt Speech, get in touch with

5:01

us. we've got some messages on blue sky.

5:03

We also got some emails: podcast@ctrlaltspeech.com.

5:05

We will get back to

5:07

you. and we thank you for your inputs

5:09

as ever. Right, Mike. So we

5:12

are jumping straight into the many stories

5:14

that we have this week. we both

5:16

like a good news story,

5:19

you know, we both like a kind of interesting,

5:22

relevant, news lead, but we also like,

5:24

stories that are a bit more interactive, a bit more interesting.

5:26

I don't know if you remember the days of Snowfall,

5:28

the New York Times's very interactive

5:30

piece from 2012. your first

5:32

one is, not dissimilar to that, I'd

5:34

say. Is it? it's a great tale and it's

5:37

told really nicely online. Talk us through what that is.

5:39

Yeah, so this is from NBC News and it's by Brandy

5:41

Zadrozny, who, if you don't

5:43

know, hopefully listeners of this podcast

5:45

know Brandy. She's one of the best

5:47

reporters out there on misinformation

5:50

and has done a ton of amazing work

5:52

on that over the years. Um, so this is a, a

5:54

brand new article that just came out this week on

5:57

how Russian propaganda reaches and influences

5:59

the U.S. And it's a very interactive,

6:02

well done piece in that Snowfall

6:04

style, the modern version

6:06

of Snowfall. And it's just talking about how

6:09

Russian disinformation campaigns start and

6:11

basically like how they are creating

6:14

all sorts of, different

6:16

things, bits of nonsense, and just sort

6:18

of sending it out into the world through

6:21

this process, creating fake videos,

6:23

sometimes AI-generated. Sometimes

6:25

it appears maybe hiring actors to

6:27

play roles of people, you know, sort of confessing

6:30

things and just making up

6:32

complete nonsense stories. And

6:34

then it sort of follows through as to where those stories

6:37

go and how they filter onto,

6:39

fake news sites that Russian

6:41

folks have set up. How they find

6:44

various trolls on

6:47

the site formerly known as Twitter.

6:49

and then often it sort of launders its way

6:52

up through the chain until

6:54

eventually someone like a JD Vance

6:56

or a Donald Trump might repeat them.

6:58

And so it actually follows one particular

7:00

story, that was completely made up,

7:03

regarding Ukrainian,

7:05

officials that are close to, Zelensky

7:07

buying very expensive yachts

7:10

and sort of implying that U.S.

7:12

money that is going into Ukraine

7:14

is being laundered through,

7:17

various people to, purchase

7:19

expensive yachts for Zelensky, and

7:22

that then sort of, you know, filtered

7:24

its way through and again laundered

7:27

its way up the chain to

7:29

the point that at one point J. D. Vance sort of mentioned

7:32

it on some podcast that he was on

7:34

that we shouldn't, you know, like, why, why are we

7:36

sending more money to Ukraine for them to buy expensive

7:38

yachts, which was a thing that did not happen. Um,

7:41

but because it sort of, started from Russian

7:44

disinformation and, made its way

7:46

along through nonsense-

7:48

peddling American trolls on

7:50

Twitter, eventually JD Vance picks

7:52

it up because that's kind of the world that he lives

7:54

in. And then suddenly there's like this legitimate

7:56

question of, Oh wait, are Ukrainian

7:59

officials buying yachts with American funding? it's

8:02

a really well done story and it's,

8:04

really like, again, it's a very interactive

8:07

presentation. The presentation is really fantastic,

8:10

and they sort of show like how they're doing

8:12

this with a whole bunch of stories, most of which aren't,

8:14

catching on at all, but a few of them

8:16

are, are actually breaking through. Yeah.

8:19

this is what's so wild about this piece

8:21

is the scale at which, this group

8:24

is, doing this work, and the systematic

8:26

nature of it, right? It's, it's huge

8:29

in terms of scale and, like you say, not many

8:31

stories make it through, but when they do, they really hit.

8:34

the story that's, mentioned as well about,

8:36

a kind of Ukrainian man who's a

8:38

kind of actor, who apparently

8:41

confesses to trying to assassinate Tucker Carlson

8:43

for 4,000. Wow. Tucker

8:45

Carlson's in Moscow, absolutely wild

8:48

and comes with a whole array of like fake

8:50

images of, burner phones and

8:53

pictures of Tucker Carlson in Moscow and

8:55

all packaged up in a really, you

8:57

know, if you don't know any different, believable

8:59

way. what do we know about this kind

9:01

of group that's behind this and,

9:03

and how, kind of concerning is it that

9:06

so many stories are getting through into the mainstream,

9:08

do you think?

9:09

Yeah. So, you know, there have been a few

9:11

different sort of groups related to

9:14

doing all these things people know about, like

9:16

the Internet Research Agency, which was the

9:18

one that sort of became famous eight years ago.

9:21

This one in particular is from this group called

9:23

Storm-1516.

9:25

And they're creating a bunch of these things.

9:27

And they're somehow, you know, connected

9:29

to, Russian officials, but

9:32

it seems like there's not that much information

9:34

directly on the group, but they're

9:36

able to produce these really impressive things. And

9:39

then, the example you gave of,

9:41

the Tucker Carlson story, like that got

9:43

picked up by a bunch of American

9:45

sources. And, interestingly, there was

9:47

a story about a month ago, which,

9:50

I don't think we actually covered it on the podcast

9:52

about, you know, U. S. officials charged,

9:55

uh, people associated with RT,

9:57

Russia Today, the sort of, you know, Russian

10:00

propaganda news outlet. and

10:02

they had been funding these American

10:05

YouTube influencers like

10:07

Tim Pool and Benny Johnson and folks

10:09

like that, to ridiculous amounts.

10:11

maybe we did mention it

10:13

yeah, I think we'll be there

10:13

we did. We briefly mentioned it. and

10:16

those are, you know, notably in this

10:18

story, those are the same, same

10:20

people who picked up on this story about

10:22

Tucker Carlson supposedly being

10:24

targeted for assassination. and they

10:27

all ran with it. And so there's this whole

10:29

ecosystem that has been created to sort of launder

10:32

this Russian propaganda. And,

10:34

there's a little bit that I hesitate on when

10:36

talking about this story, because we heard

10:39

there were similar arguments made,

10:41

over the last eight years, really nine years,

10:44

about like Russian propaganda and

10:46

who it influences. And there's evidence

10:48

that it doesn't. It doesn't actually

10:51

do much to influence people

10:53

in some ways, but the thing that it seems

10:55

clearly to do, and as the campaigns

10:57

are becoming more sophisticated, I think the real sort of

10:59

takeaway from this is that the campaigns are becoming more sophisticated,

11:03

but it really shows how much

11:05

of the disinformation world is

11:07

really about confirmation

11:09

bias. It is so much

11:11

about not necessarily convincing

11:14

people. But taking people

11:16

who already think this must be happening

11:18

or these bad things must be going on

11:20

and giving them something to feed off of

11:22

and just say, like, yes, well,

11:25

this story confirms my priors

11:27

and therefore it feels true and therefore it

11:29

must be true. And I think

11:32

people have made this point before, but I think

11:34

it's one that's worth reminding people and thinking

11:36

about. Last week, we were talking about like risks versus

11:38

harms, but I think, you know, sort of a related

11:40

issue is the confirmation bias

11:43

at the heart of so many of these discussions

11:45

about, online speech and dis and

11:47

misinformation is that it's

11:49

really all about confirmation bias and people

11:51

looking for content to support what they want

11:53

to believe. And what we're

11:56

creating in this ecosystem

11:58

right now are tools to support

12:01

anyone's bias, and

12:03

to give them content that allows

12:05

them to accept it. And we're all guilty of

12:07

it. Like lots of people would say like, oh, well,

12:09

you know, I'm, I'm better than that. But it's not

12:11

true. I mean, I think everybody at some point

12:13

falls for some nonsense because they want it to be

12:15

true and it fits with their prior beliefs

12:17

on something. But here,

12:20

what we're seeing when the campaigns get this big

12:22

and this advanced, it gets easier for

12:24

people who want to believe these

12:26

things to take them and to,

12:29

expand them further. And because

12:31

they're so sophisticated and because they're so advanced

12:33

and they're creating these videos with all the supporting

12:35

documentation, even though it's all 100

12:37

percent fake and 100 percent made up, it gets easier

12:41

for these things to sort of catch on and become

12:43

stories in some form or another. Okay. And

12:45

then the one other interesting thing that I think

12:47

is coming out of it is that none of

12:50

these stories, even the ones that have broken through. And a lot

12:52

of these stories, they note like just didn't catch on at all,

12:54

but even the few that did catch on didn't

12:56

go huge. Like they weren't like these big

12:59

disinformation campaigns, but they just sort of like

13:01

creep in. Like even I mentioned JD Vance mentioning

13:03

the Ukraine, Ukrainian yachts or whatever.

13:06

He just mentions that in passing in a podcast.

13:08

It doesn't become a major story, but it's just

13:10

one of these things where he's making these arguments about the

13:12

funding for Ukraine right now and

13:14

saying like, Oh, do we want that funding to go towards,

13:17

more Ukrainian yachts? that's not

13:19

the major point of the story, but it just sort of filters

13:21

into the general narrative.

13:23

Is there, apart from people being more immune

13:25

to narratives like this and being aware

13:27

of how stories get laundered

13:30

through this system, which is really clear

13:32

from this, piece, are there other ways that

13:34

we could kind of like slow the

13:36

spread of information through this chain, do you think,

13:38

are there ways of, of spotting it, you know,

13:40

dots that we can trace maybe in the system,

13:43

because it's really tough for anybody to

13:45

be, so on top of all

13:47

of the different narratives out there that they're not captured

13:50

by one of the many, many narratives

13:52

that have been perpetuated by these Russian organizations.

13:55

How can we slow this down a bit more or stymie

13:57

some of the spread?

13:58

I mean, I think there's a few different things. So one, and

14:01

this is the thing that I always go back to, and it always feels like

14:03

sort of a cheap shot, but media literacy

14:05

is incredibly important. And again, I just said

14:07

like confirmation bias, everybody falls prey

14:10

to it at some point. I've done it. I'm

14:12

sure you've done it. Like there are stories that you just

14:14

want to believe. And so you fall for it, but

14:16

having general media literacy

14:18

and sort of recognizing like, is this

14:20

coming from a trustworthy source? What are the details?

14:22

Who is reporting it? really matters and

14:24

learning like when to be skeptical,

14:27

especially for things that confirm your priors

14:29

is a really important skill. It's not one

14:31

that anyone will ever be perfect at, but training

14:33

people on that, I think is important. The

14:35

second thing is just sort of recognizing media

14:37

ecosystems. And this is something that, is

14:40

really, really important in

14:42

the sort of reality

14:45

based world. I

14:47

Sounds fun.

14:48

I, I'm trying to choose words

14:50

diplomatically here, but like in the reality

14:52

based world, mistakes are made.

14:54

Reporters make mistakes, news

14:56

organizations make mistakes, we recognize

14:58

that, but when those mistakes

15:00

are discovered, they tend to

15:03

recognize that, admit that, talk

15:05

about it, make corrections, and do

15:08

things like that. In the

15:10

sort of pure nonsense peddling

15:12

world, which now exists,

15:15

what they will do is not do that,

15:18

right? I mean, they will spin stories, they will

15:20

take fake stories, they will take real stories. I mean,

15:22

a lot of these stories will come with a grain of

15:24

truth behind them, because that allows them

15:26

to sort of obfuscate and present these things

15:28

in a way that, presents a darker picture

15:30

than the reality says. I mean, we've seen that

15:32

with things like the Twitter files

15:34

and the Hunter Biden laptop, which,

15:37

you know, has been totally misrepresented over

15:39

time. But those sort of start with

15:41

a grain of truth. But the ecosystems

15:43

that are pushing those messages, you'll

15:46

notice that they don't issue corrections.

15:48

They don't hold each other to

15:50

account. They don't, admit

15:53

when they have these kinds of failings, they

15:55

sort of try and cover it up. They don't mention

15:57

it again, or they, they sort of

15:59

dismiss it like, oh, well, you know, we,

16:01

we had just heard that, you know, we were just reporting

16:03

what somebody else said rather than

16:05

admitting that there was a mistake made. So.

16:08

I think there's an element of like watching

16:10

the media ecosystem that you are

16:13

engaged with and seeing how they

16:15

handle when mistakes are made or

16:17

when false things are reported or misleading

16:19

things are reported. And that,

16:21

that is a key indicator for me.

16:23

Yeah. I mean, there's a couple of great quotes in this piece

16:25

from the kind of right-wing grifters

16:27

who not only have made money by being paid

16:29

directly by Russia through,

16:32

through the fake company that you mentioned, but

16:34

also making money, obviously, through the monetization

16:37

of videos about rumors like this, they're

16:39

benefiting twice, but there's some great quotes, you know,

16:42

where they say we actually reported it as being apocryphal

16:45

and we noted at the time that it wasn't possible

16:47

to verify the claim. It's like, that would have been

16:49

a, if anything, a scintilla

16:51

of, um, of a second in which they mentioned

16:53

it and then they would have moved on and

16:56

they would not have gone back to it.

16:57

Yeah, I mean, I think one of the quotes someone said at

16:59

one point was like, you know, they reported on it and

17:01

then at the end they were just like, you know, we have no

17:04

real way of verifying this, but, you know,

17:06

we thought it was important. It's like, that is

17:08

not, that is not really

17:10

a good way to handle this

17:11

no, and there's, there's a bigger conversation which we

17:13

should come back to. I'm really interested in, from

17:15

the work I do in my day job around, what are

17:17

the kind of signals that content creators

17:19

and, influencers, inverted

17:22

commas, what kind of, ethics and

17:24

transparency should they have and how should

17:26

the platforms rank them

17:28

or distribute them according to those kind of

17:30

journalistic principles is something that I think a lot, a lot

17:32

of platforms are thinking about. Great.

17:35

We'll share that story in the show notes. It's definitely something to

17:37

go and have a read of. It's really interesting, to

17:39

kind of scroll through and see the different elements. But

17:41

we'll move now on to our second

17:43

story about laundry and about laundering.

17:46

A piece that came out this week as well, really

17:49

fantastic piece by, Daphne Keller

17:51

from Stanford who stood in for me a couple

17:53

of weeks ago on the podcast. She's written

17:55

for Lawfare about the rise

17:57

of the compliant speech platform. and

18:00

it's a long read. It's definitely worth spending time with.

18:02

There's lots of great elements to it. Essentially,

18:04

Daphne speaks to around

18:06

a hundred kind of trust and safety experts. And she's

18:08

pulled together all the insights from that

18:10

to basically make the argument that content

18:13

moderation is becoming a compliance function,

18:15

much like, banks have compliance

18:18

functions and factory floors have to

18:20

be compliant to kind of keep workers

18:22

safe, Trust and safety is now moving

18:24

in that direction, and her

18:27

argument is that the regulation that has been

18:29

introduced over the last few years, the

18:31

Digital Services Act, the

18:33

Online Safety Act, and the Code

18:36

of Practice for online services in Singapore

18:38

have created these conditions where

18:40

platforms are essentially having to be

18:42

able to report on everything they do,

18:45

all the decisions that they make, they're having

18:47

to standardize how they work. They're having

18:49

to make all of the decisions trackable.

18:51

That's a big part of the DSA: the

18:54

takedown decisions are recorded in a database.

18:57

And that's creating a whole

18:59

ecosystem around, auditing

19:01

those decisions and tracking whether those

19:03

processes have been followed correctly, and she

19:05

kind of makes the case that this is essentially compliance

19:08

and she goes on to say that there are a bunch of

19:10

downsides to this, which, are

19:12

ones that we talk about a lot on the podcast, which

19:14

are definitely worth noting and somewhat

19:16

concerning around government having

19:19

undue influence over speech

19:21

rules on platforms about how,

19:24

there's a kind of undue focus on, on metrics

19:26

and following the right process, even if the

19:28

outcome isn't the, the best for users

19:30

or for society as a whole. And so,

19:32

you know, it's a really great kind of original

19:35

piece of reporting, I'd say, which we both

19:37

read and we both thought, yeah, this is great. and

19:39

we have to talk about it on the podcast. So here we are.

19:41

yeah, yeah. No, I think

19:44

it's, it is a really valuable

19:46

contribution to the way of thinking about this

19:48

and understanding all of these things. And,

19:51

um, I, I think, like Daphne, I

19:53

have some concerns about it. there's

19:55

an argument I can't remember again, if

19:57

I've made this argument directly on this podcast before

20:00

that, I think that there's an important

20:04

role in, having companies

20:07

and top executives at companies thinking

20:09

of trust and safety specifically,

20:12

as a really important,

20:14

uh, component, sort of a marketing function

20:17

that is important, centrally

20:19

to the company. I think that a lot

20:21

of companies think of trust and safety

20:24

as a cost center. And we see that

20:26

like there are layoffs all the time. You have to hire a whole bunch of

20:28

people and it's a pain that

20:30

is frustrating to

20:32

many company executives and

20:34

that comes through when you see them testify

20:37

or when they talk about these things or Mark Zuckerberg's

20:39

recent comments on this stuff. I think

20:41

he sees trust and safety as

20:43

just a nuisance and a cost center

20:45

for meta And it

20:47

reminds me of the way that

20:50

companies 20, 30 years ago

20:52

viewed customer service. It was

20:54

a cost center. You have to spend all this money

20:56

on people who answer calls from angry

20:58

customers. And you know, you

21:01

want to get them off the phone as quickly as possible

21:03

and not have to deal with their complaints.

21:05

And then some companies began to come around

21:07

to the idea that wait

21:10

a second, like this is one of the major touch

21:12

points we have between our company

21:15

and our customers. This is

21:17

where they are interacting with us. This is where

21:19

they're communicating with us. Customer

21:21

service itself is actually a marketing function.

21:23

It is a way for us to present a good

21:26

side of what we do to our customers.

21:28

And therefore, we should invest in it because

21:31

that will come back in the future

21:33

in support. And the argument I keep

21:35

making is that we need to move trust and safety

21:37

to that kind of role where

21:39

it is seen as we are making this platform

21:42

better. And in the long run,

21:44

that means more users, more advertisers,

21:46

more whatever, because we are making

21:48

the platform better. That's what a trust and safety

21:50

role should be. It should be seen as an investment,

21:53

not as a cost sink. And

21:55

my worry is that as

21:57

trust and safety moves more and more into being a compliance

22:00

function, it goes further and further

22:02

away from that. It is again, seen as

22:04

like, okay, these are boxes we

22:06

need to check to make the regulators

22:08

happy, not this is what is

22:10

actually best for our users. This is

22:12

what is going to create a better internet,

22:15

a better service, a better, whatever

22:17

overall. Whereas if

22:19

we could get the companies to recognize

22:21

that this is in their best interest, not

22:24

because the auditors say so,

22:26

not because the regulators say so, but

22:28

because it is actually better for the long term

22:30

health of the wider Internet and

22:32

the services themselves, that would be better. So

22:34

my reaction to this is I think it's

22:36

great contribution and I think it's accurate

22:39

and really important, but I worry

22:41

that this sort of capitulation

22:43

to the idea of it being a compliance

22:46

function actually takes us further from the

22:48

world that we want.

22:49

Yeah. I mean, I do like the idea of trust

22:51

and safety being like customer experience and

22:53

it going in that direction, but what's the downside,

22:56

do you think, of it being more like compliance? Like

22:58

if it doesn't go that way.

23:00

Yeah. I mean, some of this comes out. It's kind of

23:02

funny as you read Daphne's piece that,

23:04

like everybody is tiptoeing around the

23:06

idea that like, we're not setting the rules.

23:08

We're not telling you what to leave up or take

23:10

down. But like,

23:13

you know, we have all these things that you need to do

23:15

in order to make it work. and

23:18

so. What happens when you have

23:20

that sort of situation, you have auditors involved,

23:22

what everyone is trying to do is basically

23:24

just cover their ass. So that if

23:27

there's a lawsuit, they can say, well, we

23:29

did everything by the book. You

23:31

know, we did the things that the auditors

23:33

told us to do, that the regulators indicated

23:35

were okay. Or in some cases, where

23:38

there have been previous court cases

23:40

or previous, fines, we

23:42

followed the rules that everybody else laid

23:44

out. And so then it's just an issue

23:47

of checking boxes as opposed to actually

23:49

doing what's right. It is: what is going

23:51

to decrease the liability on us? What is

23:53

going to decrease the legal risk when you have

23:55

auditors involved? That's what you're talking about.

23:58

Not what is actually going to be the best

24:00

for the internet and for the users

24:02

of the internet and for their speech. And

24:04

there is. The potential. that

24:07

those two things align that what the government

24:09

wants, what the auditors suggest, and what

24:11

is best for the users and for speech align.

24:14

But historically, we haven't really seen that

24:16

play out.

24:17

hmm

24:17

And so that's, that's where my fear is.

24:19

yeah, I mean, Daphne mentions

24:21

banking in her piece and

24:24

I wonder if compliance

24:26

is actually that bad. And if banking hasn't

24:29

shown us that compliance isn't

24:31

that bad. So I slightly disagree with

24:33

kind of where you're coming from. And I just want to explain why. So

24:36

banks pre-2008 were

24:39

not dissimilar to tech platforms in some sense,

24:41

right? They had a lot of users.

24:43

They were processing a lot of personal data. They

24:45

were going through millions of transactions every day.

24:48

they had a lot of kind of wealth and money involved,

24:51

and they were very central to people's lives. Not,

24:53

not hugely dissimilar platforms in some

24:55

sense. 2008 happens,

24:57

the trust in banks kind of completely disappears.

24:59

There's, of course, regulation, and naturally

25:02

there's pushback from the banks and so they

25:04

will start to say things that we actually

25:07

have heard quite recently, right? So that

25:09

they, you know, said regulation would affect

25:11

the banks' ability to lend

25:13

to consumers and would affect the growth of,

25:16

economies and businesses, it would

25:18

make it difficult to, sustain

25:20

their own operations because there'd be an administrative burden

25:23

on regulation and you know,

25:25

it would make it easy for larger banks to

25:28

sustain a place in the market because smaller

25:30

banks wouldn't be able to kind of comply in the same way.

25:33

And then what happened was all of these

25:35

regulations got introduced, right? So

25:38

you had all of these frameworks introduced

25:40

for systemic risks. You had a bunch

25:42

of transparency required around certain

25:44

markets. You had consumer protection

25:46

for certain products. And

25:49

now we all know and understand that

25:51

banks are regulated. and I wonder

25:54

if, that is a not dissimilar

25:56

place to where, where we are now,

25:59

which is compliance,

26:02

is a cost of doing business. If you want to

26:04

get into the speech space, you have to do

26:06

all of these things. And actually

26:09

it's only because there is a

26:11

big pushback against it, that we

26:13

actually think differently about, speech

26:16

and about the kind of state of online speech

26:18

right now. Your thoughts.

26:22

Oh boy. All right. So, um,

26:25

yeah, I, think there are a lot of differences. And

26:27

so one, the banking space was

26:29

very heavily regulated before that. Yes, the regulations

26:32

have changed and they have certainly advanced, but it was certainly

26:34

not a Wild West of regulation-

26:36

free stuff; the, the banking

26:38

industry has long been a heavily

26:40

regulated industry. Also, many

26:43

of the concerns that were raised about some of those banking

26:45

regulations have actually proven true.

26:47

I mean, we are now a year and a half

26:49

since Silicon Valley Bank, which is

26:51

the bank with which my company

26:54

did business, was shut down. All

26:57

their accounts were frozen and all my

26:59

company's money was frozen.

27:00

Oh shit. I

27:03

didn't realise that.

27:04

It was, it was, quite an experience.

27:06

It was actually, this is the funny story:

27:08

the day that happened was,

27:10

a day that we had an offsite to

27:14

work on the trust

27:16

and safety tycoon game, which came

27:18

out last year with lots. And so we had

27:20

game. Great game. Go and play it.

27:22

I've gone up north, uh, to this very

27:24

nice house that a friend of ours had rented,

27:27

uh, with this amazing view, and,

27:30

in the morning before I left to drive up,

27:32

I saw the news about Silicon Valley Bank,

27:34

basically being shut down. I freaked

27:37

out, and that was not a comforting

27:39

offsite. And we did not get as much work done because

27:41

I spent a whole bunch of time in this beautiful

27:44

house on the phone, trying to figure out if

27:46

our bank accounts were

27:48

available to us.

27:49

Yeah,

27:50

So, you know, and since then, of course, there's

27:52

been a lot of looks into what's

27:54

happening with the U.S. banking system and

27:57

how poorly targeted some of those regulations

27:59

were and how they actually have, you know, the end result

28:01

of this was like Silicon Valley Bank got taken over

28:04

by a larger bank and other larger banks

28:06

also have sort of, you know, there's been a lot of consolidation

28:09

and. it has actually slowed

28:11

down certain things and it has created some problems.

28:14

And on top of that, the even

28:16

more important thing is like, you know, even though

28:18

in some cases, and I know this is one that

28:20

like sets people off in some cases,

28:23

money can be equated with speech

28:25

and expression. That's a whole other

28:27

area that we are not about to go down into.

28:30

But in general, money and speech

28:32

are not the same thing. And

28:34

so regulating money and how people hold

28:36

money is a wholly different

28:38

thing than regulating how people can

28:40

speak and how they can communicate. And so

28:43

I think that the analogy breaks down

28:45

after a certain point. And as soon as you're talking

28:47

about compliance and regulations around

28:49

speech, the end result, and this is a point that

28:51

Daphne definitely makes, is that it is going

28:53

to lead to suppression of certain kinds of speech.

28:55

And that is always at the core of the

28:58

concerns that I have about these regulations.

29:00

Is what happens, because as soon as

29:02

you get into that area, what almost

29:05

always happens is that the speech

29:07

that is most heavily regulated and suppressed

29:10

is speech from marginalized groups, marginalized

29:12

individuals, people with less power,

29:14

and the more powerful folks are fine.

29:17

But people who, have more controversial

29:20

opinions, but important ones, are

29:22

kept out of it. I mean, we've talked about this where

29:24

there are attempts to regulate kinds

29:26

of disinformation, what happens? It

29:28

leads to pulling down information on LGBTQ

29:31

content. It leads to pulling down information

29:34

on health care, uh, things around

29:36

abortion. All of these kinds of

29:38

topics get targeted by these

29:40

speech regulations. And from the company standpoint,

29:42

if the idea is just to comply, the easiest

29:45

thing to do is just say, we're not going to do

29:47

that. And we're seeing that to some extent, not,

29:49

you know, this is not getting all the way there, but

29:51

there is a story that we had talked about doing, which we didn't do,

29:54

though now I'm bringing it up, around,

29:56

like how Meta, Instagram, and

29:59

Threads are handling political

30:01

content and they're sort of pushing it, they're

30:03

down ranking it and trying to not have it

30:05

apply, you know, not show up as

30:07

much within the conversations. And that is

30:10

a direct result from all the shit that

30:12

Meta is getting for how

30:14

it moderates political content. And so they've

30:16

sort of thrown their hands up and said, it's easier

30:18

for us not to deal with this content at all.

30:21

And therefore we're going to sort of push it down the algorithm.

30:23

We're not going to block it, but we're not going to have it promoted

30:25

as widely. And that's a direct response

30:28

to like people complaining about which content

30:30

they did promote in the past. And then just saying,

30:32

we just don't want to deal with this anymore. The further

30:34

and further you get towards creating

30:36

regulations and compliance burdens for the

30:38

speech, the more it's going to lead to this kind

30:41

of speech being suppressed because it's just not worth

30:43

it.

30:43

yeah. And I do, I do buy the idea that,

30:45

there is going to be an unnecessary

30:48

blocking, deranking, downranking

30:50

of speech and that concerns me.

30:53

but I imagine that probably banks said similar

30:55

things in 2008. And

30:58

my question is like, in 10 years

31:00

time, you know, 15 years' time,

31:02

however far we are from those regulations,

31:05

is the kind of context that users are banking

31:07

in better than then? And

31:09

can you see a situation when

31:12

this regulation, this compliance

31:14

agenda when that actually takes

31:16

hold and it looks like it will, because we're seeing

31:19

more and more, regulation, government regulation about

31:21

speech all the time. Can you see a situation

31:23

where people would adjust to the context and in

31:25

10 or 15 years time have processes

31:28

to better understand, mitigate

31:30

in a way that isn't just about downranking

31:33

or blocking speech as we're worried about?

31:35

Yeah, I mean, you know, we'll see, like, people always

31:37

adjust to the reality that they have around them.

31:39

and so I don't think that that is,

31:42

you know, yeah, I mean, people will adjust and you don't

31:44

know what you don't have. So, you

31:46

know, whatever, whatever world we live in,

31:48

people will say like, yes, this is normal. And this

31:50

is okay. But I think, the idea

31:53

of that, all the regulations

31:55

around finance, and I'm not,

31:58

let me be clear, because it was about to sound like I was

32:00

saying, like, oh, get rid of banking regulations. I am

32:02

not saying that. Not saying that.

32:04

Do you keep all of your money under the floorboards now because

32:06

of what happened?

32:08

yeah, but, we are still

32:10

living in a, in a world today where

32:12

financial scams are massive

32:14

and all over the place. And,

32:17

You can say that maybe some of the

32:19

regulations on banking led to

32:21

the rise in like crypto scams. And

32:23

now we're talking about pig butchering, which, you know,

32:25

sort of relies on crypto scamming. there

32:27

are a bunch of different things like this stuff sort

32:30

of oozes out somewhere else in

32:32

the system. The idea that we can sort of perfectly

32:34

regulate these things is not true because

32:37

humanity is messy, and society

32:39

is messy, and the way that people interact

32:41

is messy. and, so much of these

32:43

regulations, and again, I'm not making

32:45

the argument for no regulations. I'm not saying get

32:47

rid of them, but I'm saying recognize that there are

32:50

consequences to these regulations and

32:52

the unintended consequences lead to, you

32:54

know, the problems ooze out somewhere

32:56

else. And, and we should be very

32:58

careful about where and how those problems

33:01

ooze out. And don't assume that just because

33:03

like the world itself doesn't collapse,

33:05

that these approaches are necessarily the right

33:07

way to do it.

33:08

Yeah. Okay. As you can see from the discussion

33:10

we've had, it's a great piece. I definitely

33:12

recommend listeners go and have a read. Daphne's done a

33:14

great job of adding something new.

33:17

And we could go on and talk about, it

33:19

for, a lot longer, but we, we

33:21

have other stories to get to. We've,

33:23

we've gone, Mike, from there, from a, uh,

33:26

a very thoughtful, well-argued

33:28

piece by somebody who's really trying to add to

33:30

the debate to somebody who

33:33

is the antithesis of that. We're

33:35

back talking about Elon Musk. Um,

33:38

we're back talking about Elon Musk. And what's he done

33:40

this week? What should we, what can we be angry about?

33:42

Well, I was about to say, because, you know,

33:44

I wrote an article about this and I think we're going to

33:46

link to my article in the show notes. So I thought you were about

33:49

to say, we went from a thoughtful piece by Daphne to

33:51

a terrible, stupid piece by you, Mike.

33:55

Never. I'd never say that. Don't do it,

33:58

But Elon has continued his,

34:00

belief in changing what used

34:02

to be called Twitter to his liking and

34:04

without any thought whether or not it actually makes sense. And

34:06

so he did two big

34:08

things this week. One is changing the terms of service,

34:11

that will go into effect in mid November,

34:14

in ways that have, um, for the time being,

34:16

just raised some eyebrows. But the bigger one, the more noticeable

34:18

one is that, while he's talked about

34:20

this in the past, they've announced officially,

34:22

that they are changing how the block feature

34:25

works on X. Where

34:28

formerly, you know, when you block someone, they can't

34:30

see what you're doing and they can't interact

34:32

with you. And if anyone else interacts with,

34:35

you know, it's all, it's all blocked. He's

34:37

never liked that. And so now the way

34:39

it is going to happen is that if someone

34:41

blocks you, you can still see their content.

34:44

You just won't be able to interact with it. You won't be able

34:46

to reply. You won't be able to retweet it. But

34:49

you can still see it. And there is

34:52

to give him a very, very,

34:54

very tiny amount of

34:56

Mike. Don't do it. no

34:59

need.

34:59

there is a non-

35:02

crazy way to think about

35:04

this. And that is basically

35:07

when people are posting to a public system,

35:10

that content is public and anyone

35:12

can see it, right? If

35:15

I post something to Techdirt, I can't

35:17

ban someone from seeing what

35:19

I've written. If you write something on Everything in Moderation,

35:22

you can't ban someone from seeing it.

35:24

It is public. And therefore there's just a general

35:26

sense that anyone can see it. And this is the point

35:28

that everyone always raises with the way that Twitter does

35:30

blocks or did blocks, which

35:33

is that, you know, if somebody blocks you, you can

35:35

just go into incognito mode and

35:37

you can see that content because it's public.

35:39

And so block is sort of a weird

35:42

function on social media.

35:45

So the theory is like, well, this changes

35:47

that because it's public, it remains public,

35:50

but the reality is very different than the theory

35:52

on this. And the reality is that that

35:54

friction that the block function adds

35:57

makes a real difference in slowing

35:59

down abuse and harassment of a

36:01

variety of different kinds.

36:03

Yeah, it's a first, it's a first line of defense in

36:05

a lot of cases, isn't it, to, people being,

36:08

targeted in some way.

36:09

Exactly. And it's not perfect by any

36:11

means, but it's not meant to be. It is just an

36:13

attempt to add some friction. And often,

36:15

not always, often that amount of friction

36:17

is enough to make a difference in

36:20

very important ways. And so, it

36:23

works for some element of

36:25

what it is trying to deal with, and

36:28

not everything. And what Musk is doing

36:30

is sort of removing it for a large

36:32

segment of the population for which it actually

36:34

does work.

36:35

Mmm.

36:36

And so, the history here is also

36:38

important, which is that 11

36:41

years ago, Dick Costolo, who was

36:43

CEO of Twitter at the time, had the

36:45

exact same idea and had

36:47

the exact same thought that block is stupid and

36:50

we should change it. And

36:52

he did the same thing. He turned it so that you

36:54

couldn't interact with people, but

36:56

you could still see people who

36:58

blocked you. It lasted for,

37:00

I think, two hours before

37:03

the revolt was so loud

37:05

and so angry and so

37:07

public that they rolled it back and

37:09

never spoke about it again. As far as I know,

37:11

there was never any like, you know, postmortem.

37:14

There was no discussion of it. There was a, we're

37:16

rolling it back. We're sorry. We're going to rethink this.

37:18

And then it was never mentioned again.

37:20

Right. Elon doesn't do postmortems.

37:22

no, no, no. And, I mean,

37:24

Elon at times will roll stuff back.

37:26

So it'll be interesting to see if that happens. I

37:30

am pretty sure that Elon has no

37:32

idea that Twitter tried this before and

37:34

that it failed miserably and was a complete

37:37

disaster and an embarrassment for

37:39

Twitter at the time. and we are starting

37:41

to see there is pushback. I have even

37:43

seen, people who are normally big supporters

37:45

of Elon have been screaming about

37:48

what a stupid idea this is. and

37:50

we're seeing a big exodus

37:52

from Twitter to other sites.

37:55

And so people are recognizing

37:57

that this is a problem and they don't like the way

37:59

the site works.

38:00

yeah, you were saying before we started recording that Bluesky

38:02

has had this massive, uh, surge in

38:04

users, which is kind of tallying with

38:07

the changes to block. Right. And we, this is where

38:09

I ring the, uh, you know, Mike

38:10

Yes, yes, yes, yes. Disclaimer,

38:13

disclaimer, disclaimer. I

38:15

am on the board of Bluesky. I am associated

38:17

with Bluesky. Consider me biased. Do

38:19

not take what I'm saying as impartial

38:22

for the next section here.

38:25

Just know he can't block people

38:27

on the platform for you, or unban you.

38:29

Don't get in touch with him.

38:31

Yes. Yes. Uh, yeah, please don't. but

38:34

yes, Bluesky has seen a massive influx

38:36

in users. like even bigger,

38:39

you know, Bluesky saw a huge influx

38:41

of users when, Brazil,

38:43

uh, banned Twitter, and

38:45

a whole, you know, basically, you

38:47

know, we jumped up about 4 million users

38:49

over the course of a couple of weeks from that

38:51

ban. This is larger

38:54

than that as of right now. I don't know if it will

38:56

last as long, but, right before we

38:58

started recording, I looked at the U.S.

39:00

App Store, iPhone

39:02

downloads, and Bluesky

39:04

was the number four

39:07

free app being downloaded,

39:09

which is pretty impressive. We were one ahead

39:11

of TikTok, ahead of Instagram,

39:13

ahead of WhatsApp, ahead of Gmail. the

39:16

only ones that we were behind were Threads, which

39:18

also, I believe lots of people

39:20

are probably going to Threads, uh, ChatGPT

39:23

and Google. So, having Bluesky, this

39:25

very small player in the space, being

39:28

the fourth most downloaded app almost,

39:30

and, it was reported, this is not me revealing

39:33

anything, uh, internal that,

39:35

that I'm aware of, but it was reported publicly

39:37

that within Bluesky, they

39:40

refer to EMEs, which

39:42

are Elon Musk Events, that

39:44

suddenly lead to a massive influx

39:46

of users on the platform. And

39:49

so it's interesting to see

39:51

like how this traces, and I'll note again,

39:53

bias, all of that, all the caveats.

39:56

you know, Bluesky implements block in a very

39:59

different way. And in fact goes further than,

40:01

Twitter did before, you know, it is referred

40:03

to as the nuclear block on

40:05

Bluesky, where when you block someone,

40:07

it obliterates every

40:10

interaction that you have with that person. Whereas

40:12

with blocks on Twitter, the way they, you know, have

40:14

worked for, a very long time is that

40:16

even if you block someone, other people can still track,

40:19

the conversation that you had, they can

40:21

sort of see what happened. On Bluesky, it

40:24

basically disappears and it is effectively

40:27

impossible to track down and it stops

40:29

harassment campaigns cold. Like

40:31

it is a very, very

40:33

impactful way of doing it. There are some

40:36

ways around some of it, but

40:38

they throw a lot more friction into the gears

40:40

and it has been, it's really powerful.

40:43

And it's sort of clued me into

40:45

how these very small changes and how

40:47

these things are implemented have huge impact

40:50

in terms of how harassment

40:52

campaigns can and cannot work on

40:54

different platforms.

40:56

Yeah, and are we taking the influx

40:58

of users to blue sky to mean that people

41:00

are interested in how platforms think about safety?

41:03

Is it something that we think is a kind of conscious decision

41:05

for users or do you think there's people are just shopping

41:08

around for alternatives?

41:09

I don't know the answer. And so this would

41:11

be purely speculation. You know, I think

41:13

that it's just a sense that, yes,

41:16

some of the safety features

41:18

matter. I mean, we saw this early on with

41:20

Bluesky, you know, when Bluesky first launched as

41:22

a beta, it was a private beta invite

41:24

only the first sort of controversy

41:27

was that they hadn't yet implemented block

41:29

at all. And this was, they were just sort of testing

41:31

out the system. And there were like

41:33

5,000 people on the platform and people were

41:35

so angry. How could you launch this

41:37

without block? Like block was like a, is

41:40

now considered a fundamental feature

41:42

of social media, which is kind of surprising

41:44

because it was not originally, that was a, a

41:46

later addition to all of this. And

41:48

as we see, even Twitter sort of was rethinking

41:51

how it works, but in our minds now people

41:53

have sort of learned to associate

41:55

blocking as a fundamental,

41:57

key safety feature

41:59

of social media. And if it's something social media needs,

42:02

then how it's implemented really matters.

42:05

And whereas Bluesky has gone for this nuclear

42:07

block, Twitter or X is

42:09

now moving in this opposite direction of

42:11

the loosest level of block that

42:13

you could possibly imagine. Something much more akin to

42:15

the concept of muting rather than

42:17

blocking. And so it's really

42:20

interesting, and I hope that we'll

42:22

find out more of the reasons why people

42:24

are switching, but, that seems to be the clear

42:26

reason why we're seeing this

42:28

massive, massive influx to Bluesky,

42:30

and I am assuming that Threads and also Mastodon

42:33

are probably seeing a similar influx of

42:36

people. And, uh, you know, I think

42:38

it's great. I think, exploring all these

42:40

different systems and recognizing that

42:42

now you have choices in terms of which

42:44

systems do you think are going to do the most to

42:47

keep you safe and make you feel comfortable on a platform.

42:49

I think that's, that's actually a really good thing.

42:51

yeah, no, it's, it's great that there's alternatives

42:54

out there that people feel they can even

42:56

try. obviously last week we had a story

42:58

about threads and Instagram moderation

43:00

being below par. there being a

43:02

lot of oddities to the moderation

43:04

process as a result of presumed moves

43:07

to AI. And actually this is a kind of

43:09

nice segue onto what will be a, probably

43:11

a final story for this week, Mike, but a

43:14

bit of an update to the explanation

43:16

for that, those technical glitches.

43:18

Yes. So they came, they came out and said

43:20

that, you know, we, did our whole discussion on it

43:22

and talked about how it was, obviously

43:25

AI because it was so stupidly done

43:27

and like blocking everyone who, who used

43:29

the word cracker, you know, without

43:31

understanding any of the context and how like

43:34

Cracker Jacks are a snack food and not

43:36

a racial insult. So we, assumed

43:38

naturally that it was an AI-driven thing, because

43:40

that was the only thing that made sense. But Instagram

43:42

came out and Adam Mosseri came out and said

43:44

that it was a human reviewer problem and

43:47

that they had not properly

43:49

instructed human reviewers on how to handle certain

43:51

kinds of context and things like that. And

43:54

I'm not sure I believe them.

43:55

Well, I mean, there's, there's a few parts to this, like

43:58

you don't believe him, right? And so you

44:00

think he's putting the emphasis on humans because

44:02

it makes, it's in his interest to

44:05

like disown the human part of this moderation

44:07

process, I think actually there's something

44:09

here about the kind of tooling that the humans were, enabled

44:12

with, like he mentioned about that

44:14

the tools didn't give the humans the context.

44:16

Humans aren't going to be able to kind of moderate effectively

44:18

if they don't have context about something. whether

44:21

or not you're right, it's a tooling aspect,

44:24

it's not the fact that there's people involved in the process.

44:26

Right.

44:27

Yeah, that is true. And, and, you

44:29

know, we know that the tools do matter, but

44:32

again, like, you know, kind of what we said last week was

44:34

that it is insane that, Meta,

44:36

the sort of most experienced

44:38

and the, you know, largest resourced

44:41

player in trust and safety would

44:43

have tools that were so bad. And we were talking about

44:46

on the AI front, but if that's also true

44:48

in terms of like the tools for

44:50

the moderators, that they don't have the

44:52

context. I understand, like, tools

44:55

for context are a difficult thing, and it's,

44:57

something I've actually spent a lot of time thinking about. It's

44:59

difficult because context

45:01

can come in all different ways and could be

45:03

external to, the content

45:06

that you're looking at could be external to the platform

45:08

itself, but not knowing

45:10

that Cracker Jack is

45:14

a... It's not a slur.

45:17

Yeah. The tool has to have been designed very

45:19

badly in the first place for

45:22

that to happen.
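
To make that failure mode concrete, here is a minimal sketch of the difference between naive substring filtering and whole-word matching with a benign-phrase list. The term list, phrase list, and function names are illustrative assumptions, not Meta's actual system:

```python
import re

# Illustrative term list; a real system would use a curated, policy-driven one.
FLAGGED_TERMS = {"cracker"}

# Contexts in which a flagged term is benign (snack foods, compound words, etc.).
# Longer phrases come first so "cracker jacks" is stripped before "cracker jack".
BENIGN_PHRASES = ["cracker jacks", "cracker jack", "firecracker", "nutcracker"]

def naive_flag(text: str) -> bool:
    """The failure mode: flag any substring hit, with no context at all."""
    lowered = text.lower()
    return any(term in lowered for term in FLAGGED_TERMS)

def context_aware_flag(text: str) -> bool:
    """Whole-word matching, after stripping known-benign phrases."""
    lowered = text.lower()
    for phrase in BENIGN_PHRASES:
        lowered = lowered.replace(phrase, " ")
    return any(re.search(rf"\b{re.escape(term)}\b", lowered)
               for term in FLAGGED_TERMS)

print(naive_flag("cracker jacks are a snack food"))          # True: false positive
print(context_aware_flag("cracker jacks are a snack food"))  # False
print(context_aware_flag("he called me a cracker"))          # True
```

Even that small amount of context handling is enough to let "cracker jacks" through while still catching the insulting use.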

45:23

yes, yes. That is my take on it

45:25

is, is, I don't know, that seems

45:27

like a huge mistake. And for a company as

45:29

big, like I could see like a very small

45:31

player pulling a kind of off-the-shelf

45:33

tool somewhere and not really

45:35

implementing it right, like having that issue.

45:38

I have that issue a little bit now with like moderating

45:41

comments on TechDirt. Our tools are not

45:43

perfect. And there are a couple of things where, like, context

45:46

would be really useful that I don't have, and I have to take

45:48

a few extra steps to look at like, Oh,

45:50

who's this person replying to? Because that actually matters,

45:53

Yeah.

45:53

but you know, we're tiny, right?

45:56

And so, like, Meta is

45:58

not, right? I am

46:01

amazed that they would not have tools in place that

46:03

could handle basic context
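
To sketch what "tools that handle basic context" could look like, here is a hypothetical moderation-queue record that carries the reply context along with the flagged comment, so a reviewer sees what a comment is responding to without extra lookups. Every name here is invented for illustration; this is not TechDirt's or Meta's actual tooling:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class ReviewItem:
    """One item in a hypothetical moderation queue (field names invented)."""
    comment_id: str
    author: str
    text: str
    parent_text: Optional[str]  # the comment being replied to, if any
    thread_title: str

def render_for_reviewer(item: ReviewItem) -> str:
    """Put the surrounding context on one screen so the reviewer
    doesn't have to click through to find who is replying to whom."""
    lines = [f"Thread: {item.thread_title}", f"Author: {item.author}"]
    if item.parent_text is not None:
        lines.append(f"In reply to: {item.parent_text}")
    lines.append(f"Flagged comment: {item.text}")
    return "\n".join(lines)

# A reply that looks hostile in isolation may read as banter (or vice versa)
# once you can see the parent comment.
item = ReviewItem("c42", "alice", "oh sure, brilliant idea",
                  parent_text="let's just ban every commenter",
                  thread_title="Moderation policy thread")
print(render_for_reviewer(item))
```

The point is simply that the expensive part, gathering context, happens once in the pipeline rather than once per reviewer decision.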

46:04

Yeah. Agreed. And I'm not, I'm not a

46:06

big fan of the headline of this article, which

46:08

we'll include in the show notes. Cause it does, it does

46:10

suggest that he doesn't row back on,

46:12

on that initial claim and he, he

46:15

does. So anyway. it's a complex

46:17

issue as always with content moderation

46:19

and trust and safety. I don't know if we've solved

46:21

any of the issues today, Mike, in, in the

46:24

time we've been talking, but it's been

46:25

I will say, this is actually a really good lead

46:27

into the bonus chat that I have with Mike Pappas,

46:30

where this question of buy versus build,

46:32

there are some issues about, how

46:35

do you choose which things and like, what

46:37

features do you want and how customized

46:39

do they need to be? And that's some of

46:41

what Mike and I talk about in this bonus

46:43

chat, because these are not easy questions.

46:46

these are really, really important ones. And so,

46:48

I, I think Mike and I get into that

46:50

in this bonus chat. And so actually we

46:52

didn't plan it this way, but it leads in really nicely

46:55

to the bonus chat between, uh, myself

46:57

and Mike Pappas from Modulate. So

47:09

Mike, welcome back to the show. Always

47:11

good to have you here. Always good to catch up. we

47:13

wanted to talk today. You, you recently wrote this

47:15

blog post on the Modulate site on

47:17

the issue of building versus buying

47:20

technology for trust and safety purposes.

47:22

And in particular around voice moderation,

47:24

which I think is an issue that really comes up a lot

47:27

in all sorts of contexts around trust

47:29

and safety right now, and it's a topic that I

47:31

heard a lot about at TrustCon and I hear about in

47:33

all sorts of, other contexts as well

47:35

when thinking about this stuff. And I was

47:37

just kind of curious, like, what made you

47:40

want to write about that topic in the first place?

47:42

Thanks for having me back, Mike. Always enjoy having

47:44

these conversations. The answer to this

47:46

is in some sense pretty mundane. We are

47:48

a trust and safety tools vendor. So

47:51

this is an inherent sort of existential

47:53

question for us: does

47:55

it make sense for people to buy

47:57

externally? We're a very

47:59

sort of mission driven, ethics driven company.

48:01

We have a lot of deep internal conversations

48:04

about how do we want to think about marketing

48:06

and sales strategy. We don't want

48:08

to trick someone into buying our product

48:10

if it's not actually a good fit for them. So

48:13

I think what's more interesting here is

48:15

really looking at it from that lens of:

48:18

Can we add a kind of value

48:21

by being an external organization

48:23

that you cannot get from

48:26

sort of spinning up an internal team to build

48:28

the same tool? And I think what's interesting is that

48:31

we do feel like there's a couple of areas where

48:33

we can provide value that way.

48:35

Yeah, I mean, I think one of the things, and, and, this

48:37

comes up in all sorts of contexts, again,

48:39

like not just within the trust and safety field. I mean,

48:41

I think there's lots of areas where everyone is always

48:43

having that build versus buy conversation.

48:46

And one of the things that comes up a lot is

48:48

this idea of like how central is the

48:50

thing to your business. And

48:52

the general sense being, you know, if something is more

48:55

central to your business, you tend

48:57

to lean more towards building versus

48:59

buying. But I'm not always sure that's true,

49:01

right? I mean, I think there are times where, even

49:04

if something is central, there is value

49:06

in it: if there is an expert who

49:09

has the ability to do more because of

49:11

scale or because of expertise or

49:13

something else, it actually makes more sense

49:15

to partner with that vendor. And so

49:17

sort of curious where you, how do you, how do

49:20

you feel about that? Or how do you think through those

49:22

things yourself?

49:23

Yeah, I mean, electricity is vital

49:25

to my business. That doesn't mean I build it myself,

49:28

right? To, to your point, there's a lot

49:30

of things that are essential components.

49:33

And because of that essentialness,

49:36

you cannot afford to get it

49:38

wrong. And I think as

49:40

we've seen over the last several years, you know, trust

49:42

and safety has long been

49:44

seen as kind of this cost center,

49:46

this "must we do it?" thing. And we'll try

49:48

to do it as little as possible. And

49:51

as both sort of regulation, consumer

49:54

sentiment, and frankly, just platforms

49:56

that care that are being built

49:58

around that perspective of caring

50:00

are all sort of looking at this and saying, no,

50:03

actually, we really should treat this as

50:05

essential. I actually

50:07

see that coinciding with more

50:09

of them starting to perk their ears

50:11

up a little bit and say, Let's take

50:13

a closer look at what else is out there

50:15

and think about is it worth

50:17

getting that really premium offering

50:19

as opposed to just trying to slap something

50:22

together to check a box?

50:24

Yeah, it's, it's interesting. I, I've

50:26

been thinking about this in a slightly different context

50:28

and I didn't realize I was going to go down

50:30

this path when we started this conversation, but you

50:32

bringing up the idea of people thinking of it as a cost

50:35

center. This is something that I've actually thought a lot

50:37

about lately as well, where,

50:40

years ago, I remember there was this whole debate

50:42

over like call centers and

50:45

customer support and a lot of companies

50:47

viewing that as a cost center and

50:49

then somebody, you know, had the bright idea,

50:51

like, wait, this is the main touch point

50:54

that many of our customers have with our

50:56

company or their service or products. Maybe

50:59

we should realize that like, customer support

51:02

is actually a marketing function rather

51:04

than a cost center that is just causing

51:06

headaches for us and that we try and keep as cheap

51:08

as possible. And in that

51:11

situation too, right. I mean, a lot of people choose

51:13

to still outsource it because they know that

51:15

even though it is a really important

51:17

feature of their business, it still makes sense to outsource

51:19

it because you can do that at

51:22

a much better rate. do you think of it as

51:24

sort of similar to that?

51:25

Yeah, I think that's right. And I think

51:27

it's, partly, you know, why,

51:29

why we really enjoy working with the kinds of

51:31

businesses that we do is because

51:34

they have that conception

51:36

of this is not just a cost center.

51:38

I mean, games are often built

51:40

by people who are, you

51:42

know, visionaries trying to create this particular

51:45

experience that it's really emotionally

51:47

important to them to realize the experience

51:49

that they had wanted to create,

51:52

and this is part of it. If, as soon as

51:54

you step into that wondrous world,

51:56

someone's calling you the N word. It's

51:58

not so wondrous anymore, right?

52:01

and I think even with some of the enterprise

52:03

platforms that we've been working with more recently,

52:05

we see similar things. It

52:07

comes from the culture and mentality

52:10

of the business that if you're the

52:12

kind of organization that says we want

52:14

to win by treating people

52:16

well and by allowing that

52:18

to elevate them and help them

52:21

create something even more wonderful,

52:23

then you're going to start treating

52:26

trust and safety more as

52:28

a core component of your business.

52:30

Yeah. And, you know, I think it's interesting,

52:32

this comes out in the blog post that you wrote,

52:35

that there are some things that are very specific

52:37

to the voice space

52:39

and the work that you do. And you know,

52:41

it's funny cause I'm, I read your blog post

52:44

on, building versus buying and I'm just

52:46

like, all of the things that you

52:48

have to do to do voice

52:51

moderation well... it seems horrifying

52:54

to me, right? It's just like this huge

52:56

list of things. And so not that

52:58

I'm in the space where I need voice moderation,

53:00

but like, for me, it's an easy call. There's

53:02

no way I want to build, so

53:05

do you want to talk a little bit about some of the specifics

53:07

that you sort of discussed in that blog post about, what

53:10

companies should be thinking about specifically

53:12

in the voice moderation space.

53:14

Sure. Yeah. I, I think broader

53:16

than voice moderation, the build-or-buy decision really comes

53:18

down to three major

53:21

components. There's the

53:23

data. What decisions

53:25

do you want to make? How can you make the best decisions?

53:28

There's the technical capability to actually,

53:30

build those tools and have them function. And

53:33

then there's sort of the user interface. What's

53:35

the experience of this? Do we know how to make that

53:37

efficient? In voice, each

53:39

of those things kind of takes on an extra level,

53:42

right? In order to get good data, you

53:44

first of all need it to be voice

53:46

native data. If you're just looking at transcripts,

53:49

we've worked with some platforms that took

53:51

text transcripts and then sent

53:53

them to, you know, Amazon Mechanical Turk

53:55

or something and had someone try to read the transcript,

53:58

you're not getting the emotion, you're not getting

54:00

any of what we're actually looking

54:02

for in that voice to really understand

54:04

that. So if you want to build a proper

54:07

voice moderation tool, you need

54:09

data that's native to

54:11

real conversations between

54:13

people. This is also where I think

54:16

you get a benefit from working with an

54:18

external partner, because we

54:20

see what's happening across

54:22

so many different platforms. We

54:25

can see emerging new trends that haven't

54:27

reached your ecosystem yet. We

54:29

can get much broader data sets to

54:31

be more robust to different kinds of accents

54:33

and languages. We just have

54:35

a lot more ability to do

54:37

that kind of stuff because of the cross section

54:40

we see. The technical sort of

54:42

capability. I don't think I need

54:44

to spell out too much there. Processing voice

54:46

is really complicated. You need a specialized

54:48

team on that. But just on the user

54:50

interface piece, I'll call out for

54:52

voice and for a lot of content moderation,

54:55

but it really becomes extra true in voice.

54:57

Moderator mental health is a consideration

55:00

you have to make, right? If you're not

55:02

just seeing this awful stuff, but hearing

55:04

it over and over again over the course of

55:06

a day, that can be really brutal, and

55:08

it can actually lead to physical issues, such

55:11

as tinnitus. So being able

55:13

to actually process that audio

55:15

and make sure that if people are

55:17

listening to it, they're doing it as little

55:19

as possible. You're pre-processing

55:21

it so it's not causing physical damage.
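
As a rough illustration of that kind of pre-processing, here is a deliberately crude sketch that caps the peak level of a clip before a human moderator hears it. A production pipeline would use proper loudness normalization and limiting rather than a single gain scale; the function name and threshold are assumptions:

```python
import numpy as np

def attenuate_for_reviewer(samples: np.ndarray, max_peak: float = 0.3) -> np.ndarray:
    """Scale a mono float buffer (values in [-1.0, 1.0]) so its loudest
    sample never exceeds max_peak before a human moderator hears it."""
    peak = float(np.max(np.abs(samples))) if samples.size else 0.0
    if peak == 0.0 or peak <= max_peak:
        return samples
    return samples * (max_peak / peak)

# A clipped scream at full scale gets pulled down to a safer level.
scream = np.ones(16_000, dtype=np.float32)   # one second of audio at 16 kHz
safe = attenuate_for_reviewer(scream)
print(float(np.max(np.abs(safe))))           # ~0.3
```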

55:23

So, that kind of extra stuff

55:25

is also easily missed when someone

55:28

is just thinking, oh, how can I, you know,

55:30

spin something up as quickly as possible here?

55:33

Yeah. I think that's, that's interesting. There's, I mean,

55:35

there's a few things that are obviously specific to

55:37

the voice world that I just never even would have thought

55:39

of, as well. Um, it's kind of interesting.

55:41

You know, a lot of this discussion is really, and

55:43

obviously, as you mentioned, you're in the vendor

55:46

space. And so this is an existential

55:48

question for you, but I'm sort of curious.

55:50

not as a devil's advocate kind of question,

55:52

but in which situations do

55:54

you think it actually does make sense for companies

55:57

to build, build their own?

55:58

I think the biggest reasons are sort

56:00

of where they get an edge on one of

56:02

those dimensions. So, uh,

56:05

on the data dimension, if you have a really

56:07

unique kind of experience,

56:09

or you have some way that people are interacting

56:11

that really is not reflected

56:14

in other situations, maybe

56:16

it makes more sense for you to say, Hey, we need

56:18

to train on our own data because that has

56:21

truly nothing to do with anything else. I'm honestly

56:23

struggling to come up with a good example of

56:25

that, but I've, I've looked at some very

56:27

niche apps of, you know, this is specifically

56:30

chess camp only for like four

56:32

to six year olds, which is a very different

56:34

vibe of how to have a conversation. Maybe

56:37

something very niche like that. I think

56:39

it's more commonly true that it's a

56:41

technical advantage. That's not

56:44

true for things like text and voice that

56:46

are standardized, images

56:48

as well, but UGC,

56:50

obviously coming from gaming, this kind of moderation

56:53

is a big topic that we hear about a lot,

56:55

and usually when you're thinking about what are people building

56:58

in my game, you can't just print

57:00

out an image of it and say,

57:02

Oh, run that through the Is It Genitalia

57:04

filter. You have to look at it from different

57:07

angles. You have to think about how things can compose

57:09

together. You have to think about are they actually

57:11

writing something in the game

57:13

universe? All of

57:16

those factors are going to be very

57:18

specific to your ecosystem

57:20

and require access to

57:22

your ecosystem at a deeper level

57:24

than just grabbing the text or grabbing

57:27

the audio clips of what's being sort of

57:29

said. So I think those are the

57:31

situations where it makes more sense to try

57:33

to build in-house, when you just have access

57:35

to that in a deeper way than an

57:37

external partner would be able to.

57:40

Very interesting. One sort of final question

57:42

here. And this, this is something that has come

57:44

up when I've had this discussion with people as

57:46

well. In terms of thinking about it,

57:48

one of the fears that some

57:50

people have around doing

57:53

the outsourced buy decision

57:56

is that it leads to sort of homogenization

57:59

of the tools that if everybody's using

58:01

sort of the same tool, then

58:03

it gets stuck with just, you

58:05

know, one or maybe two, two or three

58:07

players in the space. Do you have any thoughts

58:09

on, on that or how to think

58:11

through that issue? Right.

58:15

I don't really buy it, to be honest. Um,

58:17

you know, If you're building

58:19

one thing internally, that

58:22

itself is an echo chamber.

58:25

If you're pulling from external

58:27

tools, if there was only one external

58:30

tool that everyone was using, yes,

58:32

I would have some concerns. Now, of course, there's

58:34

configuration options and all that, but I would still

58:36

have some concerns. But we're so

58:38

amazingly far from that world.

58:40

We are very much in a competitive ecosystem

58:43

where there's a lot of different tools out there,

58:45

bringing different approaches, different

58:48

mentalities, handling different

58:50

kinds of harms, some focused

58:52

on the financial stuff, some focused on interpersonal.

58:55

I don't think we're in any risk

58:58

of everything calcifying into

59:00

the exact same environment there.

59:02

And I think by having more

59:04

platforms talking openly to

59:07

folks like vendors that do sort

59:09

of operate in that in-between

59:11

space. That actually

59:13

sort of up-levels the whole conversation

59:15

we're having as a collective industry because

59:18

now we're seeing all those different

59:20

perspectives being gathered together

59:22

in one place.

59:23

Cool. Very, very interesting. Yeah. This

59:26

is such a fascinating topic. And

59:28

I think it's something that a lot of people are thinking about.

59:30

So, I'm glad you wrote the blog post.

59:32

I'm glad you're coming on the podcast to

59:34

talk about it as well. And I'm sure there will be many

59:36

more conversations along this line but,

59:38

thanks for joining us today.

59:40

Yeah, thanks again for having me. Always happy to chat.

59:45

Thanks for listening to Ctrl-Alt-Speech.

59:48

Subscribe now to get our weekly episodes

59:50

as soon as they're released. If your

59:52

company or organization is interested in sponsoring

59:55

the podcast, contact us by visiting

59:57

ctrlaltspeech.com. That's

59:59

C-T-R-L Alt Speech dot com.

1:00:02

This podcast is produced with financial support

1:00:05

from the Future of Online Trust and Safety Fund,

1:00:07

a fiscally sponsored multi donor fund

1:00:09

at Global Impact that supports charitable

1:00:11

activities to build a more robust, capable,

1:00:13

and inclusive trust and safety ecosystem.
