Once You Slop, You Can't Stop

Released Thursday, 20th March 2025

Episode Transcript

Transcripts are displayed as originally observed. Some content, including advertisements, may have changed.

0:00

So Mike, I've been browsing the internet

0:02

for some shelves, right? You know, as

0:04

one does. As one does, you know,

0:07

this is the thing I do in

0:09

my spare time now. My house renovation

0:11

is almost coming to an end.

0:13

I've talked about a bunch of this on the

0:16

podcast, I hope listeners are on board. But

0:18

I'm basically at the shelf stage, so

0:20

I was looking around for some shelves,

0:23

and naturally I ended up on eBay,

0:25

and I thought we haven't used

0:27

eBay on Ctrl-Alt-Speech. Oh. And

0:29

you can take that as a kind of material

0:32

search, or you can think of it

0:34

as a kind of philosophical prompt. Well,

0:36

I was going to say there's

0:38

a few different ways, but I

0:40

can search for a different government

0:43

for the US, but I think I would

0:45

like to search for more time. I

0:47

feel like I don't have nearly enough

0:49

time for anything. Yes, we're just discussing

0:51

in terms of like how much stuff

0:53

is going on. I could use more

0:56

time. So if I could buy more

0:58

time on eBay. I'd go with that.

1:00

Yeah, okay. What about you? Maybe a

1:02

personal assistant. Search for anything then. If

1:04

I was to search for something,

1:06

I would search for the kind of

1:08

dying vestiges of a free

1:11

speech platform and that links

1:13

to something that we'll talk

1:15

about in today's podcast. So

1:17

I'll explain more. Hello

1:27

and welcome to Ctrl-Alt-Speech,

1:29

your weekly roundup of the major

1:31

stories about online speech, content moderation

1:33

and internet regulation. It's March 20th

1:36

2025 and this week's episode is brought

1:38

to you with Financial Support from the

1:40

Future of Online Trust and Safety Fund.

1:42

This week we're talking about AI slop,

1:44

the Online Safety Act coming into force

1:46

and Meta's attempt to build a better

1:48

community notes than X. My name is Ben

1:50

Whitelaw. I'm the founder and editor of

1:53

Everything in Moderation. And I'm with a very

1:55

time-poor... Mike Masnick. How much would you

1:57

pay Mike to have an extra hour?

1:59

every day. Gosh, I don't know. That

2:01

is a good question. Just one hour.

2:03

Yeah, one hour is not enough. I

2:06

think I mean an extra day every

2:08

week. Yeah, okay. Where nothing happens, right?

2:10

So... I need everybody else, like everybody

2:12

else can stick with the seven day

2:14

week and I need that extra day

2:17

when nothing is happening and it's just

2:19

perfectly quiet and I can just catch

2:21

up on stuff. I think that's what

2:23

I would, yeah, someone get me that,

2:25

someone get me that. So you're paying

2:27

for the whole world to stay still,

2:30

essentially. Yeah. No, I mean, you know,

2:32

in their lives, like, they can just

2:34

think that the world still has seven

2:36

days. But I want, I want, like,

2:38

this pause, that I can still do

2:40

stuff. Give me that extra day. I'm

2:43

not greedy. You know, I don't, I

2:45

don't need two days. Just, just one.

2:47

It reminds me of a children's TV

2:49

show in the UK called Bernard's Watch.

2:51

Okay. And if you ever came across

2:54

this. No. It's kind of unhinged. He could click

2:56

and stop time and he went about his

2:58

day-to-day life and everyone else is frozen.

3:00

Yeah, that's it. You want Bernard's

3:02

Watch, don't you? I think, yeah,

3:04

yeah, I think there was a movie,

3:07

like a Hollywood movie on that premise

3:09

too, but I didn't see it. I

3:11

remember the preview with

3:13

that premise, but no, I just need

3:15

that, but like I wouldn't do anything

3:18

nefarious with it. I would literally just

3:20

sit and work, right, right? I hope

3:22

you don't think this is work. This

3:24

is our time. This is our time

3:26

to decompress, Mike. I think that's a

3:28

sensible thing to buy. My purchase would

3:31

be something very not sensible. And I

3:33

was referring to it in the opening

3:35

today. I want to buy the Twitter

3:37

logo that's up for sale again. I

3:39

don't know if you saw this. The

3:41

big bird. used to sit outside of

3:44

Twitter's office back when it was called

3:46

Twitter in San Francisco has gone up

3:48

for sale again and it is the

3:50

perfect analogy for what has happened to

3:52

Twitter slash X since it was bought

3:55

by Elon Musk. So it was, I

3:57

don't know if you remember in 2023, Musk

3:59

sold it as part of a kind

4:01

of fire sale, like a kind of

4:03

garage sale. Is that what you call

4:05

it in the US? Yeah. Sure. Yeah.

4:08

And it sold for $100,000. It's currently

4:10

up for sale now. and you just

4:12

looked right and it was 2700 dollars?

4:14

27,000. 27,500. Okay, so yeah, just as

4:16

Twitter's/X's value plummeted over the last

4:18

several years, although there's been some sort

4:21

of correction this week, so has Larry

4:23

the Bird, the giant blue logo that used

4:25

to sit outside the office in San

4:27

Francisco. Maybe I could use it as

4:29

a shelf. Yeah, there you go. You

4:32

take two birds with one stone. But

4:34

you look at that. But the, yeah,

4:36

I mean, I guess it's just been

4:38

sitting in storage in a storage facility

4:40

in San Francisco. Yeah. And so you'll

4:42

have to pay for shipping to the

4:45

UK. Yeah, yeah, if you get it.

4:47

But I will note that it says

4:49

there's about six hours left as we

4:51

record this. So by the time this

4:53

comes out. You may only have a

4:55

few minutes left to bid on it

4:58

and purchase it for Ben. In case

5:00

any of our listeners want to make

5:02

a donation to control all speech and

5:04

to Ben's shelving fund in particular, you

5:06

can purchase the bird and send it

5:09

off to London. Yeah, you would get

5:11

regular photos if you contribute. Yeah, I

5:13

wonder what your wife would feel about

5:15

a giant bird showing up for your

5:17

newly renovated home. Yeah, she probably wouldn't

5:19

love it, but it's funny to see

5:22

such a kind of old vestige of...

5:24

what Twitter used to be come up

5:26

for sale again in such circumstances. Yeah,

5:28

cool. So purchasing power aside, you know,

5:30

our wish lists, we've kind of dealt

5:32

with today now, but we have a

5:35

lot of stories that we want to

5:37

get through. We have some really interesting

5:39

kind of notable stories that we're going

5:41

to talk through and explain, just as

5:43

we always do in the podcast. Before

5:46

we do, a request for some help

5:48

from our listeners, we haven't had a

5:50

review of the podcast since, I think, September

5:52

2024, Mike. Yeah. Isn't that tragic? Yeah.

5:54

So if you don't hate listening to

5:56

this podcast, that's... that's... I've

5:59

mixed it up a bit. If you

6:01

don't hate listening to this, very low

6:03

bar, you're sitting there. I know, but

6:05

it might prompt people to think, actually,

6:07

this is me, you know, I'm somebody

6:09

that doesn't hate this podcast. Maybe I'll

6:12

spend some time helping Mike and Ben.

6:14

in their quest to have their podcast

6:16

discovered across all of the platforms on

6:18

which it is disseminated every week. If

6:20

you don't hate it, Spotify, Apple podcast,

6:23

wherever you get the podcast, leave a

6:25

little review. Doesn't need to be very

6:27

long. Yeah, exactly. And you know, we

6:29

look at the numbers and we've seen

6:31

like our download numbers increase. So there's

6:33

more of you. There's definitely some of

6:36

you who weren't listening back in September

6:38

of 2024. So someone out there must

6:40

be listening to this and thinking. I

6:42

have not written a review for

6:44

Ctrl-Alt-Speech and I could, right? Like think

6:47

positively. You can do it if you're

6:49

listening to this and you haven't written

6:51

a review. And we'll give you a

6:53

shout out in next week's podcast if

6:55

you do. We won't shame you. There's

6:57

no shame in submitting a review, but

7:00

we'll give you a shout out. There

7:02

is a risk, Mike, that some of

7:04

our listeners perhaps are connected to our

7:06

first story of this week. I'm hoping

7:08

that every single... download and listen is

7:10

a real human person who works in

7:13

the industry. There's a small chance. There's

7:15

a few... AI generated bots. That's how

7:17

the internet works. I'm sorry to say

7:19

it to you. And the first story

7:21

from 404 Media really goes deep on

7:24

this idea of AI slop and content

7:26

being kind of created as a result

7:28

of AI generated systems and platforms. I

7:30

got so deep into this, explain what

7:32

it is, why you picked it, and

7:34

why it's important to our listeners. Yeah,

7:37

I mean, obviously 404 Media does such

7:39

wonderful work, but this is such a

7:41

wonderful story, kind of laying it out. Everybody's

7:43

heard about, you know, the rise of

7:45

AI and AI slop, and 404 has,

7:47

in fact, done a lot of

7:50

reporting in general on AI slop and

7:52

the fact that you know social media

7:54

especially Meta properties in particular are sort

7:56

of filling up with this stuff and

7:58

there's been this like confusion over why

8:01

and how, but part of what's

8:03

really interesting to me here is just

8:05

like not just how much is getting

8:07

filled up with just nonsense AI stuff

8:09

but the fact that there are these

8:11

people these content creators who are basically

8:14

saying like we're fighting the algorithm and

8:16

we're trying to figure out the best

8:18

way to get the most click throughs.

8:20

And the best way to do that

8:22

is take as many shots on goal

8:24

as we possibly can. And AI will

8:27

just generate a ton of stuff. We

8:29

don't have to put work into it.

8:31

You look at the really successful content

8:33

creators like Mr. Beast or whatever, who

8:35

spends millions of dollars on every video

8:38

and then is able to pay that

8:40

off because he gets tons and tons

8:42

of views. But these guys are saying

8:44

like that space is too expensive. There's

8:46

too much capital costs up front to

8:48

get there. and it's too competitive. And

8:51

so the better thing to do is

8:53

just use this bullshit generator engine to

8:55

generate as much bullshit as possible, put

8:57

it all out there. If stuff doesn't

8:59

work, it's no loss to us because

9:01

it doesn't cost us anything, and therefore,

9:04

let's just get it out there until

9:06

we find something that hits. And so

9:08

now you have a whole bunch of

9:10

people doing that. And so that's sort

9:12

of like the motivating factor behind a lot

9:15

of this is this idea that, well,

9:17

we need to get attention and the

9:19

best way to get that attention is

9:21

just to try as many things as

9:23

possible and see what actually clicks. And

9:25

then some of these guys are like

9:28

teaching other people how to use AI

9:30

Slop and that's becoming like a, you

9:32

just have this whole ecosystem of nonsense,

9:34

which is all about trying to game

9:36

the algorithm. Yeah, the bit that I

9:38

really like about this piece is the

9:41

comparisons to a kind of brute force

9:43

attack. Yeah, which is obviously a kind

9:45

of cybersecurity term. You know more about

9:47

it than me, but like, talk us

9:49

through like why AI Slop is kind

9:52

of like a brute force attack. Right,

9:54

I mean it's just a question of

9:56

like trying to get through, right? And

9:58

so like a normal brute force attack

10:00

is like, the simplest version of a

10:02

brute force attack is like trying to

10:05

guess a password or something, right? And

10:07

so you just will generate as many

10:09

random, not random, but you know, passwords

10:11

that might get into an account. Or

10:13

if you're like looking for some other

10:15

way into a website, you'll go through

10:18

like, scrape the entire thing and try

10:20

and find like every possible way in.

10:22

It's just like trying as many things

10:24

as possible. It's not very sophisticated. It's,

10:26

you know, it's brute, like, as it

10:29

says. But this is

10:31

that for the algorithm. And the thing

10:33

that it kind of reminds me of

10:35

in a lot of ways is... When

10:37

Google came along, there were other search

10:39

engines in the 1990s and the rise

10:42

of the web, and none of the

10:44

search engines were that good. And then

10:46

Google sort of solved search by using

10:48

this page-rank system. And they said, you

10:50

know, we have the system to actually

10:53

make sure that the best stuff gets

10:55

to the top. And the way it

10:57

was originally done was based on how

10:59

many links with the idea that things

11:01

would only get links if they were

11:03

good. But pretty quickly, people figured it

11:06

out like, oh, here's a way to use this to

11:08

our advantage. As more people go to

11:10

Google to figure out where to go,

11:12

if we can just get more links

11:14

and then it became this whole crazy race

11:16

and one-upping each other with search engine

11:19

optimization, and often what's sometimes referred to

11:21

as black hat methods, sort of sneaky

11:23

methods, to try and get around it. I

11:25

mean, not a day goes by when

11:27

I don't get emails from spammers asking

11:30

to pay me to put links on

11:32

Techdirt, because Techdirt has, you know, a strong

11:34

reputation within Google and they will offer

11:36

me money to put links because they

11:38

want those links to game the system.

11:40

But you know you could use that

11:43

money to buy more time. I thought

11:45

you were going to say to buy

11:47

the Twitter. Yeah, exactly. Either one would

11:49

do. I will say I every once

11:51

in a while when I'm really like

11:53

in a cynical mood. I will respond

11:56

to one of those guys and say

11:58

sure it's a hundred million dollars. And

12:00

usually they disappear after that. Occasionally

12:02

I've had one come back to be like

12:04

no what's the real number and I'm

12:07

like a hundred million dollars I'm not

12:09

joking about this like yeah I'm gonna

12:11

destroy the the reputation and value of

12:13

my site and might as well get

12:15

paid for real you know not whatever

12:17

they're offering they're always offering like a

12:20

hundred bucks or something right that you

12:22

know but every system gets gamed and

12:24

it's just that in the past like

12:26

gaming social media algorithms was tricky right

12:28

I mean you had to put in

12:30

the work which is what they're like

12:33

you know there's some inexpensive YouTube channels

12:35

or Instagram things that have been successful,

12:37

but the really successful ones are a

12:39

little bit more produced. And so what

12:41

the AI is enabling them to do

12:44

is to like make it cheaper to

12:46

game the system. So now everybody's doing

12:48

that. And so to me it sort

12:50

of then becomes this arms race, you

12:52

know. We saw this with Google and

12:54

like at some point in the sort

12:57

of mid to early 2000s, Google completely

12:59

revamped their ranking algorithm because they knew

13:01

that they had so much search spam

13:03

and they referred to it as search

13:05

spam that was clogging up the results

13:07

and making the experience worse. And so

13:10

they completely redid the algorithm. Some companies

13:12

that had sort of been built up

13:14

around search spam got destroyed by it

13:16

and there was like the big shakeout.

13:18

I'm guessing we're going to have to

13:21

see something similar happen in the social

13:23

media space? Yeah. And that there's got

13:25

to be some way that Meta in

13:27

particular, and YouTube to some extent, have to

13:29

figure out how to deal with this

13:31

stuff and whether or not people actually

13:34

want this content because I think yeah

13:36

like people are clicking on some of

13:38

it but then I think people are

13:40

getting annoyed or like people are clicking

13:42

on it because of the novelty and

13:44

like why am I seeing this nonsense

13:47

you know? Yeah. To kind of emphasize

13:49

the point and what the piece does

13:51

really well is just emphasize the insanity

13:53

of some of this content. Oh yeah,

13:55

it's bonkers isn't it like there's crazy

13:58

stuff like there's like giraffes eating people

14:00

and spawning on subway platforms there's like

14:02

it's almost it's bonkers you know there's

14:04

like grannies kissing llamas and there's like

14:06

heads appearing out of sharks and models

14:08

sitting on the street of markets being

14:11

fed food by like market traders there's

14:13

a man with I can't even explain

14:15

how crazy it is yeah but the

14:17

weird thing is it's kind of watchable

14:19

yes you kind of like can't look

14:21

away which obviously is why that content

14:24

gets recommended to other people because people

14:26

are engaging with it to some degree

14:28

the thing that I you talked about

14:30

the kind of speed with which this

14:32

has happened Mike it took years for

14:35

people to figure out how to game

14:37

Google and to an extent game other

14:39

social platforms. The speed with which this

14:41

has happened is I think what's shocking

14:43

to me. And I wondered like if

14:45

this is about the shift from social

14:48

platforms that are based on following other

14:50

people and then the shift to people

14:52

being grouped into cohorts or segments and

14:54

just being served content because that shift

14:56

means that AI slop can kind of

14:59

proliferate, right? If this stuff had happened

15:01

back in the day and you were

15:03

following somebody on Twitter and they started

15:05

sharing insane videos of a man, you

15:07

know, a pizza-dough man rolling out his

15:09

own stomach, which is one of the

15:12

ones in the

15:14

article if somebody did that you would

15:16

unfollow them right or you know you

15:18

would you have the option to opt

15:20

out of that you don't have that

15:22

option when you're on TikTok or whether

15:25

you're on Reels on Instagram when you're

15:27

just served this ongoing fire hose of

15:29

posts is that what's changed I think

15:31

yeah, it gets to this issue of

15:33

like how social media for the most

15:36

part, not entirely obviously, but for the

15:38

most part switched from this thing that

15:40

it was the people that you followed,

15:42

which then led to the rise of

15:44

the algorithmic feed. And the algorithmic feed

15:46

originally on social media was designed to

15:49

sort of highlight the content from the

15:51

people that you followed that would be

15:53

most relevant to you because otherwise it

15:55

was just an undifferentiated feed from people you follow,

15:57

but you would miss stuff. And so

15:59

it was an attempt to do that.

16:02

The big shift then was to go

16:04

beyond that to the sort of TikTok-

16:06

style For You page, which was beyond

16:08

the stuff that you were directly following,

16:10

having that algorithm, try to pull in

16:13

other stuff. And you can understand where

16:15

there's like some element of value there,

16:17

because like maybe you don't know about

16:19

some other people. But then that has

16:21

introduced this thing, where now it's entirely

16:23

the algorithm. And so where the shift

16:26

is that it went from you saying

16:28

who you want to follow being the

16:30

signal for the kind of content you

16:32

wanted, to now the algorithm beginning to

16:34

take on not just that as the

16:36

signal But also what things are you

16:39

watching and then what might similar people

16:41

want to watch right? And so you

16:43

have this the concept of cohorts and

16:45

everything like that Whereas then you could

16:47

opt out of following certain people, you

16:50

can't opt out of your cohort that

16:52

they sort of built for you. And

16:54

that makes for a very different experience.

16:56

And so I mentioned this to you

16:58

before the podcast. I just yesterday as

17:00

we're recording this, I just interviewed Chris

17:03

Hayes, who's the cable news journalist, really

17:05

interesting guy who just came out with

17:07

this book called The Sirens' Call, which

17:09

is all about how our world is

17:11

now based on attention and writing for

17:13

attention and business models built on attention.

17:16

And I think all that plays into

17:18

this. That interview will be on the

17:20

Techdirt podcast soon, maybe next week. We

17:22

got to figure out which thing is

17:24

rolling out when, but probably next week.

17:27

But we had a really interesting discussion

17:29

about this very thing and how algorithms

17:31

have sort of taken over our lives.

17:33

But, you know, one of the things

17:35

that we really tried to explore in

17:37

that conversation, which went, I thought went

17:40

pretty deep. was this idea that algorithms,

17:42

like a lot of people right now

17:44

want to blame the algorithms, and so

17:46

I think it's really easy to look

17:48

at this 404 story and say like

17:50

this is the algorithm's fault, right? Like,

17:53

oh, you know, algorithms, and you sort

17:55

of, I'm not saying you did that,

17:57

but you were saying we've switched from

17:59

this thing, where was the people you

18:01

followed to the algorithms, and that leads

18:04

some people say, well, you know, the

18:06

answer that then has to be like

18:08

outlaw the algorithms. But users

18:10

don't actually really like that. Like the

18:12

pure following feed doesn't work as well

18:14

as a good algorithmic feed. The

18:17

issue to me, and this is a

18:19

lot of what I explored with Chris,

18:21

is who controls the algorithm and for

18:23

what purpose, not so much the existence

18:25

of the algorithms themselves. And as I

18:27

said, like the early point of the

18:30

algorithms was good. It was to sort

18:32

of highlight the content that would be

18:34

more valuable to you and would be

18:36

more useful to you in the same

18:38

way that like the success of Google

18:41

early on was that it found the

18:43

stuff that you were searching for that

18:45

was better and more useful. It's just

18:47

the problem is that then people start

18:49

to game it. you're dealing with a

18:51

bunch of different incentives. You have the

18:54

company's incentives, they're trying to make money,

18:56

you have the people who are trying

18:58

to game the system, they're trying to

19:00

make money, and then you just have

19:02

like little old you who just is

19:05

trying to find interesting information. And you're

19:07

just kind of like at the whims

19:09

of all this other stuff. But if

19:11

you had more control over that algorithm,

19:13

right? So I mean you were saying

19:15

before, like it was... people you were

19:18

following, that is still an algorithm, but

19:20

it was one that you had control

19:22

over so you could remove someone. And

19:24

so the thing to me that I

19:26

keep coming back to is like the

19:28

real issue is who controls the algorithm

19:31

and what their intent is. So when

19:33

it's a company that's just trying to

19:35

keep you engaged or trying to sell

19:37

you stuff or trying to really sell

19:39

your attention to advertisers, which is the

19:42

way that most of these companies work

19:44

right now, then their incentive is misaligned

19:46

with you that leads to the sort

19:48

of enshittification. And so that fits in

19:50

neatly with then a bunch of crazy

19:52

people with access to these AI tools

19:55

who can just pump out so much

19:57

slop and just throw it into the

19:59

system doesn't matter to them if it

20:01

doesn't work. Because all the incentives are

20:03

messed up for the users. Yeah. So

20:05

what you're saying is kind of that

20:08

algorithms as a term have

20:10

been kind of associated with very centralized

20:12

platforms. Yes. The ones that we've used

20:14

for over a decade now, but what

20:16

you're saying is actually if those algorithms

20:19

are controlled by users and there are

20:21

dials and levers and you

20:23

can tweak them, then actually the algorithms

20:25

themselves aren't the issue. That makes sense.

20:27

The kind of issue that I was

20:29

wondering about was like something you touched

20:32

on is when is the course correction

20:34

going to happen? You know, when are

20:36

platforms going to limit the visibility

20:38

of this kind of AI content, do

20:40

you think they have incentives to do

20:42

so? And where does labelling fit into

20:45

this? Because I remember last year, around

20:47

this time, Meta, which is where 404

20:49

Media focuses much of its reporting in

20:51

this piece, came out and said we're

20:53

going to label AI content. We're going

20:56

to be able to detect. They did,

20:58

didn't they? Yeah, they said, we've got

21:00

this covered, don't worry about the rise

21:02

of AI content. We're going to detect

21:04

the signals and we're going to be

21:06

able to label to users that this

21:09

is AI generated content and we're going

21:11

to work with other platforms and other

21:13

partners, going to kind of embed ourselves

21:15

into some of those existing initiatives as

21:17

one called C2PA, which has got a

21:19

lot of the platforms signed up and

21:22

the suggestion was that they were going

21:24

to make it very easy for us

21:26

to kind of opt out of that

21:28

content, or for that content to be

21:30

kind of downgraded if it was lower

21:33

quality. It doesn't feel like that's happening.

21:35

No. Do you think it will? It goes

21:37

back to incentives. Yeah, it's a good

21:39

question, right? I mean, Mark Zuckerberg seems

21:41

to have this view that like AI

21:43

content is actually a good thing for

21:46

the platform. He's making sort of a

21:48

bet on that and he sort of

21:50

talked about that, right? And they've talked

21:52

about building their own, like AI generated

21:54

users on social media that could interact

21:56

when we've talked about that in the

21:59

past. His view, and therefore the company's

22:01

view, is whatever keeps people coming back

22:03

and using the platform, is good. I

22:05

do wonder how successful this could be

22:07

long-term, right? You know, if you're just

22:10

getting all this garbage content, like sure

22:12

it is entertaining and amusing the first

22:14

few times, like, you know, check this

22:16

shit out, basically. But after a certain

22:18

point, you have to be like... you

22:20

know, come on. Why am I spending

22:23

all my time looking at this when

22:25

I could have more sort of realistic

22:27

engagement somewhere else? And so I don't

22:29

think this works long term. I sort

22:31

of feel, if anything, the most likely

22:34

scenario is that it follows the path

22:36

that Google followed at the time where

22:38

they suddenly decided, okay, we need to

22:40

crack down on this. And then it

22:42

becomes a big project and they come

22:44

up with some system to crack down

22:47

on it. And that only lasts for

22:49

so long. I mean, I think a

22:51

lot of people would argue... probably correctly

22:53

today, like Google is a bunch of

22:55

garbage again and the search engine optimizers

22:57

have really won and Google's made a

23:00

lot of really bad decisions that have

23:02

really sort of enshittified the results

23:04

in some cases. You know, it's a

23:06

constant struggle and a constant battle and

23:08

you hope that they'll reach the point

23:11

that they'll fix it. But I sort

23:13

of feel that meta in particular is

23:15

going to have to realize that this

23:17

this kind of content is not a

23:19

long-term sustainable success, especially as there are

23:21

more and more alternatives that are sort

23:24

of trying to take a different approach.

23:26

And you know, one of the things

23:28

that, and this is another thing that

23:30

came up in the conversation that Chris

23:32

and I had, which is that more

23:34

and more people seem to be moving

23:37

away from social media platforms like this

23:39

to like group chats, where there is

23:41

no algorithm and it's much more authentic

23:43

and you're able to engage and the

23:45

feeling of fun that people had in

23:48

the early MySpace, Facebook, Twitter years, people

23:50

seem to be getting that feeling more

23:52

from group chats than they are from

23:54

global social media yeah and it's because

23:56

you have all this other stuff sort

23:58

of crowding their way into social media

24:01

and I think there's gonna have to

24:03

be a reckoning of some sort by

24:05

Meta with that. Yeah. So you're right,

24:07

it could be the users vote with

24:09

their feet and say actually we want

24:11

to spend our time elsewhere. But

24:14

it could also be that regulation dictates

24:16

and governments dictate actually what content should

24:18

be labelled as AI slot or otherwise.

24:20

And you found a couple of stories

24:22

that relate to that this year, right?

24:25

Yeah. So this was interesting where Spain

24:27

announced that they're going to change the

24:29

law so that they would be fining

24:31

companies or platforms if they don't label

24:33

AI generated content. And then there's another

24:35

one saying China is doing the same

24:38

thing. And so there is this part

24:40

of me that thinks, as I often

24:42

do, like if Western countries are doing

24:44

the same thing that China is doing.

24:46

Maybe think about that for a second.

24:48

It's funny, the article, the one on

24:51

China is from Bloomberg, and it has

24:53

the lede that China joins the EU

24:55

and US in rolling out new regulations

24:57

to control disinformation by requiring labeling of

24:59

synthetic content online. And I was like,

25:02

what are they talking about regarding the

25:04

US? Because we don't have a regulation

25:06

requiring that at all. So I think

25:08

the reporting on this is a little

25:10

bit confused. The Spain one, which was

25:12

just from Reuters, is interesting, where they're

25:15

saying they have this new bill that

25:17

will impose massive fines on companies that

25:19

use content generated by AI without properly

25:21

labeling it as such. And you understand

25:23

the thinking behind it. The Chinese approach,

25:25

like it's pretty clear that that law

25:28

is designed to like stifle certain kinds

25:30

of speech and use AI labeling as

25:32

an excuse. The Spanish law I think

25:34

is meant in good faith because they're

25:36

scared about deep fakes, but I find

25:39

these approaches also to be kind of

25:41

silly and probably ineffective in part because

25:43

like where do you draw the line,

25:45

right? I mean everybody who's doing content

25:47

creation these days uses AI tools in

25:49

some form or another at some point.

25:52

Okay, maybe not everyone, but a large

25:54

percentage, right? So if you're writing and

25:56

you're using like grammarly to check your

25:58

grammar, that is an AI tool. Do

26:00

you need to label it? That it

26:02

told you you had a typo? Or

26:05

maybe it suggested, you know, not ending

26:07

a sentence in a preposition, right? Like,

26:09

is that now AI generated? You know,

26:11

I think most people would say no,

26:13

but then that raises the question of

26:16

where is the line? But same thing

26:18

with like images and videos, like if

26:20

you're touching up an image, if you're

26:22

changing a video, if you're editing something,

26:24

a lot of those tools are really

26:26

AI. So where do you draw the

26:29

line? between what needs to be labeled

26:31

and what doesn't. It feels like this

26:33

is the kind of thing, this is

26:35

my complaint with lots of regulations where

26:37

it's like, okay, I see this thing,

26:40

it is a problem, therefore let's just

26:42

outlaw it without thinking through, like, does

26:44

that really make sense? Is that really

26:46

targeting the thing or are you wiping

26:48

out a whole bunch of other useful

26:50

stuff with it? Yeah, I think the

26:53

line is pizza dough men rolling out their

26:55

own stomachs. Yeah, how do you write

26:57

that into law though? Easy, easy. No,

26:59

I agree. Just ask Ben. Ben is

27:01

the decider. Once you see it, you

27:03

know, let me tell you that. I

27:06

will say that, I've been thinking a

27:08

lot about like what the terms we

27:10

use to refer to content on social

27:12

media platforms in the age of AI

27:14

slop. And I wonder if we can,

27:17

I'm going to use this opportunity to

27:19

pitch you, Mike, a change in terminology.

27:21

You know, we've been using the word

27:23

feed for as long as social

27:25

media has been around. Yeah, I think

27:27

it's time to end the use of

27:30

feed. Now that we're consuming all this

27:32

AI slop, I'm thinking we use the

27:34

word trough. I think we, you know,

27:36

I'm going to, we can say, I

27:38

mean, I'm going to the trough to

27:40

consume my news. I'm going to open

27:43

my, you know, my app and go

27:45

to my favorite trough. I think that's

27:47

kind of befitting of where we're going

27:49

with content on the internet, right? And

27:51

we can all, you know, like, little

27:54

pigs around a trough, just consume all

27:56

the crazy images of pizza dough men

27:58

and insane human seals and whatever else

28:00

the AI generators have

28:02

got for us. Yeah, so Ben, just

28:04

for you, I'm going to use an

28:07

AI generator to generate a picture of

28:09

pigs around a trough with social media

28:11

content flowing through the trough, all right?

28:13

So I think that's going to work.

28:15

Yeah, coming to a platform near you.

28:17

Okay, well, we went a bit mad

28:20

there, but as I said, fascinating story. Do

28:22

go and read that 404 Media piece;

28:24

we'll include it in the show notes.

28:26

Our next story is one that we've

28:28

touched on a lot since the start

28:31

of the year. It's... that big old

28:33

question of what Meta is doing now

28:35

that it's not fact-checking, right? And many

28:37

of you listeners will have seen this

28:39

week that the company has announced that

28:41

it's rolling out its community notes function.

28:44

It's decided that this is the way

28:46

forward. It's starting to test it with

28:48

around 200,000 people that have signed up.

28:50

And there's some interesting details in there,

28:52

Mike, in the press release this week,

28:54

though I wanted to kind of talk

28:57

a bit about and get your thoughts

28:59

on. Basically the beta is in six

29:01

languages. It's only focused on the US,

29:03

but it's covering six different languages. And

29:05

what's going to happen is that people

29:08

are going to start to see the

29:10

community notes underneath particular posts, and they're

29:12

going to be able to contribute to

29:14

them. So if you're one of the

29:16

200,000 that's signed up, you're going to

29:18

be able to give your thoughts.

29:21

Every community note is going to be

29:23

500 characters. It has to include a

29:25

link. And the community notes will be

29:27

kind of, I think, validated and

29:29

rolled out over time. At the very end of

29:31

the press release, it says that the

29:34

community notes are just in the US

29:36

right now, but there's a plan for

29:38

it to be all over the world.

29:40

There's a global domination aspect to

29:42

the community notes, so it's just the

29:45

US now, but there's clearly plans to

29:47

do it beyond that. And I think

29:49

there's a couple of really interesting points

29:51

to this, Mike, but for me there's

29:53

the missed opportunity, and then there is

29:55

the fact that they've somehow bungled this,

29:58

that it's even worse than X's community

30:00

notes. Okay. So I didn't realize this.

30:02

Did you know that when Community Notes

30:04

are applied on X on Twitter that

30:06

actually it doesn't affect the distribution of

30:08

that post? Did you know about this

30:11

probably? I think I had heard that.

30:13

I think I kind of knew that.

30:15

But I hadn't thought deeply about it.

30:17

Yeah, I mean, so Facebook, Meta,

30:19

is copying X, Twitter, in a number

30:22

of different ways. It's using the open

30:24

source code that Twitter X has been

30:26

putting out into the ether. It's taking

30:28

a similar broad approach, but it's also

30:30

deciding not to downrank posts that have

30:32

community notes that contrast to the post

30:35

itself. And I'm just thinking that's a

30:37

clear missed opportunity. What is the point

30:39

of getting people to input on a

30:41

post and have the process that goes

30:43

on behind the scenes on a community

30:46

notes post where you have people from

30:48

different parts of the political spectrum. inputting

30:50

and potentially disagreeing and then showing that

30:52

community notes if you're not going to

30:54

then take a kind of view on

30:56

the distribution that seems like a mistake

30:59

to me or you know a decision

31:01

at least the matter is taking in

31:03

the way it's doing community notes so

31:05

I take umbrage with that then there's

31:07

also the fact that at least on

31:09

X there are community notes on ads

31:12

you know some of the best community

31:14

notes have been on random ads on

31:16

the platform where it's calling out gambling

31:18

companies for crazy claims or really low

31:20

quality e-commerce sites for the bonkers stuff

31:23

that they sell. And you get some

31:25

kind of funny interesting community notes. ones

31:27

I've seen the most at least. Meta

31:29

is deciding not to do that on

31:31

its ads and clearly there's a kind

31:33

of commercial imperative or incentive to do

31:36

that but I'm like that's another big

31:38

opportunity like at least if you're going

31:40

to take a lot of the broad

31:42

approach off Twitter and X at least

31:44

do the kind of coverage of community

31:46

notes that Twitter has. So Casey Newton

31:49

on Platformer has written a bit about

31:51

this as well this week. He says

31:53

that it's a flawed approach. It's unclear how

31:55

it's going to affect the experience of

31:57

people on platforms, like Instagram and WhatsApp

32:00

and Facebook, but from the press release

32:02

alone, you can see that there's not

32:04

been as much thought put into this as

32:06

there could have been. Yeah, I mean,

32:08

there's a question of how much thought

32:10

went into all of this, right? I

32:13

mean, the sort of the entire new

32:15

approach, right? I mean, a lot of

32:17

it really felt like... Zuckerberg just sort

32:19

of giving up on content moderation generally

32:21

and just being sick of it and

32:23

just not wanting to deal with it

32:26

and sort of saying, well, you know,

32:28

people seem to like community notes, so

32:30

let's just go with that. And in

32:32

fact, you know, just the fact that

32:34

they're using the same code, which itself

32:37

was based off of other open source

32:39

code, is kind of interesting. Yeah, this

32:41

is what happens when I engage in

32:43

ideas in good faith, Mike. You're right.

32:45

This whole U-turn was

32:47

never done for good reasons or for better

32:50

outcomes. So yeah, I don't know that

32:52

necessarily I like trying to parse out

32:54

like why exactly are they doing it

32:56

this way? I'm not sure there's a

32:58

good answer for that. You know, I

33:00

think also like just the fact that

33:03

on X, the fact that community notes

33:05

does apply to ads may be a

33:07

legacy issue as well, and that ads

33:09

on X were based on just taking

33:11

a tweet and promoting it. And therefore,

33:14

you know, probably separating out that code.

33:16

the community notes, originally Birdwatch, code from

33:18

those tweets probably was more difficult than

33:20

it was worth and so people were

33:22

like let's just leave it and then

33:24

Elon Musk fired everybody anyway so there's

33:27

like nobody to actually do that I

33:29

think if it became a big enough

33:31

issue, they would probably remove it too.

33:33

Whereas meta is much more of an

33:35

ads-focused business, you know, that is their

33:37

business, and so I'm sure they were

33:40

much more conscious of how that would

33:42

play, and also their ad system is

33:44

a little bit different, and a little

33:46

bit more involved. And so I sort

33:48

of understand probably the way that came

33:51

down, but yeah, it does sort of

33:53

give you a suggestion of like, this

33:55

is not really about... You know, the

33:57

whole sort of embrace of community notes

33:59

was Zuckerberg to say, like, here's a

34:01

solution, it's like power to the people

34:04

nonsense, allows him to say something that

34:06

is not entirely like we're turning off

34:08

all of our moderation. Yeah. And then,

34:10

you know, sort of just hoping for

34:12

the best. So, you know, the fact

34:15

that it's not flowing back into the

34:17

recommendation algorithm is interesting. I mean, it sort

34:19

of goes into our first story and

34:21

the AI Slop, are there going to

34:23

be community notes on the AI Slop?

34:25

That'll be interesting to see what happens

34:28

there. And so I could see there

34:30

are arguments for having that play into

34:32

the algorithm or not. You know, one

34:34

of the arguments against it is like,

34:36

If it does impact the distribution algorithm,

34:38

then you're giving more incentives for people

34:41

to game the community notes. And community

34:43

notes, again, is designed to be resistant

34:45

to gaming. It's actually very clever in

34:47

terms of the way it is set

34:49

up and the sort of consensus mechanisms

34:52

that are built into it. But that

34:54

doesn't mean it's impossible to game. And

34:56

so if you were to put it

34:58

into the distribution part of the queue,

35:00

that would create a larger... attack surface

35:02

that people would go after and then

35:05

you would see brute force attempts on

35:07

the community notes feature. Yeah, that's a

35:09

theme already of the podcast. I actually

35:11

went to look at what the eligibility

35:13

criteria is for users on X who

35:15

want to sign up as contributors to

35:18

Community Notes. Meta have said you need

35:20

to have an account for six months

35:22

before you can join the program. Really,

35:24

actually... If you go to the page

35:26

on x.com that says become a part

35:29

of the community of reviewers you get

35:31

this 404, cannot access. Basically it's broken,

35:33

right? Yeah. So it does tell you

35:35

the eligibility requirements, which are also the

35:37

six months and a verified phone number,

35:39

but then, wait a second. Oh, not

35:42

anymore. So we checked this right before

35:44

you can check now, Ben, but you

35:46

had me check right before we started

35:48

recording and I got the 404 page

35:50

and an error. And I just went

35:52

again to click to open it again

35:55

because I wanted to read the exact

35:57

error message. And now it is showing

35:59

me. a button to join community notes.

36:01

Oh, okay. So somebody's been listening into

36:03

our pre-podcast preparation. There was definitely

36:06

an error before. Okay. Because you sent

36:08

it to me and I clicked on

36:10

it and I got the error message

36:12

and now I'm like, okay, so here's

36:14

what it is. If I go directly

36:16

to the link that you sent me,

36:19

I get the error message and it

36:21

says, oops, something went wrong, please try

36:23

again later. If I follow the flow

36:25

through reading the rules and then click

36:27

to go it does give me the

36:29

chance to join Community Notes and I'm

36:32

actually going to click this button I

36:34

have no idea what is going to

36:36

happen. Do you know what it is

36:38

Mike? I think it's the twitter.com versus

36:40

the X.com. It could be. So I

36:43

think there's a page on the Community

36:45

Notes help center article that directs to

36:47

an old URL, that's Twitter /flow/join birdwatch, and

36:49

that clearly is an old URL. Well,

36:51

I have now found

36:53

out I do not qualify to join

36:56

community notes. Yeah, I thought that it

36:58

does. It says... so there are

37:00

five categories and I have checks on

37:02

four of them but I am rejected

37:04

because my phone carrier is not trusted

37:06

oh So I have no recent rules

37:09

violations. I joined at least six months

37:11

ago. I have a verified phone number,

37:13

which I probably should remove. And I'm

37:15

not associated with any other community notes

37:17

account, but they do not trust my

37:20

phone carrier. Interesting. So I am blocked

37:22

from joining community notes. Well, we are

37:24

all worse off for that, because I'm

37:26

sure you would have left some very

37:28

good community notes. Sorry, sorry to say,

37:30

this is embarrassing for you to be

37:33

recorded, isn't it? You know, a man

37:35

of your standing being denied. I would

37:37

expect nothing less. I would expect nothing

37:39

less. Well, fingers crossed. I will say,

37:41

meta did pop up a thing inviting

37:43

me to join their community notes. Oh,

37:46

you in? And well, I clicked and

37:48

I said, okay, and then it said,

37:50

well, we'll let you know more and

37:52

I haven't heard anything more. So you

37:54

probably won't hear back. If... Yeah, I

37:57

might not. If Twitter is anything to

37:59

go by. I may get a notice

38:01

that my phone... It's not trustworthy. Yeah.

38:03

Indeed. Great. So they were our two

38:05

kind of big stories that we selected.

38:07

Obviously we go a bit deeper on

38:10

the chunky interesting stories that we feel

38:12

merit most discussion this week. But there's

38:14

also a third story this week that

38:16

we both touched on, you know, happy

38:18

Online Safety Act week. Yes. I don't

38:21

know how you'll celebrate. See what we

38:23

celebrate, you know, other, you know, regulatory

38:25

acts are available. So on Monday, the

38:27

OSA in the UK came into force.

38:29

I'm almost surprised we're allowed to talk

38:31

then. You're in the UK and I

38:34

don't know, our podcast recording doesn't violate

38:36

the Online Safety Act. Well, I mean,

38:38

if a lot of the coverage is

38:40

anything to go by this week, then

38:42

you would think so. You would think

38:44

so. There's been a whole tranche of

38:47

op-eds and reports about what the duties

38:49

are of the OSA. You know, we've

38:51

touched on it a bunch of times,

38:53

but this is the UK's

38:55

long-in-the-making regulation about platforms and intermediaries

38:58

having to remove illegal content and reduce

39:00

the risk of UK users on the

39:02

platform. If they don't, they're liable for

39:04

an £18 million fine or 10% of

39:06

the global revenue, whichever's greater, that old

39:08

chestnut that we've seen from other regulations.

39:11

And we've touched on the OSA Mike

39:13

in recent weeks because a number of

39:15

smaller sites and forums and communities have

39:17

started to talk about the fact that

39:19

they're nervous about whether they are going

39:21

to be caught up in the OSA.

39:24

And we've seen a couple more this

39:26

week. My favorite was a website called

39:28

lemmy.zip. I don't know if you came

39:30

across this. It's a Finnish news

39:32

aggregator, kind of decentralized platform, quite kind

39:35

of quirky, text only, and it's got

39:37

a very long and quite scathing review

39:39

of the OSA in which it says

39:41

the act does not meaningfully protect users,

39:43

that Ofcom, the regulator of the act,

39:45

has demonstrated a lack of technical understanding

39:48

when drafting and enforcing these rules, very

39:50

very kind of pointed in its criticism

39:52

of the act. It continues a lot of

39:54

what we've seen and read, but you

39:56

found a really interesting counterpoint to that

39:58

Mike, which is a forum that you

40:01

found that's taking a different tack, that's

40:03

going a different way. Yeah, and so,

40:05

you know, there are a number of

40:07

different forums that are sort of reacting

40:09

different ways, so one that I had

40:12

seen was from lobste.rs, that's

40:14

lobsters spelled with the .rs

40:16

at the end, and it's sort of

40:18

a techie focused forum it's been around

40:20

for a while they had announced way

40:22

back when that they were planning to

40:25

block UK users. The founder of

40:27

it just, you know, explained that it

40:29

would be effectively impossible and then this

40:31

week in the lead up to this

40:33

they announced that they had disabled the

40:35

geo block. They were going to geo

40:38

block UK users, but they had disabled it,

40:40

and they wrote this very long and

40:42

kind of interesting analysis of it where

40:44

I mean there are a few different

40:46

interesting elements to it to me where

40:49

they said basically we're going to do

40:51

this in part because we don't think

40:53

they're... They're based somewhere in the Pacific

40:55

Northwest. I think they're based in Oregon

40:57

and you know have no connection to

40:59

the UK and even though the Online

41:02

Safety Act claims to cover every internet

41:04

site that is accessible in the UK.

41:06

Part of what this guy is claiming

41:08

is that they have no jurisdiction over

41:10

me and I doubt they're going to

41:12

go after me because I'm a small

41:15

forum but if they do I'm sort

41:17

of willing to kind of fight the

41:19

extraterritoriality aspect of this. Okay. But what's

41:21

interesting is like he notes that he

41:23

like reached out to Ofcom people multiple

41:26

times trying to like have a discussion

41:28

with them and he goes through like

41:30

all the reasons why it's effectively impossible

41:32

to comply, how the fines are

41:34

ridiculous and disconnected from reality for this

41:36

which is a hobby forum that he...

41:39

He just runs on the side, he's

41:41

not making any money from it. But

41:43

it's basically like, it's just not worth

41:45

it. And there are too many questions

41:47

and too many things to deal with.

41:49

And so he just sort of gave

41:52

up on the idea of geo-blocking it

41:54

and just figures he's effectively going to

41:56

take his chance. So there is this

41:58

thing in here, which is that at

42:00

one point, Ofcom replied and said, when

42:03

we write to you to say that

42:05

you're in breach, that is, you are

42:07

in breach. Which, you know, like, even

42:09

though, you know, and I, this struck

42:11

me as interesting because for years now,

42:13

Ofcom has been running around, and I

42:16

saw them say this directly at TrustCon

42:18

that they were not going to be

42:20

this, like, iron-fisted enforcer of the law,

42:22

but that they wanted it to be

42:24

this sort of back and forth, and

42:27

if they thought you were not in

42:29

compliance, they would reach out to you

42:31

and have a conversation, and they would

42:33

be the friendly regulator. And yet this

42:35

message... even though it's kind of

42:37

interesting that it's in this post where

42:40

he's saying he's just gonna ignore the

42:42

law, they say, when we write to

42:44

you to say that you're in breach

42:46

you are in breach and basically saying

42:48

don't wait until you get that breach

42:50

letter, and it says reach out to us,

42:53

work with us so they still have

42:55

that a little bit like work with

42:57

us, but it's like they're not shying

42:59

away from the fact that they could

43:01

try and fine you for millions of

43:04

dollars and so I thought it was

43:06

a really interesting analysis saying look we're

43:08

not going to comply and we're just

43:10

going to hope that they don't go

43:12

after us. Yeah, I mean, with

43:14

a lot of the regulations coming in,

43:17

that's really the role that it plays

43:19

right? It's the kind of sword of Damocles,

43:21

yes, that hangs over your head

43:23

and that they can kind of dispatch

43:25

at any moment. and the question will

43:27

be, is Ofcom willing to

43:30

let that sword go against a relatively

43:32

small forum based in the northwest of

43:34

the States? Like, probably not. So we're

43:36

seeing more and more reporting of implications

43:38

of the act which is I think

43:41

interesting because right now it feels like

43:43

people still don't understand the implications of it

43:45

and how it affects their access to

43:47

the internet and communities that they've probably

43:49

long been part of, such as the

43:51

cycling forum and such as the kind

43:54

of developer website we talked about in

43:56

previous episodes. And the hamster forum. And

43:58

the hamster forum has shut down. Oh

44:00

man. And it says we are sorry

44:02

the forum is closed down. You can

44:04

now find us at Instagram. And so

44:07

if the idea was, you know, again

44:09

this is the point that I've raised

44:11

a bunch of times when you have

44:13

these online regulations. they often give more

44:15

power to the big companies that supposedly

44:18

these regulations are designed to deal with

44:20

and so, yeah, the hamster forum, a

44:22

forum about hamsters, has shut down

44:24

and moved to Instagram instead. To

44:26

listeners of Control Alt Speech that are users

44:28

of the forum and who are hearing

44:31

about this now, I'm sorry. Yes, it's

44:33

not great but yeah this is a

44:35

theme that we are seeing we won't

44:37

figure out the Online Safety Act this

44:39

week, Mike. We're all going to talk

44:41

about it again and again and again,

44:44

I'm sure. I wanted to bring a

44:46

story to you that, it's actually a

44:48

kind of positive story, a story that

44:50

I really liked. I almost sent it

44:52

to you as soon as I read

44:55

it. It's a Guardian article about how

44:57

tech experts keep their children safe online.

44:59

And I love it because it says

45:01

so much, it's kind of sensible stuff

45:03

about kids safety and children's use of

45:05

smartphones and internet devices. And in particular,

45:08

it says a couple of things that

45:10

I think are just so worth reading.

45:12

If you're a parent, if you're not

45:14

a parent... There's so much good stuff

45:16

in this article. The crux of it is,

45:18

you know, and there's a few experts

45:21

in there that they both say the

45:23

same thing, the crux of it is talking

45:25

about the internet to your child early.

45:27

And it's exactly what you've been saying

45:29

in previous episodes of Control Alt Speech.

45:32

There is no simple way to protect

45:34

kids on the internet. They're going to

45:36

see inappropriate content from time to time,

45:38

just like they fall over and graze

45:40

their knee. Actually, the only way you

45:42

can deal with it is by broaching it

45:45

with them early, working with them almost

45:47

as adults, and understanding that they

45:49

know as much as you or sometimes

45:51

more than you. It's just such a

45:53

great article I've very rarely seen anything

45:55

like it. So what did you think

45:58

about it? Yeah, it is a

46:00

fantastic article.

46:02

I'm definitely

46:04

going to use this

46:06

and send it to people as well

46:09

it makes the point that I've been

46:11

trying to make, you know, teaching them how

46:13

to recognize that they may come across

46:15

things that are dangerous and learning how

46:17

to deal with it, and you can handhold

46:19

with them but the idea of like

46:22

trying to lock down the internet and

46:24

make sure that they never see anything

46:26

bad is not only impossible it is

46:28

ineffective because it also doesn't teach kids

46:30

how to use things appropriately and so

46:33

there's a lot of like be open

46:35

to conversation, let kids know that they

46:37

might come across stuff if they have

46:39

problems, to come talk to the parents,

46:41

to have that open line of communication,

46:43

which I think is the most important

46:46

thing, and to just let them know

46:48

that there are problematic things on the

46:50

internet and teach them how to use

46:52

it appropriately, maybe work with them to

46:54

see how they're playing stuff, or using

46:56

different sites or different games or whatever

46:59

it is, and make sure that you

47:01

have that line of communication open is

47:03

the most important thing, because anything that

47:05

you do that is, like, trying to

47:07

cut them off or try and block

47:10

them or it even says like you

47:12

know kids are going to break the

47:14

rules that you set for them they're

47:16

gonna get around whatever blocks but like

47:18

not freaking out about that and not

47:20

coming down hard on them either but

47:23

like figuring out like what is it

47:25

you're trying to do like do you

47:27

just want to watch something a little

47:29

bit more like let's talk about that

47:31

and this is something that we've done

47:33

in our family as well. Like we

47:36

do have time limits set on things

47:38

but we have the ability and this

47:40

comes up relatively frequently where my kids

47:42

will be using computers and they'll run

47:44

out of time, and they'll say,

47:47

like hey can I get another half

47:49

an hour I was in the middle

47:51

of this or whatever and we talk

47:53

about it and figure out yeah you

47:55

know it's usually okay yeah and like

47:57

that's what this is giving this sort

48:00

of common sense approach where the big

48:02

thing is don't lock down everything that's

48:04

not effective but have those open lines

48:06

of communication. It feels unsatisfactory

48:08

for some people, but it's super important.

48:10

Yeah, and I'm not saying at all

48:13

that it's not hard for parents to

48:15

figure out how to set up particular

48:17

platforms or set up their home Wi-Fi

48:19

in a way that screens out most

48:21

of the risk. You talked there about

48:24

how you've done it in your home,

48:26

and I've got friends who are parents

48:28

and they spend a lot of time

48:30

thinking about this, but this article is

48:32

great for acknowledging that, but also saying

48:34

that there are alternatives to it too.

48:37

So, great piece. Onto our last story,

48:39

Mike, which is a little bit

48:41

linked. It's about how teens use

48:43

social media and this one scared the

48:45

hell out of us and I'm gonna

48:47

pose it to you directly,

48:50

okay so I want your kind of

48:52

honest answer: are you a half-swipe guy?

48:54

well I will say I did not

48:56

discover the concept of half-swiping until last

48:58

night when I read this article so

49:01

I did not know this was a

49:03

thing. So, Snapchat has this feature,

49:05

an affordance within the app, where

49:07

read receipts are visible: when

49:09

something is

49:11

sent to you on Snapchat, the person

49:14

who sent it to you can see

49:16

whether or not you've looked at the

49:18

content that they sent you. So, Snapchat

49:20

had implemented the half-swipe as an affordance

49:22

within the app where if you don't

49:24

fully swipe in, but you half swipe,

49:27

you could see the message that was

49:29

being sent to you without indicating back

49:31

to them. that you had seen it.

49:33

So no read receipts. No read receipts.

49:35

There's a sort of a way of

49:38

getting around the read receipts, is you

49:40

could do that. So people couldn't tell,

49:42

but then they added this other like

49:44

premium feature for some users, like if

49:46

you paid, you could see if someone

49:48

was effectively doing the half-swipe. And then

49:51

you would have the information, they'd say,

49:53

oh, they're not really committing to seeing

49:55

my content. And so. There's a lot

49:57

of talk in here about how for

49:59

teenagers who are in different social relationships

50:02

or you know interested in people romantically

50:04

how they're like freaking out about this

50:06

like are they half-swiping or are they

50:08

not replying to me do they not

50:10

really like me if they only half-swipe

50:12

all of this stuff that's just like

50:15

all this social dynamic stuff that is

50:17

heightened by the affordances specifically of the

50:19

app that the developers of these apps,

50:21

I'm sure, haven't really thought through.

50:23

I think about this in the context

50:25

of, there's all these talks now, even

50:28

with Snapchat, there have been a bunch

50:30

of lawsuits about specific affordances within the

50:32

app and how they're sort of negligent

50:34

in some form or another. But you

50:36

can see how these kinds of things

50:39

develop. They're not developed to, like, make

50:41

a teenage girl have her heart broken

50:43

because someone half-swipes instead of full swipes.

50:45

Or like they have this example where

50:47

this one girl wanted to hang out

50:49

with a boy and sent him a

50:52

message and he apparently half-swiped, but she

50:54

had the premium account,

50:56

so she could see

50:58

that he'd half-swiped,

51:00

and he didn't reply

51:02

for a whole hour, and then he

51:05

did say he wanted to hang out,

51:07

but she was still upset because he took

51:09

an hour

51:11

and he half-swiped.

51:13

This is teenagers, right? Like,

51:16

this is always the way teenagers have

51:18

been. And as I've talked about, I grew

51:20

up in a time pre-internet effectively and

51:22

calling... we had to use the phone

51:24

and call and there was all this

51:26

nervousness and if their parents pick up

51:29

what do you say and how do

51:31

you address and all this kind of

51:33

stuff I think some of this is

51:35

just the nature of like being a

51:37

teenager yeah but is sort of turned

51:39

into this like big deal because of

51:42

the app itself yeah right And so

51:44

to some extent I think this is

51:46

an exaggeration, but it's also kind of

51:48

funny to see like how these things

51:50

play out in the nature of a

51:53

teenager who's sort of like going through

51:55

the things that teenagers go through. Yeah.

51:57

So true, it's the importance of product

51:59

design and how it contributes to interactions.

52:01

It's not necessarily a safety thing, but

52:03

you can see how that half-swipe and

52:06

that delay in somebody replying could lead

52:08

to the kind of anxiety and stress

52:10

that might lead to safety issues, to

52:12

bullying, to people to take decisions that

52:14

they don't want to take. So it's

52:16

a light-hearted story that has a serious

52:19

element to it. And it goes back

52:21

to the story we touched on that

52:23

was publishing the Guardian about importance of

52:25

conversations. and of course of kind of

52:27

talking to children and anyone frankly about

52:30

the kind of different affordances of different

52:32

platforms and technologies. There's no way out

52:34

of it otherwise. Yeah. And this has

52:36

been a conversation that has been helpful

52:38

for me Mike and hopefully will be

52:40

one for listeners as well. That's a

52:43

point I think we should wrap up

52:45

this week. We've really enjoyed talking about

52:47

this week's news. Thanks again Mike for

52:49

joining. It's been great to chat to

52:51

everyone. We'll see you next week. Don't

52:53

forget that review. We'll see you in

52:56

seven days. Thanks

52:59

for listening to Control Alt

53:01

Speech. Subscribe now to get

53:03

our weekly episodes as soon

53:05

as they're released. If your

53:07

company or organization is interested

53:09

in sponsoring the podcast, contact

53:12

us by visiting Control Alt

53:14

Speech.com. That's CTRL Alt Speech.com.

53:16

This podcast is produced with

53:18

financial support from the Future

53:20

of Online Trust and Safety

53:22

Fund, a fiscally sponsored multi-doner

53:24

fund at Global Impact that

53:26

supports charitable activities to build

53:28

a more robust, capable, and

53:30

inclusive trust and safety ecosystem.
