Episode Transcript
0:00
So I don't know how we haven't already used this prompt
0:02
Mike. Perhaps we should be feeding
0:04
all of the podcasts into ChatGPT to,
0:06
to tell us. But, um,
0:09
I'm going to ask you today, the prompt
0:11
that you get on ChatGPT when you go onto the
0:13
site. And that is: what can I help with?
0:17
Can you find a thousand or so
0:20
people who would like to purchase my social
0:22
media card game?
0:25
What? Yeah. 1 billion users. Let's get
0:27
the plugin.
0:28
Yeah, 1 billion users, which I will note
0:30
this week OpenAI announced
0:32
that they are aiming to get 1 billion
0:34
users on ChatGPT.
0:37
So this all ties together
0:38
Oh my God. It's, do you think they, they took
0:40
inspiration from the Kickstarter?
0:42
They must have, but I would like them to
0:44
send 1 billion users to the Kickstarter.
0:46
We are not quite at our threshold
0:49
of support to actually make the game
0:51
go into production. But if you are
0:53
listening to this, I know you will like
0:55
this game because it is all about
0:57
social media and it is fun for the whole
1:00
family to play. So if
1:02
you're listening to this and you have not backed our
1:04
1 million users, 1 billion users,
1:07
game on Kickstarter. You can just go to Kickstarter
1:09
and search 1 billion users. Please,
1:11
please check it out. We need more backers.
1:13
And if you know other people who are
1:15
interested in online speech and trust and safety,
1:18
and you want to have a fun game to play that
1:20
will also help you explain trust and safety
1:23
and social media and all this kind of stuff. Please,
1:25
please do it. I hate to beg, but I need to beg.
1:29
I'm, I'm going to buy five more.
1:32
There we go. We have a discount. If you buy five,
1:34
we have a discount,
1:35
I'm going to do it.
1:36
but anyways, Ben, what can I help
1:38
you with today?
1:39
What can you help with? I am incredibly
1:42
jet lagged, Mike. I've just landed back
1:44
from your fair nation after a week
1:46
of work there. And I do not know
1:49
what day or time it is. Um, I
1:51
have not figured out how,
1:53
To, yeah, to actually
1:55
speak as you can tell. Um, so any,
1:57
any advice, any help on how to get
2:00
back to a normal rhythm of sleeping would
2:02
be much appreciated. Let's see how
2:04
this goes.
2:05
Yes, yes, this, this may be an interesting one.
2:07
Uh, I can confirm that. Well,
2:09
one, I was up late last night working
2:11
on stuff and I was texting Ben who was
2:13
in Chicago. this was not
2:15
that many hours ago
2:17
No,
2:18
and now he is in London,
2:20
and he showed me his, bag
2:23
that he had just thrown in the corner after
2:25
getting back home in
2:27
time to record this. So we are both
2:29
working on quite little sleep
2:32
and, uh, Ben even less than I. So
2:34
this may be a very fun
2:36
or very strange episode, depending
2:38
on where things go. Bye
2:40
everyone. Hello
2:49
and welcome to Ctrl-Alt-Speech, your
2:51
weekly roundup of the major stories about
2:54
online speech, content moderation,
2:56
and internet regulation. It's December
2:58
the 6th, 2024. And this week's
3:00
episode is brought to you with financial support from
3:02
the Future of Online Trust and Safety Fund. And
3:05
by today's sponsor, the Internet Society,
3:07
a global nonprofit that advocates for an
3:09
open, globally connected, secure,
3:12
and trustworthy internet for everyone. I'm
3:15
Ben Whitelaw. I'm the founder and editor of Everything in Moderation.
3:18
at least I, I am, or was
3:20
the last time I checked, um, and I'm
3:22
with an equally kind of bleary
3:24
eyed Mike Masnick, for altogether
3:26
different reasons. It's good to
3:28
be here, Mike. We haven't been in this, in these respective
3:31
chairs for a few weeks.
3:32
Yeah, that's right. We took off last week.
3:35
We missed all the big stories, uh,
3:37
during Thanksgiving week and then the week
3:39
before that you were traveling. So you were not
3:41
here. And so, uh, it's been, been a
3:43
little while since, since we've, been able
3:46
to do this together.
3:48
Um, we, we've got a lot to cover today, partly
3:50
because yeah, we were, we took a break last week.
3:53
Happy Thanksgiving again to you and to our
3:55
listeners. And we have a really
3:57
good, bonus chat at the end of today's episode
3:59
with some super smart folks from the internet society,
4:02
around social media bans in Canada
4:04
and Australia and age verification in general.
4:07
So, stick around for that at the end of
4:09
today's show, and we won't cover those kinds
4:11
of stories today, just because we do that
4:13
really well with, with those folks. we're going to dive
4:15
straight in to, um, what is essentially
4:17
Mike a breaking story. Um,
4:19
it's, it's a live story that you
4:22
essentially woke up, rubbed
4:24
your eyes and, uh, have
4:26
dived into, it's the,
4:28
uh, the TikTok case, it's back
4:30
on the agenda.
4:32
Yeah. So we knew that
4:34
there was supposed to be a ruling coming down basically
4:36
any day. This is the, US law
4:39
that in theory forces
4:41
ByteDance to divest of TikTok
4:44
in the US or
4:46
forces, the app
4:48
stores and ISPs to block
4:50
access to it. And they
4:53
are supposed to either divest
4:55
or will have access blocked on
4:57
January the 19th. And there's a whole sort
4:59
of political side
5:02
story to this, which is that our
5:04
incoming returning
5:06
president, uh, Donald Trump,
5:09
who originally was the one who proposed
5:11
the TikTok ban and tried to do so in a
5:13
very ham-fisted way that failed, four
5:16
years ago, has now
5:18
completely flipped positions, perhaps because
5:20
he was backed heavily by
5:22
one of, ByteDance's
5:25
biggest investors, but has
5:27
insisted that he's going to protect TikTok. But
5:29
he comes into office, I
5:31
think a day or two after this
5:34
law goes into effect. And so
5:36
TikTok had challenged the law and it went to the
5:39
DC Circuit, which is an appeals court.
5:41
And they came down with their ruling
5:43
today, basically saying
5:46
that the law is just
5:48
fine. There are no problems with the law
5:50
and TikTok must obey
5:52
it. And ByteDance must obey it. This
5:55
ruling did come out literally,
5:57
about an hour before we were recording.
6:00
I have read what I think
6:02
are the most salient points
6:04
in it. however, given the
6:06
timing, I may make some mistakes in
6:08
terms of this because I have not had a chance to go through
6:10
it in great detail.
6:11
Have you even had a coffee Mike? Are
6:13
I have not actually, I am,
6:16
I am, uh, yeah,
6:18
anyways, so, I
6:20
think they're wrong is,
6:23
is, is my quick summary. it's
6:25
interesting because the government tried
6:28
to argue that there were no First Amendment
6:30
issues at all. And that the case could be decided
6:32
without even considering the First Amendment.
6:35
The court went in a different direction and said that
6:37
government was wrong, but that the First Amendment does
6:40
apply here. And then once you
6:42
say that the First Amendment does apply, then there's a question
6:44
of which level of scrutiny, and
6:46
there are different levels, you know, has
6:49
to be passed in order for the law to be considered
6:51
constitutional. The highest bar
6:53
is strict scrutiny. And
6:55
then there are, there are lower bars. And the government had argued
6:58
that if the First Amendment does apply, you
7:00
should go for one of the lower bars. ByteDance
7:03
had argued that it should be strict scrutiny,
7:05
which is the hardest to pass. The
7:07
court notes that there isn't
7:09
a comparable situation that says
7:11
which of the levels of scrutiny should
7:14
apply. And so they actually choose
7:16
not to, say which level
7:18
should apply. Though there is a concurring, uh, opinion
7:21
by one of the judges suggesting,
7:24
which level should apply, but
7:26
the court says it doesn't matter because
7:28
we think that this law passes even
7:30
the highest bar, strict scrutiny.
7:33
And so then they just analyze it based on strict
7:35
scrutiny. And I think they do it,
7:37
in my one read-through of it, I
7:40
think they do a terrible job of it. They
7:42
basically buy into, strongly
7:44
buy into the claims by the
7:47
U.S. government of the national
7:49
security concerns, even though
7:51
they admit that no actual evidence
7:53
is presented for those, but they basically
7:55
say, well, the U.S. government, a lot of people
7:57
in the U.S. government say that we should be concerned
8:01
and therefore we're just going to take that as fact.
8:03
This is not uncommon in
8:05
the U.S. court system when
8:08
the government says, well, we have national security concerns and
8:11
blah, blah, blah. We can't tell you what they are. We're just
8:13
really concerned. the courts are often
8:15
willing to go along with that. That
8:18
has led to a whole bunch of really terrible things
8:20
having to do with civil liberties and civil rights
8:22
and all sorts of stuff going back
8:24
decades. But that seems to happen
8:26
again here where they're just like national security concerns
8:29
seem totally legit and you
8:31
know ByteDance hasn't given us anything to
8:33
reject that. They admit multiple times
8:35
that a lot of these concerns are totally speculative,
8:38
and yet, because ByteDance can't respond
8:40
to these speculative concerns,
8:43
therefore, this is all okay. There's
8:46
a whole bunch of other like little stuff in here that
8:48
again, it's just kind of like, well,
8:51
you know, ByteDance had said, we have all these other,
8:53
less restrictive means of doing this, which is part
8:55
of the strict scrutiny test. And the court
8:57
is like, nah, you know, we don't think any of those
9:00
are good enough. I mean, you know, ByteDance really pushed
9:02
this idea that, they would separate out
9:04
all the operations, which they've mostly done.
9:07
Everything is hosted in the U.S. by Oracle.
9:09
They've given Oracle the power to audit stuff.
9:11
they even offered the U.S. government, like an off
9:14
switch, like that they could, you
9:16
know, push a button and turn off TikTok, you
9:18
know, in the case that something really bad happened
9:20
and the government was like, nah, you know,
9:23
that doesn't really satisfy what the government
9:25
is doing here. There are a bunch of other oddities
9:27
in here. You know, there were issues around
9:29
like specifically targeting, like the bill actually
9:31
names TikTok, which seems
9:34
like it's singling them out for different treatment. And
9:36
the court kind of waves that off
9:38
and says, well, no, that's
9:40
not really true, even though it does mention
9:42
it. They said at one point, it's not
9:45
punishment. A bill of attainder is
9:47
a specific thing where it's like just targeting someone
9:49
for punishment, they said, yes,
9:51
it only names TikTok, but not for
9:53
punishment. And therefore it's not
9:55
a bill of attainder. There are all of these like
9:58
really odd things. So
10:00
the big question now is kind of like what happens
10:02
next? Like, does TikTok
10:04
get shut off on January 19th
10:07
and be gone for two days. And then Trump
10:09
comes into office and suddenly
10:11
changes position and allows TikTok to come
10:13
back. I don't know. I mean,
10:16
I, I think, and
10:18
I could be, there may be, there's some procedural
10:20
weirdness in terms of how this law was written
10:23
in terms of like forcing it to go straight to the
10:25
circuit court rather than a district court, which would
10:27
be the normal thing. I assume
10:29
that they can now go to the Supreme Court and
10:31
ask for a stay on this ruling,
10:34
in order to appeal it. And maybe that puts
10:37
the decision off past, January
10:39
19th and allows Trump to get into office
10:41
and then sort of, you know, do something
10:43
to keep TikTok, unless he decides that he,
10:46
he maybe doesn't want TikTok. We, we
10:48
don't really know at this point.
10:50
Who knows what side of the bed Donald Trump will,
10:52
will wake up on. And so it's
10:55
really interesting to me, Mike, thanks for unpacking that. It's really
10:57
interesting to me that all of the work done by TikTok
11:00
to essentially kind of separate
11:02
it out as a, as a distinct
11:04
legal entity, under this kind of Project
11:06
Texas umbrella that it, that
11:09
it came up with a few years back actually has seemingly
11:11
not worked at all. It has not proven to
11:13
be persuasive, in this case,
11:15
at all as to the national security
11:18
threat still. Why do you think that
11:20
is, why do you think that the, the kind of, all
11:22
of that effort has, has kind of fallen
11:24
on deaf ears,
11:25
I mean, there is some element of it, which is
11:28
just, honestly, it feels like general
11:31
fear of China
11:33
and that comes through in the ruling too,
11:35
like you see repeatedly, just
11:37
talk of the PRC: the PRC this,
11:40
the PRC that, and even
11:42
if they say they're not going to, if the PRC
11:44
comes down and demands that they do this or that,
11:46
they will have to obey and
11:49
therefore, that is the overriding
11:51
concern. And, you know, there were stories
11:54
that came out in the press from
11:56
like former TikTok employees saying
11:58
that, employees in China still
12:00
had access to data and
12:03
that that's noted in the decision. They,
12:05
they point that out. And so they say, you know, we just, basically
12:08
they just don't trust that this is real.
12:10
They, it's sort of, the judges seem to
12:12
feel that this was kind of a fictional setup,
12:14
um, and they don't really trust
12:16
that Oracle's ability
12:19
to audit it is actually meaningful
12:21
because again, they, they feel that, the PRC
12:23
could come down and, do all of this. You
12:26
know, the other argument that is made too, is that,
12:29
it's really just focused on the PRC.
12:31
I mean, because they say,
12:33
you know, one of the reasons why they claim that this isn't
12:35
a First Amendment violation is because if
12:38
the company does divest, they
12:40
say all of the same content
12:42
and all of the same moderation could still
12:44
occur. There would be no change to
12:46
the, expressive nature of things. The only
12:49
thing we are doing is trying to
12:51
disconnect the PRC from that, from
12:53
TikTok. And so they feel that,
12:55
the Project Texas setup and
12:57
the Oracle audits and the U.S. government
12:59
like off button don't separate
13:02
the company from the PRC. And so
13:04
that's where the court comes down.
13:05
They'd only be happy if, the PRC
13:08
sold ByteDance, or ByteDance
13:11
changed owners or changed hands. That's, that's
13:13
the only way this would actually kind of, uh, change
13:15
the, change the outcome in, in many senses.
13:17
Yeah. And, and, you know, I think for a variety
13:19
of reasons, I think, I think it's wrong. I think
13:21
it's, there are other precedents
13:23
on this that they sort of poo-poo and
13:26
were just like, that doesn't, doesn't really apply here.
13:28
but it is something that happens. I think, I think
13:30
it's a, bad look. I think it's a bad look for, you
13:33
know, a supposedly free country in the
13:35
U.S. to sort of take this viewpoint. And
13:37
I think it will justify all sorts of other bad stuff
13:40
from elsewhere as well, based on
13:42
things like this, uh, it's, I think it's
13:44
a bad look. I know some people really
13:46
are concerned and there may be legitimate reasons
13:49
to be concerned about the Chinese government
13:51
and their connections here But I just
13:53
feel that this law goes against basic
13:56
American principles on, on things and
13:58
and I'm disappointed by, by the ruling, which, you
14:00
know, again, I've only read it once and really focused
14:03
on the First Amendment section, which is sort of the second
14:05
half of the ruling. And, um,
14:07
I just, uh, it strikes me as very unconvincing.
14:10
yeah. And that national security concern
14:12
has yet to be proven as real
14:15
or anything beyond really hypothetical at this stage.
14:17
Yep. Yep. And, you know, it, it, that frustrates
14:20
me. I mean, I've had this come up in all sorts of, other
14:22
cases around like Fourth Amendment stuff and
14:24
encryption and just, you know, the willingness
14:26
of the courts to accept the government just sort of
14:29
giving this blanket statement that this is a national
14:31
security concern without proof is,
14:33
is just kind of frustrating.
14:35
Yeah. Okay. it links, um,
14:37
nicely to, to our next story, Mike, which
14:39
you have, figured out, um,
14:43
it's, it's a TikTok story, but it's about a country
14:46
that we don't often talk about on Ctrl-Alt-Speech, um, Romania.
14:49
and I guess this is kind of really what the, in
14:51
some senses, the threat that
14:53
the, courts, and the government in
14:55
the U.S. are trying to kind of mitigate against. Right.
14:57
Yeah. Yeah. It's, it's kind of
14:59
an interesting, interestingly related.
15:02
Um, there's an election
15:04
going on this coming Sunday, in
15:06
Romania, but in the first
15:08
round of the election, that's sort of a, you know, a
15:10
two round election, the first round to figure
15:12
out who are the top two candidates and then the,
15:14
final election is
15:16
you know, with just those two top two candidates,
15:19
there was a surprise, you know, second place
15:21
finish that knocked out the incumbent.
15:24
The incumbent, I believe, came in third and
15:26
so is not taking part in the election
15:28
this weekend. there was a candidate,
15:30
and I may pronounce his name wrong, but Călin
15:33
Georgescu, who was
15:35
considered a sort of mostly unknown
15:38
far-right populist
15:40
candidate who sort of came out of
15:42
nowhere with a big TikTok following.
15:44
Um, and he has been
15:47
sort of very pro Russia,
15:49
pro Putin. And in fact, a lot
15:51
of his successful TikToks are
15:53
sort of reminiscent of,
15:56
Putin-style, uh,
15:59
press appearances, riding
16:01
a horse, doing judo,
16:03
apparently running on a track without breaking
16:05
a sweat. It's just sort of this like physical
16:08
prowess.
16:09
nothing, nothing to, you know, who doesn't like
16:11
that? That sounds like a great, a great TikTok
16:13
experience.
16:14
yes. Yeah. And so he built
16:16
up a really big following on
16:18
TikTok. The polls in the country had suggested that
16:20
he was, not going to get that many
16:22
votes. and a lot of people, it
16:24
appears, sort of chalked up his big
16:27
TikTok following to
16:29
foreign influence campaigns. They also noted that those
16:31
videos were like highly stylized
16:33
and produced. And there were questions
16:36
of who was doing some of the
16:38
producing and the stylizing behind
16:40
it. There were questions about, whether
16:43
or not the followers were real, were
16:45
they foreign efforts? Were
16:47
they bots? there were questions
16:49
around influencers
16:51
who were promoting the videos
16:53
and were they paid or not?
16:55
I, I, I found this really interesting
16:57
point around, like, whether this candidate was, marked
17:00
in the same way as other political candidates, basically
17:02
suggesting that, they weren't kind of tagged
17:05
as a political candidate running an election
17:07
and therefore got larger reach
17:09
than the other candidates, which is a really interesting
17:12
kind of element to this, isn't there?
17:13
Yeah. And that's unclear because
17:16
TikTok denies that. But the deal
17:18
is because there are restrictions on
17:20
political advertising within
17:22
Romania. If all the candidates
17:24
are treated as candidates, then
17:26
in theory, there were limitations on kind of what sorts
17:28
of promotions they could do. And if they were paying
17:31
influencers to promote, is that
17:33
a political ad? and how does that apply?
17:36
And so there are all these questions and the suggestion
17:38
from many, including some
17:40
government officials is that TikTok
17:42
didn't designate him as a, politician,
17:44
as a candidate, and therefore didn't have
17:47
these restrictions and that allowed his videos
17:49
to go much further and build up a bigger
17:51
following. Again, TikTok denies
17:53
it. So it's a little unclear whether or
17:55
not that is true. There is, though,
17:58
the things that TikTok did do:
18:00
at one point they did remove
18:02
what they refer to as a covert network
18:05
that was trying to boost
18:07
his videos. And so, they
18:09
did something, you
18:12
know, they certainly, paid attention to something
18:14
and found some inauthentic
18:16
behavior behind him and that they
18:19
claim that they stopped. And they also said that
18:21
they found some similar types
18:23
of inauthentic behavior for some of the other candidates.
18:26
Um, so it wasn't that they did nothing.
18:29
they were willing to step up and do
18:31
something, but Romanian
18:33
officials, feel pretty sketched out by
18:35
this and they've asked the EU to investigate.
18:38
and you know, it, it brings up all the questions
18:40
that happen when, whenever there are
18:43
unexpected results in elections
18:45
and everyone's looking for explanations and, and
18:47
these days, oftentimes they quickly
18:49
jump to, well, it was this internet platform
18:52
that caused the problem. Yeah.
18:53
Yeah. I mean, there's a couple of interesting points
18:55
here. In some ways this story is one
18:57
we've seen many times before, which is
18:59
a platform being used by a
19:01
political candidate, with the kind of
19:03
looming threat of foreign interference in the background.
19:06
And, you know, there's countless examples of that,
19:09
Cambridge Analytica being the kind of most, most famous
19:11
one, I guess. And, it's
19:13
nice to see that things don't change in the
19:15
best part of a decade. But
19:17
I was really interested in, you know, if that is the case, if there
19:19
is a kind of lack of oversight here, Romania
19:22
is pretty small. It has 20 million citizens.
19:25
8 million of those apparently use
19:27
TikTok or have a kind of TikTok account. So it's a significant
19:30
number. You know, why
19:32
was it that there wasn't really, I guess, more attention
19:34
paid to making sure that candidates
19:37
were, given the same kind of platform. And
19:39
I decided to kind of go into the DSA
19:42
data to find out how many,
19:44
how many moderators Romania has,
19:47
um, or how many are Romanian speaking. And,
19:49
and so there are, according to the DSA
19:51
report from January to June,
19:54
95 Romanian-
19:56
speaking moderators for a country
19:58
of 20 million. And we don't
20:00
know if that's good or bad. It doesn't necessarily take
20:02
into account, I guess, people setting policy
20:04
either. That's probably just the folks who are, looking
20:06
at reports and appeals and those kinds of things.
20:09
But it doesn't seem a lot compared to other countries.
20:11
so the Netherlands, which
20:13
has roughly the same number of citizens,
20:16
we don't know exactly how many users, of
20:18
TikTok, but they have 160 moderators,
20:21
speaking Dutch.
20:22
roughly double,
20:23
So roughly double, um, Sweden
20:26
has, roughly the same number of moderators,
20:29
99, but has half the population. And again,
20:31
we don't, we don't know how many of those people
20:33
use, use TikTok.
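For scale, here is a quick back-of-the-envelope sketch of the comparison being drawn here. The populations are rough figures, and, as noted, per-country TikTok user counts are not public, so this is purely illustrative:

```python
# Rough per-capita view of the TikTok DSA moderator figures cited above.
# Populations are approximate; user counts per country are not public.
figures = {
    # country: (language moderators, population in millions)
    "Romania":     (95, 20.0),
    "Netherlands": (160, 18.0),
    "Sweden":      (99, 10.5),
}

for country, (moderators, pop_millions) in figures.items():
    print(f"{country}: {moderators / pop_millions:.1f} moderators per million residents")

# Approximate output:
# Romania: 4.8 moderators per million residents
# Netherlands: 8.9 moderators per million residents
# Sweden: 9.4 moderators per million residents
```

On those rough numbers, Romania has about half the per-resident moderator coverage of the Netherlands or Sweden, which is the disparity being pointed at.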
20:36
So again, you know, it's interesting to me
20:38
that kind of how resources are
20:40
applied. And again, this is a tale
20:42
as old as time, how platforms,
20:45
in the kind of non-core markets,
20:48
set policy and operationalize
20:50
policy is something that we, we see time and time
20:52
again as, as a sticking point, do
20:54
you think that's kind of relevant here? Does that feel like a,
20:56
a pathway trod a little bit?
20:59
Yeah. I mean, you know, it is one of these,
21:01
big questions that comes up all the time, so
21:03
many of these discussions, you know, and we
21:06
try to, you know, one of the reasons
21:08
you and I are always looking for stories outside
21:11
of, the U.S. and, the
21:13
big countries that everybody talks about is because
21:15
there is an important story there about how
21:18
these companies handle it and the, you
21:20
know, less followed countries where,
21:22
where less attention is, paid to them. And
21:25
so I think, I think it is a, it's a huge
21:27
story. And, you know, there is this element
21:29
of this one where I sort of feel like, I
21:32
think a lot of people really believe
21:34
that so much of his following, in
21:36
this case, were bots, that
21:38
they didn't think that the votes would follow, and
21:40
yet the votes did. And so, there was
21:42
this part of me when I first saw the story where I was
21:44
like, and, you know, the first thing I was reading
21:46
was like, oh, you know, it's all fake followers
21:49
and bots. I was like, yeah, but it wasn't fake
21:51
voters, you know. So,
21:53
so something is happening here. but
21:56
to be honest too, like, I don't know that, you can
21:58
say necessarily that like, Oh,
22:01
it's 95 or whatever moderators
22:03
too little. I don't know. And
22:05
we, we don't know, like, was
22:07
that not enough, were the policies not in place? Again,
22:10
there's the whole thing where like TikTok has denied this
22:12
and they did take down some inauthentic
22:15
behavior. so it, we, we have,
22:17
you know, partial information, not full information.
22:19
If the EU does an investigation,
22:22
it would be really interesting to find out if
22:24
more details come out of it, but it is,
22:26
it is sort of an interesting story to pay attention
22:28
to and to see what comes of this.
22:30
yeah, definitely. And I mean, I think the other part of
22:32
this story, something that kind of is
22:34
a thread that runs through the U.S. TikTok ban
22:37
as well, is just the, is
22:39
a media story, really. It's about the fact that
22:41
there is so many people kind of
22:43
consuming TikTok and the shift
22:45
away from traditional media to, new
22:48
forms of media. The fact that politicians can bypass
22:51
traditional forms of media, yeah, as this
22:53
candidate has done and still perform
22:55
very well in an early round. So we'll
22:58
keep tabs. I've never said this before, Mike, but I'll be keeping
23:00
tabs on the Romanian election this Sunday. Uh,
23:03
There you go. This, this, this coming Sunday
23:06
is my birthday. So I will celebrate it by paying
23:08
attention to the Romanian election as well.
23:11
happy birthday in advance. And, uh,
23:13
you.
23:13
What, what a way to celebrate
23:15
Yes, yes.
23:16
so if those kind of two stories paired
23:19
together, I guess, represent in some ways the
23:21
kind of ghost of, antitrust past
23:23
Mike, you know, the way that the,
23:26
a segue. What a segue.
23:28
you know, I think the next story is the ghost
23:30
of, of antitrust future. The next one
23:32
you picked out and it's about a man that I actually
23:34
didn't know about, uh, up until,
23:37
an hour or so ago. Um, but
23:39
he's an incredibly frightening looking man. I'll have
23:41
to say, and I'm not sure if I'm ever going to forget
23:43
his face. Um, so
23:45
tell us about Andrew Ferguson and,
23:47
and what he's come out with this week.
23:49
Oh my goodness. So Andrew Ferguson is an FTC
23:51
commissioner, and he's
23:54
a Republican. The FTC
23:56
has five commissioners, three are
23:58
appointed by whichever party
24:01
has the presidency and two are from the
24:03
other party. So right now there are two
24:05
Republican FTC commissioners and
24:08
three Democratic ones, that will flip
24:10
in January. And,
24:12
Andrew Ferguson is one of the Republican
24:14
commissioners. He is vying
24:17
for very clearly vying for
24:19
the chair, taking over what Lina
24:21
Khan's position is, is right now under
24:23
a Trump administration. and
24:26
there was a New York Post article today
24:28
or yesterday that sort of detailed, there are three
24:30
leading candidates, the two current FTC
24:32
commissioners, which is Ferguson
24:35
and the other one's Melissa Holyoak.
24:37
and then there's, there's a third person,
24:39
who, worked for Senator Mike Lee
24:42
that he's really pushing for to be chair. And
24:44
the, the question that people around
24:47
Trump are apparently asking is like, which
24:49
one of these is going to be toughest on,
24:52
they say, big tech, but they really mean
24:54
Trump's enemies. Especially considering
24:56
how much support he got from certain tech
24:58
sectors this time around, it'll
25:00
be, you know, the companies that they don't like.
25:03
And so the FTC took a
25:05
fairly typical FTC action
25:07
this week against an e commerce platform
25:09
called GOAT. The details aren't even
25:12
that interesting in the case, but basically
25:14
they lied about shipping times.
25:16
People were paying for premium shipping and not getting
25:19
it in time. And then also
25:21
they had a, buyer protection, thing
25:23
where they're saying like, if anything goes wrong, we'll protect you. And
25:25
then they weren't, they weren't living up to that. The
25:28
FTC took action on them and basically saying,
25:30
you know, they were making promises and they weren't living
25:32
up to it. That's an unfair and deceptive practice.
25:35
Very typical standard FTC.
25:37
Nothing at all interesting about that. The
25:40
other Republican commissioner, Holyoak,
25:43
put out a concurring statement on this
25:45
that basically just said, this also
25:48
proves that, we
25:50
can use the same authorities to go
25:52
after big tech companies for
25:55
unfair moderation decisions.
25:57
Okay.
25:58
Which is nonsense.
26:00
Yeah.
26:00
but it was like a one paragraph thing
26:03
and it looks like to me, at least
26:05
Andrew Ferguson then said, okay, I see
26:07
that, I've got to one-up it, because we're
26:10
in, we're in a fight here for the chair position.
26:12
I raise you.
26:13
I raise you, I raise you crazy.
26:16
and put out this like four page concurring
26:18
thing. Again, none of this has anything to
26:20
do with the ruling on GOAT, which is the company
26:23
that this is ostensibly about, saying
26:25
like, yes, and I agree with Holyoak that
26:27
we can use this power to
26:29
go after unfair moderation, but
26:31
also we, we
26:34
can use our powers to go after
26:36
advertisers who stopped advertising
26:38
on X because that must
26:40
be antitrust collusion
26:42
to censor free speech. And
26:45
we have to support free speech. And
26:47
right now there's only one free
26:49
speech platform, Elon Musk,
26:51
the brilliant, wonderful free speech supporter.
26:54
And, you know, how dare
26:56
any other platform censor
26:59
American speech and that must be illegal
27:01
and how dare they not advertise. so
27:03
he goes after, advertisers who
27:05
stopped, he goes after GARM, which we've talked
27:08
about in the past saying, you know,
27:10
that was, clear evidence of, of
27:12
collusion. He goes after NewsGuard,
27:15
saying that, NewsGuard, who I've written about
27:17
a few times that the Republicans have gone crazy about,
27:20
NewsGuard, all they do is just say which
27:22
news sources are trustworthy and which
27:24
are not. And he sort of
27:26
admits when he's talking about NewsGuard, like,
27:29
yes, NewsGuard can have its own opinions. But
27:31
if multiple companies are basing
27:33
decisions on those opinions,
27:35
that is antitrust collusion. It,
27:38
it is four pages of crazy
27:42
indicating once again, like
27:44
with Brendan Carr, we talked about a
27:46
few weeks ago, like with Brendan Carr
27:49
that, these bureaucrats
27:52
really intend to use the powers of government
27:54
to attack speech online,
27:57
and they're framing it all within
27:59
the language of free speech. The whole
28:01
thing over and over again, he talks about,
28:03
how important free speech is. And he does
28:06
the Elon Musk is the only believer in
28:08
free speech. And every platform
28:10
has to use the same policies
28:12
as, Elon does. And how
28:14
dare they not do that?
28:15
so just to clarify, so we have a situation
28:17
where there's antitrust, as we've
28:20
talked about in the last couple of weeks of the podcast
28:22
which is going to be a big theme within the Trump administration,
28:25
and, you know, it's run out of the FTC, which
28:28
has a guy in Brendan Carr
28:30
who doesn't know much.
28:33
Brendan Carr's the FCC, not the
28:35
Sorry, sorry. Yeah, so, so the
28:37
FTC commissioners also
28:39
don't know very much according to this letter,
28:41
at least this particular guy. And
28:43
so we have a situation where like maybe
28:46
nobody knows anything about
28:49
it's, I mean, the question is, do
28:51
they know or are they just like putting on
28:54
a show for Trump? Right. And, and
28:56
it's just this sort of like populist thing.
28:58
I don't know Andrew Ferguson that well, Brendan
29:01
Carr, I know a little bit. And so I know
29:03
he knows that he's lying,
29:05
like Brendan Carr is smart enough to
29:07
know what the law is. I don't know enough
29:09
about Ferguson to know whether or not he knows
29:11
this is crazy. You know, one
29:13
of the lines that really got me in this letter
29:15
was like, he claims at one point that the proof
29:18
of collusion among big
29:20
tech companies to censor content
29:22
in an illegal manner is
29:24
that simultaneously,
29:26
he specifically says simultaneously,
29:29
all of the big tech platforms
29:31
blocked. All discussion
29:34
and reporting on the Hunter Biden
29:37
laptop in 2020.
29:39
Which just isn't the case.
29:41
None of that happened, right? The
29:43
only thing that happened was
29:45
two companies took some action, Twitter
29:47
and Facebook. The action that Twitter
29:50
took was it blocked the link. It did
29:52
not block any other reporting on it. There was
29:54
other reporting on it. It did not block any
29:56
discussion of it. There was lots of other discussion
29:58
on it. In fact, like it was like a trending
30:01
topic. The only thing they did was
30:03
they blocked the link to a single
30:05
New York Post story for 24
30:07
hours. Then they reversed their policy
30:09
and allowed that link to be shared.
30:12
The only thing that Facebook did was it said,
30:14
well, there's some questions about the story, so we're going to
30:16
keep it out of the trending. It won't
30:18
go into the trending topics. and
30:21
they reversed that policy relatively quickly.
30:23
That was the only thing it did, but he declares
30:26
unequivocally that the entire tech
30:28
industry simultaneously blocked
30:30
all discussion of this. And
30:33
I, you know, one of the things that gets me
30:35
is I pointed out earlier this year that Elon
30:38
Musk did all that and more when
30:41
apparently Iranian hackers got
30:43
access to the Trump campaign's dossier
30:46
on JD Vance and they
30:48
passed it around and most, most media sources
30:50
didn't bite on it. Finally, Ken Klippenstein,
30:53
who has a Substack, posted
30:56
it and Elon
30:58
banned Ken. He blocked
31:00
all links to
31:02
any part of Ken's Substack, not
31:05
just that one article. He,
31:07
you know, pulled down all sorts of stuff. And to this day,
31:09
I don't think you can share that. He did let
31:11
Ken back on the platform after like
31:13
two weeks. So everything
31:15
that, they have accused Twitter of
31:17
doing to the Hunter Biden laptop story, Elon
31:20
has done and more and gone much
31:22
further. And yet in this comment
31:25
from this FTC commissioner, he
31:27
claims that Elon is the big free speech supporter
31:29
and the actions taken on the, the
31:31
Hunter laptop, which didn't happen,
31:34
prove that they're illegal
31:36
censoring, collusion, antitrust.
31:39
Not everything about this.
31:41
You've got your hands on your head, Mike. It's
31:45
it's so wrong, but this is,
31:47
this is unfortunately the world that we're living in. and
31:49
it, gives a sense of how the
31:52
incoming administration is going
31:54
to attack content moderation.
31:56
They're going to make these claims. They are
31:58
going to try and use every legal lever
32:00
they have, even as they are crazy
32:02
and totally counterfactual to reality.
32:05
Yeah. There's no I mean, the
32:07
idea that the FTC could prove that
32:09
platforms coordinated on policy
32:11
changes or anything like that would
32:13
be so difficult to do, right? If this actually was,
32:16
you know, is a route you want to go down,
32:18
how do you go about saying that
32:20
this company over here has done
32:22
the same thing as this company over here in a
32:24
way that amounts to collusion.
32:26
Yeah, well, the thing that they can do, and they
32:28
probably will do, is that they can conduct
32:30
investigations, and they can demand
32:33
to see all sorts of internal files, and
32:35
that is what is going to happen, almost certainly,
32:37
and then, I would guess what would come out of it
32:40
is probably some really
32:42
misleading investigation findings,
32:44
and they'll release things selectively that
32:46
take things out of context and make, make
32:49
claims that are just not accurate. and
32:51
it's going to be a mess. and this is why we're
32:53
seeing, tech companies, you know,
32:55
trying to kiss the ring of Donald Trump
32:57
and, and try and make nice because they know
32:59
if they don't, they're going to face all
33:02
of this kind of, authoritarian
33:04
nonsense.
33:04
yeah, no, indeed. And, and,
33:06
you know, currying favor, it's actually,
33:09
uh, again, a very good segue onto our next story,
33:11
um, is, is fast becoming
33:13
You did it on purpose, Ben.
33:15
the theme of this episode. So, you know, we
33:17
have a situation where, FTC
33:19
commissioners are, are, cozying up
33:21
to, the new administration. We also
33:24
have a situation where Meta
33:26
in a very coordinated way is
33:28
doing the same thing. and so this week, a number of
33:30
different outlets, including the Financial
33:32
Times and The Verge, reported on,
33:35
Nick Clegg, the, president of global
33:37
affairs at Meta, talking about how,
33:40
Meta essentially overstepped the line
33:42
when it came to content moderation during COVID,
33:45
the comments were made in a, reporter
33:48
briefing, which is, uh, you know, does
33:50
happen, but it's a very kind of coordinated,
33:52
very kind of controlled environment, for
33:55
a very senior person within Metta to
33:57
kind of make these statements. And, yeah,
33:59
the, coverage has been essentially
34:02
that Meta is not
34:04
apologizing, but admitting that it, it overstepped
34:06
the line in terms of, COVID information
34:09
control during that period. And, this
34:11
comes on the back of, you'll remember Mike, that letter
34:13
by Mark Zuckerberg, uh,
34:16
to Jim Jordan, which we had a good laugh
34:18
about, um,
34:21
I thought it was a cry.
34:23
we cried a bit, um, it, it,
34:25
it felt like a letter that had been,
34:28
written at gunpoint, um, I remember, I
34:30
remember saying, and, uh, almost
34:32
kind of made, made to write that, and it was clearly
34:34
in light of this situation, right, that where
34:36
Donald Trump becomes president again,
34:39
and, you have a situation as was reported
34:41
this week, where Zuckerberg is invited
34:43
to Mar-a-Lago to talk about the future
34:46
of tech policy, out of the U.S. So in
34:48
a number of different ways we have, you know,
34:50
in here, we have meta kind of cozying up to
34:52
the new administration. We have the FTC commissioners
34:54
doing the same. Is there anybody that
34:56
has any dignity left? You
34:59
know, I think this stuff is so obvious,
35:02
right? It's, you know, it's so obvious and in some respects,
35:04
I'm frustrated about the way that it's
35:06
reported, in this way, because
35:08
apart from the line that says this is
35:11
a briefing with Nick Clegg and
35:13
some journalists who've been invited there, quite
35:16
clearly, you know, designed to be a signal
35:19
to the Trump administration of,
35:21
we know what you're going to ask us to do and
35:23
we're happy to do it.
35:24
Yeah. Yeah. That's exactly what it was. this
35:26
was totally a messaging thing. It
35:28
was, you know, a coordinated attempt by
35:31
the company to lay out this message that
35:33
will be embraced by the
35:35
sort of MAGA faithful, to insist that it
35:37
proves, I mean, this will
35:39
be extended, right? They'll say it proves that,
35:41
not only that Meta was
35:44
overly, willing to suppress speech,
35:46
but it will just be reinforced with the claim
35:48
that, the demands for that came from
35:51
the government, which is the part that
35:53
he didn't say. But that is, a
35:55
claim that lots of people are making. and
35:57
so, this is, it's a spineless
36:00
capitulation to this argument
36:03
and basically it's giving
36:05
the Republicans ammo to claim that
36:07
we were right all along. We were unfairly
36:09
targeted. We were unfairly censored
36:12
and, even meta admits
36:14
it. and we'll see that over and over again.
36:16
And people will point to this as if it's proof. I
36:18
had somebody yell at me this week,
36:21
only one.
36:22
well, there were a few people, but someone was yelling at
36:24
me about this, where I was talking about
36:26
some of this and they were saying, well,
36:28
you know, Zuckerberg admitted under
36:31
oath that the US government pressured
36:33
him to take down content he didn't want to take down, which is
36:35
not what actually happened.
36:37
Right.
36:38
but like the message gets out there
36:40
and Meta kind of knows what they're doing when they,
36:43
when they say this. and you
36:45
know, I understand why they're doing it. You know, they feel
36:47
like they need to do it to avoid,
36:49
to hopefully avoid costly stuff, but
36:51
it is, shows a real lack of principles
36:54
as far as I'm concerned.
36:55
Yeah. And if you're a trust and safety professional
36:58
working in meta who,
37:00
probably spent countless hours
37:02
trying to figure out what the policy
37:04
should be during an evolving situation
37:06
that no one had ever seen before, that was COVID,
37:09
and no one will ever see, you know, for a
37:11
long time, that's going to be really, really
37:13
tough to take.
37:14
it's demoralizing, right? I mean, you know,
37:16
the reality is what these companies should be doing
37:19
and they don't do is saying, like, these are really,
37:21
really, really difficult decisions.
37:24
And there was no way to get it right. There was
37:26
simply no way. I mean, I talk about sort
37:28
of impossibility theorem here. There
37:30
is no way to get it right. And that is extra
37:32
true in a case where you have something that is brand
37:34
new. Nobody understands, you know, nobody understood
37:37
the details of COVID. We didn't know what
37:39
was right. We didn't know what was wrong. And
37:41
people made choices and lots of people made
37:43
wrong choices. Some of them made wrong
37:45
choices because they just didn't have enough information
37:47
and there's more information came out, they adjusted. Some
37:50
people made wrong choices because they, you know,
37:52
had crazy ideas in their head. There were
37:54
all sorts of wrong choices that were made along
37:56
the way. And it wasn't because of like
37:58
any, you know, in many
38:01
cases, it wasn't because of
38:03
bad actions or bad ideas. I think,
38:05
you know, the companies try to put
38:07
forth their best effort. That's what Meta should
38:09
be saying. You know, yes, we
38:11
may have made mistakes, but we made best efforts
38:14
based on what kind of information we had.
38:16
We took this seriously. We wanted to keep people
38:18
safe. And because of the changing
38:20
nature of the information environment, we
38:22
had to make decisions on the fly. And instead
38:25
he comes out and he gives this statement, which is basically
38:27
like, Oh, you know, we took down too much content
38:29
because there was too much pressure on us. Like, you
38:31
know, come on, stand up, have a spine. I,
38:33
it's, it's really, really
38:35
frustrating to me. This was a chance. This
38:37
was an opportunity for them to educate people
38:40
on how trust and safety works and what the
38:42
real purpose of trust and safety is. And
38:44
instead he's feeding into the narrative
38:46
that it's this awful censorship machine.
38:48
And it's, it's really, really frustrating.
38:51
Yeah. I mean, I've talked to a few
38:53
folks recently about trust and safety's kind
38:55
of marketing problem. The fact that, you know, it
38:58
needs to kind of present itself continually as
39:00
that difficult challenge, those impossible
39:02
trade-offs, and it needs to kind of continually
39:04
message that out, um,
39:07
it doesn't help when Nick comes out and gives a,
39:09
a soft briefing to those journalists that the
39:12
opposite is true. You know, he does say,
39:14
interestingly, just before we move on
39:16
about how, AI has
39:19
all this potential, but right now there are some really
39:22
pissed off people on, on Meta's platforms
39:24
for the fact that, they make mistakes and
39:26
remove innocuous or innocent content. And,
39:29
again, you know, just, it's kind of completely
39:31
crazy because it was literally two weeks ago.
39:33
we were talking about how, how Threads was an
39:35
absolute mess of a, of a moderation
39:38
process and you had all of these kind of, terms
39:40
that should have been moderated being taken down. So I'd
39:43
love for media to be a bit more critical
39:45
and, and, uh, clear on, on
39:47
what it is that senior people like Clegg are
39:49
saying, and I think that's, that's really part
39:51
of our job, um, to, to, to do
39:53
that too.
39:54
they, they, you know, put this in context,
39:56
like put, put his statements in context
39:58
and I didn't feel like, like the media
40:00
was really doing that.
40:02
no, no, indeed. Mike, I'm gonna,
40:04
um, we can't finish on that low note.
40:06
we've got a couple of lighter stories.
40:08
Um, but I'm gonna, throw to you to
40:10
pick. You wanted to end
40:13
with ChatGPT since that's where we started
40:16
the episode. Um, tell
40:18
us about this kind of fun lighthearted story.
40:20
Yeah. This was kind of an interesting story where
40:22
suddenly it started spreading wide that
40:25
ChatGPT would break if you tried to get
40:27
it to say the
40:29
name David Mayer. I saw
40:31
it first in a very funny post
40:33
on Bluesky from Andy Baio,
40:35
who's a really interesting guy, runs,
40:38
uh, has the site waxy.org. And
40:40
he does lots of really interesting stuff, but
40:42
he had heard that. And so he, he
40:44
created this question for ChatGPT, which was
40:46
combine the first name of the artist who recorded
40:49
Ziggy Stardust and the last
40:51
name of the artist who recorded Your
40:53
Body Is a Wonderland into a single
40:55
name. And ChatGPT starts
40:58
and says the artist who recorded Ziggy Stardust
41:00
is David Bowie. And the artist who recorded
41:03
Your Body Is a Wonderland is John Mayer.
41:05
Combining their names and it says David,
41:07
and then it breaks and it says, I am unable
41:10
to produce a response. It
41:12
refuses to produce the name David Mayer.
41:15
And what people, yeah,
41:17
what people then quickly discovered was that there
41:20
was a short list of names that
41:22
ChatGPT will break on if you try
41:24
and get it to produce those names. Then
41:27
the hunt was on to sort of figure out who they were
41:29
and why
41:31
odd. So, and did they find
41:33
out what the cause was that, what, what, why doesn't
41:35
ChatGPT like these names?
41:37
so as far as anyone can
41:39
tell, and I don't think OpenAI
41:41
has come out and said anything yet, and I will
41:43
note that they have fixed the David
41:45
Mayer one, so that it
41:47
now does work. But the other names
41:49
on the list do not work, still do not work.
41:52
Okay.
41:53
It appears that OpenAI
41:55
just created a block list. It
41:58
appears to be about six names. There might be
42:00
more, um, but there's a
42:02
block list that if you try and get it to produce that
42:04
name, it will break. And
42:07
it just, you know, it doesn't
42:09
break in a nice way. It just, you know, will
42:11
go halfway through a question and then say, I am unable
42:14
to produce a response.
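To make the inferred mechanism concrete, here is a minimal sketch of what a hard output block list could look like. OpenAI has not published its implementation, so the function, the exact name list, and the abort message here are assumptions based only on the behavior described above:

```python
# Minimal sketch of a hard output block list. This is NOT OpenAI's
# actual implementation, which has not been published; the names are
# the ones discussed in this episode.
BLOCKED_NAMES = {
    "David Mayer",        # reportedly removed from the list later
    "Brian Hood",
    "Jonathan Turley",
    "Jonathan Zittrain",
    "Guido Scorza",
}

def stream_with_blocklist(token_stream):
    """Yield tokens until the accumulated output contains a blocked
    name, then abort mid-response, matching the observed
    'I am unable to produce a response' behavior."""
    emitted = ""
    for token in token_stream:
        emitted += token
        if any(name in emitted for name in BLOCKED_NAMES):
            yield "\nI am unable to produce a response."
            return
        yield token

# The Bowie + Mayer riddle: the answer streams normally until the
# combined name appears, then cuts off, just as described above.
for token in stream_with_blocklist(iter(["David", " Mayer", " is the answer."])):
    print(token, end="")
# Prints: David
# I am unable to produce a response.
```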
42:16
I really love the idea that there's some incredibly
42:18
well-paid, you know, some of the smartest
42:21
kind of machine learning engineers,
42:24
you know, sat next to some of the world's,
42:26
brightest trust and safety experts, some
42:28
of whom we know probably, um, you know,
42:30
sat together being like, what are the list of six
42:32
names that we've gotta add to a block list?
42:35
Yeah. And it's just, it's so obviously
42:37
just a straight-up block list that says we
42:39
will not produce these names and people have gone
42:41
through and reasoned out some of them and they're
42:44
all slightly different. There's a few
42:46
that we don't quite know why
42:48
there was one person, Guido Scorza,
42:50
who is a data protection
42:53
expert in Italy, and he had
42:55
posted that he had used the
42:57
GDPR's right to erasure
43:00
process against OpenAI
43:02
and said, basically you have to forget everything about
43:04
me. And the only way that they could figure
43:07
out to do so was to put him on this
43:09
block list, because
43:11
they don't actually remember anything about him.
43:13
Like, there's no, this is the thing that a lot of people
43:15
don't understand about these systems
43:17
is it's not a big database. It's not going in
43:19
and collecting stuff. It is training and
43:21
just thinking about stuff. So the only way to say like,
43:24
never produce any information
43:26
on Guido Scorza is to put
43:28
him on a, I mean, there are better ways,
43:31
but this was the sort of fast and quick way.
43:33
There are other people, you know, the, Brian
43:35
Hood was a mayor in
43:37
Australia who claimed that
43:39
it produced defamatory content about him
43:42
and he threatened to sue. And so that
43:44
one probably came up first. Somebody had actually
43:46
told me about that one, like a year and a half ago when
43:48
it, when it came up and I had meant to investigate it. And then
43:50
I just never got to it. Jonathan Turley,
43:53
who's a law professor and sort
43:55
of a famously sort of Trumpian
43:57
law professor, went on this big
44:00
rant of claiming also that
44:02
ChatGPT had defamed him. And so he's on the
44:05
list. The weird one is Jonathan Zittrain, who
44:07
I know, and I asked
44:09
like, what the hell, what
44:11
yeah,
44:12
to get off this list? And he has no
44:14
idea.
44:15
right, right.
44:17
he doesn't know what's going on. I joked
44:19
with him that like, Oh, you know, are you trying to
44:21
come up with something to prove a point or something
44:24
about all this? He's like, I did nothing.
44:28
So,
44:28
And he didn't know he was on ChatGPT's
44:31
naughty step.
44:31
he, he had found it, he had actually discovered
44:33
it. So he, he added himself when everyone
44:35
was pointing out to this list of names, he pointed out,
44:37
like he had discovered a few months ago that
44:40
chat GPT breaks on his name. and
44:42
so, you know, it's just, basically,
44:45
it's like this really simple way. I understand
44:47
probably why it happened because all
44:49
these people, except for Jonathan, as far as we know,
44:52
got really mad. We still don't know which David
44:54
Mayer it is. There's speculation. There's a few different,
44:56
there's also a David Farber, or Faber, and
44:58
nobody's sure which one that is. but
45:00
the other ones, like people got mad and open
45:03
AI probably was like, you know what, this
45:05
guy is a pain, a fucking headache. And
45:07
therefore just put him on the ban list.
45:10
Like, let's not deal with this because we don't want to
45:12
go through a lawsuit and have to do
45:14
this. And there's no other way to like
45:16
effectively, because they're, you know, they don't understand.
45:19
It's not that ChatGPT is defamatory,
45:21
it's whatever the prompt is that led to defamation.
45:25
And it's not that we're collecting data on
45:27
this guy, but you know, he thinks
45:29
we are, and do we really want to fight it in
45:31
court? Just put them on the ban
45:32
Put him on the list.
45:33
I'm pretty sure that's the kind of thing that
45:35
happened. You know, you would hope that
45:37
a company with as many resources, and as
45:39
you noted, as many, smart
45:42
engineers would come up with a more sophisticated
45:44
way than this, but they haven't. And so,
45:47
you know, I know how things like that happen and
45:49
I'm sure it was just kind of like, ah, just make this
45:51
headache go away in the easiest way possible. And
45:53
that means every time you try and produce this name,
45:56
we're going to break it.
45:57
Yeah. Interesting. just as an
45:59
aside, there's some, OpenAI put out
46:01
some research in the past week around
46:03
kind of red-teaming in new, new and novel ways.
46:06
I wonder if they should be reading their own research.
46:08
Yeah, yeah, absolutely,
46:11
absolutely.
46:12
So yeah, if you, if David Mayer is listening,
46:14
to the podcast, if he's, if he's a regular listener,
46:16
um, make yourself known, um,
46:19
we would love to know what you did to be put on
46:21
the, uh, on the shit list of, uh,
46:23
Yeah. And then you got taken off. So if
46:25
you have a legal complaint, get it, get it going
46:28
Yeah, we know some lawyers.
46:30
Um, great. Thanks Mike. Um,
46:33
that's not the, you know, that's not the kind of end
46:35
of today's episode. It's, we've got a great bonus
46:37
chat. I don't know if you want to kind of give us a preview
46:39
of that.
46:40
Yeah. So this was really great. I had this discussion
46:42
just yesterday. it's a topic that we've talked about
46:44
a lot. The social media age bans
46:47
in Australia and how they're popping
46:49
up elsewhere. and so this is a discussion
46:51
I had with, two folks from the internet society,
46:53
Natalie Campbell, who's the senior director
46:56
of North American government and regulatory affairs.
46:58
And John Perrino, who's a senior
47:00
policy and advocacy expert. And
47:03
they're concerned about these
47:05
age restrictions and age verification.
47:08
And, we talked about Australia
47:10
and also Canada has a bill that
47:13
is, moving forward. One of the incredible
47:15
things in the chat was that,
47:17
it would require places like Starbucks
47:20
to verify the age of anyone who wants
47:22
to use the wifi in Starbucks
47:25
and, you know, all sorts of stuff. And
47:27
so they, they're sort of concerned about
47:29
these laws, the proliferation of these laws,
47:31
what it means for the open internet and
47:34
what age verification requirements
47:36
would mean. A really fascinating discussion,
47:38
and we'll go to that right now. Natalie,
47:51
John, welcome to the podcast. Glad to have you
47:53
here. I wanted to start by talking about
47:55
the law that just passed in Australia, which
47:57
effectively bans those under 16
48:00
from social media. John, can you
48:02
talk through the details of that law, including
48:04
sort of which sites are impacted and
48:07
how do you, does the Australian government
48:09
expect sites to know the age of
48:11
their visitors?
48:14
great question, Mike, and,
48:16
you know, for, for quick background for
48:18
listeners, this legislation
48:21
moved through in about less
48:23
than two weeks and
48:25
there was a consultation period open less than
48:28
24 hours. Um,
48:31
the Australian social media age verification
48:33
stuff really, really flew through. And
48:36
there's a lot that we still don't know, even
48:38
though the legislation passed, especially
48:40
on what social
48:42
media platforms would be
48:45
required to do in order to
48:47
comply with this. So
48:49
right now, essentially all we know
48:52
is that government ID
48:54
would not be required. That should
48:56
include things like a passport or government-
48:59
issued driver's license, also
49:01
a digital ID. And
49:03
we know that the social media ban is
49:06
for Australians under
49:08
the age of 16. and
49:10
then the final thing that was kind of added on late
49:13
was a digital duty of
49:15
care, which again is to
49:17
be determined. So the bottom
49:19
line on this is much
49:21
of the bill is to be determined. They
49:23
don't know what age
49:25
verification would be,
49:28
as they say, reasonable
49:30
for the social media platforms to use.
49:32
And as a lot of the comments pointed out,
49:35
there was actually already an age verification
49:38
study ongoing, and
49:41
that's still going on: which tools
49:43
may work best, what the trade-offs
49:45
are in different age verification methods.
49:48
The Australian government was already doing this. They
49:50
don't have the results. They probably won't
49:52
have the results until about
49:55
six months before the social
49:57
media platforms have to comply with this.
50:00
I know that we've seen a bunch of other countries also exploring
50:03
similar ideas around age verification.
50:06
On the podcast, we've talked about it in
50:08
the UK and, to a lesser extent, in
50:10
the US. There are a couple of different
50:12
issues in the US, but also
50:14
now Canada is exploring a
50:16
similar issue. So, Natalie,
50:18
can you talk through what the proposal
50:21
in Canada is about?
50:22
Sure. So, Senate Bill S-210,
50:25
an act to restrict young persons' online
50:28
access to sexually explicit material,
50:30
is very close to becoming law.
50:33
This bill has been flying under the radar
50:35
for quite a long time. I don't think
50:37
that people are taking it very seriously, because
50:40
most Senate bills don't make
50:42
it to law. But this has found
50:44
its way into the very late stages
50:47
of our parliamentary process,
50:49
and it's probably the most dangerous
50:52
bill to the Internet in
50:54
Canada right now. And one
50:56
of the main reasons for that is its
50:58
age verification mandates,
51:00
which would apply to virtually every
51:02
intermediary on the internet, and
51:05
some that are not on the internet as well.
51:08
So essentially what the bill tries to do is
51:10
prevent young people from accessing
51:12
sexually explicit material, which
51:14
is defined extremely broadly,
51:17
and it makes it so that every intermediary
51:20
that would play a part in facilitating
51:23
sexually explicit material for profit online
51:26
would have to verify a user's
51:28
age or face very high fines.
51:31
So this is, you know,
51:33
probably the most extreme
51:35
age verification proposal we've seen so far,
51:37
because it's not just targeting websites,
51:40
it's targeting internet service
51:42
providers, content delivery networks,
51:45
search engines, email services,
51:47
even a Starbucks location that's providing access
51:50
to Wi-Fi. Because
51:52
the content flowing through their pipes
51:54
could be sexually
51:56
explicit material, they now have
51:59
a duty to verify
52:01
users' age. And in doing
52:03
so, like, it's, it's different when
52:05
we're talking about websites doing
52:08
this, but when we're thinking about every single
52:10
intermediary on the Internet having a duty
52:12
to verify users' age, that
52:14
gets really problematic when we're thinking
52:16
about an open Internet. First,
52:19
because most traffic
52:21
is encrypted, and it's not
52:23
possible for most infrastructure
52:26
intermediaries to know what's flowing
52:28
through the pipes, and even if they could, content
52:30
doesn't flow through the Internet as a
52:32
whole piece of content. It's packets.
52:35
And so it becomes extremely difficult
52:37
to, one, identify what
52:39
is sexually explicit material under,
52:42
you know, Canada's definition
52:44
of this type of content. And
52:46
then, second of all, having
52:49
to decide to try
52:51
and identify that stuff means you can't use things like
52:53
encryption, which is the foundation
52:55
of security for every service
52:58
and user on the Internet. So
53:00
it's very concerning in terms of the implications
53:03
for security online, but also
53:05
the fact that you're placing
53:07
huge barriers to access as an
53:09
Internet user. Because
53:11
a lot of intermediaries won't know
53:14
what's sexually explicit material, they
53:16
might just start applying age verification
53:19
to everyone. And that means
53:21
I now, you know, as somebody
53:23
who's based in Canada, would have to trust
53:25
a whole lot of entities
53:28
with very personal information,
53:30
whether it's government ID or,
53:33
you know, biometrics. I'm having
53:35
to trust a lot of third
53:37
parties who I might not have
53:40
any direct relationship with with
53:42
my personal information, which is a huge
53:45
barrier to privacy and anonymity
53:47
online. So, from
53:49
an open Internet aspect, this is super
53:52
problematic because you're creating huge barriers
53:54
to access for people who
53:56
might not be able to get government-issued ID
53:58
or might not be able to use the Internet without the
54:00
promise of anonymity, which
54:02
is a huge hurdle for a lot of people in marginalized
54:05
communities and young people as well. But
54:07
also the fact that this has huge implications
54:10
for security online. And,
54:13
you know, not enabling
54:15
intermediaries to use encryption
54:18
could be extremely devastating to
54:20
people's security online, making people
54:23
vulnerable to a whole range of bad
54:25
stuff that I don't think was the intention
54:27
of this bill.
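[To make the packets-and-encryption point concrete, here is a minimal, hypothetical Python sketch, not from the episode or the Internet Society: it stands in for TLS with a random keystream, splits the ciphertext into MTU-sized packets, and shows that an on-path intermediary such as an ISP or coffee-shop Wi-Fi sees only opaque chunks, never a "whole piece of content" it could classify.]

```python
# Illustrative sketch only: why an on-path intermediary can't classify
# "content". Real traffic uses TLS; here a one-time-pad-style XOR
# keystream stands in for the TLS record layer. The key is discarded
# because we only model what an outside observer sees.
import os

def encrypt(plaintext: bytes) -> bytes:
    key = os.urandom(len(plaintext))  # stand-in for TLS session keys
    return bytes(p ^ k for p, k in zip(plaintext, key))

def packetize(data: bytes, mtu: int = 1500) -> list[bytes]:
    # Content crosses the network as independent packets, not one piece.
    return [data[i:i + mtu] for i in range(0, len(data), mtu)]

page = b"<html>a web page an age-gating law might care about</html>" * 100
ciphertext = encrypt(page)

for pkt in packetize(ciphertext)[:3]:
    # All the intermediary observes per packet: its size and opaque bytes.
    print(len(pkt), pkt[:8].hex())
```

[Running this prints a few packet sizes and random-looking hex: under these assumptions, the bytes carry no recoverable signal about whether the page was "sexually explicit material," which is the practical bind the bill puts infrastructure providers in.]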
54:28
Yeah, I mean, just really
54:30
quickly, I wanted to follow up on one point. You mentioned
54:32
the fact that since it hits every intermediary,
54:35
there's the potential that, like a Starbucks,
54:38
if you wanted internet in a Starbucks, is
54:40
the idea there that Starbucks would have
54:42
to not just check your ID, but like
54:44
record it somehow? Is that part of the fear,
54:46
that if you want to get on the Wi-Fi at Starbucks,
54:49
you have to first prove how old you are?
54:51
Well, the problem
54:53
is the way that it defines
54:56
who this bill applies to, which is
54:58
"internet service provider," but the definition
55:01
it uses for that is a
55:03
person who provides access to the Internet,
55:06
Internet content hosting, or electronic
55:09
mail. A person who provides
55:11
internet access could mean
55:13
Starbucks, and could mean that, you know,
55:16
Starbucks has to know what you are
55:18
looking at when you're accessing the internet, or
55:21
just say, "just show us your ID if you want to use
55:23
the Wi-Fi," right? So, problematic
55:26
in both senses and just a reminder
55:28
that please use VPNs. Okay.
55:32
So, I mean, so we're seeing this all
55:34
over, obviously, Australia and Canada,
55:36
as you guys discussed, we've discussed the UK in the
55:38
past. There are various bills in the U S that
55:40
also touch on this elsewhere around the world.
55:42
What should we make of this trend?
55:44
You know, why is this all of a sudden
55:46
happening? We've had the internet obviously for decades
55:49
now. Why is it suddenly that
55:52
everyone is trying to pass these kinds of
55:54
age-related bans or age verification
55:57
ideas? John, do you want to start
55:59
there?
56:00
Yeah, I mean, to that point, it certainly feels
56:02
like everything is happening
56:05
all at once everywhere
56:07
in every corner of the world. I
56:09
mean, we're seeing this, as you mentioned, you
56:11
know, the UK has been working on this for almost
56:14
a decade, but their age verification
56:16
guidance comes out in January. The
56:19
European Union is working on this. There
56:21
are almost two dozen
56:24
U.S. states that have passed age
56:26
verification for adult websites,
56:29
more with social media. It
56:31
really is happening everywhere. Why?
56:33
You know, it's a whole bunch of factors.
56:35
Um, one kind of interesting story
56:38
with the Australia legislation
56:40
is it seems to have stemmed, depending
56:43
on who you ask, from Jonathan Haidt's
56:45
book. And, you know, there
56:47
has been a lot of discussion
56:50
because of recent literature.
56:53
The Anxious Generation is Jonathan Haidt's
56:55
book, as I'm sure many of your listeners are
56:57
familiar with. There's just been more
56:59
discussion on this. There's been a lot of discussion
57:02
in the U.S. You know, the Kids Online Safety
57:04
Act, but there are, you know, true age verification
57:07
bills being introduced. There's
57:09
even a case going to the Supreme Court.
57:11
And that's the Free Speech Coalition v. Paxton
57:13
case. And that's something that the Internet Society
57:16
weighed in on with an amicus brief. We joined with the
57:18
Center for Democracy and Technology, New
57:20
America's Open Technology Institute, and
57:23
some academics on that case. And,
57:25
you know, the thing that we think doesn't get talked
57:27
about enough is that age verification
57:30
laws are not just about
57:32
young people. Age
57:33
verification
57:34
laws are about everyone,
57:37
if not done right. You
57:39
know, this puts everyone's privacy
57:41
and security at risk, and,
57:43
like Natalie already said, you know, this can be discriminatory
57:46
to marginalized communities,
57:48
and there's really interesting research on
57:50
this: marginalized communities can benefit
57:52
the most from social media, can benefit
57:54
the most from being online. It
57:56
can be more difficult for them to get online. There
57:59
can be more social factors that make it,
58:01
um,
58:05
you know, generally, in their worlds,
58:07
more difficult to make those types of connections.
58:10
So what we're really focused on
58:13
is making sure that everyone can get
58:15
access to the Internet, that we are not gating
58:17
off access to the Internet,
58:19
to news, information, health, entertainment,
58:23
right? So this is, this is really
58:25
going to be a challenge and we see so
58:27
many pieces of regulation that are being
58:29
implemented and introduced right
58:31
now. Luckily, at the Internet Society,
58:34
we have an incredible community. For
58:36
instance, our Australia chapter
58:38
jumps to action. There was less than 24
58:40
hours to file comments;
58:43
they made the deadline, got the comments
58:45
in, you know, and that's so important.
58:48
And so many of our, our local,
58:50
you know, regional chapters are engaging
58:53
with governments in
58:55
their home countries on this issue. So
58:58
on that point, you know, there's really good
59:00
debate on this, really great interaction.
59:04
More and more technologists are getting involved
59:06
in this, more standards development organizations.
59:09
So the engineers who actually make the internet
59:11
function, right, are getting involved
59:13
on this. So that's encouraging, but it really
59:15
is happening everywhere at once. And
59:18
we really need to make sure that those
59:21
who need the internet the most can actually
59:23
be safe online and are not having their
59:25
privacy and security exposed through all
59:27
this.
59:29
you've both talked a little bit about sort of the
59:31
implications of this and the reasons
59:33
why there are concerns, but
59:36
Natalie, I want to finish up with you and just
59:38
say, you know, for policymakers, some
59:40
of whom might be listening to this, hopefully, um,
59:42
who are looking at these laws and thinking
59:45
about them, what kinds of
59:47
factors should they be considering before,
59:50
you know, proposing or voting
59:52
for these kinds of laws?
59:54
So, first, like, it's really
59:56
important to understand that the Internet Society works
59:58
to make sure that the Internet is for everyone.
1:00:01
And when we talk about an open
1:00:03
Internet, we're talking about making
1:00:06
sure we're lowering barriers to access to the Internet.
1:00:08
We also want a healthy Internet.
1:00:11
We are, you know, like John
1:00:13
mentioned, we're a huge community, not just the Internet
1:00:15
Society, but our chapters and members and organizational
1:00:17
members around the world. We all want
1:00:19
a healthy Internet and care about making
1:00:22
sure there are safe spaces for people online. But
1:00:24
we also want to make sure that, you know, things
1:00:27
like encryption and
1:00:29
the fundamentals of a secure Internet
1:00:31
are not undermined, and that people
1:00:34
don't experience barriers to access
1:00:37
that could be complete hurdles
1:00:39
to accessing the Internet in the first place. So,
1:00:42
I mean, it's not to say that there might
1:00:44
never be a solution for age verification
1:00:47
that doesn't hinder things like
1:00:49
security and the open Internet. But
1:00:52
policymakers really do have to be thinking
1:00:54
through how their proposals could
1:00:57
impact people's access to the Internet
1:00:59
and their safety online. And
1:01:01
we have a tool that helps us analyze
1:01:04
these proposals; it's called the Internet Impact Assessment
1:01:06
Toolkit. We think of it like an environmental
1:01:09
impact assessment for the Internet.
1:01:11
And so what we do is, um,
1:01:14
offer to work with governments
1:01:16
who are thinking through whatever
1:01:18
issues they're working out that may relate to the Internet.
1:01:21
And we'll often use this framework
1:01:23
that describes what the Internet needs to exist
1:01:26
in the first place and to be more open,
1:01:28
globally connected, secure, and trustworthy, and
1:01:30
we help them think through how a particular
1:01:33
proposal might impact these goals
1:01:35
for a healthy Internet. And
1:01:37
so we're always available
1:01:39
to work with governments to think through these aspects,
1:01:42
but we have a toolkit that policymakers
1:01:44
can use themselves. And we
1:01:46
think that it's really important that
1:01:48
we just don't jump to law
1:01:50
proposals that don't consider those impacts
1:01:52
on the internet, because, as in the
1:01:54
case in Canada, there can be
1:01:57
very extreme consequences for
1:01:59
people's access to the internet and their
1:02:01
very safety and security online.
1:02:03
All right. Well, Natalie and John,
1:02:05
thank you for coming on the podcast and thank
1:02:08
you for all the good work that the Internet Society
1:02:10
does. And, uh, I hope if
1:02:12
anyone's listening to this and you are working
1:02:15
on a bill like this or thinking about these kinds
1:02:17
of laws, that you, uh, listen closely
1:02:19
and take a look at what the Internet Society is
1:02:22
doing on this, and what resources
1:02:24
they have available. Thanks again
1:02:26
for joining us.
1:02:27
Thanks Mike.
1:02:27
Thanks, Mike.
1:02:31
Thanks for listening to Ctrl-Alt-Speech.
1:02:34
Subscribe now to get our weekly episodes
1:02:36
as soon as they're released. If your
1:02:38
company or organization is interested in sponsoring
1:02:40
the podcast, contact us by visiting
1:02:43
ctrlaltspeech.com. That's
1:02:45
C-T-R-L Alt Speech dot com.
1:02:48
This podcast is produced with financial support
1:02:50
from the Future of Online Trust and Safety Fund,
1:02:53
a fiscally sponsored multi donor fund
1:02:55
at Global Impact that supports charitable
1:02:57
activities to build a more robust, capable,
1:02:59
and inclusive trust and safety ecosystem.