Episode Transcript
Transcripts are displayed as originally observed. Some content, including advertisements, may have changed.
0:00
So Mike, I'm guessing you've probably used
0:02
at least one of the big Chinese platforms.
0:04
The marketplaces that everyone is using
0:06
nowadays. Shein or Temu. I
0:08
don't know. Maybe, is that mic stand
0:11
from one of those platforms?
0:15
No, no, no. The mic stand is not from one of those.
0:17
I have used the
0:19
original one of those, which is AliExpress.
0:21
Uh,
0:23
Oh yeah.
0:23
purchased many things from AliExpress, but
0:25
I have not gotten into the Temu,
0:28
Shein, uh, elements of this world.
0:30
Okay. Well, you don't need to anymore actually, because
0:32
Amazon, you may have seen late last year
0:34
introduced Amazon Haul, which is
0:36
essentially the same thing. Super cheap
0:38
goods from China. You
0:40
know, at your convenience, basically. And I
0:42
thought I'd use their user
0:44
prompt, their homepage prompt, for the start of today's
0:47
Ctrl-Alt-Speech podcast. Amazon
0:49
Haul prompts you to find new faves
0:51
for way less.
0:56
Well, given how costly
0:58
the folks ripping apart our government
1:00
are, I'm hoping that we can find
1:03
a new governmental
1:05
system, technology for
1:07
everything that is possibly being, uh,
1:09
ripped apart in our government so that
1:11
we might have a functioning U.S. federal government.
1:16
That seems like a good trade. I'm not sure
1:18
if Amazon Haul has that, but
1:20
Yeah. What, what about you? What, uh, what new
1:22
fave do you want?
1:23
Well, I'm actually in need of a new
1:26
hat, a new hat. It's pretty cold
1:28
over here in the UK. So I'm going to buy myself
1:30
a new fave from Amazon Haul. Maybe
1:32
one with the logo that says Ctrl-Alt-Speech: we
1:35
make sense of the chaos.
1:38
There we go.
1:48
Hello and welcome to Ctrl-Alt-Speech,
1:51
your weekly roundup of the major stories about
1:53
online speech, content moderation,
1:55
and internet regulation. It's February
1:57
the 6th, 2025. And this week's
1:59
episode is brought to you with financial support
2:01
from the Future of Online Trust and Safety Fund. My
2:04
name is Ben Whitelaw. I'm the founder and editor of
2:06
Everything in Moderation. And I'm
2:08
with Mike Masnick, editor and founder
2:10
of Techdirt, who has
2:12
had a hell of a week.
2:17
Yeah. I mean, you know, there's
2:19
a lot of stuff going on.
2:20
Yeah, yeah, I think that's fair to say, Mike. I
2:22
mean, this is not a politics podcast,
2:25
but we should kind of address the elephant in the room,
2:28
which is the fact that Elon Musk
2:30
and others in his immediate circle
2:32
are getting access to the federal government in
2:35
the States and doing crazy shit with it.
2:38
Yeah. Yeah. Which has been
2:40
interesting. And, you know, one of the interesting
2:42
things is that, because so much of
2:44
what is happening is sort of Elon Musk driven,
2:47
the folks who spent a lot of time
2:49
following what he did with Twitter
2:52
in 2022, 2023
2:54
and so forth, seem to recognize
2:56
what's actually happening much faster
2:58
and with much more clarity and
3:01
depth than political reporters
3:03
who are treating this as sort of an ordinary transition
3:05
in power. And, you
3:08
know, I think the best reporting on it really
3:10
has come from Wired, you know, which is
3:12
a famous technology magazine, an old-school
3:15
technology magazine. And some of the other
3:17
tech reporters as well, actually have
3:19
a deeper, better understanding of
3:21
Elon Musk, the way he works and what it
3:23
is that he's doing and how incredibly
3:25
crazy it is. Because as folks
3:28
will remember who followed this stuff, I mean, he bought
3:30
Twitter and he had every right to do this, but he
3:32
went in and just quickly made it
3:34
clear that he had no care
3:37
and no intellectual curiosity to learn how
3:39
things worked, just assumed that he knew
3:41
how it must work, was wrong
3:43
on almost all of those things, but just made a whole bunch
3:45
of very, very quick decisions, fired
3:48
a ton of people, broke systems, ripped
3:50
out servers, and just assumed this was
3:52
all stupid, that nobody who was here before
3:55
was smart. They were all, you know,
3:57
woke idiots. And therefore, you
3:59
know, we can just rip stuff up and nothing will go wrong.
4:02
And of course, lots of stuff went wrong and they lost all
4:04
their revenue and a whole bunch of other problems
4:06
cropped up. And he's using the exact
4:08
same playbook on the U.S. government.
4:10
and that is absolutely terrifying for
4:13
a long list of reasons.
4:15
Yeah. I mean, you wrote about it this week on Techdirt, the
4:17
kind of Twitter deconstruction playbook.
4:19
I mean, we could spend hours and hours
4:21
talking about just that single
4:23
story. We're going to try and branch
4:26
out a little bit because it has been covered so extensively,
4:28
but just talk a little bit about what it's been like, kind
4:31
of writing on Techdirt this week,
4:33
because you have seen this huge increase
4:35
in readers and
4:37
we've seen a bunch of people come to old
4:39
episodes of the podcast and listen to them.
4:41
Last week's episode with Renée DiResta was one
4:44
of our largest yet. So
4:46
just talk a little bit about what that's been like, from
4:48
your own perspective.
4:49
Yeah. I mean, it's been overwhelming
4:51
and crazy. I'm working on less sleep than usual
4:54
and just trying to keep up with everything. I
4:56
mean, there's way more stuff than anyone can possibly
4:58
keep up with. Even, like, larger newsrooms
5:01
will have trouble keeping up with this stuff. But it's
5:03
been fairly overwhelming, sort of trying to
5:05
figure out which of the things that, you
5:08
know, the traditional media is having trouble sort
5:10
of putting into context and figuring out,
5:12
you know, can I take that and
5:14
put it into context and speak to experts
5:16
and find out what's really going on and then figure
5:18
out how to explain it in a way that makes sense. And,
5:21
that has been a lot of my week
5:24
and it's somewhat terrifying just because,
5:26
we've seen how this plays out, in
5:28
a way that doesn't
5:31
really matter, right? You know, with just a
5:34
second tier social media network, an important
5:36
one, an important one for voice. And we've sort of
5:38
seen how that, uh, allowed
5:40
for things like Bluesky (again,
5:43
I'm on the board, disclaimer, blah, blah, blah), but
5:45
also Mastodon and everything else. We've
5:48
seen how that enabled those things
5:50
to spring up, which was interesting, but what
5:52
is the Bluesky of the United
5:55
States, right? Like there's, you
5:57
don't have the same exit path that
5:59
you do with, a social network. And
6:01
there's a whole bunch of really important
6:03
things. And, the fact that we're seeing
6:05
over and over again, the same thing where Musk
6:08
and his team insist
6:10
they know what's going on and don't understand
6:12
it and have no interest in learning why things are
6:14
actually done the way they are. And, you know,
6:16
it sort of culminated with one of his,
6:19
he has this 25-year-old kid who worked for
6:21
him, who was given,
6:23
first of all, terrifyingly enough, any access
6:26
to the Treasury. It's a nonpolitical
6:29
thing. It was run by a guy who had worked there
6:31
for 40 years, and who, as somebody
6:34
in the former Trump administration said, you
6:36
know, at no point did I have any
6:38
sense of what his politics were because his job
6:40
there is to be the sober person in the
6:42
room. That person got pushed out
6:45
and instead this 25-year-old kid who
6:47
worked for Elon Musk was given
6:49
full access to the system
6:51
that handles U.S. government payments. Six
6:53
trillion dollars go through it and
6:57
was apparently given write access to
6:59
it. It's this old system that is, you
7:01
know, using COBOL code
7:04
and the kid was making changes
7:06
to it without testing it. There
7:08
are all these terrifying things of things that could
7:10
go wrong. The system is set for migration
7:12
this week. Nobody knows how that's going to work. This
7:15
morning, a court stopped them and said that
7:17
they have to have read-only access,
7:19
which is already terrifying enough. So he can't
7:21
write to it anymore. And I've seen
7:23
reporting saying that the write access has been turned
7:26
off. But this is terrifying.
7:28
It's not, it's not Twitter,
7:30
right? Like you could break Twitter and the world will move
7:33
on. You break the entire United States,
7:35
the world becomes a very different place.
7:38
And you're right. There's eerie parallels
7:40
between what he did at Twitter
7:43
X, you know, going in, firing the whole
7:45
trust and safety team, getting rid of the kind of
7:47
adults in the room and what he's doing here
7:49
within the kind of Department of Government Efficiency.
7:51
And this will probably be out of date by the time
7:53
this podcast goes out, but like, what,
7:56
what are you thinking will happen in the next kind of seven days,
7:59
like by the time we next record, like what, what
8:01
are you kind of expecting will happen?
8:03
It's just more craziness, right? I mean, we,
8:05
we don't know, right? I mean, the other reporting
8:07
that came out last night was another,
8:09
you know, Musk employee, a kid, who was
8:13
basically put in charge of controlling the systems
8:15
from NOAA, which
8:17
is, you know, oceanic, and handles
8:20
weather stuff and all of these things. And
8:22
then everybody who works for NOAA
8:24
was given an order not to communicate
8:26
with any foreign national, which
8:29
like, that's what they do, right?
8:31
They have to like communicate
8:34
with other people about weather patterns
8:36
and things that are happening, and like, are
8:38
there hurricanes in the Caribbean?
8:40
Is there a typhoon in the Pacific?
8:43
Like, all of these things, that's
8:45
their job. And you have these kids
8:47
who, like, think they know everything.
8:50
I mean, at one point, you know, some idiot
8:52
on Twitter, X, whatever,
8:55
was saying, Oh, you know, Elon Musk
8:57
is so brilliant. The first thing he did was come
9:00
in and say, like, well, we have to see
9:02
where the payments are going. No one has ever
9:04
thought that. What do you mean, no one's ever thought that?
9:06
Of course I've thought that. Like, all
9:09
of these systems are in place. They just act
9:11
as if everybody else must
9:13
be an idiot. And they are the only ones who
9:15
think of these things. And the reality is, they
9:17
don't understand all of the different
9:19
reasons why things were done. You
9:22
know, is there waste in government? Of
9:24
course. Right. Like, is there fraud and abuse in government?
9:26
Absolutely. But there are reasons
9:29
why systems are in place and if you're going to change them,
9:31
you have to understand them first.
9:33
And there's no interest in that. They're
9:35
not coming in to try and find out why
9:38
these things are in place. The only interest that they're
9:40
having, and again, there's like this 19-year-
9:42
old kid who recently graduated high school
9:44
who was going in and demanding people explain
9:46
their jobs, but not to learn from them, but
9:48
to see whether he could fire them, basically.
9:51
I mean, the whole thing is just,
9:53
it's so scary.
9:54
Yeah. There's something very apt about the United States
9:56
not knowing, figuratively and
9:59
in real terms, when a typhoon
10:01
or a hurricane is going to hit the
10:05
country, because of an edict that says you can't
10:07
speak to foreign workers abroad about this stuff. I
10:09
think that is, it is insane. I
10:11
would also just, you know, shout out to the
10:13
media organizations, as you say, that have been doing
10:16
this work this week. I mean, we
10:18
cover a lot of their work. We
10:20
analyze a lot of the reporting that comes out
10:22
of the outlets that you've mentioned. And obviously
10:24
you do a lot of writing yourself on this, Mike. So I
10:27
think it's worth just, you know, shouting out those guys for
10:29
covering this in a really crazy time.
10:32
and also we spent the best part of two hours
10:34
prior to recording trying to figure out
10:36
what stories to cover and
10:39
how to make sense of it for listeners. So it's
10:41
been a crazy week or two and it's
10:43
likely to be like that for a while. So that
10:46
I think probably is a point
10:48
where we can dive into a part of the U.S. government,
10:50
Mike, that we've touched on before.
10:53
and somebody we've come back to time and time again
10:55
on Ctrl-Alt-Speech. Before we do that, did
10:57
you know that Jim Jordan is a two-time wrestling
10:59
champion?
11:01
Unfortunately, I do know that
11:03
I was reading his Wikipedia and I was like, this
11:06
guy's got range.
11:07
Well, the history though is, right? Like, you know,
11:10
he became famous before he
11:12
was elected to Congress, he was a wrestling
11:14
coach in Ohio, but
11:16
there's a huge scandal of
11:19
abuse involving his wrestlers
11:22
that it is reported repeatedly
11:24
that he looked the other way on and
11:26
was aware of. And this is, like, one
11:29
of the scandals about Jim Jordan, is
11:32
how, under his leadership
11:34
as a coach, there were some sexual
11:36
abuse scandals that he never
11:38
took responsibility for. So yeah, he's,
11:41
you know,
11:42
Okay. Okay. There's a pattern. Let's
11:44
say there's a pattern. Well,
11:45
yeah,
11:46
he's in the news this week again, for
11:48
a letter that he sent to the European
11:50
Commission's Henna Virkkunen, who
11:52
is the kind of tech, uh, boss
11:55
of the EU, and took over from Thierry Breton
11:57
last year. This is one of a number
11:59
of letters that he's sent the EU over the last six months
12:01
or so, Mike, but it represents a bit
12:04
of a change in tack. Talk
12:06
us through kind of what you read about
12:09
the letter.
12:10
I mean, you know, I don't know. Is it
12:12
a change in tack? I don't know. I mean, it's
12:15
like, basically, he is, you know, screaming
12:18
at the EU about the DSA and how
12:20
it is a censorship bill. And
12:22
I mean, if you've listened to me on this podcast
12:25
and on Techdirt for the last few years,
12:27
like there's some stuff in here that, you know, I
12:29
am concerned about the DSA and how it can be
12:31
used for censorship. And I've called out Thierry Breton
12:33
and his attempts to use
12:35
it. And so some of that is repeated,
12:38
but this is a fairly aggressive letter,
12:40
which is also in sort of Jim
12:43
Jordan style, but, you know, he
12:45
talks about how the DSA requires
12:47
social media platforms to have systematic
12:49
processes to remove misleading or
12:51
deceptive content, including so-called disinformation,
12:54
even when such content is not illegal,
12:56
and says, though nominally applicable
12:58
to only EU speech, the DSA
13:01
as written may limit or restrict Americans'
13:03
constitutionally protected speech in the
13:05
United States. And so he's
13:08
sort of, you know, calling out these
13:10
things and basically, in a
13:12
very accusatory way
13:15
saying like, if anything
13:17
DSA related leads to the suppression
13:20
of American speech, then
13:22
the U.S. Congress might take action
13:24
in some form.
13:26
Yeah. He calls for a briefing, um,
13:28
as part of the letter, doesn't he? So he says to Virkkunen,
13:30
you know, we need a sit down
13:32
for you to talk about what the DSA means
13:35
for us. And we need it by the 14th of February.
13:38
which I think is, you know, romantic. Yeah.
13:41
Valentine's Day, uh, meeting.
13:43
Yeah, maybe
13:45
we can kiss and make up. Um, but
13:47
yeah, there's a kind of like, we need to be told what
13:49
the plan is. I
13:51
think that's for me where there's a slight change.
13:53
It's, you know, it's on the front foot. It's trying to
13:55
demand, seize control. And I
13:57
think in the way that prior to Trump
14:00
being elected, there was maybe slightly
14:02
less of, uh, backing from
14:04
the
14:04
Yeah. And I actually do think that element
14:06
is really important, right? So, before,
14:08
even though in the house the Republicans
14:11
were the majority and therefore had power within the
14:13
house, that power was limited
14:15
to what the House had, right? They didn't have power
14:17
in the Senate and they certainly didn't have power at the executive
14:20
branch. Now, Republicans
14:22
have all three, and so the nature of this letter is sort
14:24
of reflecting that, which is that the whole
14:26
of the US government may actually listen
14:28
to Jim Jordan rather than him
14:30
just sort of making noise and being obnoxious
14:32
in the way that he is normally obnoxious. Now,
14:35
there was another thing which is not
14:37
in the letter itself that is interesting.
14:40
And we talked about that. There's this sort of Politico article
14:42
that alerted us to the letter. There's another
14:44
Politico article, which I actually think
14:46
might've been interesting if they'd combined the two,
14:49
which is saying, Musk
14:52
and Jim Jordan have had a mind meld,
14:54
that the two of them are working very closely
14:57
together. And so suddenly you start
14:59
to put this letter into a slightly different
15:01
context, which is: is Jim
15:03
Jordan doing this on behalf of,
15:06
on behalf of one U.S. citizenry? Or is he doing
15:08
this on behalf of one Elon Musk,
15:11
who is, as we have just discussed,
15:13
now sort of running large elements
15:15
of the government while also running
15:18
a social media platform that is being
15:20
targeted by the EU and the DSA,
15:23
and, to whose advantage is this?
15:25
and, you know, there are all sorts of issues about
15:27
conflicts of interest, but this article
15:29
talks about how Elon and Jim Jordan are,
15:32
buddy buddy and are constantly working
15:34
with each other now.
15:35
Interesting. I hadn't seen that. That's really interesting.
15:38
I actually saw it, like, literally as we started recording
15:40
and I was skimming it during the,
15:42
the intro here.
15:45
That's how fresh this podcast is. And
15:47
obviously, you know, last year we talked
15:49
at various points during the year about
15:52
Jim Jordan and Elon
15:54
Musk and GARM and, you know, the reports
15:56
that his subcommittee wrote about
15:58
the kind of awful effects of GARM and the kind
16:00
of, again, the industrial complex
16:02
that was surrounding the advertisers coming
16:05
away from Twitter, taking away their spending from Twitter.
16:07
And so again, there are continuations
16:10
of a theme, but there was also a kind of emboldening,
16:12
I think, of a message here, which is, the
16:15
DSA, we're
16:17
not happy, we need to be told what's what. The
16:19
irony about all of that, Mike,
16:21
is that in the very same week,
16:24
Joel Kaplan, who is
16:26
the new global policy chief at
16:28
Meta and obviously a Republican,
16:31
in terms of his political leanings, and the new
16:33
Nick Clegg was in Brussels,
16:35
essentially defining how EU
16:38
speech should work. You know,
16:41
he was, he was on a live stream
16:43
in Brussels and was declaring
16:45
to a bunch of Brussels wonks
16:47
about how the announcement that Meta made earlier
16:49
this year around changing
16:52
the policies on the platform, getting rid of fact-checking,
16:54
adding community notes, was also going to
16:56
be rolled out in the EU, he predicts in 2026,
17:00
and that he was ready to work with regulators
17:02
in the EU about how that would work. So the
17:05
timing is very funny. You know,
17:07
you have Kaplan on behalf of Meta in
17:09
Brussels saying, this is how we're going to do things.
17:12
We expect this to be happening in the next 18 months.
17:14
And then we also have Jim Jordan requesting
17:17
more information about the DSA because he
17:19
feels the EU is infringing
17:21
on, you know, the way that US companies work.
17:24
What did you make of the Kaplan stuff?
17:25
Yeah. I mean, it is a massive
17:27
step up in terms of the aggressiveness of
17:30
Meta on the EU, right? I mean, we've
17:32
had years now where Meta
17:34
has been, I honestly think too
17:37
compliant with the EU on stuff, right? I
17:39
mean, there've been different things and pushback
17:41
and fines, but for the most part,
17:43
like Meta has been really
17:45
willing to kind of bend the knee
17:48
to the EU when the EU says we're going to regulate
17:50
social media in these ways. The notable
17:52
thing is, media reporting
17:54
on it has always been set up as, like, oh,
17:57
you know, the EU is cracking down on Meta, but
17:59
Meta has been sort of a happy participant
18:02
and really sort of embraced the
18:04
DSA approach to regulation because
18:07
they had indicated that they could handle
18:09
it, and they sort of recognized that smaller
18:11
competitors would have a more difficult time.
18:14
And in fact, you know, within the U.S. context,
18:16
Meta had been really kind of going around
18:18
to both the federal government and various
18:20
statehouses and kind of hinting at the fact
18:22
like, look, you pass a law that's kind
18:25
of the same thing as the DSA, we're
18:27
not going to complain about it. And
18:29
the unspoken part of that was
18:31
like, look, we're already complying with the
18:34
EU. We'll comply with the U.S. ones.
18:36
And we know that it'll cause trouble
18:38
for upstart competitors. And that's
18:40
the thing that Meta has been scared of, right? Because their
18:42
growth had sort of stalled out and
18:44
you saw the rise of things like Tik
18:46
TOK, that came in that. Really took medicine
18:49
by surprise. So their move
18:51
on the political front had been let's
18:53
embrace regulations because
18:55
that makes it harder for upstarts to compete.
18:58
And so this is a big change. And I think,
19:00
this also came up in the Joe Rogan
19:02
Mark Zuckerberg interview talking
19:04
about the EU, and Zuckerberg
19:06
saying like, any kind of foreign regulation
19:09
is the equivalent of a tariff. And
19:11
we expect the U.S. government to
19:13
basically have our backs and protect
19:15
and defend domestic industry,
19:18
which is a very different way of looking at
19:20
it than, you know, how the tech industry has looked
19:22
at international trade around
19:24
technology and internet issues forever,
19:27
right? Since the beginning of the internet industry.
19:29
And so looking at it as industrial
19:32
policy and being able to fight,
19:34
and so Joel Kaplan, then going to the
19:36
EU and basically saying this, he doesn't
19:38
do that if he doesn't think that
19:40
the whole of the US government under
19:42
Donald Trump has his back.
19:45
we've had a couple of, like, weird
19:47
skirmishes around tariffs, uh,
19:49
in the U.S. in the last week as
19:51
well, that
19:52
I've been following.
19:53
Yeah, it's just, you know, again, sort of outside
19:55
the purview of this podcast, but like, you
19:58
know, among those things, Trump did
20:00
suggest that he's also looking
20:02
at tariffs on the EU. And so, it's all
20:04
sort of part of this negotiation. And
20:06
Kaplan and Meta seem to think like, well, we
20:09
can go to the EU, make our demands, and
20:11
we'll have Donald Trump and Elon
20:13
Musk and Jim Jordan to effectively
20:15
back us up. And so, for all their talk
20:17
over the last four years about how perfectly fine
20:20
and perfectly in compliance they were with the
20:22
DSA and the DMA and related laws
20:25
in the EU, I think they just see
20:27
this as, look, we have
20:29
this very stupid, very brash,
20:32
willing-to-smash-and-break-things
20:34
person in the White House right now. Let's
20:36
take advantage of that and see if we can
20:39
smash and break all these other
20:41
things in a way that just favors us, and
20:43
this is how we're seeing it play out.
20:45
Yeah. I mean, I can hear almost
20:47
the European listeners of Ctrl-Alt-Speech
20:49
say, hey, actually, the EU
20:52
is 27 states and almost 500
20:54
million people. Naturally, Meta
20:56
is going to have to kind of negotiate in the way that
20:58
it has done in the past, you know? So I
21:00
think there's that, but you're right, you know,
21:02
there has been a compliance there,
21:05
to use a term that kind of Daphne Keller has been using
21:07
increasingly, because there are competitive
21:10
benefits to doing so. And that's seemingly
21:12
changed. The other thing that was interesting to me
21:14
was a note in the reporting
21:17
on Kaplan's comments about the code
21:19
of conduct, which we've talked about in
21:21
the last couple of weeks as well, you know, he
21:23
was referring to the AI Act
21:26
code of conduct that's accompanying the
21:28
official legislation that was
21:31
passed last year, and it came into play this
21:33
year. There's apparently a code of conduct that
21:35
sits alongside that, which is additional voluntary
21:38
obligations that the companies can sign
21:40
up to. He's basically said that we're not going to sign up
21:42
to those. And it made me think
21:44
about the fact that we have the hate speech
21:46
code of conduct, the disinformation
21:48
code of conduct, one of which has already been
21:51
subsumed into the DSA, the other, which is
21:53
likely to be subsumed into the DSA, and
21:55
how, again, the relationships
21:57
are becoming much more tense, much
21:59
less about voluntary agreements and
22:02
signing up to stuff willingly and
22:04
the hammer of regulation being used instead.
22:07
Yeah. This is all uncharted territory
22:09
in lots of ways. And, I
22:12
obviously, part of my role on this
22:14
podcast is to be the critic of
22:16
EU regulations, because
22:18
I, I have my concerns about EU regulations.
22:20
And so I'm in this weird position where I'm reading some of this
22:22
stuff and there's this kernel
22:25
of truth within the complaints that
22:27
the American companies have with regulations,
22:30
they're complaints that I have made
22:32
about these same regulations. And
22:34
so there is this truth, but they
22:37
are not doing this for
22:39
honest and good purposes, right?
22:41
This is just like a smash job,
22:44
right? Like we're just going to break these things because
22:46
we have the opportunity to, and
22:48
honestly, I'm torn because like,
22:50
would I like these laws to be better? Absolutely.
22:53
Do I have concerns about how these laws work?
22:55
100%. I've been calling them out since
22:57
the DSA was first being discussed. And
23:00
those concerns still stand, but
23:02
I am perhaps equally,
23:04
maybe more concerned about what
23:06
happens if the EU caves to these
23:09
demands. Because
23:11
that will embolden them to go further
23:13
and do more and make things
23:15
even worse. And so again,
23:18
this is part of my distress
23:20
23:24
at this moment, which is like, there
23:26
are reasonable arguments behind
23:28
all this, but they're going about dealing
23:30
with it in the most destructive,
23:33
damaging, horrific ways possible
23:35
that will have long-term, potentially
23:38
extraordinarily negative impact.
23:41
And so, you can take any of these
23:43
quotes or comments from Kaplan out
23:45
of context. And I might
23:47
agree with it and say, yeah,
23:49
some of these EU regulations are really problematic,
23:52
but the intention here is
23:54
basically to destroy any,
23:57
any sort of regulatory state, any
23:59
sort of administrative state. And
24:01
where that leads is extraordinarily
24:04
dangerous.
24:05
Yeah. And we'll hopefully get a
24:07
readout next week of this briefing that
24:09
Jim Jordan has asked for, and we'll know a bit more then.
24:11
But I also am concerned about if
24:14
Virkkunen and the EU do cave.
24:17
It seems crazy that we're going to talk about Meta
24:19
again. A friend of mine, Mike,
24:21
was at the event that we ran last
24:23
week in London. And she was saying that
24:26
right now feels like a kind of Meta dead zone.
24:28
We've just been talking about it forever.
24:31
And in some ways, today's kind
24:33
of second big story is a continuation
24:35
of that theme. You and Renee talked
24:37
about it last week as well, but it's the,
24:39
reporting we've seen come out this week around
24:42
advertisers and their feelings
24:44
towards the platform. In
24:46
the wake of Mark Zuckerberg's announcement, a
24:49
couple of new stories from
24:51
Digiday and from Marketing Week,
24:54
essentially saying that, within the
24:56
kind of advertising ecosystem, there's
24:58
been really no discernible impact
25:00
on spending by major
25:02
players. And yeah,
25:04
you touched on it last week. They're unlikely
25:07
to say so because of the risk
25:09
of being targeted, but we also saw in the
25:11
earnings call, Meta's earnings call last week,
25:13
that they have also seen no change
25:16
in advertiser spending in the short time since that
25:18
announcement was made. At the same
25:20
time, there's an interesting kind of subthread
25:22
that is in a couple of those stories and
25:24
in a story from the Wall Street Journal around some
25:27
advertisers going back to X
25:29
slash Twitter. So the Wall Street Journal
25:32
reports that Amazon has started to raise its
25:34
ad spending on X
25:36
in recent months, and a few
25:38
larger brands who decided that
25:40
X was kind of persona non grata for a
25:42
while are considering once again spending on
25:44
the platform. And I wondered what you thought
25:47
were the, maybe the kind of drivers of this, Mike, you know,
25:49
there's a cynical take, which I imagine you might give
25:51
us. Um, but where
25:55
Cynical? No,
25:58
Where do you see these shifts happening
26:00
and when do you think we'll know fully whether, you know, the
26:02
question of whether brand safety is still,
26:05
you know, alive and well.
26:06
Look, so much of this is just sort of
26:08
marketing, right? I mean, it's for show.
26:11
Brand safety is still a thing. Brands matter,
26:14
right? And if something bad is happening with the
26:16
brand that harms the bottom line, every company
26:18
will react that way. The major difference,
26:20
and this is what we discussed last week with Renée, is
26:22
that companies aren't going to talk about it.
26:24
And so it's not a surprise to me
26:26
that companies are taking a wait and see approach. I mean,
26:28
you and I talked about this last month at
26:30
some point, around how ROI
26:33
on Meta ads is way better than anything
26:35
it ever was on X. And so they're
26:37
important as a part of strategy. And
26:39
so I think a lot of companies and
26:41
a lot of advertisers are concerned
26:44
about this, but it was one of these things where
26:46
it's like, well, let's, not make a big
26:48
show of this because that leads
26:51
to attacks from Jim Jordan,
26:53
and Elon Musk and potential
26:55
lawsuits and all this stuff. So let's
26:57
take a wait and see approach. Let's see how this
26:59
actually plays out. We don't need to make a
27:01
big stand. The sort of public
27:03
sentiment at the moment feels like
27:06
they don't want us taking a stand on this. In the
27:08
past, when we did take a stand, it was because public
27:10
sentiment was in one direction and now it's
27:12
in another. So I sort of understand the kind of wait
27:14
and see. But if there is like actual brand
27:16
safety stuff, I think that
27:19
companies will focus on their bottom
27:21
line. And if it is damaging to their brand
27:23
to be advertising on a certain platform, they
27:25
will start to move away. Some of that will just be
27:27
general public sentiment. That is
27:29
a different story than the X story.
27:32
And this story of advertisers
27:34
potentially returning to X. Now,
27:36
there have been similar stories
27:39
for over a year, every few months,
27:41
there's a major publication that writes a story
27:43
saying so and so advertisers are
27:45
returning to X. And it is touted often
27:47
by Elon himself and his fans as,
27:49
like, Aha, see, everything's coming back
27:51
to normal. The details
27:53
are always less
27:55
than enthusiastic about it. They're often
27:58
examples of them returning at
28:00
much lower volumes, 5
28:02
percent of what they used to spend, 10 percent
28:04
of what they used to spend. And,
28:07
you know, obviously the overriding
28:09
issue right now is the fact
28:11
that it is all just like to get
28:13
into the good graces of the
28:15
person who is currently running the U.S. government,
28:17
which is Elon Musk. And
28:19
so, like, Jeff Bezos has done a
28:21
bunch of things with the Washington
28:24
post lately to signal that
28:26
they're going to be much friendlier to the Trump administration.
28:29
So another way to indicate that
28:31
and to give a public signal of
28:33
that is to say, Oh yeah, we're going
28:35
to bring Amazon back to advertising on X.
28:37
We don't know how much, we don't know how involved,
28:40
but sure. let's do it. And I think that
28:42
is what other companies are doing too. It's like, well,
28:45
right now, Elon Musk has so much power.
28:47
If we don't want to be in the crosshairs, if
28:49
we don't want to get sued by him, maybe
28:52
the easiest thing is to like take
28:54
some advertisements out. Now this is again, horrifying
28:56
in general, if you think about free speech
28:58
and the fact that people are effectively being
29:01
coerced into giving money to the world's
29:03
richest man, who is also running the government
29:05
at the same time. There are all sorts of reasons
29:07
to be terrified by that. You know, when you put
29:09
it that way, it seems pretty bad, right? it's
29:12
problematic on a variety of
29:14
levels, but I think the reasons why advertisers
29:17
are doing what they're doing on Meta
29:19
versus X are two
29:21
different things. You know, one of them is sort of
29:23
currying favor and the other is kind of,
29:25
uh, you know, this platform has been okay for
29:27
us. Let's wait and see if it actually turns
29:30
out that bad. But if there
29:32
are moments of actual attacks on
29:34
brands or brand safety problems, then
29:36
I think companies will act the way that they
29:38
always act.
29:39
Yeah, talking of lawsuits
29:41
and bringing claims against advertisers,
29:44
only this week, Musk and
29:46
X brought a whole bunch of new
29:48
advertisers, brands, into a
29:50
filing that they'd, uh, submitted last year.
29:53
And so we've got new conglomerates,
29:55
corporates: Shell, Nestlé,
29:58
Palmolive, Lego, who've been added to
30:00
this as well. And again,
30:02
like there's a kind of, I guess, willingness
30:05
to roll out this playbook for
30:07
whatever brands he sees fit, to
30:09
kind of force them onto the platform.
30:12
Yeah, it was interesting. I mean, it's
30:14
the same lawsuit as from last
30:16
year, the one that effectively brought down GARM.
30:19
It's just been expanded. It's their proposed
30:22
second amended complaint. And
30:24
so the court still has to accept it and add in these
30:26
new defendants, but that'll probably
30:28
happen. Now, remember, this case
30:31
was originally filed with a very friendly
30:33
judge, Reed O'Connor, who was sort of willing
30:35
to bend over backwards, but he had recused
30:37
himself from the case because he held
30:39
stock in one of the advertising
30:42
defendants. So it is
30:44
now in front of a different
30:46
judge, uh, who is not known
30:48
as being quite so partisan,
30:52
let's say. And so the case may go,
30:54
you know, a little bit more seriously than other
30:56
cases, but we'll have to see. It would still roll
30:58
up to the Fifth Circuit, which is as crazy
31:00
as it gets. It was interesting to me that among
31:03
the companies that were added here were
31:05
Twitch, which is owned
31:07
by Amazon. We were just talking about Amazon
31:09
and Jeff Bezos and caving. So it'll
31:12
be interesting to see how that plays out
31:14
if Twitch stays in the lawsuit,
31:16
but also Pinterest. And so to
31:18
me, both Twitch and Pinterest are
31:21
in some ways competitive to X,
31:23
and the idea that they should be forced to then advertise
31:26
on X seems really
31:28
problematic from a whole
31:31
wide variety of things. And
31:33
in fact, I was sort of confused
31:35
by their inclusion in the lawsuit. And even
31:37
the way it was written, like, they're talking about how
31:39
much they normally advertise and how much
31:41
they spend on promoting themselves, but
31:44
is that always on a competing platform?
31:46
I don't, I don't know. The whole thing just felt a
31:48
little weird. Like maybe this was like a backdoor
31:50
way to attack competitors for social
31:53
media attention. And so it'll
31:55
be interesting. It is still just, it
31:57
is a ridiculous lawsuit
31:59
in so many ways. I, I need to stress that because
32:01
I feel like we sort of skipped over that part. I know we
32:03
talked about it last year, the
32:05
idea that choosing not to advertise
32:07
on a platform represents an illegal
32:10
boycott is absolute nonsense.
32:12
There are Supreme Court rulings on
32:14
record about how boycotts around
32:16
things like this are speech. They
32:18
are a form of speech. They are protected by the First
32:20
Amendment. The kinds of boycotts that are
32:22
illegal are ones that are done for anti-
32:25
competitive purposes, which maybe that's
32:27
why he's adding Twitch and Pinterest to it
32:29
and going to try and claim that there's like an anti-
32:31
competitive element to it. But that
32:33
is just obviously ridiculous
32:35
and silly. And so, we'll
32:38
have to see. I'm hopeful that before
32:40
a more reasonable judge, quick
32:42
work is made of this case and that it gets dismissed
32:44
quickly. But then, of course, it'll be appealed to the Fifth
32:46
Circuit, where anything goes.
32:49
Yeah. Okay. So yeah,
32:51
we'll see how this pans out. I mean,
32:53
I, for one, wonder to
32:56
what extent we're going to see a big brand
32:58
safety event like
33:00
we did in the mid-2010s.
33:03
That's what I'm wondering: is it in the
33:05
kind of near future, if you remember, like
33:07
you would be very privy to this, but the big
33:09
YouTube story, 2015, I
33:12
think it was 2016, 2017
33:14
was terrorist videos, ISIS
33:16
videos being uploaded onto,
33:19
YouTube and other platforms and
33:21
having large brands
33:23
advertised next to them in a way that was completely
33:26
unsuitable and led
33:28
to massive changes to the way that the platforms
33:31
vetted advertising and tried to provide
33:33
a kind of brand-safe environment. It feels
33:35
like we're going down a similar route. It
33:37
feels like there's going to be a similar kind of event
33:40
like that, but it does rely, as you mentioned
33:42
on, media reporting it in a way
33:44
that is kind of nuanced and recognizes
33:47
that, you know, this is happening. I
33:50
wonder, and worry actually, that it's
33:52
going to get lost in the noise of everything else.
33:55
That this is no longer something that we
33:57
can expect from platforms, that if you want
33:59
to advertise in a way that allows
34:02
your brand to sit next to content that is,
34:04
you know, not egregious and not harmful
34:06
and not, you know, offensive to users, we've
34:09
kind of lost sight that that's even possible. So
34:12
yeah, it goes back to your point earlier about how
34:14
the reporting of the likes of 404
34:16
and other tech sites who do
34:18
really do good work here is going to be so key over
34:20
the next few months, I think.
34:22
Yeah. I mean, we'll have to see, right? And,
34:24
the whole thing with like advertising and brand
34:26
safety is that, you know, at
34:29
times it did get overheated, and,
34:31
right, like, I think most users
34:34
will often recognize that, with advertising
34:36
that appears next to any particular piece of content, there's
34:39
an algorithmic component to it. It's not
34:41
like AT&amp;T is choosing
34:43
to advertise next to
34:45
pro-Nazi content or whatever.
34:48
And so like the actual impact on brands
34:50
may be overstated, but
34:53
there is this general sense of like, if an
34:55
entire platform is
34:57
supporting the fascist takeover of the United
34:59
States, do you want to be helping to fund
35:02
that? And that is the type of thing
35:04
that could boomerang. And, you know, I remember
35:06
going way back to, like, the early two
35:09
thousands, where there was this big freak-out
35:11
over a programmatically
35:13
delivered advertisement for
35:16
a set of steak knives or something
35:18
showing up, I think it was on the New York Post,
35:20
next to an article about a stabbing
35:23
right.
35:23
That was like the beginning of this. Like, oh, when
35:26
we have these algorithmically driven advertisements,
35:29
they might show up in a way that feels
35:31
inappropriate. And then the media can mock
35:33
it. And then the companies get all worried, 'cause they're like,
35:35
we don't want to be responding to media
35:37
requests about how come our knife
35:39
ad is showing up next to a stabbing. So
35:42
those kinds of things will happen. It will depend
35:44
on the media, but the media has a lot of other
35:46
things that they may be focusing on that are probably
35:48
more pressing than whose ads
35:50
are showing up where. And so we'll see
35:53
how that plays out. and
35:55
sort of where things go. I will note, this is
35:57
a very, very small sample size and
35:59
probably not indicative of anything, but,
36:02
my office, right down the hall from
36:04
my office, sort of across the way from my office,
36:07
there had been a brand safety company. And
36:09
I noticed recently that it's gone. So
36:11
maybe, maybe
36:15
they may have moved. I have no idea. I
36:17
they've tripled. They've trebled in size.
36:19
perhaps, perhaps I just noticed that they were no
36:21
longer on my floor in my office
36:24
building.
36:24
Right. All right. Yeah. I mean, this
36:27
will be an ongoing conversation we have, I think. And
36:29
there are big brand safety summits coming up in
36:31
the next couple of months as well. So, you know, I would
36:33
love to hear from listeners if they
36:35
are attending those or they're, they're in the industry
36:38
and want to share their thoughts. Cause, it's only a bit of
36:40
a kind of knife edge as to where it goes.
36:42
Yeah, and I'm sort of curious, like, you know, the way
36:44
that Musk and Jordan have sort of attacked the whole
36:46
concept of brand safety, you know,
36:48
what that conference is going to be like, how big
36:50
is that conference or are people going to be afraid
36:52
to go to that conference and attend? Because
36:55
it feels like, you know, similar to the way
36:57
that the whole concept of trust and safety has
36:59
been sort of negatively attacked, so
37:01
too will the space of brand safety. And that,
37:04
that seems like a concern.
37:05
Yeah. Okay. Let's move on to a story that has
37:07
been kind of bubbling along in the background the last few
37:09
weeks, but we haven't necessarily talked about
37:11
actively on the pod. This is the story
37:14
of DeepSeek, which many of our listeners will
37:16
have heard about, this new model
37:18
that's been trained at a fraction of the cost of some of the
37:20
other big general LLMs. And
37:23
what we're seeing this week is a couple of parts to
37:26
the DeepSeek story, Mike, but you picked out
37:28
a story from 404 about
37:30
a possible lawsuit being
37:32
filed against people who download DeepSeek,
37:35
which would include you, I think.
37:37
Well, that depends. Depends on how you define
37:39
all of this. And so it's not, it's not a
37:41
lawsuit. Just to clarify here.
37:43
Josh Hawley, Senator Josh Hawley, who's problematic
37:46
in all sorts of ways as well, introduced
37:48
a law, right, that would effectively
37:51
extend a ban on
37:53
import-export to cover
37:55
Chinese-related AI, which would include
37:58
DeepSeek code. Now, then the question
38:00
is, what does that actually cover? And
38:02
that becomes a lot more complex because what is actually
38:05
DeepSeek, right? You know, what everybody is talking about
38:08
with DeepSeek is this model, but the model
38:10
is open source. But also
38:12
DeepSeek itself as a company
38:14
offers a hosted model, just like ChatGPT
38:17
or Google Gemini or whatever, that
38:19
you can go log into. But because the code
38:21
is downloadable and you can run it, there
38:23
are lots of people who are running it on
38:25
their own laptops, if you have a powerful
38:28
enough laptop, which is actually not that powerful,
38:30
or there are lots of people who are hosting versions
38:33
of the DeepSeek code. They downloaded it and
38:35
then created their own hosted version of
38:37
the same model, or perhaps a slightly adjusted
38:39
model. You can take it because it's open sourced. You can adjust
38:42
some of the weights and mess around with it. I
38:44
know Microsoft recently announced that they're
38:46
launching a version of it. I
38:49
have played around with two different US-
38:51
hosted versions of it, rather
38:53
than the Chinese-hosted version. So
38:55
it's unclear to me, under
38:57
this law, if it passes. And again, it's
39:00
just been introduced, the likelihood of it actually going
39:02
anywhere, I don't know, but it would criminalize
39:05
the downloading of designated
39:08
Chinese AI. So you could
39:10
go to jail in theory for
39:12
using open source code or even for downloading
39:15
open source code, which is very
39:17
problematic.
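[For listeners who want to see what "downloading and running it" actually involves, here is a minimal sketch using the Hugging Face transformers library. The model ID is one of the publicly listed DeepSeek-R1 distill checkpoints; treat the specifics as illustrative, not a recommendation.]

# Minimal sketch: running an open-weight DeepSeek distill locally.
# Smaller checkpoints run on an ordinary laptop, as discussed above.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "deepseek-ai/DeepSeek-R1-Distill-Qwen-1.5B"  # illustrative choice of size

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

# Format a chat-style prompt with the model's own chat template.
messages = [{"role": "user", "content": "What does a trust and safety team do?"}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
)

outputs = model.generate(inputs, max_new_tokens=256)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))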
39:18
Yeah. 20 years you could be
39:20
imprisoned for, or fined
39:22
not more than a million dollars, which is,
39:25
which is nice to cap it at a million.
39:27
And, speaking of 20
39:29
years, it has been, well, it's actually been more than
39:31
20 years since we fought
39:33
some of these battles around
39:36
import-export controls on software
39:38
code, over encryption.
39:40
So in the 1990s, there was
39:42
encryption software, and the U.S. under
39:44
the Clinton administration tried to ban
39:46
the export of high-grade encryption,
39:49
saying it was a form of a weapon.
39:51
and effectively tried to block
39:53
it. And the court said no.
39:56
And basically gave us one of the first rulings
39:58
that indicates that software code is
40:00
speech. And under the First Amendment,
40:02
you can't limit it in this way. And
40:05
so I think that kind
40:07
of fight and those kinds of rulings come
40:10
back into question if this passes.
40:12
But now at the same time,
40:14
we're living in the wake of the ruling on TikTok,
40:17
which we have talked about, where suddenly
40:19
the Supreme Court sort of said, uh, the
40:21
First Amendment doesn't matter as much when we're talking
40:24
about China. So how does this
40:26
play out? If this goes into law,
40:28
does the Supreme Court say it's okay?
40:30
And then what does that mean? Because it's been
40:32
downloaded so many times and
40:34
Microsoft is offering it and other services,
40:37
multiple other services are offering a version
40:39
of DeepSeek.
40:40
mm
40:40
And then on top of all that, right, you
40:42
know, we didn't talk about, like, DeepSeek
40:45
itself in terms of how it works.
40:47
I mean, you talked about how it was trained
40:49
in a way that was much cheaper, which,
40:52
you know, it's being referred to by lots
40:54
of people as sort of like a Sputnik moment,
40:56
this sort of realization. That might
40:58
be a bit of an exaggeration, but
41:00
it is a wake-up call for the
41:02
belief that so-called frontier
41:05
models, you know, the
41:07
big models, which, you know, uh,
41:09
OpenAI, Anthropic, Google, Facebook's
41:12
Llama, that only those companies
41:14
can afford to actually do the training
41:16
which costs hundreds of millions of dollars.
41:19
Whereas DeepSeek is basically
41:21
saying we could do this with cheaper GPUs
41:24
because the US already bans export
41:26
of the powerful GPUs. And,
41:28
you know, we can make this model a lot simpler and
41:30
a lot lighter. Now I'll note just as an aside,
41:33
because it'll come up no matter what
41:35
among commenters, OpenAI
41:37
effectively claims that part of DeepSeek's
41:39
secret sauce is that
41:42
they distilled an OpenAI model.
41:44
Not going to get into what distillation is, but it
41:46
is an important element of sort of how
41:49
this model was trained.
41:51
And we know that they used distillation. They admitted
41:54
that. The question is, did they do the distillation
41:56
on OpenAI's model? And then
41:58
what does that mean? Does that violate a contract? All
42:01
those things, like we're going to leave that aside, but
42:03
the model itself is very lightweight, very
42:05
cheap to train and then incredibly
42:08
powerful in terms of what it does.
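[A quick aside for the curious, since distillation gets named but not explained here: the textbook version trains a small "student" model to match a larger "teacher" model's softened output distribution. This is a generic sketch of that loss, not DeepSeek's actual recipe, which hasn't been published in detail.]

import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, temperature=2.0):
    # Soften both distributions with a temperature, then push the
    # student's distribution toward the teacher's via KL divergence.
    t = temperature
    teacher_probs = F.softmax(teacher_logits / t, dim=-1)
    student_log_probs = F.log_softmax(student_logits / t, dim=-1)
    # The t*t factor keeps gradient magnitudes comparable across temperatures.
    return F.kl_div(student_log_probs, teacher_probs, reduction="batchmean") * (t * t)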
42:10
And as, as I mentioned, I've been playing around with it
42:12
and it's a really good AI
42:14
model. It's very impressive. It
42:17
has limitations. The default
42:19
model has somewhat famously
42:21
been directly coded in a very,
42:24
very obvious, clunky and
42:26
awkward manner to not
42:28
talk about things like Tiananmen
42:31
hmm.
42:32
Taiwan sovereignty. Um,
42:35
And, you know, some of the people who have downloaded
42:38
versions and run them locally
42:40
have figured out how to extract
42:42
those restrictions from it
42:44
and get the code to actually comment
42:46
on reality properly. But,
42:48
there are some concerns there. And then obviously
42:51
also there are some other concerns.
42:53
And I think you had found this report
42:55
on a study on DeepSeek. Did you want
42:57
to talk about that?
42:58
Yeah, I think there's been a wide conversation about
43:00
how safe this model is, when
43:03
actually, you know, it's been trained,
43:05
like you say, pretty quickly and so cheaply
43:07
compared to other frontier models. And there's
43:10
a piece of analysis that was put out by
43:12
a company called Enkrypt AI,
43:15
which calls itself, you know, like they all do, the
43:17
leading security and compliance platform
43:19
and.
43:20
Leading. We're the leading podcast,
43:22
Exactly. Exactly. Um, they should come and
43:24
sponsor an episode of Ctrl-Alt-Speech. Um,
43:27
but they did a kind of red-teaming analysis
43:29
and found that the model was
43:31
much more likely to generate harmful content
43:34
than OpenAI's o1
43:36
model, and it threw
43:38
out basically lots of bias and
43:41
discrimination. There was cybersecurity
43:43
risk. It was spitting out kind of malicious code
43:45
and viruses much more readily
43:48
than some of the other models that they have tested
43:50
in the past and was much
43:52
more likely to produce responses
43:54
that were toxic, that contained profanity,
43:57
hate speech, and kind of extremist
43:59
narratives. So again, you know, the trade-
44:01
offs between producing lightweight models
44:04
that are very kind of agile
44:06
and can do things very quickly, and the
44:09
safety side, which obviously some of
44:11
the big frontier models have had to focus on because
44:13
of the companies that they're coming out of, is
44:16
an interesting dynamic that we're seeing,
44:18
and I think we'll continue to see play out. And
44:20
it's worth going to have a read.
44:22
Yeah. And I think, like, the fact that
44:24
the model doesn't have as many
44:26
safety things built in didn't strike
44:28
me as surprising. I mean, I sort of expected that,
44:30
right? I mean, if it's a cheaper model, the
44:32
company, they're not commercializing
44:35
DeepSeek right now in any way. And in fact,
44:37
that's raised a bunch of questions about what is the real purpose
44:40
of this? The company's CEO has
44:42
sort of come out and sort of expressed, like,
44:45
traditional open source,
44:47
helping the world, we just want to make this
44:49
available to everybody. People question
44:52
how, you know, truthful that
44:54
is. There's the argument, because it's attached
44:56
to a hedge fund, that maybe
44:58
they were doing this to short-
45:00
sell a bunch of
45:03
American companies that may have
45:05
lost a lot of money when DeepSeek suddenly
45:07
became popular, there's all sorts of conspiracy
45:09
theories, who knows what the reality
45:11
is. But they have less incentive
45:13
to actually spend the time on
45:15
building safety aspects of it. And
45:18
I think the important thing to think about there is that
45:20
that is the future, right? We're going
45:22
to see more lightweight but
45:25
very effective models, and
45:27
there is going to be less incentive for them to have
45:29
the safety training. And so, you know, the thing
45:31
that I would say to folks who are listening to this who are interested
45:33
in safety stuff specifically
45:36
is that relying on the
45:38
model makers to build in safety
45:40
is probably not going
45:43
to work and
45:45
that we have to start thinking about safety at different levels
45:47
of the stack in terms of who's actually
45:50
implementing and using these models. As I mentioned, like
45:52
the DeepSeek code, Microsoft is hosting
45:54
a version of it and others are as well, we
45:56
might need to start looking at who is hosting these
45:59
models and who's providing access to the models
46:01
rather than the model makers themselves for
46:04
safety stuff. But on top of that, we
46:06
also need to get more
46:08
literacy among users, right? I mean,
46:11
this is where it always comes back to, which is, you
46:13
know, at the end of the day, the users
46:15
of these products need to understand the safety
46:17
implications of what they're doing. And
46:20
we may overcorrect when we expect
46:22
other intermediaries to stand in
46:24
the way and be the safety cops.
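[To make "safety at a different level of the stack" concrete: a host can wrap whatever model it serves in its own checks, independent of anything the model maker built in. This is a toy sketch; the keyword list and the stand-in generate function are purely illustrative, and a real deployment would use a trained classifier rather than keyword matching.]

BLOCKED_TOPICS = ["build a bomb", "write ransomware"]  # illustrative placeholders

def moderated_generate(prompt, generate_fn):
    # Screen the incoming prompt before it ever reaches the model.
    if any(topic in prompt.lower() for topic in BLOCKED_TOPICS):
        return "Blocked by the host's safety policy."
    output = generate_fn(prompt)
    # Screen the model's output on the way back out, too.
    if any(topic in output.lower() for topic in BLOCKED_TOPICS):
        return "Response withheld by the host's safety policy."
    return output

# Usage with a stand-in model function:
print(moderated_generate("Summarize today's hearing", lambda p: "(model output for: " + p + ")"))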
46:27
Yeah. I mean, how does open source play
46:29
into that? Because so many of the models are open
46:32
source: Meta's Llama is open source,
46:34
DeepSeek. You know, you're kind of suggesting that
46:36
actually part of the problem is that when
46:39
somebody kind of takes the code and,
46:41
you know, hosts it themselves, that they don't have
46:43
the necessary awareness
46:46
of the safety challenges? Or, like, how does
46:48
the open source element play out against the safety
46:50
element?
46:51
That's a trickier question to answer than
46:54
you would like, right? So like open source in the AI
46:56
world is also slightly different than open source
46:59
elsewhere, right? So open
47:01
source AI models,
47:03
we still don't know the training data.
47:05
It's not like, you know, open-
47:08
source code in most contexts, where you get access
47:10
to all of the underlying code. With
47:12
the AI models, it's really, it's the weights
47:14
that you're getting access to. And so you can implement
47:17
it yourself, but you still don't know the underlying, how
47:19
it was trained. And so like what biases
47:21
were built in, you still have to discover
47:23
that separately. And then you can adjust and
47:25
change some of the weights yourself, maybe
47:28
deal with the safety aspect, where you can add an overlay
47:30
or filters or other things along those lines
47:33
to deal with the safety questions related
47:35
to it. But you're dealing with
47:37
multiple different things, right? So,
47:40
the underlying training is still
47:42
a secret, right? This is why OpenAI came
47:45
out and accused DeepSeek of basically
47:47
building off of their own work. And
47:49
so, again, it's just a question of like, where
47:52
and how do you insert safety-
47:54
related features? And it could be in multiple places
47:56
and how that's done, we'll see. Whereas
47:59
like, really at this point,
48:01
you have Llama and
48:04
Meta, who are trying to build in safety, or
48:06
at least were, you know, maybe now
48:08
with the new super-enlightened
48:11
Mark Zuckerberg, they will stop those
48:14
practices. I mean, you had other
48:16
open-source models. There was Mistral, which
48:18
came out of France.
48:20
Mm hmm.
48:20
though they've moved from open source to a more
48:22
closed-source approach as well, you
48:25
know. So, you know, this is all sort of
48:27
in flux right now, but we'll
48:29
see. But just the fact that DeepSeek was able to do what
48:31
they did means it's almost
48:34
certain that we're going to see more open-source models
48:36
that are released, that are powerful,
48:39
that are way cheaper than the
48:41
existing costs of the frontier models, and
48:43
are going to lead to some changes
48:46
to how the market is viewed. And
48:48
there will be tremendous temptation to
48:50
make use of these models if they're much
48:52
cheaper to use on a per-use basis.
48:55
So, where the safety aspects
48:57
come in is going to be a big deal and something people
48:59
need to think about.
49:00
Yeah. I'm waiting for the story, the, you
49:02
know, the first story where DeepSeek is used for some
49:04
sort of giant, I don't know,
49:06
attack on the U.S. federal government. Um,
49:09
it's
49:09
something like that will happen. Absolutely.
49:11
Yeah. Okay, Mike, we've got
49:13
probably time to mention one more story very briefly,
49:16
which is one that I stumbled
49:18
across, which takes us back into
49:20
Europe and to the regulatory regime
49:22
there. This is a story in which
49:24
Apple is very kindly warning
49:27
us about the perils
49:29
of having other app stores on
49:32
its app store. So there's
49:34
an app store called AltStore
49:36
PAL, which is actually an open-
49:38
source app, and it's designed
49:40
for kind of sideloading other apps onto
49:43
the iPhone. Apple gave a story
49:46
to Bloomberg this week in which it said it was worried
49:48
about the fact that AltStore
49:50
has got a porn app. So
49:52
users can now use AltStore to download
49:55
a porn app, which aggregates lots of pornography
49:58
from various different sites. And,
50:00
uh, it's kind of suggesting actually, this is
50:02
a problem. For context,
50:05
last year, and over the last few years, Apple's
50:07
come under fire under the Digital
50:09
Markets Act, which is suggesting
50:11
that actually Apple has a dominance in terms of
50:14
the marketplace for apps. And
50:16
only last June there was an investigation
50:18
into Apple's dominance of
50:21
the marketplace. And so this
50:23
is essentially kind of Apple trying to push
50:25
back against that a little bit. It's
50:27
a slightly cynical attempt to make everyone worried
50:29
about the DMA. And it suggests
50:31
that actually, you know, users are going
50:33
to become less safe if it's
50:36
forced to allow these kinds of sideloading
50:39
apps via the app store. So again,
50:41
a little bit naughty from Apple, suggesting
50:43
that this is going to be, it's going to open up a whole
50:45
range of harms to users in the EU.
50:48
What do you think about it, Mike? Very quickly.
50:49
Uh, it's, it's nonsense. I mean, I,
50:52
I have all sorts of criticisms
50:55
about the EU regulatory regime, as I've talked
50:57
about. This is not it. This is, you know,
50:59
I think Apple's, this is
51:01
the fainting couch. Oh my goodness, like
51:03
somebody could install a porn app. You know what? You
51:05
have a browser on your phone. You can
51:07
access all sorts of pornography
51:09
if you want, right? This is not
51:12
the concern. The fact that you have alternative app
51:14
stores, like, I don't know why Apple's making a big deal
51:16
of this. Like, on Google Android,
51:18
you can have alternative app stores. They have existed
51:20
for a long time. All of these fears
51:23
have never been proven to be true. Most people
51:25
don't use them. Most people are not going
51:27
to alternative app stores to download dangerous
51:29
apps. There are all sorts of reasons to complain
51:31
about the EU. Maybe this ties
51:33
back to the story where like Apple sees
51:35
this as an opportunity to just
51:37
try and smash the EU regulatory
51:39
regime right now, but this is not it,
51:41
It seems silly, it seems like, oh
51:43
my, Oh, you know, very, very prudish.
51:46
Like, Oh my gosh, someone might be able to get
51:48
porn on an iPhone. Oh, come
51:50
on. Like, uh, this,
51:53
I don't know. I don't understand why Apple's doing
51:55
this. I mean, I think everybody should be able to see
51:57
through this as it's just a nonsense
51:59
complaint.
52:00
Yeah. Yeah. It's big companies pulling up the moat
52:02
again, isn't it? Which is something
52:04
we've talked about. So those are the kind of
52:06
four stories for our listeners
52:08
this week. We've tried to kind of distill everything
52:11
that we've heard, and you've written about,
52:13
this week, Mike. It's been a hell of
52:15
a pod this week. Thanks for trying to unpack
52:17
everything for us, and sorry you've had
52:19
such a kind of shitty
52:21
week, basically. Listeners, thank
52:24
you for taking the time to listen this week
52:26
and for staying with us all the time.
52:28
We're going to be back next week, but we're going to be recording
52:31
on Friday rather than our usual, our new
52:33
Thursday slot. So tune in
52:35
then. And if you enjoyed this week's
52:37
episode and all of the episodes that we put out, do
52:40
rate and review us on your favorite podcast
52:42
platform. It really helps us get discovered. Thanks
52:44
for joining us this week. Take care. Bye.
52:49
Thanks for listening to Ctrl-Alt-Speech.
52:52
Subscribe now to get our weekly episodes
52:54
as soon as they're released. If your
52:56
company or organization is interested in sponsoring
52:58
the podcast, contact us by visiting
53:01
ctrlaltspeech.com. That's
53:03
C-T-R-L Alt Speech dot com.
53:06
This podcast is produced with financial support
53:09
from the Future of Online Trust and Safety Fund,
53:11
a fiscally sponsored multi-donor fund
53:13
at Global Impact that supports charitable
53:15
activities to build a more robust, capable,
53:17
and inclusive trust and safety ecosystem.