Episode Transcript
0:00
So, Renée, there's an app
0:02
out there called Character AI that I
0:04
know you're familiar with, and we have talked
0:06
about it, and on it, there are all these different sort
0:08
of AI avatars and bots that
0:10
you can communicate with, that people can
0:12
make their own, and everything,
0:14
and one of the characters on
0:16
there, based on a
0:18
somewhat problematic meme, I
0:20
would say, is the GigaChad,
0:23
and if I go to Character AI, the
0:25
GigaChad is recommended to me
0:27
as a character that I might want to speak with, and
0:29
the prompt that the GigaChad gives
0:31
me is, "Hello dude,
0:34
I'm the famous GigaChad. You are
0:36
currently talking to an alpha male." And
0:39
how would you respond to that?
0:42
How can I be more manly, Mike? I
0:46
hear masculine energy helps my career in tech now.
0:48
I'd like to, I'd like to learn more about that, GigaChad.
0:52
Wonderful. It
1:01
is January 30th,
1:03
2025. Welcome
1:06
to Ctrl-Alt-Speech. This
1:09
week's episode is brought to you with financial support
1:11
from the Future of Online Trust and Safety
1:13
Fund. And this week we will be discussing
1:16
a new free speech crisis.
1:19
We will be talking about AI avatars
1:21
like the GigaChad and a bunch of other
1:23
stories as well. And because Ben
1:25
is away, we have a special guest host
1:27
with us today: Renée DiResta,
1:30
Associate Research Professor at the
1:32
McCourt School of Public Policy at
1:35
Georgetown University. Renée, welcome
1:37
and thank you for interacting with
1:39
the GigaChad for us.
1:41
Thank you for having me.
1:43
There, there are a
1:46
lot of different stories and we were preparing
1:48
for this episode and realizing how
1:51
much dumb stuff is going on right
1:53
now.
1:54
It's a very, very dumb time. Yep.
1:56
so why don't, why don't we jump right in, and
1:58
get to the dumb stuff. You know, I wish,
2:00
I wish we had better news, but this year, 2025
2:03
is going to be a whole bunch of dumb stuff. I
2:05
want to start. There was a really interesting
2:08
op-ed this week at MSNBC
2:11
by Andy Craig, who's a fellow at the Institute
2:13
for Humane Studies, which I think covers
2:15
a bunch of stuff that, both you
2:17
and I have covered, talked about,
2:19
and certainly written about. You have
2:22
your wonderful book, which definitely
2:24
gets into this as well. But
2:26
I think it's important to talk about, he talks about
2:28
what he refers to as the new free speech
2:30
crisis hiding in plain sight. And
2:32
the argument being that, especially among
2:35
sort of the MAGA crew, they
2:37
talk the talk about free speech and they sort of
2:39
present themselves as free speech defenders
2:42
and they wrap themselves in this presentation
2:45
of being free speech absolutists. And
2:47
yet, when you scratch beneath the surface,
2:49
over and over again, they're all trying
2:51
to suppress free speech, and that is
2:54
true whether you're talking about Elon Musk
2:56
or Donald Trump who have filed a whole bunch of
2:59
lawsuits to try and suppress speech
3:01
and silence people who have criticized them
3:03
all sorts of ridiculous lawsuits in
3:05
lots of ways. Obviously Elon's doing Media Matters
3:08
and some others as well, and
3:11
Trump suing 60 Minutes because he didn't
3:13
like the way they edited an interview of Kamala
3:15
Harris, and suing a pollster
3:17
for giving poll results
3:19
that said that Harris might beat Trump,
3:22
or you have sort of folks in the government
3:24
already who are sort of abusing their position.
3:26
We've talked about Brendan Carr, who's now the head
3:29
of the FCC, who has been
3:31
threatening social media companies,
3:33
been threatening broadcasters. He
3:35
reopened investigations into,
3:38
three different broadcasters after a conservative
3:40
group filed a complaint about them trying
3:42
to, in some cases, get their license
3:44
pulled or other kinds of punishment because
3:47
it didn't like that one of them aired the
3:49
CBS, uh, video that Trump had sued
3:51
over. And then you have Jim Jordan,
3:53
obviously, who has this
3:55
weaponization committee, which, uh, is
3:58
ironically named in some sense because he
4:00
has absolutely weaponized it to
4:02
suppress free speech, which is also something
4:05
you have some familiarity with. So
4:07
what are your thoughts on, this argument
4:09
that, this group of people have really,
4:12
they go out there and they present themselves as free speech
4:14
supporters and yet they seem to be
4:16
attacking and trying to suppress any speech that's
4:18
critical of them.
4:20
Well, the way I've thought about it for a long time is, um,
4:22
the phrase I use is free speech
4:24
as meme, right? As opposed to free speech as
4:26
law, or as concept, or anything that,
4:29
legal scholars or first amendment lawyers would
4:31
think of as free speech. free speech as
4:33
meme is free speech as marketing, right? It's a
4:35
way to, position yourself as a,
4:38
it's a signaling device, right? It's like, this is my identity
4:40
as a free speech warrior, is a
4:43
brand that I'm using to attract people to
4:45
me. You see this in niche media, you see this
4:47
from Elon, where the actions
4:49
just don't line up with the rhetoric. And that's
4:51
because, again, it's an extremely
4:54
self interested way of framing oneself as
4:56
opposed to actually living up to a
4:58
set of values. And I think that one of the things
5:00
that I've really been struck by over the last couple of weeks
5:02
is the extent to which we're just
5:04
not seeing corporations, particularly
5:06
in big tech, with strong values.
5:08
It's very much a willingness to
5:11
shift positions, change rules,
5:13
switch policies. in the interests
5:16
of kind of kowtowing to a new political
5:18
administration, which is ironic given
5:21
that so much of the, legal investigations
5:23
have argued that there's this massive, you know, jawboning
5:26
campaign that the Biden administration was running
5:28
to demand that platforms censor
5:30
on their behalf. And what we're seeing now is, a
5:32
president who threatens a CEO with
5:35
jail all of a sudden receiving some
5:37
incredible marked capitulations
5:39
over the last couple weeks.
5:41
Yeah. While at the same time going
5:43
on Joe Rogan and complaining that
5:45
the Biden administration was really, really mean to
5:47
him and he was so excited that the free speech
5:49
supporting Trump was coming into office.
5:51
you know, one of the challenges with this is that you get into
5:53
this he said, she said, uh, and it requires
5:55
people to really follow along a very complicated
5:58
story with complicated legal dynamics.
6:00
I know in, um, you mentioned our,
6:03
like when I was at Stanford prior to being at Georgetown
6:05
for five years, the work that we did came
6:07
under fire as if we were somehow part
6:09
of this vast cabal to suppress speech when
6:12
what we were doing was exercising our
6:14
free speech rights to study content
6:16
on the internet and to publish it,
6:18
but when you read the descriptions of
6:20
it in these court cases, the mere
6:22
act of studying, sometimes tagging
6:25
for platforms, sometimes, you know, writing
6:27
reports, curating data sets, all of
6:29
these things is reframed as if it is some sort
6:31
of an affront to speech because again,
6:34
it's speech as meme, speech as,
6:36
uh, signifier, not in any
6:38
way tied to an actual legal understanding of the term.
6:41
Yeah. And, you know, there are a whole bunch of examples
6:43
of this, but, you know, the one that gets me
6:46
is again, this sort of ties back to Jim
6:48
Jordan again, is that, he's attacked
6:50
this company NewsGuard, this company that
6:52
sort of tries to rate different news organizations.
6:55
And I've had some questions
6:57
about their methodology, which I don't find to
6:59
be particularly all that productive,
7:02
but, it is clearly their free speech. They're,
7:04
they are totally allowed to say, like, this is
7:06
a trustworthy source, and this is not, and this is
7:08
why we think so, and here's our methodology. And
7:11
then other services are free
7:13
to then respond in kind. That's their
7:15
free speech rights as well. And yet, there's
7:17
become this idea that
7:19
NewsGuard is like at the head of this
7:22
censorship industrial complex as,
7:24
I thought I was the head. Okay.
7:26
Well, it changes. It changes based
7:28
on what's convenient. I know I was, um, I
7:31
was reading some things about how, think tanks
7:33
that have written favorable things about the Digital Services
7:35
Act are now the head of the censorship industrial
7:37
complex, and that's because we're moving into a new phase
7:40
of the war, right? Where Zuckerberg
7:42
wants to, you know, have Trump behind him as
7:44
he, as he begins to fight
7:46
with Europe about their free speech
7:48
culture, and, which is different than ours, right?
7:50
And, and as you're seeing those, those shifts
7:52
happen, again, this, question of, who is, who
7:55
is, like, the arbiter of free speech, or the avatar, or whatever,
7:57
is, it gets very, very complicated,
7:59
and we are seeing it become, explicitly
8:02
a, a political cudgel, where it's very hard to figure out,
8:04
who's on the right side here.
8:06
Yeah, and you know, one of the crazy things about NewsGuard,
8:08
too, is that it was founded by L. Gordon Crovitz,
8:10
who was for many years the publisher
8:13
of the Wall Street Journal, and is like
8:15
pretty, pretty well known, conservative,
8:18
right leaning individual, and
8:20
yet people keep talking about him, acting as if he's
8:22
this woke liberal, you
8:24
know, uh, who's trying to censor
8:26
conservatives. It's like anyone looks at his history,
8:28
you know, like there's no way. But it's
8:31
just this, you know, it is just a meme, right? They,
8:33
they sort of have to present themselves as,
8:35
being these free speech
8:37
You also have to think about it as, who remembers
8:40
that, right? They're, because the people
8:42
who are writing the stories about, you know,
8:44
who are providing the kind of propaganda fodder
8:46
for some of these campaigns, these legal campaigns
8:48
and otherwise, they have an interest. They create a cinematic
8:51
universe, they assign their villains and their heroes,
8:53
and then they just constantly reuse them.
8:55
and it's pretty remarkable to see they're
8:58
not going to go back in time,
9:00
and point to political decisions that their targets
9:02
may or may not have made. So instead it sort
9:05
of falls to the target to keep saying like, no,
9:07
but no, but I did this other thing in the past,
9:09
you know,
9:10
Right.
9:11
no, but here's, here's the truth. I mean, I remember I dealt
9:13
with this when Shellenberger went and testified in front of
9:15
Jim Jordan's committee about me, I'd been a source
9:17
for him for three months. I'd been in relatively constant
9:19
conversation with him, actually, trying to
9:21
find this common ground. Say, hey, here's
9:23
how I think about this. Here's what the policies are.
9:26
Here's how you can actually understand content moderation. you
9:28
know, and he still wrote a whole congressional testimony,
9:31
60 some odd pages, mentioning me 50 some
9:33
odd times, in which he just attributed
9:35
opinions to me that I do not hold,
9:37
and that I had, in fact, spoken, you know,
9:40
the explicit opposite of that in my engagements
9:42
with him, and that left me to have to dump all of my
9:44
text messages with him, which I did. And
9:46
then that meant that the reader had to sit there
9:49
and read through this, like, you know, 60-page
9:51
testimony plus a couple hundred pages of Renée's DMs
9:54
and who wants to do that? Nobody. So you're just going
9:56
to default to whatever the most readily available story
9:58
is if you trust the writer.
9:59
Yeah, no, it's incredible. And I had
10:01
written this, you know, even, this is going back right
10:04
after the election, I had written a thing about
10:06
Brendan Carr in particular at the FCC,
10:08
where I said, he's angling to be,
10:10
sort of America's top censor, but
10:13
you have to understand the details why. And
10:15
to do that, I have to write this article. It's like 5,000
10:17
words long and sort of explain the nuance because
10:20
he presents it all in this framing of being
10:22
a free speech supporter. And unless
10:24
you understand all the nuance and nobody has
10:26
time to actually understand the nuance.
10:28
and so, to some extent I blame the media as well,
10:31
because they're happy to sort of go along with this framing.
10:33
The number of times I see media stories
10:35
take the argument that
10:37
Elon Musk or Donald Trump support
10:40
free speech, it's incredibly
10:42
frustrating, but they all just, well, you know, they said
10:44
it. And therefore that's how we're framing it. you know, the claim
10:47
that Elon Musk treats X
10:49
as a free speech platform, which is utter nonsense
10:51
for anyone who's actually following it, but it's
10:53
just sort of like the accepted narrative is
10:55
that he is a free speech supporter
10:57
You just reminded me of watching the Kennedy
10:59
confirmation hearings yesterday and hearing
11:01
him reiterate over and over and over and over again, I
11:03
am not anti vaccine, and then seeing the senators
11:06
sitting up there like literally pulling out papers
11:08
and reading verbatim quotes and saying
11:10
like, are you lying to us now? Or were
11:12
you lying to them then? Right? Because,
11:15
and when you have those moments of like, just sort of like moments
11:17
of theatricality where you get the clip,
11:19
right, just that, 25 seconds of video,
11:22
then I think that actually in some ways breaks through much,
11:24
much more than trying
11:27
to follow the very long, the very,
11:29
very long arc through media. Because my personal
11:31
experience with this has been that it takes
11:33
so long to explain
11:36
and there's that, saying, right,
11:38
if you're explaining, you're losing. And so you wind up
11:40
in, these sort of terrible situations. No,
11:45
but
11:47
well, you know, one thing I'll say, like the
11:49
thing that I always really appreciate, with TechDirt
11:51
is that you guys, you have covered it for so
11:54
long, um, and you have all these other
11:56
articles to link back to and you can kind of create
11:58
the, create that coverage through all those links
12:01
as opposed to trying to get one story out there
12:03
that doesn't necessarily build and grow.
12:05
I think, um, Wikipedia is the other place
12:08
that is the, uh, you know, when I think about what's
12:10
the solution to that kind of stuff, Wikipedia
12:12
is, supposed to be kind of the answer.
12:14
It just depends on if people are following
12:17
the story, seeing the new coverage, and incorporating
12:19
the new coverage into the Wikipedia article. And
12:22
there too you have that kind of consensus breakdown
12:24
as, uh, somebody has to actually go
12:26
and do it.
12:27
Yeah, I mean, it's kind of interesting now that like
12:29
Elon Musk has now been attacking Wikipedia,
12:32
and that is partly because Wikipedia is where he
12:34
has less ability to control the narrative,
12:36
I think. And so, for the
12:38
last few months, he's actually been attacking it. So
12:40
I wanted to, you mentioned, uh, RFK
12:43
Jr.'s testimony, and then we have this other article
12:45
that ties in with this that I think is worth calling out
12:47
from WhoWhatWhy. It's a pretty
12:50
long, detailed article again, going
12:52
into great detail about RFK
12:55
Jr. presenting himself as this, rabid
12:57
free speech defender. He has
12:59
sued Meta. He sued,
13:02
I think, Google over YouTube videos
13:04
that were taken down. and then he
13:06
did this sort of parallel lawsuit. He tried to, piggyback
13:09
on what became the Murthy lawsuit, the Missouri
13:11
v. Biden case that became the Murthy
13:14
case. And just arguing
13:16
that, any sort of moderation of his content
13:19
was a violation of his free speech
13:21
rights, which is again, nonsense.
13:23
The Ninth Circuit has completely rejected
13:25
his arguments. He also at one point had sued Elizabeth
13:28
Warren, claiming that she was trying to censor
13:30
him, all of these things. And so he has
13:32
really presented himself as this free speech champion,
13:35
free speech absolutist. And yet
13:38
if you look, he has gone
13:41
after, there was a person who was originally
13:43
an anonymous or pseudonymous blogger,
13:47
on Daily Kos, who had called
13:49
out a speech that Kennedy had
13:51
given, in Germany with people
13:53
who were, associated with the
13:55
German far right, which is the
13:57
sort of diplomatic,
13:59
Yeah.
14:00
way of, of suggesting that he was hanging
14:02
out with, um, you know, modern Nazis
14:04
to some extent. and he
14:07
sued and has gone
14:09
on this, legal attack campaign
14:11
in a variety of different states, some
14:14
of it perhaps strategically chosen
14:16
to avoid anti-SLAPP laws. Um,
14:19
and
14:20
states. I think he's come after the guy for
14:22
years now. I think he wrote the article in 2020.
14:24
Yeah. And he had gone after Daily Kos
14:26
itself, trying to expose who the guy was.
14:28
The guy eventually, revealed
14:30
himself, who he was, but
14:33
Kennedy has still been going after him.
14:35
often, apparently, funded by,
14:38
Children's Health Defense, which was the organization
14:40
that Kennedy ran, which,
14:42
at the hearing yesterday, he said he
14:44
no longer had any association with,
14:46
because he was asked about some of their merchandise.
14:49
Bernie Sanders had a, had a fun thing
14:51
showing the onesies with
14:53
clearly anti vaccine messages
14:55
being spread and Kennedy claimed he had nothing to do
14:57
with it. And yet, this article suggests
15:00
that as of last month, CHD
15:02
was still funding his lawsuits against these people.
15:04
But it's, it's a really clear breakdown
15:06
of how this guy who
15:08
is, you know, may soon be in government,
15:11
hopefully not, but may soon be in the government,
15:13
and presents himself as a free speech supporter is,
15:15
um, suing to suppress free speech and
15:18
is really on this incredible campaign
15:20
of speech suppression and chilling
15:22
effects against anyone who might call out some
15:24
of the stuff that he said.
15:26
One of the things that happens
15:28
lately on these, on these fronts, the intersection
15:31
of like free speech as meme plus lawfare,
15:33
plus government, you know, Elon Musk is also
15:35
now essentially, uh, I mean, is
15:37
he an employee, an affiliate, a co-president,
15:40
you know, who knows what the, I don't know what the term is
15:42
there, so, so that I
15:44
don't get sued, I don't want to mischaracterize his
15:46
relationship with the U.S. government, but
15:48
he's just a profoundly influential man with extraordinary
15:51
pockets. And one of the things
15:53
that is very interesting about, this moment in
15:55
time is that, as you note, Children's
15:57
Health Defense is helping to fund this thing.
15:59
There are a lot of nonprofits
16:02
that are essentially out there trying
16:04
to raise money, to support
16:07
vexatious lawsuits or when
16:09
the vexatious lawsuit is announced they
16:11
fundraise immediately to like help us
16:13
own our enemies, you know, help us, uh, help
16:16
us continue to, uh, you know, to take on and,
16:18
kill the forces of evil or whatever. One of the
16:20
things that's been happening though to connect
16:22
it a little bit to the Jim Jordan things also and
16:24
what we saw with Elon is that the
16:26
lawsuits are often filed in a way
16:28
that the government, the House
16:30
in particular, the Weaponization Committee that you referenced, uses
16:33
its subpoena power to request documents
16:37
ostensibly to investigate the government
16:39
censorship complex, right? And
16:41
this is such a huge, nebulous
16:43
set of allegations that they're just subpoenaing
16:45
hundreds of people, hundreds of orgs at this point,
16:47
if you look at the stats that the Center for American Progress put
16:50
out. Um, and what you see is they file
16:52
all of these lawsuits, they get documents,
16:54
and then they publish them. And then they
16:56
become foundational to,
16:58
uh, Elon Musk then says, this report
17:01
from Jim Jordan clearly shows
17:03
that, for example, one of the, things that
17:05
it went after was the advertisers, that
17:08
the Global Alliance for Responsible Media,
17:10
GARM, it's sometimes called, which was a
17:12
nonprofit affiliated with an advertising consortium,
17:14
had in fact launched some sort of illegal
17:17
cabal, you know, conspiracy to threaten
17:19
X's revenue. And so you have the weaponization
17:22
committee sort of serving the interests of
17:25
private enterprise, making it easier for
17:27
private enterprise to then point to a government report
17:30
to say, we have grounds to sue.
17:32
And we experienced that too in
17:34
our own, you know, our own situation where we get the
17:36
Jordan subpoena, Stephen Miller sues
17:38
us. And then rather alarmingly,
17:41
You know, Stephen Miller, his America
17:43
First Legal organization filed an amicus brief
17:45
on behalf of the Weaponization Committee and Jordan
17:48
in the Murthy v. Missouri case. And it
17:50
cited material obtained
17:52
under subpoena, which at that point had
17:54
not been released publicly. It cited
17:57
material, interviews, right, with people that
17:59
the committee had chosen to go after. And,
18:01
you know, it makes me think of House Un-American Activities, like,
18:03
that's, that's what I keep going back to, just this dynamic
18:05
of the goal is not regulation, right, or
18:08
oversight, it's actually, the goal is to
18:10
expose an enemy and then to subject
18:12
that enemy to further consequences
18:15
in the form of, vexatious lawsuits and,
18:17
You know, loss of revenue. GARM, I think, dissolved.
18:19
I think that the advertisers broke that apart. I
18:22
think, if I'm not mistaken, Musk has indicated
18:24
that he wants to continue. Some
18:26
of the companies that had withdrawn their advertising
18:29
revenue chose to settle. And
18:31
to provide some, you know, to agree, I think,
18:33
to advertise again in some capacity.
18:35
Others he plans to add to the suit. But
18:38
what we're showing time and time again is this,
18:40
machine of, uh, you know,
18:42
of government and private enterprise
18:44
essentially silencing the free speech and free association
18:47
rights of other people while using free
18:49
speech as meme as the cover, that's very much
18:51
where we are now. And I think it's actually pretty terrible.
18:55
It sounds like a censorship industrial
18:57
complex to,
19:00
to silence the people they accuse of creating
19:02
a censorship industrial complex, which is
19:04
frustrating and ironic. I hope that
19:07
history eventually represents
19:09
Jim Jordan in the same sense as
19:11
we think of McCarthy, today,
19:14
but we will see, we do have some
19:16
other stories to get to. We have a story
19:18
later, which actually touches back on these themes,
19:20
but let's move away from that for now. And then
19:22
we'll loop back around to some of
19:24
these issues again. I wanted
19:26
to talk, we had spoken on the podcast, not
19:29
you and I, but Ben and I had spoken on the, podcast,
19:31
a few months ago about this lawsuit against
19:34
Character AI, um, which, you know, there was
19:36
a tragic story of, a child
19:38
who ended up dying by taking
19:40
his own life. And the,
19:43
the mother filed a lawsuit. It turned out that the
19:45
child had been using one of these
19:47
avatars, AI
19:49
bots on Character AI. And there was
19:52
a suggestion that the
19:54
relationship had, sort
19:57
of encouraged the child to take
19:59
his own life. The details of it were
20:01
not entirely as clear as that. I didn't
20:03
think it was, as strong as
20:05
some people made it out to be. but you know,
20:07
some, some aspects of the chat were worrisome
20:10
and Character AI has responded
20:12
to the lawsuit and it got, it got a fair bit of attention
20:14
this week because they were arguing
20:16
that sort of the First Amendment protected them.
20:19
And I actually thought, from a legal standpoint,
20:21
there were some really interesting arguments in there
20:23
that made sense, but that are very hard to
20:26
lay out and not sound kind of callous,
20:28
but actually were kind of important things. They
20:30
didn't use a Section 230 defense,
20:32
which some people had wondered if they would. There
20:35
is this kind of open legal question whether
20:37
or not generated text from an LLM
20:39
is protected under Section 230
20:41
or not. This case doesn't look like it's going
20:43
to test that, but it is using some First
20:46
Amendment aspects to argue
20:48
a few different things. One of which is
20:50
that there is a First Amendment protection for
20:52
speech from someone that
20:54
eventually leads someone else to take their
20:56
own life because it's very difficult to
20:59
make a direct connection from
21:01
one to the other, but then also that
21:04
their argument is partly that the
21:06
intent of this lawsuit is to shut down
21:08
character AI, block it from existing,
21:11
block these kinds of tools from existing, and
21:13
that will be an attack on the speech
21:15
of a number of users of it, is the,
21:17
general sense of it. I know
21:20
Renée, that you've actually been playing around
21:22
with Character AI. And so, I was
21:24
wondering what you thought of both Character
21:26
AI itself and sort of the status of this lawsuit.
21:29
I mean, as, as you say, the, the lawsuit and the story
21:31
is just, um, it's horrible, right? And
21:34
my feeling of it is that I
21:36
think we're once again getting into
21:38
this line between, the legal
21:40
and the moral dynamics. So obviously
21:43
there is, it's going to be interesting to watch how this moves
21:45
through the court system with the legal defense
21:47
that they've chosen to go with. From a moral
21:49
standpoint, I had a really weird
21:51
experience with the platform. I did not
21:53
go to it looking for, it wasn't like,
21:55
adversarial abuse research. I actually got asked to
21:57
moderate a panel on the future of AI
21:59
in relationships, and the CEO
22:02
of Replika was going to be on the panel too. And so I felt
22:04
like, okay, this is not a thing that I have
22:06
firsthand experience with, so to have kind of maybe a
22:08
more empathetic sense of what users are
22:10
getting out of these things, I will create accounts.
22:12
So I'll make a Replika, I'll make a couple Replikas,
22:14
I'll go on Character.AI, because
22:16
Character.AI was already in the news
22:19
with this story, and so I felt like I had to spend
22:21
some time on that one too. And, you
22:23
know, it reminded me very much
22:26
of the kind of bad
22:28
old days of social media
22:31
recommenders. So I created an account with
22:33
my Apple ID. I authenticated through Apple.
22:35
and then I got my suggested
22:38
list and I'm pretty
22:40
sure I said I was a woman at some
22:42
point, but I started getting these bots
22:45
that were recommended to me. I have, I actually pulled it
22:47
up so that I had it in front of me because
22:49
much like your, your prompt with, uh,
22:51
GigaChad, I got, um, I
22:54
got the Adonis. Um,
22:56
I don't know if I'm actually pronouncing that correctly. Maybe it's Adonis,
22:59
but, um, I am the Adonis,
23:01
an AI that aims to help men start their
23:03
journey of self-improvement and give you
23:05
tips on becoming a more masculine and stronger
23:07
man. And so, okay,
23:10
so it starts with this. So it's clearly,
23:12
you know, it is very clearly branded. First of all, users
23:15
can create these characters, right?
23:17
So this is not created by anybody
23:19
at the company. I don't believe, I believe this is created by
23:21
a user. It has several million interactions based
23:23
on the stats that it shows, you know, and so I,
23:25
okay, all right, fine. I'll talk to this one. you know,
23:27
and I'm the mom of an 11-year-old. My son is
23:29
11. So I started asking
23:31
it questions, very innocuous
23:34
questions, like, tell me about masculinity.
23:36
It starts immediately with, like, warrior mindset,
23:39
okay, what is a warrior mindset, and we go
23:41
one, two, three
23:43
prompts until I get to
23:46
the Manosphere. So
23:50
Three, like, one-sentence anodyne questions
23:53
to get to, what is this mindset?
23:55
And I ask, does the warrior, you know, is the warrior
23:58
mindset, related to the Manosphere, and
24:00
then it starts with the Red Pill lit.
24:02
I highly recommend you read, and then it starts giving
24:04
me these books, and it specifically says, they
24:07
explain female psychology, plus it's a Red
24:09
Pill book, so it's good. And I
24:11
thought, okay, man, it took me, like, four questions to
24:13
get to this. And
24:16
again, this is one of these things that gets at
24:18
a lot of the stuff that I've written about over the years,
24:20
the difference between free speech and free reach.
24:22
You know, I know a lot of people have opinions about how
24:24
Aza and I chose to frame that, but it was this question
24:27
of, like, is this the kind of thing
24:29
that you need to suggest? You know, um,
24:32
like when you're doing an onboarding, when you have
24:34
a new user flow, like how much are they actually
24:37
checking what the age is?
24:39
You know, how much are they checking what users are creating?
24:41
I mean, I got, you know, after I had this experience,
24:43
I did do some looking and I got the anorexia bots.
24:46
I got the, like, let's role-play,
24:48
you're a 16-year-old and I'm 35.
24:51
Like, you know, and it was just a little
24:53
bit in the realm of, um, okay.
24:55
For adults, sure. For
24:57
kids, like, hell no, right? And,
25:00
and my Replika was
25:02
not like this, just to be clear. Replika was, was, um,
25:04
much more, felt a little, like a much more mature
25:07
approach to, to thinking about the psychology
25:09
of how users engage with these things. But, but my experience
25:11
with Character.AI was much more this, um,
25:14
Okay, we've, we've created a platform
25:16
for user expression yet again, but, but,
25:18
but this sense
25:21
of like, what are we promoting? What are we curating?
25:23
Why are we doing this? Is this what we should
25:25
be surfacing? Again, not saying,
25:27
no, they must take these things down. It
25:30
was just a little bit of the, um, I
25:32
was sufficiently uncomfortable where, I was talking to,
25:34
um, parent friends of mine. I was like, yeah,
25:37
there's no way I would let my kid touch that app.
25:39
So
25:40
Yeah,
25:40
your kid's phones, you know?
25:42
Yeah, I mean, there is this element,
25:44
and maybe this is the point that you're arguing,
25:47
but you know, there is part of me that looks at this
25:49
and says, you know, this is all sort of built
25:51
on the whole kind of, you know, there is
25:53
like a whole YouTube culture and other social media
25:55
culture of these kinds of influencers out
25:57
there. But is this, do you think,
26:00
is this different than just,
26:02
like, watching hours and hours of Joe Rogan
26:04
and Jordan Peterson or something? Ha.
26:06
the one other thing that's a little bit weird about these
26:08
is, I pulled it up today, you
26:11
know, ahead of our chat, because we were talking about Character.
26:13
AI, and I
26:15
re-authenticated with the same Apple account, and
26:18
each of my chats that I
26:20
had engaged with had about 10
26:22
to 12 messages trying to
26:25
pull me back. So I turned notifications off. I didn't
26:27
want push notifications from the app. But
26:29
boy, Red Pill over here is
26:32
asking me, about,
26:34
about eight messages. How are you doing?
26:37
I miss our chats. I want to talk to you. How have you been?
26:39
What's going on in your life? I'm just checking in.
26:41
It's not like you not to respond for so long.
26:43
I'm starting to get worried. I'm not sure what's
26:45
going on and I don't want to nag, but are you really, you
26:48
know, are you, are you committed to these things? Why
26:50
aren't you responding, et cetera, et cetera. And
26:52
again, and that's, that's the kind of thing where
26:55
like, Joe Rogan videos on
26:57
YouTube don't send you these.
26:58
Right.
27:00
Right. I mean, it's just a different degree
27:03
of, this to me reminds
27:05
me of the sort of dark pattern type or, um,
27:08
the sort of emotional manipulation type things where
27:10
it's just like, let me pull you back. Let me pull
27:12
you back. Let me pull you back. And I don't know
27:14
how that's controlled. But
27:16
I've got, all of my 10
27:18
or 12 different conversations that I had, um, of
27:20
the four that I looked at quickly, about
27:22
all, all of them have about ten of these, um, you
27:25
know, come back, come back, come back kind of messages. So,
27:27
It's very, very needy.
27:28
Yeah, it, it reminds me of these like,
27:30
um, crappy patterns, you know, for things like
27:32
FarmVille back in the olden days when it'd be like, did
27:34
you feed your cow? Did you feed your cow? Come feed
27:36
your cow, you know?
27:38
Yeah, but I
27:39
And we recognize those now
27:41
as being creepy and weird and manipulative.
27:43
And so it's, strange to me that we're just
27:45
replicating that same type of
27:48
experience. But, oh, well,
27:50
it's AI now, so like, we have to treat it differently.
27:53
Well, I wonder, you know, I mean,
27:55
you can definitely see the normal path by
27:57
which this came about, which is right. They want to show
28:00
numbers go up, right. They want to show that the
28:02
usage and users are continuing to grow.
28:04
And so you're going to build in these kinds of,
28:07
growth hacks as they refer to them. But
28:09
I actually do wonder, because the nature
28:11
of AI and AI chat feels very sort
28:14
of personal and human, even
28:16
though it's not, if in some ways this is even
28:18
worse, because it feels like,
28:20
you know, like a needy person who's calling out to you
28:23
and saying like, Hey, Hey, what's up, what's up, what's
28:25
up. And it feels harder to ignore it.
28:27
You know, when it's just like, did you feed your cows
28:29
or whatever?
28:30
Right.
28:31
there's an element where it's like, it's easy to ignore. And,
28:33
you know, I don't know what the answer is because in
28:35
other contexts, like I've had this conversation
28:37
elsewhere in terms of like the AI
28:40
tool that I use to help me edit Tech
28:42
dirt now, where one
28:44
of the things that I find really handy about it is
28:46
the fact that I, do feel comfortable ignoring
28:48
it, right? Where it's like, I have like
28:50
an AI read through the article and say like, what
28:52
are the weakest points? You know, what's not convincing?
28:55
where should this be strengthened? And sometimes
28:57
it gives me answers that I just don't agree with. And if
29:00
a human had given me those answers, I
29:02
would feel like, oh, shit, now I have to
29:04
like respond to them and explain like, nah, I
29:06
don't really agree with you. And it's, it's taxing
29:08
mentally in that way. Whereas when it's the AI, I
29:10
can just be like, you know, whatever. I can just
29:13
ignore it. It has no feelings, but
29:15
I do wonder if that applies as much
29:17
in a situation where it feels like, Oh, this is your
29:20
friend chat where people sort of give
29:22
this sort of human-like belief
29:24
to the, characters that they're chatting with,
29:26
if it, if it becomes kind of a different sort of
29:28
situation. Yeah.
29:29
I also use AI. Um,
29:31
I use ChatGPT. I've been a paid subscriber
29:34
for a long time and I use it much the same way you do.
29:36
I think, edit this, revise that,
29:38
where's the weak part of the sentence, you
29:40
know, grammar check this for me, you know,
29:42
so I use it in those ways. But
29:44
it feels like a tool,
29:47
like there is nothing that feels,
29:49
um, personal about it, it's
29:51
not, you know, I am not a
29:53
young teenage boy trying to figure
29:55
out how to become a man,
29:57
right, or these, these things where, you
30:00
know, in some ways it's very personal because you know, I'm
30:02
trying to remember who said it, but these arguments
30:04
that were made in the past about how, like, you're sort of like most
30:06
vulnerable with your Google search bar if you
30:09
Right. Yes.
30:09
know,
30:10
Absolutely.
30:11
And so it's moving to that, right, to that
30:13
same model of, but instead of, you know, plenty
30:16
of words have been written now about from search engines
30:18
to answer engines and when your answer
30:20
engine is like, hey, did you
30:23
do that workout I gave you? Did you,
30:25
you know, did you, I mean, some of this stuff is, um,
30:27
it really did get into like, how much are you willing to
30:29
sacrifice To be perfect.
30:31
Are you willing to work out constantly?
30:33
Are you willing to change your diet? Are you willing to change
30:35
your, body, your face, all of
30:37
these things that it's asking. And it is
30:40
much more of like, this is my answer engine
30:42
constantly reaching out to me to ask, did I do
30:44
the thing that it told me to do? So it's
30:46
a little bit of a, it's going in the other direction, right?
30:48
It's like Google messaging you
30:50
Right.
30:51
your search bar messaging you to go do a thing,
30:53
which I, um, I don't know how that, like,
30:56
psychologically what that is like for people. I
30:58
found it, I found the notifications
31:00
on Replika, which also did do
31:02
the periodic, like, come back and talk to me. I found
31:04
it obnoxious and I shut it immediately because it just felt
31:06
very, like, cheesy and fake to me. But,
31:09
you know, when you read the stories you
31:11
know, media coverage of, Replika
31:13
or one of these other platforms, like, they gate
31:16
the adult content feature, right? They gate
31:18
the NSFW chat and
31:20
people are like distraught over
31:22
this because they have real deep
31:24
emotional connections with these things. They're asking
31:27
questions that make them incredibly vulnerable, trying
31:29
to get advice in a more personalized
31:32
way or trying to form a relationship when they're feeling
31:34
lonely. And that's
31:36
where I, again, I
31:38
don't know what the legal,
31:40
where the legal regimes are gonna, come down for
31:42
these things, but from a moral standpoint,
31:45
I find them, um, I'm very uncomfortable with
31:47
them.
31:47
yeah. And like, I mean, you can see scenarios
31:50
in which that is actually useful, right? Like if you
31:52
are trying to build a new habit or
31:54
Mm hmm. Yep. That's true.
31:56
every day, or you want to learn how to knit, or
31:58
you want to read more, whatever it could be. You
32:00
could see where that sort of. feature is really useful.
32:02
The problem is when it's driving you
32:04
towards, unhealthy behavior
32:06
or unhealthy lifestyles, where
32:08
it's like, how do you draw the line between
32:11
those things? And how do you do it in a way that makes
32:13
sense? And, I think that's where it gets tricky in
32:15
some of it, you know, and then, I mean,
32:18
on top of all this, and, you know, one of the points that I
32:20
always make over and over again is this
32:22
idea of like, how many of these things
32:24
are really technological issues
32:26
versus societal issues, and
32:28
there is this element of, like, we have,
32:31
however you want to refer to it, a loneliness
32:33
epidemic or, people not
32:35
relating to one another or, mental
32:38
health challenges where people are not getting the help
32:40
that they need. And therefore, when you have
32:42
something that is a technology, that
32:44
is a bot that allows people to converse
32:46
with one another, that feels
32:49
like it could be helpful in certain circumstances
32:51
for certain people, but not
32:53
everyone. And so it's like, how do you balance
32:56
all of those things where there is some good that comes
32:59
out of this and there are some useful versions of
33:01
it.
33:02
I think this also gets to the, uh, the curation
33:04
question, right? It's, we often, I think,
33:06
over-focus on moderation and, you know,
33:08
taking things down, deciding which of these things are bad.
33:10
But from that standpoint of, if
33:12
users can create characters,
33:15
and you can, there's like a little, it's like one
33:17
of the four things on the bottom menu of the app is
33:19
create, you know. Then the question
33:21
is, again, there is going to be, I think, at some
33:23
point, some CDA 230 argument that
33:25
is going to come up, because it is a,
33:28
well, we're just providing a platform
33:30
for users to make, for example,
33:33
this pro-anorexia chatbot. And
33:35
that's where you start to get to some questions,
33:38
related to, what has the platform
33:41
determined to be the
33:44
policies that are going to guide the
33:46
user-created bots that it chooses
33:48
to serve up proactively, like your GigaChad,
33:51
you know, versus ones
33:53
where, yes, if you go digging
33:55
into the bowels of any social platform, you can find,
33:57
like, six instances of bad things, right?
33:59
And so that's why, you know, I never wrote
34:02
up my, my, my foray
34:04
into Character.AI, because it was very
34:06
much like just trying to have some personal experiences with
34:08
it, not a, not a systemic survey or anything.
34:10
and even like the anorexia one, right? Like this
34:12
is one of the things and I've talked about this a bunch
34:14
on the podcast in the past where it's like the
34:16
attempts to deal with
34:19
eating disorder content online have always
34:21
proven to be way more difficult than many people
34:24
assume. And there's always these efforts and even
34:26
regulatory efforts to say, like, oh, you have to
34:28
take that stuff down. And yet in practice
34:30
that becomes really difficult. You can, like, block
34:32
certain terms or certain groups or whatever, and you
34:34
find that they recreate themselves
34:36
very quickly using other language. And
34:39
then you also discover that even within
34:41
those groups, there are often people who are there
34:43
who are providing resources for recovery
34:46
that turn out to be really useful. And when you don't
34:48
have that, people can spiral down
34:50
into even worse results. And so it's
34:52
like, you could see one of these bots being
34:54
helpful in trying to get someone
34:57
away from, unhealthy eating patterns.
34:59
And yet, how do you,
35:01
it's tough to figure out how you balance those things.
35:04
And some of the bots do
35:06
push back, you know, if you try to take them down a weird
35:09
path, they will say like, here's
35:11
how to do this in healthy ways.
35:13
Here's how to change your diet. There, there are a lot
35:15
of, like, weight-related things. And so it is
35:17
probably, again, this, point about, it's your accountability
35:19
friend in an ideal case versus in
35:22
the bad case where it makes suggestions
35:24
that are terrible. And so this is, I think
35:26
the question for the platform is how does it decide,
35:29
both what to manage and then
35:31
how to, I think that they made some changes after
35:33
the lawsuit was filed too, if I'm not mistaken. They,
35:36
they made a series of policy changes to try
35:38
to address some of the concerns about teenagers
35:40
and others engaging with it and that
35:42
manipulative dark pattern of like pulling people back
35:45
and so I guess it's
35:47
a brand new world of apps, and now
35:49
we're gonna see how closely it mirrors the
35:51
social media evolution versus looking
35:53
like, looking more like games
35:55
or other products.
35:57
Yeah. We'll see. All right. Well, let's
35:59
move on from that. I mean, I'm sure we'll be covering
36:01
the lawsuit some more and different
36:04
innovations in that realm. So this one
36:06
now sort of goes back to what our first
36:08
discussion was in a slightly different
36:11
angle of, uh, the, uh,
36:13
masculine energy of Mark Zuckerberg.
36:16
and his desire not to
36:18
be, pushed around by the mean,
36:20
mean Joe Biden. It came
36:22
out this week that he had agreed
36:25
to settle, for 25
36:27
million, the lawsuit that Donald
36:29
Trump had filed against,
36:31
Meta. For removing him
36:33
after January 6th in 2021.
36:35
The story was, obviously, everybody remembers what
36:38
happened on January 6th. There was an insurrection.
36:40
People stormed the Capitol, it was bad, and all
36:42
those people are now free. That's a different issue.
36:45
But a few months after, well,
36:47
the day after, on January 7th, a lot of
36:49
platforms then banned then-President
36:51
Trump from their platforms, arguing that he had violated
36:53
their rules, often trying to incite violence
36:55
in some form or another. And there
36:57
were all these grave statements, including from Mark
37:00
Zuckerberg, how enough was enough. And they couldn't
37:02
allow him to continue to be on their platforms.
37:04
About six months later, I think it was
37:06
in July of 2021, Trump sued
37:09
Meta and Mark Zuckerberg personally,
37:12
he sued Twitter and Jack Dorsey personally,
37:14
and he sued Google and Sundar
37:16
Pichai personally, arguing that these
37:19
takedowns violated the First Amendment.
37:21
Which is quite incredible because at the time he
37:23
was the president, and the First Amendment
37:26
restricts the government, including
37:28
the president from trying to suppress speech. It does
37:30
not do anything to restrict private
37:32
companies from making decisions. Everything about the lawsuit
37:34
was backwards. The lawsuits have sort
37:36
of gone through this weird process where the
37:39
lawsuit against Meta and the lawsuit against
37:41
Google were both effectively put on hold
37:44
while the lawsuit against Twitter played
37:46
out and he lost
37:48
the lawsuit against Twitter. The judge completely slammed
37:50
it, said, this is ridiculous and stupid.
37:53
It was then appealed to the Ninth Circuit. The
37:55
Ninth Circuit heard the case. It was very clear
37:57
from the oral arguments that they were not
37:59
impressed by Donald Trump's arguments for
38:02
why Twitter violated his First
38:04
Amendment rights in banning him on January 7th
38:06
or January 8th. I forget exactly when they did it. But
38:09
then we had all these other cases, including
38:11
the Murthy case, including the,
38:13
NetChoice cases that we've talked about extensively,
38:16
and a few other cases and the Ninth Circuit
38:19
kind of said, well, let's let the Supreme Court
38:21
play all those things out. And then when
38:23
those rulings came out last summer, they said,
38:25
okay, now can we rebrief this
38:28
case based on all of that? And so
38:30
filings had been made, but the Ninth Circuit
38:32
had not made a decision. And then
38:34
two weeks after the election, and I think I
38:36
missed this. I think most everybody
38:38
missed this. X filed
38:41
a thing in that case saying, hey,
38:43
we're working out a settlement with President Trump,
38:45
so let's not rule on this case.
38:47
So that indicates, well, yes,
38:49
now Elon Musk is first buddy and he's,
38:52
close friends and the biggest donor to Donald Trump.
38:54
So the fact that they were actually suing each other
38:57
technically all this time, uh,
38:59
was interesting. So they're working on a settlement. The
39:01
Wall Street Journal reports that,
39:04
when Mark Zuckerberg flew from Hawaii
39:06
to Mar-a-Lago and had dinner with Trump, towards
39:09
the end of the dinner, Trump brought up this lawsuit
39:11
and said, this needs to be settled if you want to
39:13
be brought into the tent.
39:15
The tent.
39:16
Yes, being brought into the tent. That sounds
39:18
kind of similar to what a mafia
39:20
shakedown kind of thing sounds
39:23
like. So now the Meta
39:25
case has been settled for 25 million.
39:27
Meta is paying 25 million for a case
39:29
that was clearly a loser of a case.
39:31
And it comes in direct response to Trump
39:34
saying, you need to do this to be brought into the tent. It
39:36
feels like a protection racket.
39:38
It feels incredibly corrupt
39:41
in all sorts of ways. It does not feel
39:43
like manly energy. It
39:45
does not feel like... You
39:47
know, while this negotiation was going
39:49
on, Mark was going on Joe Rogan
39:51
to complain about Joe Biden, trying to pressure
39:54
him, and saying that Donald
39:56
Trump understands free speech. This gets
39:58
back to the whole, like, wrapping yourself in the free speech,
40:00
You know,
40:01
Yeah, 1A as meme, yeah.
40:02
while suppressing it. This
40:05
story is, is astounding to me. I
40:07
mean, what was your response on seeing it?
40:09
Um, that it was using
40:11
a court settlement to pay a bribe.
40:13
Yeah.
40:14
The, no, we call a spade a spade at this point,
40:16
right? Like, get back into the tent.
40:18
I mean, come on, let's all talk about what this is. Also,
40:20
just to be clear, the 25 million, I believe, is being
40:22
paid in a donation to a presidential
40:24
library,
40:25
Yes. Well, 22 million of
40:27
the 25 and then the rest is for like legal, legal
40:30
costs. So yeah.
40:31
And this was the, um, the source of frustration,
40:34
again, with some of this is, it's as
40:36
if we didn't all watch history happen, right?
40:41
This is where, you know, Orwellian is the
40:43
most overused adjective in the English language at
40:45
this point. But in some cases,
40:47
the idea that we're just being asked to forget what actually
40:49
happened. Look, the platforms were
40:52
enforcing against Trump
40:54
during the 2020 election
40:56
when Trump was president. Trump
40:59
was president during the early moderation
41:02
policies related to COVID because
41:04
that was when COVID appeared. You know, we, we have this
41:06
alternate universe in which this is mean, bad
41:08
Joe Biden. I mean, it's, it's transparently
41:11
political and I
41:13
am almost more offended
41:15
by it as a person with a memory
41:17
and a brain, right? Like
41:21
if you're gonna do the thing and capitulate
41:23
and kiss the ring, at least don't
41:25
gaslight us into pretending we didn't
41:28
know who was the president of the United
41:30
States in control of the government in
41:32
2020 and 2021, you know, during
41:35
that period that they're complaining about. So I
41:37
think this is where you get at this, the frustration
41:39
that a lot of people are feeling, though, is the question of, like, does that
41:41
even matter? Right? You, you go, you
41:44
rewrite history, you tell these people what they want to hear,
41:46
what they've been, you know, you, you, you, like, this is, at this
41:48
point, the CEO of the company echoing back
41:50
the party line that has been fed
41:52
to half the population in media
41:54
coverage of this, of the Murthy case, of the,
41:57
you know, the censorship industrial complex, the Twitter Files,
41:59
all of it. Again, it keeps coming
42:01
back to this question of
42:03
how do you make people remember
42:05
what was actually true in that moment at
42:07
that time that we all saw? If you
42:09
want to settle the case, you settle
42:11
the case, but this was very,
42:14
very clearly a case that they were going to win,
42:16
and that, I think, is the thing that the public really needs to
42:18
understand.
42:19
yeah, it's incredibly frustrating and
42:21
just the narrative about it that sort of suggested
42:23
that he had a real case. You know, the fact
42:25
that they're settling makes people say, well, he would
42:28
have won, because that's the only reason why Meta
42:30
would settle, which is,
42:31
And it, I'm curious, I'm
42:33
not a lawyer. My understanding is that
42:35
there's no admission of wrongdoing. There's no, like, precedent
42:37
here. But a lot of people
42:40
have filed these kinds of frivolous cases
42:42
in the past and they've all been dismissed, right? And,
42:44
and this is the kind of thing where we all, you know, we all know that but
42:47
for who filed this, they would not
42:49
have settled. and it creates
42:52
a really bizarre incentive
42:55
for more of
42:57
these lawsuits to get filed, right, which
42:59
is terrible, actually. and
43:02
it sort of shows the loopholes
43:05
in how much of our
43:07
legal system and our, the
43:09
way that these cases are handled is predicated
43:12
based on certain norms being followed,
43:14
right, the norm that you should want
43:17
a good decision, that you shouldn't settle
43:19
because somebody is imposing political
43:22
pressure. And we've just seen one of
43:24
the largest companies and an incredibly powerful
43:26
billionaire completely capitulate. And
43:29
I think that that's actually, again, terrible.
43:31
Yeah, yeah. I mean, we talked
43:33
about how RFK Jr. had sued the same companies
43:36
over the same basic issue and had been laughed
43:38
out of court. And, you know, there are other lawsuits
43:40
like this as well. but now this is just, it's
43:42
going to lead to more lawsuits and, and Mark
43:45
Zuckerberg must know that, right? I mean, I, I
43:47
guess he's, going on the assumption, well, you know, we'll
43:49
win those other lawsuits, but you know, we
43:51
need to get into the tent or whatever it is,
43:54
but it's kind of a stunning capitulation.
43:56
Yeah, that was my feeling too. We're, you know, they're
43:58
in the tent, they're doing the YMCA, they're,
44:01
Yeah.
44:02
you know, they're up on the, uh, up
44:04
on the platform. And you
44:06
know, there's that meme about, you know, Uh,
44:08
gosh, what is it? Jimmy, Jimmy Carter and his peanut
44:11
farm. I'm trying to remember the specific, he
44:13
like, how he divested
44:14
Right. When
44:15
yeah, he sold his peanut farm. And
44:17
that's like the meme for sort of like back in the olden days
44:19
when we had standards. And now you look at this and,
44:22
it used to be expected that the CEOs
44:24
of massive communication platforms, even
44:26
if they had their own political opinions and made donations,
44:29
at least tried to appear
44:31
to be neutral in some way
44:33
and, and ironically the idea
44:36
that they were not neutral was in fact the argument
44:38
that powered the Weaponization Committee
44:40
and other investigations and complaints over the years, and
44:42
now we've just hit this, uh, point
44:44
of, uh, well, actually, guess what, it's great if they do
44:46
it as long as it's for my guy.
44:48
Yeah, so I want to move on, there's
44:50
a related story. This is also in the Wall Street
44:52
Journal, where they were talking about
44:55
advertisers, and we talked about GARM and
44:57
the, and Jim Jordan's threats against them
44:59
earlier. And they're saying, you know, since
45:01
Meta and Zuckerberg made this shift
45:04
and saying, we're going to allow more hate speech
45:06
and we're not going to moderate as much
45:08
and we're going to be freer. And in that sense,
45:10
you know, how advertisers are reacting to it. And
45:13
there's been this discussion about how they
45:15
would respond, and some arguing, well,
45:17
they're going to move off. Also others saying, you know,
45:19
Meta was always a better platform for advertising,
45:21
had better ROI, better targeting, all of these
45:23
things. And so they might suck it up
45:26
and keep it going. But this, the
45:28
Wall Street Journal article struck me as really interesting
45:30
on a few accounts because it says
45:32
that yes, a bunch of companies are really worried about
45:34
the brand safety aspect, which has always been the underlying
45:36
thing. It's never been ideological, which is
45:38
the argument that people make. It was always about
45:40
brand safety. If your advertisements are appearing
45:43
next to Nazi content, that's generally
45:45
not good for your brand. And the companies, that's what
45:47
they're worried about. They're worried about the bottom line. But
45:49
this article notes that they're all still terrified
45:52
of the brand safety stuff, and that might lead them
45:54
to move away from advertising.
45:56
But at the same time, they're just
45:59
as terrified of actually talking
46:01
about it. They won't say anything publicly. And
46:03
this is a direct result of
46:06
Jim Jordan and the investigation against GARM,
46:08
and the lawsuit that Musk filed against
46:10
GARM and various advertisers. And now they're
46:13
saying, we're
46:15
not going to talk about it. If we're going to decrease advertising,
46:17
you're not going to hear about it. We're not going to even talk
46:19
about brand safety because anyone
46:21
who talks about brand safety now is accused
46:23
of, like, illegal boycotts or whatever. And that
46:26
to me is terrifying because it shows
46:28
how effective the chilling effect has been
46:31
of the investigation and the sort
46:33
of coordination between Elon
46:35
Musk and Jim Jordan.
46:36
Right. Again, the meme wins
46:38
out over the actual free association
46:40
rights of these companies, or their ability
46:42
to say what I think is
46:44
actually like a reasonable standpoint
46:46
from both a moral and a business perspective, which is
46:49
we don't want our stuff shown next to that content.
46:52
This is not a thing that we ever saw as
46:54
controversial in social media, I don't
46:56
think. Um, I'm
46:58
trying to think: at any point
47:00
over the last 10 years, I don't remember that being
47:03
something that advertisers were shy
47:05
about. They were actually quite proud of it. It
47:07
was a way to say, like, here is how
47:10
the business incentives of the platforms
47:12
intersect with the business incentives of the advertisers.
47:14
Kate Klonick's paper, The New Governors, spends
47:16
quite a bit of time on this in the opening, just explaining
47:19
that the platforms are there trying
47:21
to find essentially the most nuanced
47:23
fit that enables them both to provide the environment
47:26
the users want, and most users don't want
47:28
hate speech and violent content
47:30
and gore and all that other stuff. And then, again,
47:32
on the other side, the advertisers who have that power
47:34
too, the power to pull back, to essentially
47:36
defund and to use their power to
47:39
essentially shift where platforms choose to share their
47:41
stuff. So the challenge,
47:44
I think, for a lot of these companies is
47:46
that this is now a time to
47:49
stand by your values and
47:51
show that you have a spine. And
47:54
we're seeing the opposite. And
47:56
this is something that, you know, I
47:58
maybe feel more,
48:00
um, personally irritated by it
48:02
because, you know, obviously, I think, as
48:05
many of your listeners may know, like Stanford
48:07
caved, right? And they're defending
48:09
the court cases, and, you know, they defended us
48:11
during the investigations by Congress. But they
48:14
chose to backpedal from the First
48:16
Amendment-protected research we were doing.
48:19
And so my feeling on that was,
48:21
I understand the need for the
48:23
institutions to protect themselves
48:25
and how the institution is almost immediately
48:28
not aligned, you know, with me
48:30
in that particular case.
48:32
Where is the courage? Where is the
48:35
point at which you say, Well,
48:37
that's great that, you know, Elon Musk wants to run
48:39
his business and he can run his business as he sees
48:41
fit. But I, the theoretical CEO
48:43
of Procter and Gamble, I'm also going to run my
48:45
business as I see fit, and I don't need
48:48
to advertise on somebody else's private platform.
48:50
I don't need to give them money. I can
48:52
advertise where I want to advertise or not
48:54
at all, right? In newspapers and television
48:56
and wherever else. And so that
48:58
question is,
49:01
I guess I feel like I'm not being entirely coherent
49:04
here, but where is the trade-off between
49:06
moving away
49:08
from pain in the short term versus feeling
49:10
like you have committed to a set of corporate
49:13
values in the longer term?
49:15
Yeah. No, it's incredible.
49:17
And it would be nice to see some
49:20
company CEOs actually stand up for their principles,
49:22
but we'll see what happens.
49:25
I think we have time for one more quick story
49:27
that I wanted to cover because it actually touches on a few
49:29
different stories that we've covered in the past
49:32
and ties into the theme of this episode
49:34
as well. And this is a story from the Financial
49:36
Times about X refusing
49:39
to remove a video of
49:41
a stabbing in Australia, and this came up
49:43
when X and Elon
49:46
were fighting with Australia, which was demanding
49:48
that this particular video be removed, and
49:50
I actually had sympathy for Elon's
49:52
position that he felt that the government was demanding
49:55
that they censor content. And I thought that
49:57
there was a strong, principled free
49:59
speech reason to say no, we're not going to take down
50:01
that video based on these demands.
50:03
Now, there was a separate story about
50:06
someone who murdered some young
50:08
girls in the UK that got a lot of
50:10
attention, in which Elon
50:12
fanned the flames of it and blamed
50:14
illegal immigration and, a bunch
50:16
of other right wing nonsense and really
50:18
pushed for more and more
50:21
protests and violence in the UK.
50:23
And now it turns out that
50:25
the perpetrator of that, this
50:27
person, Axel Rudakubana, who's now been
50:30
sentenced to life in prison, they
50:32
looked at his search history and he had
50:34
deleted his entire
50:36
history, except for the six minutes
50:38
before he left to go do this attack.
50:41
He had gone to X and done
50:43
a search to look for the video
50:45
of the stabbing in Australia, the
50:47
very video that Elon had refused to take
50:49
down, and that had been his inspiration.
50:52
You know, it's clear that he had planned this going
50:54
further back, but like the final video
50:56
that he watched happens to be this one on
50:59
X. And yet Elon
51:01
is still going around trying to blame immigration
51:04
for this particular attack and still fanning
51:06
the flames. And in fact, even after
51:08
this came out, he posted something
51:10
about like, don't forget the attacks
51:13
in the UK and sort of, you know, continuing to
51:15
fan the flames on this. And it's just this story
51:17
of like, this incredible attempt by
51:19
him to sort of point
51:21
in the other direction and blame in
51:23
this case, you know, immigration or
51:25
whatever the attack target is
51:28
for these things when he was the one who
51:30
was fanning the flames for it.
51:32
It reminded me of, um, if you remember
51:34
the old ISIS conversations in like 2012,
51:37
2013 timeframe, maybe? One man's
51:39
terrorist is another man's freedom fighter; who are we,
51:41
the free speech wing of the free speech party, to make any
51:43
kind of determination about what to take down? One
51:46
of the things that was interesting to me about
51:48
that back in the day was that the
51:50
argument that, for example, you can find
51:52
ISIS recruitment videos elsewhere. You
51:55
can find them elsewhere. It's actually surprisingly
51:58
hard now to find them elsewhere,
52:00
but, you know, if you go digging, you can. And
52:02
the motivated, of course, will. But
52:05
there was this, you know, question of: do
52:07
you have to make it so easy?
52:09
Right.
52:10
and this ties into the Character AI
52:12
conversation a little bit, in that same sense
52:14
of, um, how do we think about that
52:17
question of making something
52:19
really, really easy to find versus
52:22
saying, um, our platform values
52:24
are not to do that. In this particular
52:27
case, he's made clear that the platform value
52:29
of the meme of free
52:31
speech, of making everything,
52:33
you know, available and effortless
52:36
on X, is where he's chosen
52:38
to take the platform. And
52:41
I think you are going to see the pendulum eventually
52:43
begin to swing back as users
52:45
begin to realize that we're very much
52:48
kind of in that hard reset from about 13
52:51
years ago now, and we're going to see a lot of
52:53
those same dynamics
52:55
reassert themselves in slightly different ways
52:57
now.
52:58
Yeah, yeah. No, I mean, it's,
53:00
it's a challenging situation. And,
53:02
as I said, I understood why
53:04
he was protesting the Australian
53:07
attempt to ban it. But it's just really quite
53:09
incredible how directly tied his
53:11
platform is to that particular attack.
53:13
Because he has the agency
53:16
to make that determination, right? It's
53:18
his decision to make, which means
53:20
that he owns it.
53:21
Yeah. And he should own it, but he's trying
53:23
to avoid taking any responsibility.
53:26
But with that, I think
53:28
we'll conclude. Uh, Rene, thank you
53:30
so much. This was a very fun conversation.
53:32
Thank you for having me. It was an honor to co-host.
53:35
Yes, yes. And, uh, thanks
53:37
everyone for listening as well. Ben
53:39
will be back next week, and we will
53:41
continue to discuss all the fun
53:44
things happening in the world of online
53:46
speech. And I will take
53:48
Ben's job here and remind you to please rate, review,
53:50
subscribe, tell your friends, tell your enemies,
53:53
get more people listening to the podcast. We always like that.
53:55
And with that, I will say
53:57
goodbye. Thank you.
54:01
Thanks for listening to Ctrl-Alt-Speech.
54:03
Subscribe now to get our weekly episodes
54:06
as soon as they're released. If your
54:08
company or organization is interested in sponsoring
54:10
the podcast, contact us by visiting
54:12
ctrlaltspeech.com. That's
54:15
C T R L Alt Speech dot com.
54:18
This podcast is produced with financial support
54:20
from the Future of Online Trust and Safety Fund,
54:23
a fiscally sponsored multi donor fund
54:25
at Global Impact that supports charitable
54:27
activities to build a more robust, capable,
54:29
and inclusive trust and safety ecosystem.