Episode Transcript
0:01
I'm Dr. Brian Goldman, host of the CBC
0:03
podcast, The Dose. Each week
0:05
we answer vital health questions that will help
0:07
you thrive, like, what does my mental health
0:10
have to do with my gut? How
0:12
can I prevent melanoma? How much sleep
0:14
do I really need? And how can
0:16
I manage my health without a family doctor?
0:19
I chat with the top experts to bring you
0:21
the latest evidence in plain language, all in about
0:23
20 minutes. Join The Dose on
0:25
the CBC Listen app or wherever you get your
0:28
podcasts. This
0:31
is a CBC Podcast. Hi,
0:35
I'm Nora Young. This is Spark. If
0:37
spoken language is a sort of spontaneous
0:40
creative act between speaker and listener, will
0:43
machines ever truly match our
0:45
playful linguistic games? And
0:47
in the cat and mouse game between
0:49
humans and content-moderating algorithms, we creative humans
0:51
seem to have the edge for now.
0:54
Today in an episode that first aired
0:57
in May of 2022, from the quill
0:59
to the telegraph to morphing memes on
1:01
social media, how the tech we use
1:03
changes the way we communicate. Imagine
1:10
you run into a friend. Hey, how
1:12
was that party last night? And
1:14
you start telling them the story. Well,
1:17
first of all, the cake was
1:19
so fancy. But
1:21
in that story, there's a certain word.
1:24
And if you say that word,
1:26
cake, everything you just said disappears.
1:33
That's how it works online, except with words
1:36
a lot less wholesome than cake. On
1:39
social media platforms like TikTok and
1:41
YouTube, if your content contains certain
1:43
flagged phrases or banned words, your
1:45
post could be removed altogether. The
1:48
word police aren't people, they're
1:51
content-aware algorithms. But humans
1:53
are crafty and have developed a
1:55
workaround called AlgoSpeak. AlgoSpeak
1:58
is a direct derivative of English
2:00
or whatever language so that the
2:02
machine can't censor the user. This
2:05
is Jamie Cohen. He's an assistant professor of
2:08
media studies at CUNY Queens College in New
2:10
York City. The practice
2:12
of word swapping has helped content
2:14
creators fool algorithms for some time
2:16
now. For example, people would
2:18
often write panini instead of pandemic in
2:21
their posts. Now, the
2:23
point of this algorithmic monitoring is
2:25
to catch harmful terms, things
2:27
that might spread disinformation or
2:29
racist language or violent extremism. But
2:32
because it's automated, nuance and context
2:34
is often missing and speech that
2:37
shouldn't be flagged is.
2:39
AlgoSpeak comes directly from the TikTok app
2:41
because TikTok's app is a much more content
2:43
aware system than content moderation systems
2:46
like Twitter or Facebook or so forth,
2:48
which are usually algorithmic, but don't have
2:50
a content aware system. It's more of
2:52
a line that somebody has to report
2:54
you. Okay. On TikTok, the system, if
2:56
it hears you, literally hears you or
2:58
sees you say with the captioning words
3:00
like dead or sex, the
3:03
video clip either gets demonetized or made unpublishable. The
3:05
term is "shadow banned": your clip stays online,
3:07
but it doesn't end up in the for
3:09
you page. Okay. Or it simply just gets
3:11
removed from the feed itself. And
3:14
that's for very minor infractions of language. So
3:16
what users have done is figured out ways
3:18
around speech or words. So instead
3:20
of the word sex, it would say segs, S
3:23
E G S or instead of saying dead, it
3:25
would be "un-alive." And
3:27
so these words when together make sense to the
3:29
reader, but the machine itself can't actually read them
3:31
in the way that the content moderation system
3:33
would.
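As a rough sketch of the kind of filtering being described here, this is a toy banned-word check (a hypothetical example, not TikTok's or any platform's actual system): an exact-match list catches "dead" but misses swaps like "un-alive" or "segs."

```python
# Hypothetical sketch of a naive banned-word filter; not any platform's real system.

BANNED = {"sex", "dead"}  # assumed ban list for the example

def is_flagged(caption: str) -> bool:
    """Flag a caption if any token exactly matches the ban list."""
    tokens = (word.strip(".,!?").lower() for word in caption.split())
    return any(token in BANNED for token in tokens)

print(is_flagged("my dog is dead"))       # True  - exact match is caught
print(is_flagged("my dog is un-alive"))   # False - the word swap slips past
print(is_flagged("a video about segs"))   # False - same trick
```

Okay. But is there any chance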
3:35
that the content moderation sort
3:38
of learns that people are using
3:40
"un-alive" instead of dead? Yeah.
3:43
So when machines read media,
3:45
they learn that human behavior is learned by
3:47
machine learning. And so every time you use
3:49
a computer, whether you're using your Facebook feed,
3:52
whether you're using TikTok, whether you're using anything
3:54
that has machine learning engaged with it, it
3:56
gets better. This is like the same thing
3:58
as autocorrect. You have
4:01
a slang term that you use with a
4:03
friend, that eventually autocorrect stops correcting it because
4:05
it learns that that's the word that you're
4:07
going to be using. Now the downside here
4:09
is that algorithms like TikTok are going to
4:11
learn the workarounds very simply and then what's
4:13
going to happen is we're going to have
4:15
to evade them further and eventually
4:17
language will be disguised in a way that
4:19
will be somewhat unrecognizable to a reader. You
4:23
talked a little bit about TikTok
4:25
and its For You recommendations. Can you expand
4:27
on that a little bit? How do the different ways that algorithms
4:29
work on different social platforms
4:32
shape the needs for users to disguise their
4:34
language? When I teach about algorithms in
4:36
class, we have to keep in mind that they're black boxes.
4:38
We have no real insight as to how the machine learning
4:40
was written, how the code is derived, who's
4:43
writing it, what the outcomes are. We do
4:45
know for a fact though that the algorithms
4:47
are written to sell us data, you know,
4:49
sell our data to advertisers and it works
4:51
back to sell us material. So we know
4:53
that the algorithm is a loop to kind
4:55
of move content back into our eye view
4:57
so that advertisers can get it. But unlike
4:59
Facebook and Twitter where the algorithm is designed
5:01
for attention or the attention economy, it's designed
5:03
to give the most grab for the first
5:05
few clicks that are supposed to cause engagement,
5:07
whether that's a like, a comment
5:09
or a share. TikTok's content aware
5:11
system is what's known as a time spent
5:13
algorithm. So it puts content in front of
5:16
your face that makes you not want to
5:18
leave the app. It makes you want to
5:20
keep watching. So the goal of TikTok is
5:22
an infinite, is infinity that eventually
5:24
you're just not going to leave your screen. You're
5:26
just going to keep thumbing up the next clip.
5:28
Okay. But users have become very aware as to
5:30
how the content aware system works. And they know
5:33
that if they spend too much time on any
5:35
given clip, the algorithm shifts to show them more
5:37
of that type of content. Whereas they know if
5:39
they skip a clip very quickly, the algorithm learns
5:41
they don't like that type of content. So we
5:44
train the machine in real time on
5:46
all of our social media. We're constantly training it.
5:48
This is what causes echo chambers, our inability to
5:50
get outside of our own feedback loop. Okay. But
5:52
when it comes to TikTok, the For You page is the primary way people interact with that app; people don't go to user profiles, they go specifically to the feed. Taylor
6:02
Lorenz explicates this in her Washington Post piece,
6:04
but the For You page is the dream of a
6:06
TikToker. That's where everybody gets their views. So
6:08
you have to make the content for the
6:10
For You page rather than for another user.
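As a toy illustration of what a "time spent" signal might look like (an assumed sketch, not TikTok's actual algorithm): clips you linger on push their topic up the ranking, and clips you skip push it down, which is the real-time training described above.

```python
# Toy illustration only: a "time spent" style signal, not any platform's real code.
from collections import defaultdict

topic_scores = defaultdict(float)  # how much the feed currently favors each topic

def record_view(topic: str, watch_fraction: float) -> None:
    """watch_fraction: share of the clip watched (0.0 = instant skip, 1.0 = watched fully)."""
    # Long watches raise a topic's score; quick skips lower it.
    topic_scores[topic] += watch_fraction - 0.3

def ranked_topics() -> list:
    """Topics the feed would serve next, most favored first."""
    return sorted(topic_scores, key=topic_scores.get, reverse=True)

record_view("cooking", 1.0)    # watched to the end
record_view("cooking", 0.9)    # watched almost all of it
record_view("politics", 0.05)  # skipped almost immediately
print(ranked_topics())         # ['cooking', 'politics']
```

The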
6:13
design of AlgoSpeak seems in a way
6:15
to hinge on a kind of insider versus
6:18
outsider binary, right? You need to
6:20
get the language to understand what's going
6:22
on. Does this have implications
6:24
for who can engage on platforms like
6:26
TikTok? Yes, this causes
6:28
a nuance issue. So as we know,
6:31
from a journalist's or
6:33
an educational perspective, nuance is a hot
6:35
commodity. You know, it's not easy
6:37
for a reader to get meta
6:39
text or understand reading between the lines. And
6:42
so when we move to a place where
6:44
we're using AlgoSpeak, it does require a savvy
6:46
use of the internet. It's no longer like,
6:48
oh, just your basic access point. It actually
6:50
moves the access point further away from the
6:52
basic user, and it creates an inside and an outside.
6:54
This is where like digital divides
6:57
actually really emerge. Like digital divides
6:59
of the early 2000s
7:01
were like a big thing. Everybody was talking about
7:03
the digital natives and digital immigrants, people who couldn't
7:06
speak to one another because of generational gaps. Now
7:08
everyone has access, but the esoteric nature
7:11
of the web, the specific use is
7:13
going to become inaccessible to some users.
7:15
And we're going to see a divide,
7:18
whether it's generationally or technically, from how we
7:21
use those social media apps. And
7:23
I mean, more broadly, what does it
7:26
mean for overall accessibility on social media
7:28
platforms if words in
7:30
a way have more than one meaning? Yeah,
7:32
this is where memes are really the... My
7:35
primary subject of study is memes. And the
7:37
reason I study memes is because they are
7:39
reductionist media. They take big concepts and they
7:41
make them very small and shareable. The downside
7:44
is by reducing them, you eliminate a lot
7:46
of the context. You remove a lot of
7:48
the understanding of it because it makes you...
7:50
A reader of a meme is required to
7:52
understand the reference. They have to know what
7:54
it's referring to in order to even understand
7:56
it. AlgoSpeak does the same thing.
7:58
It will literally... cause us to speak
8:01
a mimetic language where culture is being shared.
8:03
But if you're outside the culture, it's almost
8:06
another language. It would almost be something foreign
8:08
to a user. In that
8:10
case, we're actually going to create these very
8:13
interesting and niche communities that will have
8:15
their own sub-languages on social media platforms.
8:18
Which, I mean, is interesting, but it also
8:20
kind of makes you question the whole point
8:22
of these places is they're supposed to be...
8:25
You know, we can argue whether this is actually
8:27
the case, but they're supposed to be these virtual
8:29
town squares where everyone can meet and have these
8:31
conversations together. Right. Yeah, and
8:33
it causes enforced tribalism. Another
8:35
problem with enforced tribalism in this case
8:37
isn't so much the organic nature of
8:40
that. I think part of using social
8:42
media or even a public square is that you have communities. You
8:44
have sub-communities and you have affinity groups,
8:46
people who like what you like. But
8:48
in this case, in the specific
8:50
algo-speak case, it's machines making those
8:52
communities happen, rather than people organically.
8:54
So instead of a bottom-up grouping,
8:56
like a subreddit, for example, a
8:58
subreddit is a bottom-up community that
9:00
creates their own affinity group. In
9:03
algo-speak, the machine itself is causing people to
9:05
hide from a machine splintering the
9:07
community into different factions that may or may
9:10
not be able to retrieve the original community
9:13
back. Does algo-speak actually work? Do you
9:15
get the sense that creators substituting
9:17
phrases or using codified words
9:19
can and do actually avoid detection by
9:21
the algorithm? To an extent, it does
9:24
work. Yeah. And we notice
9:26
this because heavy users, heavy producers on
9:28
TikTok have figured out their view count
9:30
does change based on how much they
9:32
engage with algo-speak. Now
9:35
what that means to the platform is questionable.
9:37
Being that it's a black box, we're not
9:39
sure how the algorithm works, or not to
9:41
be literally conspiratorial, we don't know if it's
9:44
by design. In other words, maybe algo-speak is
9:46
part of the process of the way that
9:48
social media operates. Maybe TikTok
9:50
likes this. Maybe TikTok appreciates this
9:53
change in behavior. We're not really sure what the
9:56
outcome is of that. What we do know is
9:58
that it does work. You can hide. from
10:00
language or detection like the example
10:02
that Taylor Lorenz brings up is "le dollar bean,"
10:04
which is instead of typing the
10:06
word lesbian into the caption it's L-E
10:09
dollar sign B-N, and the
10:11
machine reader, the text-to-speech voice, says "le dollar bean"
10:14
because it can't actually read the dollar
10:16
sign so people have adapted now to
10:18
say that word out loud so it's
10:20
first typed to be hidden from the
10:22
algorithm then said to be hidden from
10:24
the algorithm and then adopted and codified
10:26
into modern language. Which is
10:28
remarkably creative and clever right like there's
10:30
something really quite
10:32
charming about it. I know what you mean about the
10:35
sinister aspect of the machine but there is an
10:37
aspect of it that's really quite charming. I
10:40
do think so I this is where
10:42
I think we're destined to think of social
10:44
media in a cynical way we're really in
10:46
this space where we really understand that it's
10:48
not to our benefit. We don't really have
10:50
control over the systems we don't have
10:52
a way of as we just saw
10:54
with like Elon Musk we don't really have a say
10:56
in how these deals get done we have no idea
10:58
how these up systems operate on the other hand the
11:01
systems are made up of people all of
11:03
this is user generated content it is people
11:06
connecting with one another and so to
11:08
go against that overt cynicism is to
11:11
say you know this is a way of
11:13
making community in a very interesting way and
11:15
it's endearing and earnest to me to
11:17
realize that people want to talk about things and
11:19
it doesn't matter what the machine says they're going
11:21
to talk about them anyway. We
11:35
speak with memes. From the Spark archives. My
11:37
name is Kenyatta Cheese. I am co-creator
11:39
of the internet meme database know your
11:42
meme. There's this mistake that
11:44
a lot of folks who come from a more
11:46
traditional media space make that media and social media
11:48
are same thing like egg and eggplant are right
11:50
they have the same exact word but completely different
11:52
things. One is content the other
11:55
one's conversation and the reason
11:57
why we want to be able to make
12:00
those things malleable. The reason why we want to
12:02
put our own mark on that meme and share
12:04
our own versions and experiences is because we're using it
12:06
for conversation. My
12:12
name is Limor Shifman and the name
12:14
of the book is Memes in Digital Culture.
12:16
The meme concept is not originally
12:18
a digital concept. It was coined
12:21
in 1976 by Richard Dawkins and
12:23
it describes small units of culture
12:26
that spread from person to
12:28
person by copying or imitation.
12:30
Academics argued about this concept
12:32
for ages but then the
12:34
internet came. Memes are
12:37
the building blocks of contemporary
12:39
digital culture and they
12:41
express deep fears,
12:45
motivations and social structures
12:48
and you cannot ignore them just because
12:50
they seem trivial. CBC
13:06
Radio. I'm Nora
13:08
Young and today we're talking about how digital
13:11
technology is changing the way we talk online.
13:14
Right now my guest is Jamie Cohen,
13:16
a digital culture expert and assistant professor
13:18
at CUNY Queens College. So
13:20
far we've heard how AlgoSpeak
13:22
helps users evade censorship on platforms
13:24
like TikTok by swapping out banned
13:26
words for coded words that imply
13:29
the meaning provided you're in the
13:31
know and we've talked about
13:33
how that can lead to accessibility barriers. But
13:36
beyond AlgoSpeak, are there other
13:38
examples of how technology has shaped
13:40
and changed our communication patterns? Yeah,
13:43
memes. So visual culture communicating graphically is
13:45
probably our biggest step towards a change
13:48
in language. And so around the early
13:50
2010s people moved out of the space
13:52
of lolcats, where the cat was
13:54
saying the word and then the doge
13:57
meme where the dog was saying the
13:59
words, to referential media that
14:01
changed our language. So we've
14:03
been speaking memetically, at least
14:05
culturally, memetically, digitally. Memes
14:08
go all the way back to ancient times,
14:10
but digital memes or digital internet language is
14:13
really from about 2014 to
14:15
present, where we've been encoding
14:17
our language in ways that
14:19
hide from both people, censors,
14:22
and outside communities. Can
14:24
you expand on that a little bit? What
14:26
is it about the nature of memes? The
14:31
nature of memes is actually similar to the
14:33
nature of emojis. When we want to share
14:36
an emotion that's unusable in text, like sarcasm,
14:38
it's impossible to say sarcasm in a text space
14:40
because people might read it wrong. Again, back to
14:42
nuance. When you use
14:45
an emoji, you could fill in
14:47
the gaps of your emotions. It's a graphical
14:49
way of communicating, and the user usually understands
14:51
it. Emojis are fairly easy because they're representative
14:53
of faces. A meme
14:55
is a replacement for the
14:58
emotion. Many examples of
15:00
memes will be sort of like algo speak.
15:02
You take a word that replaces another word
15:04
and you expect the people to get it.
15:06
I'll give you a very interesting coded example.
15:09
For a while, there was this meme that was
15:11
about misspelling how to talk about food. Somebody
15:14
would say, I'm going to make gourmet
15:16
food, but it was spelled G-A-R-E-M-A-Y. And
15:19
then it said, "bone app the teeth," instead of bon
15:21
appétit. And
15:23
then eventually people said, "bone atrophy."
15:26
It sounded like "bone app the teeth." And then
15:28
eventually it said, I'm going to make gourmet food
15:30
osteoporosis. So
15:33
osteoporosis became the stand-in for "bone atrophy," which was
15:36
a stand-in for "bone app the teeth." And
15:38
so it was this leveling referential memetic language
15:40
that could only be understood if and
15:43
only if you've seen the origin memes from
15:46
this. In other words, that's a four layer
15:48
referential removal from its original meaning. And
15:50
if you think about algo speak in Twitter, we're on
15:52
our way to that. The other thing that I've
15:54
heard about in this context is leet speak. Can you tell me
15:57
a bit about leet speak? Yeah. LeetSpeak
16:00
was the original version of hiding, and
16:02
that was more word replacement. It was
16:04
early internet users that wanted to hide
16:06
from content moderators, who were humans. And
16:09
the way that humans would do it
16:11
at scale is they would highlight bulk
16:13
code and delete curse words and so
16:16
forth. But if you replace letters with
16:18
numbers, like leet: "elite speak," so "elite speak" would
16:20
be leetspeak, l33t sp34k, so
16:22
it's numbers that look like letters. And
16:25
now when you highlight that code, it
16:27
just doesn't show up. It's just simply
16:29
invisible. But left to right reading
16:32
still allowed you to read the language. You
16:34
could put it together. It's kind of like that thing on the
16:36
internet when people do that joke where you just put the first
16:38
letter and the last letter of any word, you could still read
16:40
the word in context. LeetSpeak did that.
16:43
That wasn't the way that memetic language
16:45
or algo speak works, which is complete
16:47
word replacement, like words that mean something
16:49
completely different, like being made un-alive.
16:52
Like those are two words for one
16:54
word, but that's a word replacement. It's
16:56
completely removed from its original syntactical use.
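As an illustration of that difference, here is a small hypothetical sketch (not any real moderation tool): number-for-letter swaps defeat a literal text search, but a simple normalization table can undo them, whereas a full word replacement like "un-alive" leaves nothing to normalize.

```python
# Illustrative only: leet-style character swaps beat a literal search,
# but a normalization table recovers them; a full word swap does not leave this trail.

LEET = str.maketrans({"3": "e", "4": "a", "1": "l", "0": "o", "5": "s", "7": "t"})

def normalize(text: str) -> str:
    """Map common number-for-letter substitutions back to letters."""
    return text.lower().translate(LEET)

post = "l33t sp34k"
print("speak" in post)              # False: the disguised word evades a literal match
print("speak" in normalize(post))   # True:  normalizing the characters exposes it
# A replacement like "un-alive" for "dead" carries no such trail to normalize.
```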
16:59
You know, this kind of automated moderation
17:02
may be intended to block harmful language
17:04
or harmful content, but it
17:06
can also suppress, you know, really
17:08
important conversations around things like sexism or
17:11
racism or assault. Do you
17:13
think that these algorithms could eventually have the
17:15
nuance of a human moderator for
17:18
images or for text where it's able to discern,
17:20
okay, this is a valid conversation, this is not
17:22
a valid conversation? Wow, that is one of the
17:25
best questions ever. This
17:27
is the discussion I consistently have in classrooms about
17:29
content moderation with Gen Z. And
17:31
what we talk about is bad faith users and
17:33
good faith users. And can a machine understand the
17:36
difference between good faith and bad faith? And
17:38
in good faith, it means you're literally willing to have
17:40
a discussion. And in bad faith, it means you're trying
17:42
to troll somebody, you're making somebody feel bad. Can
17:45
a machine detect that type of nuance? That
17:47
is something that I don't know yet because
17:51
these machines aren't designed for good conversations
17:53
they're designed for advertising. They're designed to
17:55
sell us product. So I think
17:57
it would require in many ways us to go back in
17:59
time. and rewrite the original code. I
18:01
mean, to be honest, the biggest problem
18:04
we have with many social networks, specifically
18:06
Facebook, is the original algorithm was designed
18:08
to rank women. So that's still in
18:10
Facebook, that ranking software, that sexist ranking
18:12
software is still running behind the scenes.
18:15
So it does curb good conversations. And
18:17
also just not to get too dark
18:20
here, but it does enable bad actors
18:22
to use coded language to enable trolling,
18:24
which we call dog whistle, dog whistle
18:26
politics, which is encoding bad words or
18:29
coding bad behavior so that it's
18:31
accidentally shared by mainstream media or shared
18:33
by public users. So many
18:35
people are using algo speak to do
18:37
this, but the machines themselves can't detect
18:39
good faith versus bad faith. And I
18:42
hope someday we figure out
18:44
that part, if we're going to keep going with
18:47
machine learning, we should try to make sure that's
18:49
part of it. Yeah. Do
18:51
you think this kind of coded language is always
18:53
going to have a place? Like, for
18:55
example, in the context of a repressive
18:57
government that that's censoring political dissent? Yeah.
19:00
Unfortunately, in a rightward
19:02
or authoritarian-leaning future, we're
19:04
moving towards a place where we're going to have to
19:06
be a bit more subversive or covert in our language.
19:08
And we actually see the meme, the reason I bring
19:10
up memes is in China,
19:13
Tiananmen Square memes, or tank man
19:15
memes are invisible, the machine will actually detect those
19:17
and delete them. And people have figured out how
19:19
to how to get those types of information back
19:21
into the public by creating memes to go around
19:23
it. And this is why in the end, I
19:26
think algo speak is actually somewhat
19:28
short term in terms of language structure, I actually
19:30
think we're going to be speaking much more graphically.
19:33
Just generally. Just generally, we're going
19:35
to be sharing images far more than
19:38
language as a subversive or covert technique,
19:40
or even as to put it earnestly,
19:42
as a really cute way of organizing
19:44
communities. Of course, you
19:47
know, on the darker side, there are online
19:49
communities that are using euphemism to promote things
19:51
like hate speech or incite violence. Do
19:53
you think that algo speak makes
19:56
it harder to track radicalized
19:58
online content? 100%.
20:01
I'm actually, I've been doing talks and conversations with
20:03
the Trust and Safety Collective quite often. And
20:05
our big conversation is exactly what you
20:07
asked, which is how do you detect
20:09
radicalization, or radicalization in process, or even
20:11
grooming in terms of how
20:14
language operates, because young people speak
20:16
at a different level of language comprehension than adults.
20:18
And the ability to manipulate young people and
20:21
their trust and safety of the platform is
20:24
also used covertly by bad actors in
20:26
that sense. So trust and safety and
20:29
content moderators are when we use the internet, they
20:31
are the internet. And I think we have to put a lot of
20:33
understanding that a lot of social media is just
20:36
about how we moderate that content. And I know
20:38
a lot of people think, oh, they have a
20:40
too much of a heavy hand and this and
20:42
that. But we're humans: we can get hurt, we can be in pain, and we can end up
20:47
in places that cause trauma by using these social
20:49
media platforms. We learned this in the Facebook papers.
20:52
Content moderators are designed to keep us safe,
20:54
not to censor us. This
20:56
is sort of a philosophical question. But do
20:58
you think there are downsides to letting
21:01
the technology influence how we as
21:03
humans modify language? Yeah,
21:06
there's there's a definite downside. And this is it's
21:08
philosophical. So the answer is going to be philosophical,
21:10
which is if we're trained to allow machines to
21:12
teach us to make new language, then
21:15
we're also being trained to allow other systems to
21:17
do that as well. So by
21:19
accident, we're being trained in forms or
21:21
functions that are outside of social media
21:23
that we're not aware of maybe at
21:26
this moment, it may only take one
21:28
really charismatic authoritarian leader to run a
21:30
similar system in real life to change
21:32
our language. It's interesting, you know, we're
21:34
kind of trapped in this sort
21:36
of cat and mouse game where you know,
21:38
the words
21:40
that we put out there are used to train the
21:42
systems, but then the systems are also training us at
21:45
the same time. Yes, correct. Yeah. So do you think
21:47
overall tech is helping or hurting our capacity to create
21:49
that sense of shared meaning? I'd have to because we
21:51
can't go back in time and turn it off, I
21:53
have to say it helps. I think one of the
21:55
things we have to keep in mind is there's a
21:57
responsible way of using tech, a responsible
21:59
way of using technology. But tech
22:01
is not going away, so I believe it's
22:03
an overall benefit to humans. I think the
22:05
opposite at this point would be
22:07
far worse than anything else because we've already
22:09
engaged with that. We couldn't just turn it
22:11
off or unfaze it. So I do believe
22:14
it's a net benefit to human expression, to
22:16
human connectivity. I just think that
22:18
we have to – I think the machines
22:20
are designing a space where we're thinking about
22:22
tiny things. We're thinking about very small moments
22:24
where we're taking small scandals and making them the
22:26
daily issue, when we should really take a
22:28
step back and think about the larger aspects
22:30
of it, like anything from climate change
22:32
to social justice to the idea of
22:35
just inequality in general, just to have
22:37
that in our mind. We don't have to engage with
22:39
it, but rather than thinking about the daily – what
22:41
they call the main character of the day, we
22:44
could start thinking about the issues at hand that actually
22:46
make these systems work. Jamie, thanks so much for your
22:48
insights on this. Thank you so much for having me.
22:51
Jamie Cohen is an assistant professor at
22:53
CUNY Queens College in the Department of
22:55
Media Studies.
23:26
From the Spark archives. I'm Tim Wu,
23:28
professor at Columbia University. You
23:31
famously coined the term net neutrality. How does
23:33
net neutrality fit into these questions about the
23:35
Internet? The most important thing about net neutrality
23:38
is it is a tool for challenging monopolies.
23:40
It suggests that a startup can always go
23:42
to the Internet, try to get started, and
23:44
challenge whoever is in power. You
23:47
know, if you think about somebody like William
23:49
Gibson, Hello, I'm William Gibson. He doesn't predict
23:51
the web, but he coins the term cyberspace.
23:54
Myself, I think of cyberspace
23:56
as very much a
23:58
heritage term. But
24:01
I also, you know, by the same
24:03
token, I think of the real world
24:06
as very much a heritage term. What
24:09
we thought of as cyberspace colonized
24:12
and then effectively became,
24:14
has become what we
24:16
think of as the
24:18
real world. My
24:21
name is Baratunde Thurston and all around
24:23
strange but awesome guy. Basically,
24:25
I got really frustrated with all these different ways
24:27
like, get at me over here and be over
24:30
there and do this and all these different buttons.
24:32
I thought, why don't I just make up my
24:34
own term that encapsulates it all? Friend,
24:37
fan, subscribe, and follow. You
24:39
add all that up and
24:41
you get friend-scralo. I
24:55
own you. Though it was
24:57
originally an internet typo, pwn is
24:59
now a common video game term. Game
25:03
D'ar. I'm Celeste
25:05
McQuarver and that's my word. I coined the
25:07
term Game D'ar. Well, Game
25:10
D'ar is kind of like gaydar. You know,
25:12
It takes a gamer to know a gamer.
25:14
It's really cool in fact because I can
25:16
just be in a room with a crowd of
25:18
people and I can just instinctively tell, you know,
25:20
who plays Tetris and Pac-Man. Can only gamers have
25:22
Game D'ar? I mean, I'm just thinking that even
25:24
some straight people have gaydar. If
25:27
they're exposed to it, I suppose they
25:29
can pick up Game D'ar. Hello,
25:43
I'm Jess Milton. For 15 years,
25:45
I produced the Vinyl Cafe with the late,
25:48
great Stuart McLean. Every week, more than
25:50
2 million people tuned in to hear funny,
25:52
fictional, feel-good stories about Dave and his family.
25:54
We're excited to welcome you back to the
25:56
warm and welcoming world of the Vinyl Cafe
25:58
with our new podcast, Backstage at
26:01
the Vinyl Cafe. Each week
26:03
we'll share two hilarious stories by Stuart and for
26:05
the first time ever I'll tell you what it
26:07
was like behind the scenes. Subscribe
26:09
for free wherever you get your podcasts.
26:13
I'm Nora Young and this is an episode of Spark that
26:15
first aired in May 2022 all about how
26:18
the technologies we use are reshaping
26:20
language. Think
26:22
about how much we communicate with others
26:24
on an average day. We gesture to a
26:26
merging car, we nod to a stranger,
26:28
we talk to a co-worker, message
26:31
a friend, reply to a comment online.
26:33
We're constantly connecting. But
26:36
how did we get here? How did humans
26:38
develop shared meaning and language to begin
26:40
with? Without a time machine
26:43
we don't really know scientifically, however, we
26:46
do have some ideas about how it might
26:48
have started. Hi, I'm
26:50
Morten Christiansen, I'm a psychologist at
26:52
Cornell University, working on the cognitive
26:54
science of language. I'm also at
26:57
Aarhus University in Denmark and
26:59
a senior scientist at the Haskins Labs
27:01
in Connecticut. Morten is also
27:03
the co-author of the book The Language
27:06
Game: How Improvisation Created Language and
27:08
Changed the World. We
27:12
start out the book with a
27:14
little historical vignette about a meeting
27:16
between Captain Cook and his
27:18
crew of the HMS Endeavour and
27:21
a band of Hausch indigenous
27:23
people in the Bay of
27:25
Good Success on Tierra del
27:27
Fuego in January 1769. Soon
27:29
after dropping anchor, Cook and his
27:33
men, they went ashore and they were
27:35
soon met by the Hausch. At
27:38
first the Hausch sort of
27:40
retreated, but then two
27:42
of Cook's men went forward on their
27:44
own and then the same happened
27:46
with two of the Hausch people. Very interestingly
27:49
what they did is that they held
27:51
out in front of them sticks and
27:54
then showed them and then threw them aside
27:57
and Cook and his men took that as an indication.
28:00
that they were friendly. And indeed,
28:02
that was true. But of course,
28:04
they had no common language and
28:06
inhabited utterly different worlds. But
28:08
nonetheless, and this is crucial, they were
28:11
able to communicate through what we think
28:13
of as a high-stakes game of cross-cultural
28:15
charades. And
28:19
so what we are suggesting in our
28:21
book is that this historical meeting illustrates
28:23
how language might have emerged through charades-
28:25
like interactions between early humans. And
28:28
so from that perspective, language is like a game
28:30
of charades, where what we are trying to do is
28:33
to improvise, to provide clues to each
28:35
other to get our ideas across what
28:37
we want to say. And this means
28:39
that language is first and foremost a
28:41
product of cultural evolution, rather
28:44
than being built in sort of a language instinct
28:46
or something like that. Because
28:48
I think we tend to have this idea that, well,
28:51
you know, language is based on a
28:53
set of rules. We know dictionary
28:55
definitions, we know grammatical rules, and
28:57
we just implement them. But what
28:59
is your view suggesting about how
29:01
we use language alternatively? Well,
29:03
the way we're looking at language
29:06
is as a sort of fundamentally improvised way
29:08
of communicating. And we contrast this
29:10
with the sort of notion that it's relying on
29:12
rules, or like a fixed code that allow us
29:14
to sort of bottle up our thoughts into a
29:16
stream of words, and then these
29:18
are decoded and unpacked by
29:21
the listener. But instead, what we
29:23
are suggesting is that language is
29:25
just like when we play charades, we
29:27
communicate as best as we can using
29:29
hints and clues, which are created and
29:32
presented through the powers of
29:35
human ingenuity. And so
29:37
when it comes to rules, we are suggesting
29:39
that rules are really something that emerge gradually
29:41
as various kinds of patterns of how we
29:43
use language get overlaid on top of each
29:45
other, rather than being there sort of a
29:47
priori, as it were. Your
29:50
book really stresses the important role that spontaneity plays
29:52
in language creation. Can you tell me a bit
29:54
more about what makes that so essential? Well,
29:58
so what we're trying to do when we are
30:00
communicating is really just trying to be
30:02
in the moment and trying to sort of
30:04
indicate what it is we are trying to
30:06
get across in the same way when we
30:08
are playing charades. So in a game of
30:10
charades, what you're trying to do is that
30:13
you're looking at the people you're playing with
30:15
and you're trying to get say the title
30:17
of a movie or something else across and
30:19
so what you're doing is you're paying attention
30:21
to what your audience is understanding what they
30:23
are perceiving in what you're trying to do
30:25
and you sort of adapt what you're trying
30:27
to sort of gesture to
30:29
your audience and this is what
30:31
we are suggesting is exactly what we're going
30:33
on in language. We are improvising in all
30:35
sort of clever ways in order to try
30:37
to get our point across and in doing
30:40
that we are relying not only on the
30:42
gestures themselves but also what we know about
30:44
our audience, what we know about the world
30:46
and sort of what we can take for
30:48
granted and what we sort of think maybe
30:50
our audience doesn't know and then we try
30:52
to put all that together spontaneously to collaborate
30:54
in order to understand one another. Yeah
30:57
and building off this idea of charades, it really is
30:59
this collaborative process where the role of
31:01
the listener is much more important than
31:03
just being a sort of passive recipient
31:05
of information, right? Exactly.
31:08
There's a tendency to sort of treat
31:10
language understanding as if we are sort
31:12
of kind of like a computer where
31:15
we're sort of waiting until somebody has
31:17
said something and then we sort of
31:19
spring interaction, decoding it and so on.
31:21
But our suggestion is that just like
31:24
in charades, we're sort of actually engaging
31:26
with the person talking in order trying
31:28
to figure out and collaborate to generate
31:30
a sort of common understanding of whatever
31:32
the topic is. But
31:35
it can't be all just sort of linguistic chaos, right?
31:37
There must be some sort of order
31:39
as well. Well, so
31:42
here we are referring to what
31:44
in the science of complex systems
31:46
is called self-organization. So
31:49
what we are suggesting is that over time,
31:51
as we are playing the same game of
31:53
charades over and over again, we might reuse
31:55
certain gestures that we used before
31:57
and they might become stylized over time.
32:00
And what we are suggesting is
32:02
that something similar is happening in
32:04
language as well and there's a
32:06
whole sort of subfield of linguistics
32:08
that's called grammaticalization that's all about
32:10
how that actually happens in language
32:12
also. So what
32:14
happens is that patterns are not there beforehand
32:16
but they build up over time as we
32:18
sort of communicate with each other over and
32:21
over again. And the kinds of
32:23
patterns vary from linguistic group to
32:25
linguistic group. Exactly. So
32:28
when we look across the world, there are about 7,000 languages. We
32:31
see an amazing diversity in ways of
32:33
expressing ourselves. So there are some languages
32:35
like the First
32:37
Nations language in Canada called Straits
32:40
Salish where it seems
32:42
that they don't have a distinction between nouns and
32:44
verbs. They have a different
32:46
way of dividing up how they communicate. Of course
32:48
we also have sign languages that don't even use
32:51
spoken words at all, yet they're able to
32:53
communicate just as well as we are in
32:55
spoken language. So the
32:57
variety of human languages is
33:00
just amazing and it actually
33:02
there's a major contrast between
33:04
human languages and human communication
33:06
systems and animal communication systems.
33:08
So when you look across
33:10
the sort of the animal kingdom
33:12
there's an amazing variety
33:14
in ways that animal
33:17
communicate. So there are bacteria that use
33:19
chemical sensing to communicate. There are the
33:21
cuttlefish that uses amazing visual displays and
33:23
you have bees doing the waggle dance to indicate where nectar
33:28
might be found and the quality of
33:30
it and you have monkeys that use
33:32
different kinds of calls to indicate whether
33:35
there's this kind of predator or that
33:37
kind of predator. But
33:39
when you look within a
33:41
particular species of animal you
33:43
find that there's very little
33:46
variation in how these animals communicate
33:48
across individuals. But when
33:50
you look across human languages we just
33:52
see this amazing and astonishing variety of
33:54
ways in which we use different
33:57
ways of putting words together or
33:59
different ways of using signs in order
34:01
to communicate. And this is
34:03
really a major distinction between human language
34:05
and other animal communication systems. And this
34:07
is what gives us the flexibility to
34:09
express ourselves no matter what kind of
34:11
culture we live in and what kind
34:13
of environment we live in. So
34:16
when it comes to this kind of spontaneity and
34:18
importance of context in understanding language,
34:20
could you give me an example
34:22
of that in English, of how
34:24
it's contextual and how it varies? So
34:27
consider the word break. A breakdown
34:29
is terrible. So if you're breaking down with
34:32
your car, that's obviously terrible. A breakup is
34:34
also not fun for anyone. But
34:36
a breakthrough is excellent. So if we're having
34:38
a breakthrough, we are very excited. And of
34:40
course, more generally, breaking things is considered to
34:42
be bad. But if we're in a sort
34:44
of a long meeting, a long day meeting,
34:47
maybe a day-long meeting, having a break is
34:49
excellent, especially if we get coffee, of course.
34:51
And so here you have different contexts, sort
34:53
of changes what we mean by the same
34:55
word. In this case, the word break. I
35:18
mean, there has been speculation that the
35:20
advent of the internet has meant that
35:22
English is a much more dominant force and
35:24
that more niche languages are going to fade away.
35:26
Do you have thoughts on that? Well,
35:29
there certainly has been some pressure towards
35:32
English becoming sort of a lingua franca
35:36
in some cases. But on the other hand,
35:38
also, the internet has allowed a small group
35:40
of individuals to sort of band together and
35:43
sort of maintain their languages that may not
35:45
be a sort of majority language. So I
35:47
think it goes in both directions. And of
35:49
course, we as a society
35:51
can be supporting, say, First Nation
35:53
languages or other languages that might
35:55
otherwise be in danger of
35:58
dying out. And I think there are some movements in
36:00
that direction. But certainly it is
36:02
the case that across the world there
36:04
are many languages that are near dying
36:07
out and thus we
36:09
need to both document them but also
36:12
support the people who are speaking those
36:14
languages. Because once those languages disappear, it's
36:16
also a part of that culture that
36:18
disappears as well. So the language is
36:21
not just a language, but it's also part of a
36:23
culture and it carries with it much
36:26
information about that culture and how people
36:28
interact with one another. And that's of
36:30
course an incredible and terrible loss
36:33
each time a language that has
36:36
existed dies out. From
36:47
the Spark language preservation archives.
36:50
Who am I without my language and
36:54
where do I come from without my language?
36:56
It is who we are as W̱SÁNEĆ people.
36:59
It ties into
37:01
our laws and our beliefs and teachings. Asa
37:04
Renee Sampson, just a lattice, an'akkou saint'n'ch.
37:08
Hul nuk sant. Ies en
37:10
kunas asla. Che'is en, dana
37:12
squeal sin chasen. Huluk sin
37:14
kunas nah asla hila. I
37:17
tat anuk sant dana kunay ex
37:19
sasqueal. And I'm First
37:21
Nations and I'm really happy to
37:23
be here. And I'm starting to
37:25
learn my SENĆOŦEN talk
37:28
and that I'm starting to learn my
37:30
teachings and my language. That's what I
37:32
said. There's a
37:34
lot of push for
37:37
language revitalization in our community.
37:40
There are word lists on there that you can go to that have
37:43
different multimedia, has a picture and it
37:45
has the word
37:47
in SENĆOŦEN. Many
37:49
different other First Nations people are at
37:51
a different level. Some of them have
37:53
a lot of audio clips and
37:56
they have songs. So you go
37:58
on there and you can basically have
38:00
access anywhere in the world
38:03
to your own language. Learning
38:06
my language is learning who I am and
38:09
it's important for me to learn my
38:11
language for my children's sake because they
38:13
will grow up and they will
38:16
know who they are and who they're connected to. My
38:23
name is Sqachaltin or
38:25
Khalsilim. Those are my ancestral
38:27
names that I carry from my ancestors.
38:31
My friends that speak the language, we text to each other
38:33
in the language. And then
38:35
the funny thing like English does is that
38:37
people started abbreviating things and finding shortcuts to
38:39
say Squamish words. So just like LOL
38:42
and things like that how English would do those
38:44
things. People started doing that with Squamish
38:47
but they were like completely Squamish concepts
38:49
and Squamish words that they were shortening
38:51
and communicating instead of actually writing out the whole word.
38:54
My goal isn't to get to a place in
38:56
the future where young people can go to downtown
38:59
Vancouver and order a hot dog in
39:01
Squamish. You can do that
39:04
in English. We're always going to have English. We're never going to
39:06
really escape from English. The text messaging
39:08
stuff though and being able to communicate to
39:10
these new ways, it's a way that I
39:12
think for myself and for the young people
39:14
is that we're trying to find avenues to
39:16
reclaim our identity, reclaim our language, reclaim our
39:18
little place in this world that gives us
39:20
a sense of pride and strength and
39:23
encouragement. You're
39:41
listening to Spark from CBC Radio.
39:44
I'm Nora Young and today we're talking about
39:46
how technology is driving changes in the way
39:49
we communicate. Right now, my
39:51
guest is Morten Christiansen, a cognitive scientist
39:53
and co-author of the book The Language
39:55
Game. So far we've
39:57
heard about how spontaneity and collaboration are
39:59
foundational pillars of human language. But
40:02
what did the introduction of digital communication mean
40:04
for the evolution of language?
40:07
In general, across human history, of course,
40:10
we have come up with many different
40:12
kinds of technologies, and that has influenced
40:14
our way of communicating with one another
40:16
in a variety of ways.
40:19
Just to first consider a different
40:21
technology, namely writing systems. So writing
40:23
systems allows us to more easily transmit
40:25
knowledge from one generation to another. And
40:27
that, of course, is a major advantage.
40:29
But it also introduces a certain kind
40:31
of conservatism in how we interact with
40:33
one another, if we sort of adhere
40:35
too slavishly to what was written before,
40:38
rather than going with the flow of how we talk
40:40
now. And oftentimes, when new
40:42
technologies have been introduced, they
40:44
have been decried, or their impact
40:47
has been decried, especially by previous generations. So
40:49
for example, going back to
40:51
writing, when the Gutenberg printing press was invented,
40:54
there was actually quite a lot of concern
40:56
about this spread of literacy, because they were
40:58
concerned about what it might do to the
41:00
mind of the general population. But of course,
41:02
today, we're sort of continuously emphasizing
41:05
kids should be reading, we should all be reading more,
41:07
and so on. And of course, when we
41:09
look at sort of modern technologies, when texting
41:11
became popular, especially amongst young people, there's a
41:13
lot of consternation about what they would do
41:16
to their language skills. And some
41:18
of these language mavens, they were sort
41:20
of very concerned that they were just
41:22
completely, that language skill would degenerate, and
41:24
so on, because they used new abbreviations
41:26
like LOL or JK, and
41:28
so on. But of course, the actual
41:30
advent of texting is actually a nice
41:33
example of what we think of as
41:35
cultural evolution of language in action. Cultural
41:37
evolution of language is how we think
41:39
language has evolved. And cultural evolution is
41:42
constrained by whatever medium we're using for
41:44
communication. So for example, spoken language is
41:46
limited by how we can move our
41:49
mouth around, our memory for
41:51
sound sequences, and so on and so forth.
41:53
But the same is true when it comes
41:55
to texting. So initially, when we started texting,
41:57
you have to remember that we had these sort of tiny cell phones. They had nine buttons, with typically three letters going to each button. So it's really
42:06
cumbersome to type out long messages.
42:08
So what happened is that these abbreviations
42:11
and other ways of doing
42:13
shorthand, these were really
42:15
adaptations by the texting language, as it
42:17
were, to those constraints in
42:19
order to make it easier to
42:21
communicate.
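To make that constraint concrete, here is a rough sketch (the keypad layout is the standard multi-tap one; the code itself is only an illustration, not from the book) of how many key presses a message cost on those phones, which is the pressure that produced the shorthand.

```python
# Rough sketch: key presses needed on a classic multi-tap keypad (2=abc ... 9=wxyz),
# to show why shorthand like "cu l8r" emerged. Illustrative only.

KEYPAD = {'2': 'abc', '3': 'def', '4': 'ghi', '5': 'jkl',
          '6': 'mno', '7': 'pqrs', '8': 'tuv', '9': 'wxyz'}

# A letter costs (its position on the key + 1) presses; a digit costs one press.
PRESSES = {ch: i + 1 for letters in KEYPAD.values() for i, ch in enumerate(letters)}
PRESSES.update({d: 1 for d in '0123456789'})

def taps(message: str) -> int:
    """Total key presses, ignoring spaces and punctuation."""
    return sum(PRESSES.get(ch, 0) for ch in message.lower())

print(taps("see you later"))  # 26 presses spelled out
print(taps("cu l8r"))         # 12 presses as shorthand
```

But of course, once we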
42:23
got our modern day smartphones with
42:25
their virtual keyboards and predictive completion,
42:28
we sort of gone back to
42:30
spelling out words fully again. And
42:32
so the texting language sort of kind of reverted
42:34
to a more standard written format. Now, of course,
42:37
with the kind of added flourishes of things like
42:39
emojis and all sort of stuff. So language sort
42:41
of adapts to the technology. And I think that's
42:43
sort of a nice example of at least a
42:46
positive one of how it has happened in this
42:48
case here. Yeah, and certainly indicates a lot
42:50
of the kind of playfulness that you talk
42:52
about in your book as well. You
42:54
say in your book that successful communication relies on
42:56
this kind of shared knowledge and a shared context.
42:59
But I'm wondering if online
43:01
culture makes that more difficult if we
43:03
just think about how quickly cultural touchstones
43:05
move in and out of fashion. So
43:08
I think on the one hand, having
43:10
the internet has sort of provided
43:12
us with more input from all places
43:14
in the world. So that's a good
43:16
thing. So it can
43:18
create a sort of more common understanding
43:21
across different cultures to some degree. But of course,
43:23
it's also possible that you create these bubbles that
43:25
we sort of tend to stick within and that
43:27
could become sort of quite narrow minded within those
43:30
bottles. So I think there's a lot of different
43:32
variations and exactly how it's gonna play out in
43:34
the long term I think might be a little
43:36
hard to. There's a lot of different variations and
43:38
exactly how it's gonna play out in the long
43:40
term I think might be a little hard to
43:43
know at this point. But at least the
43:45
way we are looking at language as
43:47
a fundamentally collaborative endeavors does
43:49
suggest that in the long term, language is
43:51
about a collaboration. And if we take that
43:53
to heart, then maybe that can also help
43:56
us deal with things like finding ways around
43:59
fake news and all these other problems
44:01
that are sort of plaguing us at
44:03
the moment. And I think that's sort
44:05
of an illustrative example with regard
44:07
to technology from a few years ago. So
44:10
you probably remember that at
44:12
some point, there was all these spam
44:14
email. And at that time,
44:16
there was talk that at some point in the
44:19
future, email would be pretty much impossible because
44:21
we would just be inundated with spam. But
44:24
spam filters have become incredibly good. So now
44:26
we still get some spam, but not as
44:28
much as we used to. So hopefully, there
44:30
might be some solution in the future to dealing with
44:32
things like fake news and so on going
44:35
forward as well. At
44:37
one point in the book, you write that, quote,
44:39
words do not have stable meanings. They're tools
44:42
used in the moment, as we've been discussing. Overall,
44:45
in this episode, we've been looking at how users
44:47
on social media platforms have started
44:49
using codified words or algo speak
44:51
to keep algorithms from detecting and
44:54
censoring their content. For example, using
44:56
a phrase like leg booty instead
44:58
of LGBTQ. Is that
45:00
the kind of instability of meaning that you're talking about,
45:02
or does that fit into how you see the instability
45:04
of meanings? Well,
45:06
I think it's been said that the sort
45:08
of meaning of words are in constant flux.
45:10
And as we as a
45:13
society or as a culture evolve,
45:16
the words meaning will evolve with us in
45:18
that way. So I think that's a nice
45:21
actually example of how meaning is
45:23
never stable, but continues to change across time.
45:25
Again, sort of trying to fit into the
45:27
constraints of the moment. In
45:30
this case here, getting around being censored
45:32
by some of these bots and so on.
45:35
Mm-hmm. From the
45:42
Spark archives: speech police.
45:45
Mignon Fogarty. Grammar Girl. There's
45:48
just not an excuse for not using a capital
45:50
letter at the beginning of a sentence or not
45:52
capitalizing someone's name when you're writing a
45:54
professional email message and you're typing on
45:57
a regular keyboard. It
45:59
is kind of real. And when you're doing
46:01
the electronic equivalent of passing a note
46:03
to a friend, it's different. You know,
46:05
I think that when you don't capitalize
46:08
someone's name in a work email, you're essentially
46:11
saying, I don't care about you enough to
46:13
hit the shift key. Here
46:18
at Spark, we've had a number of conversations
46:20
and arguments about the best way to end
46:23
an email. Yes, arguments.
46:25
So today, my tech PSA is how
46:28
to sign off without sounding weird, or
46:30
forced, or British. Michelle! What,
46:33
Nora? I'm just calling
46:35
it as I see it. And as I
46:37
see it, when you sign off with cheers,
46:39
or best, you sound like you're from across
46:42
the pond, or clinking wine glasses at a
46:44
dinner party. Come on. Nora also uses cheers.
46:46
In fact, so many people do it that
46:49
I'm starting to wonder if it's a colonial
46:51
tic we can't shake. So
46:53
I think your safe is to end an
46:55
email with thanks. Yeah, just
46:57
thanks. It's simple, to the
47:00
point, polite and friendly. You
47:08
spent 30 years learning the difference between its
47:10
and it's, and then you end up fighting
47:12
with the phone over it. Iver
47:15
Tossen. I'm a senior product manager at Buzzfeed,
47:17
and I hate typing on my phone. I
47:20
hate fighting with auto correct. I hate the
47:22
typos. I hate the whole experience. And it
47:24
surprises me that this is not
47:26
an area where there's not more active
47:28
innovation. The extent to which technology and
47:31
predictive software is surging ahead in some
47:33
levels seems out of
47:35
sync with this extremely pedestrian
47:38
fight that everybody is fighting
47:40
on a daily basis. Especially,
47:42
low but high to anyone who tries to
47:44
use Canadian spelling. Yes. Where
47:46
not only do you have the cultural colossus of
47:48
the United States dragging you in one direction, but
47:50
you also have other software. We'll be
47:52
tied you if you try and use we'll be tied you and see
47:55
what they've got a lot of corrects to. Well,
47:57
very much so. It's almost
47:59
the same, like the phone has an idea
48:01
of how you should be speaking. And if you
48:03
deviate from that norm, it tries to drag you
48:05
back to it. Mm-hmm. I'm
48:15
Nora Young, and today on Spark, we're exploring
48:17
the way digital tech has changed the
48:19
way we communicate, for good and bad.
48:22
Right now, we're talking about the evolution
48:24
of language with my guest, Morten Christiansen,
48:26
a cognitive scientist and co-author of the
48:28
book, The Language Game. You
48:32
spent some time in the book talking about AI,
48:35
and you write that computers can't match
48:37
the complexity of human linguistics. But
48:39
with artificial intelligence, mastering chess,
48:41
writing poems in some cases, responding
48:44
to our natural language queries from us on
48:47
Alexa or Siri, why
48:49
couldn't it also learn to
48:51
truly replicate human language patterns?
48:55
Well, in part because
48:57
they don't really understand anything.
49:00
I mean, computers today and AI are
49:02
amazing. I mean, they can steer spacecrafts,
49:05
and they can play chess or go
49:07
or any kind of computer games with
49:10
amazing skills. However,
49:12
when it comes to these language models, and
49:14
they are also incredible. So the ability to
49:16
do things like Google Translate or other kinds
49:18
of language systems, they can clearly create all
49:21
sort of complex language. But they
49:23
don't really understand what they're doing. What they're
49:25
relying on is taking little bits and pieces,
49:27
putting them together in a way that
49:30
makes it seem like it's true human
49:32
language. But they're not really interacting with
49:34
one another. So it's more like they're essentially
49:37
engaging in monologue. And one of the
49:39
things that we argue is that it's
49:41
very important for us to view language
49:43
as dialogue rather than monologue. And
49:45
in a sense, these bots or
49:47
AI language system, they're really just
49:50
playing monologue. And that's a major limitation,
49:52
which means that they can't really go
49:55
beyond that. And so clearly today, at
49:57
least, we haven't taught computers to play
49:59
charades. And I think we don't
50:01
really have to worry about sort of computers taking
50:03
over language as such until
50:06
we see them playing charades. Now once they
50:08
– if and when they do that, then
50:10
we might want to be worried.
50:14
Indeed. So
50:17
I mean, I suppose from a more philosophical level,
50:19
we could say that it's because AI systems
50:22
don't have subjectivity. They're not bringing that
50:24
subjectivity to the moment of communication, I
50:26
guess. No.
50:28
I mean, they're simply not doing
50:31
that. I mean, they're able to digest billions
50:33
of words in a way that none of
50:35
us could ever do. So
50:37
I mean, they're much more well-read, so to speak, than
50:39
any of us. But yet, they
50:41
don't really understand what
50:43
the text, how the text is
50:46
sort of relating to the world. Nonetheless, they
50:48
can do amazing things. So here at
50:50
Cornell University, I'm in the middle of
50:53
a project where we are using one
50:55
of these big language models to look
50:57
at poetry. And we are having them
50:59
generate poetry. And they can produce
51:01
poetry. That's actually quite interesting. So we can
51:03
ask it to do a poem
51:06
in the style of Emily Dickinson or
51:09
Shakespeare or Walt Whitman or some other
51:11
poet. And they can do that quite
51:13
well. So one of the things that we are currently
51:15
trying to figure out is that how
51:18
good are they? So we're going
51:21
to have both undergraduates trying to produce poetry
51:23
based on a prompt, say two lines of
51:25
a poem, and they have to continue it.
51:27
And we ask sort of
51:29
one of these language models to do that, too.
51:31
And then we're going to do sort of a
51:33
poetry Turing test where we ask other people to judge,
51:36
was this generated by a person or machine? And
51:40
we'll see. We don't have the answer yet. So maybe we'll
51:42
talk at some later point, and then I can tell you
51:45
what the answer is. But they can
51:47
clearly do this. But of course, they don't
51:49
have any kind of emotion or anything like
51:51
that. So they're doing that by essentially having
51:53
read loads of poetry. So they've been
51:55
trained on poetry, books of poetry, and so on.
51:57
And they can use that to generate new poems.
52:00
But yeah, they don't really understand the
52:03
underlying meaning, the emotions, the culture that
52:05
goes into writing these poems. So they're
52:07
going to be hopefully missing that. So
52:10
we are interested in how they might
52:12
be able to generate poetry. So
52:15
does that sort
52:17
of limitation on artificial intelligence mean
52:20
that things like content moderation
52:22
are always going to be a problem because
52:24
AI systems are going to
52:26
have trouble dealing with the flexibility of language. They're going
52:28
to have trouble understanding when something is sarcasm
52:30
and when it's not, for example? Probably
52:33
yes. And also, humans are quite ingenious.
52:35
So they have, you know, as soon
52:37
as we come up with one sort of algorithm
52:40
to sort of try to deal with content moderation,
52:42
humans are going to come up with a clever
52:44
way of trying to get around it. But
52:46
of course, the question is whether in the
52:48
long term the algorithm sort of can keep
52:51
ahead of human ingenuity. We don't know.
52:53
But because language is so flexible and
52:55
humans keep on coming up with new
52:57
ways of expressing themselves, it's going to
52:59
be hard to prevent everything,
53:01
to prevent new ways
53:04
of doing things from occurring. Yeah. And
53:07
just finally, Morten, looking into the
53:09
future, how do you envision our styles of communication
53:11
will change? Well, we've certainly
53:14
seen throughout human history that every time sort
53:16
of new technologies have come in, it's going
53:18
to change how we communicate. So like with
53:20
reading or texting, as we talked about
53:23
earlier. So likely there will be changes in
53:25
how we communicate. And as, you know,
53:28
smartphones, computers, etc, etc, becomes more
53:30
and more integrated in our lives,
53:32
it's likely to affect how we
53:34
communicate. Exactly how that change
53:36
will come sort of through is
53:39
uncertain, at least in my mind. I'm not sure
53:41
what will happen. Of course, there are all sorts of science
53:43
fiction scenarios and so on. But I don't really
53:45
know. But what I think everyone
53:48
can be confident about is that it will
53:50
change in some way, that we will adapt
53:52
to the new technologies just as they also
53:54
will to some degree adapt to us. Mm
53:57
hmm. Thanks so much for your
53:59
insights on this. So thank you. Thanks for
54:01
having me. Morten Christiansen is a
54:03
cognitive scientist and co-author of the book
54:06
The Language Game. You've
54:11
been listening to Spark. The show is
54:14
made by Michelle Parise, Adam Killick, McKenna
54:16
Hadley Burke, and me Nora Young. And
54:18
by Jamie Cohen and Morten Christiansen. And
54:21
from the Spark Archives, Kenyatta Cheese,
54:23
Limor Shifman, Tim Wu, William Gibson,
54:26
Baratunde Thurston, Celeste McCorder,
54:28
Renee Sampson, K