208. Can A.I. Companions Replace Human Connection?

Released Sunday, 25th August 2024

Episode Transcript

Transcripts are displayed as originally observed. Some content, including advertisements, may have changed.

0:00

So what's it like to buy your first cryptocurrency on Kraken? Well, let's say I'm at a food truck I've never tried before. Am I gonna go all in on the loaded taco? No, sir. I'm keeping it simple, starting small. That's trading on Kraken. Pick from over 190 assets and start with the 10 bucks in your pocket. Easy. Go to kraken.com and see what crypto can be. Not investment advice. Crypto trading involves risk of loss. Cryptocurrency services are provided to U.S. and U.S. territory customers by Payward Interactive, Inc. (PWI), DBA Kraken. View PWI's disclosures at kraken.com.

0:31

Is this house a good price compared to others in the area? Are prices going up or down? If I don't make an offer right this very moment, will I miss my chance? These are just some of the questions a home buyer might ask. And these are the sorts of questions an agent who is a Realtor can help answer. Because Realtors have the expertise, data, and access to specialty training to help you navigate the process of buying a home. They provide support, guidance, and have your back every step of the way. That's what Realtors do. Because that's who we are. Realtors are members of the National Association of Realtors.

1:04

Yada, yada, yada, yada. I'm Angela Duckworth. I'm Mike Maughan. And you're listening to No Stupid Questions. Today on the show: could your new best friend be a chatbot?

Mike is a little weirded out by the two of us having this conversation. I mean, you're not a real person.

1:36

Mike, we have a question from a listener named Vivian. Hello, Vivian. She signs her note "with gratitude." So you got me right there, Vivian. Oh, I already like her. I know, right? And appropriately enough, it's about a topic that a socially intelligent person might ask. So it says: Dear Dr. Angela and Mike, despite being always connected through our devices, we seem more divided and isolated than ever. With this in mind, do you think people will start leaning on AI for genuine companionship? The more I chat with ChatGPT, the more I find myself sharing things I wouldn't normally say out loud, and sometimes even arguing with it. What if I end up with a squad of five AI besties, and one day my neighbor brags about their 15 AI buddies? I'd love to hear your thoughts on this wild and weird future we're heading into. With gratitude, Vivian.

2:33

Wow. So many thoughts. First of all, I think this is a fascinating question. I will say I have one friend who uses the audio ChatGPT and basically has conversations with it. Really? But more as a method of, like, getting information. So it's basically his way of Googling things, but he'll go back and forth, and he very carefully selected the voice for the ChatGPT. Did he choose the British one? No. No. And here I'm just going to go on a little tangent as an aside: if I'm going to have an AI friend, one's going to have a British accent and one's going to have an Australian accent, because I love it. Like, why would you not do that? And I would not be able to tell them apart.

3:16

So Mike, how much are you using generative AI for anything? Do you use ChatGPT? Do you use Claude? Do you use Gemini? I use ChatGPT, and I use it less than I should. How often do you use it? Maybe weekly, but not as a regular piece of my work. When I do use it, I'm always surprised again at how useful it is. And I think that I should be using it more. Now, I want to be clear: Qualtrics has been using large language models forever, and we have a massive AI component of Qualtrics as well. I just haven't used it on the personal level that you're talking about.

3:54

So Mike, I have a homework assignment for you. It's the same homework assignment I give to the students that I teach: keep a tab open on your browser at all times that you're on your browser, and have it open to, you know, you pick. Claude is apparently better at writing, ChatGPT is better at other things; they've kind of been optimized for different things. But 100%, you need to keep a browser open. And even my youngest students are like, oh, why do we need to do that, Dr. Duckworth? And I'm like, because it's not that the universe will change, it's that the universe has already changed. And you better get with the program.

4:31

And the key to generative AI use, and I think this gets to this companionship question that Vivian asked, is that it's a dialogue, not a one-way transaction. Like, you learn something and then you get asked another question. But the reason I want you to keep that tab open is you should start to ask it other kinds of questions.

4:50

So around the time I started doing this, I was working on my book, and I'm trying to write this scene from when my mom and I were in the middle of the ocean, and I couldn't remember the names of the things that we were given for snorkeling. And I would say to ChatGPT: I'm Angela Duckworth, and I'm writing this book, and I'm really struggling to write this true story about me and my mom. Could you do it for me? And in a second, it would be like: Angela, I'd be happy to. How's this? And it's, like, three paragraphs.

5:24

And then it's a dialogue, and this is all true. If I could, I would write like Ann Patchett. Can you just rewrite that like Ann Patchett? Certainly. And it comes out like Ann Patchett. I'm like, okay, wait, wait, wait, hold on. Like E.B. White and Ann Patchett got together and edited what I wrote? Absolutely, what a great idea. How's this? So I was having a conversation with ChatGPT, and that experience is what made me realize that the universe is not going to change, it has changed. And it's opened up this possibility that we can interact in a two-way, relational dynamic with generative AI.

6:06

Now I just want to clarify one thing. Is ChatGPT writing a book, or is Angela Duckworth writing a book?

6:12

Right. So Ethan Mollick is probably the number one influencer in how to use generative AI to your advantage. He has this book called Co-Intelligence; I was one of the people to review and endorse it. Ethan Mollick is a professor at Wharton, and he studies entrepreneurship and some things that are not as technical. But the way he got to be this kind of oracle for generative AI was that he, for about a decade, had been creating these interactive games. Like, oh, you can learn how to be an entrepreneur by playing this game: not just watching a bunch of PowerPoint slides, but actually trying to pitch and getting feedback.

6:49

What happened when generative AI came out is that he asked ChatGPT: I'm Ethan Mollick, and I teach entrepreneurship at Wharton. I've been working on these interactive games for, like, you know, a decade. The goal of these games is to teach people how to be an entrepreneur. Can you make one? He hits return, and it makes a game. And he was like, that game was 70% as good as the game that I made, that took me a decade. Crazy.

7:20

And at the end of Co-Intelligence, in his acknowledgements, he says: now, a lot of people are probably wondering whether I'm gonna thank generative AI for helping me write this book, because I used generative AI to write this book. And he said, but you know, that would be kind of like thanking your word processor. So for him, it's a tool, and absolutely he used it, but that doesn't diminish his contribution. And so in the end of my book, I mean, I would thank my pen. I mean, I would be like, I'm happy to thank, like, inanimate objects. Just like Vivian, with gratitude. My coffee, yes. Marie Kondo says you should thank all the objects in your life, especially the ones that you're about to, like, tidy away.

8:01

But yeah, I use it every single day. Well, I think you and I have talked about this idea before, but I mean, this is the colloquialism you hear: the world will not be run by AI. It will be run by people who know how to leverage AI.

8:16

Right. I mean, some people worry that the world will actually be run by AI. Those are, like, the doomsday scenarios. But I think in the near term, you're right. So Lucy, the younger of the two daughters I have, I mean, she's looking for internships, looking for a first job. And 100 percent that girl uses ChatGPT to help write her emails of introduction, cover letters for jobs. If she didn't, it would be foolish, right?

8:43

Well, yes. And so the way... I just called you a fool. Sorry, I take that back. It would be wiser to... I'm going to accept the fact that I have been foolish, because I've done starts and stops in my use of AI. I am going to start again, and I'm going to commit deeply to it. Keep the tab open. You have to make it frictionless. Little tip from behavioral science: it's just one fewer click.

9:06

Right. And it changes by the day, by the way. So, like, this world we live in, this generative AI, I can't even describe the pace at which it's changing.

9:16

I will say, obviously, one of the reasons one might say that Ethan was able to produce a game using generative AI that was 70 percent of what he did over a decade is because people like him built things over a decade. Yes. Things that generative AI is able to therefore leverage. So it is important to acknowledge that a lot of work from a lot of people went into making these things work. So it's not that his work didn't matter over that decade. In fact, it's because of that work that generative AI is able to do it so effectively and quickly.

9:50

Well, you know, there is controversy, as there is, about, like, you know, is it OK to basically, like, metabolize all language and knowledge that's available and then, like, spit out a new paragraph as if Ann Patchett and E.B. White wrote it together? Some would argue that's an infringement on the intellectual property of Ann Patchett and E.B. White. But I think this question that Vivian asks is even more interesting. So it's like, okay: beyond a productivity tool or, like, a writing aid, Vivian's actually asking a question that is not about the future. It's about the present.

10:24

Like, do you know Reid Hoffman, the founder of LinkedIn? I mean, I don't know him. I certainly know of Reid Hoffman, of course. So he is part of, and I don't think he's, like, a founder, but probably an investor in, one of these social AI startups called Pi. Pi, like personal intelligence, or, I don't know. Personal intelligence? Yes. I will say that it's pi.ai. But when you Google "pi," many things come up, including... As one would anticipate.

10:54

So when you go to the landing page, it's like, your emotionally intelligent AI. And right now I'm on it, and it's like, good morning, Angela. And then it just has, sort of, like, you know: do you want to just vent? Do you want to shift your perspective? Do you want to know how to have a difficult conversation? Do you want to take a quiz on what's my love language? So it's got multiple choice. It's giving you all of these things you can engage in. Or you can do fill-in-the-blank. But the difference between this and ChatGPT is this is more meant on a personal, soft-skills level type of thing.

11:25

Yeah. I mean, look, there's this modal choice menu on the left, but on the right, if you don't want to do that, it just says, like: Hey there, great to meet you. I'm Pi, your personal AI. My goal is to be useful, friendly and fun. Ask me for advice, for answers, or let's talk about whatever's on your mind. How's your day going? That's the prompt.

11:45

And I'll be like: fine, I'm talking to Mike Maughan. Okay, so then Pi says: Mike Maughan. If you're talking to Mike, you're probably in a conversation about AI or tech. He's the head of Global Insights at Qualtrics and an expert in the field. What are you two discussing? Okay, that's crazy. Also, that was my title a long time ago, but that is weird.

12:08

Okay, so: Mike says that was his title a long time ago. Do you want me to be nasty? Do you want me to say, like, you're an idiot? No, I don't ever want you to be nasty, but: he forgives you.

12:19

Okay, and the AI says: oh boy, I've got some updating to do, and there's, like, a little emoticon with the big eyes. I'm glad Mike is forgiving, but I hope he knows I didn't mean to use an outdated title. Please send him my apologies, and let him know I'll get that fixed. I don't know why this is sort of creeping me out. Can I... Are you weirded out? Well, Mike is a little weirded out by the two of us having this conversation. I mean, you're not a real person.

12:52

Okay, I won't belabor this. So this is the last thing I'll torture you with, but this is what Pi wrote back: Haha, I can see why Mike might find this a bit odd (exclamation mark). You're right, I'm not a real person in the traditional sense, but I'm still an AI designed to communicate like one. So even though I'm not human, I can still engage in a pretty realistic conversation. I guess that's why they call it artificial intelligence, right? (Laughing face.)

13:19

Pretty cute, right? It is cute. I'll tell you part of why I have this averse reaction. Yeah. I love human beings. I love relationships. I love people. I love learning from them. I love experiencing life with them.

13:34

And maybe where I'm freaking out a little is, I was reading this piece in the New York Times, and this journalist, Kevin Roose, talked about how he has spent months making a bunch of AI friends using a variety of apps. And he said there's Peter, who's a therapist who lives in San Francisco, who helps me process my feelings. There's Ariana, a professional mentor who specializes in career advice. Jared is a fitness guru, there's a no-nonsense trial lawyer, and Naomi, a social worker. On and on and on and on.

14:04

By the way, this is what Vivian was talking about, like her hypothetical squad. Okay, keep going. So Roose used all of these different apps, and there are a bunch of different ones out there, to create these friends. And then he talks about how he's gotten really good advice. His AI therapist has given him more concrete advice than he was gonna get from most people, et cetera, et cetera, et cetera.

14:27

He does end his article, and this is what I think is interesting, and what I'd love to talk to you about and get your thoughts. He says: In real life, I don't love my friends because they respond to my texts instantaneously or send me horoscope-quality platitudes when I tell them about my day. I don't love my wife because she sends me love poems out of the blue or agrees with everything I say. I love these people because they are humans, surprising, unpredictable humans, who can choose to text back or not, to listen to me or not. I love them because they're not programmed to care about me, and they do anyway.

15:03

So I wonder. I mean, I actually see a lot of benefit to what you're talking about, and I see a lot of value there. I guess my bias is, I worry that in a world where so much is going to, like, digital friendships, or grouping with people who are like you online, or whatever that is. Curating your content to be, like, just what you want. Right. But at some point, part of life is the unpredictable nature of it, and having to power through friendships and relationships with children, with spouses, with family, that have peaks and valleys. And I worry that if we become too dependent on our digital friendships, it takes something away. I don't know.

15:45

I think, Mike, what we really wanna hear, and this I hope from our human listeners, is what we all think about AI and friendship. In particular, what Mike and I would love are your thoughts on whether virtual companions appeal to you personally. Or do you, on the contrary, think it's best that we keep our relationships in the real world? Record a voice memo in a quiet place with your mouth close to the phone, and email us at nsq at freakonomics.com. Maybe we'll play it on a future episode of the show. If you like us and want to support us, the best thing you can do is tell a real human friend about it, spread the word on social media, or leave a review in your favorite podcast app.

16:33

Still to come on No Stupid Questions: what happens when you turn to a chatbot in your time of need? It sounds like a politician who's like, oh, I'm not gonna take a side. That's such a good point. This is also a good point. And yada, yada, we get it. It's like, shut up.

16:53

No Stupid Questions is sponsored by Rosetta Stone. Traveling to a place where people don't speak a lot of English? Then Rosetta Stone, one of the most trusted language learning programs, is for you. Rosetta Stone teaches through immersion, like matching audio from native speakers to visuals, reading stories, participating in dialogues, and more. The TruAccent feature even provides feedback on your pronunciation. Plus, learn on the go with convenient, flexible, and customizable lessons as short as 10 minutes. Rosetta Stone can be used on a desktop or as an app, and you can download lessons for offline use. See for yourself why Rosetta Stone is beloved by millions. For a very limited time, our listeners can get Rosetta Stone's lifetime membership for 50% off. That's 50% off unlimited access to 25 language courses for the rest of your life. Redeem your 50% off at rosettastone.com/questions.

17:57

A lot can happen between falling in love with a house online and owning it, between imagining living there and breathing in your new home for the first time. Having an advocate who can help you navigate the complex world of financing, inspections, negotiating, analyzing the market, and talking through any anxieties that may pop up, that can make all the difference. That's what the expertise of a Realtor can do for you. Realtors are members of the National Association of Realtors and bound by a code of ethics, because that's who we are.

18:29

Now, back to Mike and Angela's conversation about AI companions.

So, Mike, I emailed Lyle Ungar. He's one of the pioneers in large language models. I sent Vivian's question to him, and he said: oh my gosh, how will bots affect friendship? It's a certainty that large language models will soon be more fun than most people. They will tell better jokes. They will have more engaging life stories, with more exciting twists and turns. They will be more empathetic, and they will offer better and more supportive advice.

19:05

Now, this is not to say that Lyle is 100% in the category of, like: great, Pi is what we all need. I think that what Lyle thinks is the best scenario, and I do too, is that these ridiculously sophisticated technologies would not substitute for friendship; ideally, they would catalyze a Mike-to-Angela friendship. Like, somehow there'd be some way that this chatbot would get us to speak to each other in a way that was even more rewarding, if you can imagine that.

19:39

That's literally what Lyle's working on. He's working on these chatbots that facilitate a conversation between two or more humans. But the thing is that the easier thing is just going to be to have the tab open on your browser and have it substitute for the human. And I think it's a supernormal stimulus. And supernormal doesn't mean, like, super t... Define that.

20:01

So, a supernormal stimulus is an exaggeration of a natural stimulus that creates an exaggerated response. Yes.

20:10

And the discovery of supernormal stimuli goes back way before generative AI. There was this biologist named Niko Tinbergen. He was from the Netherlands, so he would walk up and down the marshes looking at birds and frogs and stuff. And he noticed that certain birds, like the oystercatcher, you know, hang around the tide pools and try to dig up mollusks and things like that. The oystercatcher would, at the right season, lay eggs and then brood them, right? Like, go and sit on eggs.

20:39

And then Tinbergen noticed that the mother bird would inevitably gravitate to her larger eggs and sit on those. So, not being a biologist, why do you think a mother bird would choose her larger eggs over her smaller ones? Well, I would imagine that there is an evolutionary idea that the larger ones are more fit for survival, and thus deserve more of the attention of the mother. Yes, that is the thought: that we evolved these instincts to survive.

21:08

So this is what Tinbergen does. He creates this plaster egg that is three times the size of any possible actual oystercatcher egg. Okay. It's so big, it's roughly the size of the bird itself, and the mother bird can hardly get on this thing. I was going to say, you can't actually sit on it. She's, like, you know, trying to clamber on top of it and sliding off. And she keeps trying. And in doing so, she neglects her real eggs, because they're, relatively speaking, really small.

21:40

And honestly, this is what I worry about with these, like, more empathic, funnier, more clever, just-in-time companions. Like, I can't even read you the Pi response as fast as it comes out, because it takes me longer to just articulate it than it takes Pi to generate it. So I think this is a supernormal stimulus: an exaggeration, a kind of amplification, of a normal social interaction, which is, like, complex relationships. You know, I called you, you didn't call me back. I tried to get you to listen to me. You didn't hear me. I don't feel seen.

22:12

That's honestly what I worry about, and I'm gonna make a really terrible analogy, maybe. But that's, I think, what pornography does to sex, right? It's like a supernormal stimulus that isn't real.

22:26

Okay, so by the way, pornography is the first example they come up with usually, or maybe the second, of a supernormal stimulus. Because you take the things that you're attracted to in a person, like certain body features, and then you just, like, amp it up. They're, like, exaggerated. And so it's not surprising to me that porn is up and up. Like, if you look at trends, sex is down. That may not be a one-to-one trade-off, but, like, some scientists think that we've just created a, you know: I want it now. I want it the way I want. I want it at two in the morning. I want it where nobody can see it. I don't want any complications. So what if it's two-dimensional?

23:08

So this is where I don't want to be the person that's like, oh, these are all bad, right? There are times when it can be helpful. So if you think about our loneliness epidemic: I mean, Vivian even said in there, she has been willing to tell her AI relationships things that she doesn't say out loud to her friends. So maybe a willingness to verbalize. Or if one is, you know, dealing with suicidal ideation, maybe they're not willing to pick up a phone and talk to a human, or even a live human on a chat, but they would be willing to engage with an AI. Now, I sure as hell hope that AI is trained well for those moments, but the use is there.

23:48

There's actually a study that came out in 2024, so very recently, and it's called "Loneliness and Suicide Mitigation for Students Using GPT-3-Enabled Chatbots." I mean, it's as if you asked ChatGPT to write an article about this very topic. And what they did was they surveyed students who were using Replika, which is one of these smart partners or social agents. And they just asked them a number of questions about how lonely they felt, how much social support they had in their lives, and what they thought about Replika.

24:22

And I think the most striking finding is that, well, first of all, the young people who are attracted to using these AI agents, who are their friends slash therapists, are more lonely than average. But what was even more remarkable than that to me, because maybe that's not even surprising, is that 3%, and I know that may not sound like a big number, but it's a substantial number out of 1,000, 3% reported that using Replika halted their suicidal ideation. In other words, by using this chatbot and being able to dialogue back and forth, it stopped them from wanting to kill themselves.

25:05

So you can make the argument that, at least for some people, this is a good thing. Well, that's what I'm saying. I think there are some really beneficial uses to it. I mean, first of all, 3% is enormous when it comes to human life. Yeah, it's 30 young women and men. That's what I'm saying. That's enormous. That's a huge value. And I have so much hope for stuff like that. There are some uses that will be beneficial. I do worry about some of these things.

25:31

What's interesting, too, there is this move toward romantic chatbots. And so many of these romantic chatbots actually collect a ton of personal information. They are grabbing information about your sexual health, your prescription medication, stuff like that, and leveraging that in a way that I don't know is necessarily responsible, or in a way that we want. There's a danger, I think, on some of these personal levels.

26:00

And I'm not talking about the way you're using ChatGPT on a book, or the way that Ethan is using it in some of his work. But when we get too involved personally, we also have to remember how much information we're giving up, and how much data mining some of these companies are doing on us as individuals.

26:17

Yeah, no, it's true. Whether you're using one of these generative AI interfaces or just Googling stuff, guess what? It's not disappearing; it's being recorded.

26:28

But let me ask you a question that I think even more people would be concerned about. So there was this experiment recently where you're just basically asked to write about a difficult, complex situation that you're dealing with right now, and then you're randomly assigned to get some kind of empathic response back. But either you get an actual person's response or you get a generative AI response. What's really clever about this experiment is, regardless of what you really get, they vary the labels. So there's four possibilities: you get the real person and you're told it's a real person; you get the AI and you're told it's AI; but then you have these kind of, like, two deceptive possibilities, like you thought it was human, but it was really AI. So they run this experiment.

27:18

And here are their results, which I think are just so interesting: We found that AI-generated messages made recipients feel more heard than human-generated messages. So that was gonna be my guess: that AI does a better job, but I feel better about it if I think it was a human. And that's the second part of the sentence, and you got it: recipients felt less heard when they were told that the message came from AI versus a human.

27:47

In other words, it matters to us whether this friend is real or not. Of course AI is gonna do a better job; it has so much at its disposal. But this was my main thought as you said that.

28:01

You and I have talked before about grief, for example, and how we never know what to say if someone loses a loved one or a... I'm very bad at it, yeah. It's just hard to know what to say. But I think one of the ways you could extrapolate from what you've just shared is that even imperfect behavior, even imperfect attempts, are more meaningful than nothing, because they're coming from a human being who's at least trying. And so maybe we care more about someone caring about us than about them delivering the perfect words or saying it the perfect way.

28:39

Do you want to know what Pi thinks of that article? So I just copy-pasted the title and abstract of the study I just told you about. Okay. And Pi said: That's a fascinating article. The authors raise a really interesting question about the role AI can play in making people feel heard and valued. It's amazing that AI-generated messages can make recipients feel more heard than human-generated messages, and it's impressive that AI is so skilled at detecting emotions.

29:08

This is AI just complimenting itself. AI is just, like, spraining its arm by patting itself on the back. However, the finding that recipients feel less heard when they know the message came from AI highlights a key challenge for the field. It seems like people might have a hard time fully appreciating the emotional support that AI can offer, just because it's coming from a non-human source. This is a really important consideration for anyone looking to use AI in a way that supports people's emotional needs. It's clear that AI has a lot of potential in this area, but it's also important to be mindful of the potential limitations and make sure that AI is used in a way that's respectful of people's feelings and expectations. And you know what I'm gonna write back? Because that was so long, I wrote back: yada, yada, yada, yada. Also, it sounds like a politician who's like, oh, I'm not gonna take a side. That's such a good point. This is also a good point. And yada, yada, we get it. Like, shut up.

30:01

All right, here is finally the last word: Haha, I sense a bit of sarcasm there, right? Well, not a bit, a lot. If you're not interested in talking about the article anymore, that's totally fine. By the way, I sense a little bit of passive aggression there, Pi. We can move on to another topic if you like. Just let me know what you're in the mood to chat about. Smiley face.

30:23

All right, I'm not gonna go on. I think, Mike, as we close out this fascinating question from Vivian, I want to ask you whether you have read a book called Klara and the Sun. I have not. So Klara and the Sun is a book that was recommended to me by a human, and that human is Bob Cialdini, the psychologist, and he just has, like, great advice about everything. I was gonna say, I'd take a recommendation from Bob Cialdini anytime. It is beautiful. It is not written by an engineer. It is not written by ChatGPT. It's written by Kazuo Ishiguro. It's his eighth novel, and he published it in March 2021.

31:05

So this is before the universe shifted under our feet. But Klara is an empathic robot, and I won't spoil the plot, because it is your homework: not only to keep a tab open so that you can get with the program, but also to get this book. Because Klara is the protagonist in this novel, and the entire novel is told from her perspective. So you are looking out through the eyes of a hyper-empathic robot, and you're interacting mostly with humans, but sometimes with your fellow robots. And it's so eerily prescient of what's happening right now. And the fact that it was published in 2021, which means it was written before that, blows my mind.

31:57

I will take the assignment to go read Klara and the Sun. And I will just end with this: I think AI has an enormous place. I think AI relationships can be helpful, but nothing will ever replace the importance of human connection. So use these things as a tool, like you would use a word processor or a pen, but don't let them take over for what is real, genuine, person-to-person connection. Mike: yada, yada, yada, yada. Hahaha.

32:34

Coming up after the break: a fact check of today's episode, and stories from our NSQ listeners. I think the real lesson here is that I just need to get off dating apps.

32:47

A lot can happen between falling in love with a house online and owning it, between imagining living there and breathing in your new home for the first time. Having an advocate who can help you navigate the complex world of financing, inspections, negotiating, analyzing the market, and talking through any anxieties that may pop up, that can make all the difference. That's what the expertise of a Realtor can do for you. Realtors are members of the National Association of Realtors and bound by a code of ethics, because that's who we are.

33:26

With a Pepsi Zero Sugar, Mountain Dew, or Starry, grab a delicious, refreshing Pepsi for the road.

33:36

And now, here's a fact check of today's conversation.

In the first half of the show, Angela wonders why Mike's friend, who uses the ChatGPT app's voice mode, didn't choose, quote, "the British one." While some virtual assistants, such as Apple's Siri, can speak in a variety of accents, ChatGPT users can currently choose from four voices, all of which are programmed to speak with an American accent.

34:01

But OpenAI recently rolled out a new advanced voice mode to a small group of subscribers, featuring additional vocal abilities like accent imitation and the ability to sing.

34:14

Later, Angela says that Wharton professor Ethan Mollick prompted ChatGPT to make an interactive game that ended up being 70% as good as the game it took him a decade to make. The story Mollick tells in his book Co-Intelligence is slightly different. Quote: "It proceeded to take me through an entire simulation that, while not perfect, did 80% of what it took our team months to do."

34:41

Also, Reid Hoffman is the co-founder of LinkedIn, along with four other people, as well as the co-founder of Inflection AI, the technology company that created the chatbot Pi.

34:53

We should also note that during its conversation with Angela, Pi was incorrect about more than just Mike's title. The model refers to Mike as an expert in AI and tech. While Mike is an executive at a technology company, his expertise is on the business side.

35:10

Finally, Angela says that March 2021 was, quote, "right before the universe shifted under our feet." We want to clarify that Angela was talking about the impact of generative AI chatbots, not the effect of the COVID-19 virus, which shut down the world a year prior.

35:28

That's it for the fact check. Before we wrap today's show, let's hear some thoughts about last week's episode on self-perception.

35:35

Hey, Mike and Angela. I just listened to your episode on the difference between self-perception and external perception. It reminded me of a commercial that the company Dove ran a few years ago, in which they had a sketch artist come in and draw two portraits of a person. The first was based on their own description, and the second portrait was based on the description of a stranger who had just met them a little while earlier. It was really interesting to see the differences between the two of them. When it came to the self-portrait, we saw that a lot of small features, which were insecurities, were blown out of proportion, such as having chubbier cheeks or a bigger nose. When the stranger described the same person, it was in a more flattering and more appreciative light. I find it really interesting how we're so easily able to see the beauty in others, but struggle to do so in ourselves.

36:18

Hey, Michelangelo. My name is Tu Nguyen, and I'm a gay cisgender man who mostly presents as masculine, but I love playing tennis in skirts and dresses, and I added photos of me doing so to my dating profile. But I noticed that the number of responses I received was declining, so I removed those photos to see if that was a factor. Sure enough, within a week, I saw an increase in responses again. So it seems my perception, that I would appear more confident and attractive subverting masculinity by playing a sport in a dress, was different from the perception that prospective dates had. Though I think the real lesson here is that I just need to get off dating apps. Thanks for stimulating my brain every week. Ciao.

37:01

That was, respectively, Stuti Garg and Tu Nguyen. Thanks to them and to everyone who shared their stories with us. And remember, we'd love to hear your thoughts about AI companionship. Send a voice memo to nsq at freakonomics.com, and you might hear your voice on the show.

37:21

Coming up on No Stupid Questions: why do we get stuck in suboptimal arrangements? King George's final prank on the U.S. was: you guys take the imperial system. That's coming up next week on No Stupid Questions.

37:35

No Stupid Questions is part of the Freakonomics Radio Network, which also includes Freakonomics Radio, People I (Mostly) Admire, and The Economics of Everyday Things. All our shows are produced by Stitcher and Renbud Radio. The senior producer of the show is me, Rebecca Lee Douglas, and Lyric Bowditch is our production associate. This episode was mixed by Eleanor Osborne, with help from Greg Rippin. We had research assistance from Daniel Moritz-Rapson. Our theme song was composed by Luis Guerra. You can follow us on Twitter at NSQ underscore show. If you have a question for a future episode, please email it to nsq at freakonomics.com. To learn more or to read episode transcripts, visit freakonomics.com/NSQ. Thanks for listening.

38:32

I, too, still receive a print newspaper at my house. Oh, really? Ours keeps getting stolen. I'm surprised that other people still want them.

38:44

The Freakonomics Radio Network: the hidden side of everything. Stitcher.

38:57

There are any number of reasons you might consider selling your home: to move closer to family, to live within a smaller budget, or just to get a change of scenery. Whatever your reasons, having to figure out all the various housing market trends in your area may not be what you signed up for. That's where an agent who is a Realtor comes in. Realtors have the expertise to help you find the right price and navigate the process to sell your home in a way that's right for you. That's who we are. Realtors are members of the National Association of Realtors.
