Is Anthropic’s Claude AI Conscious?, Shopping in ChatGPT, Systrom vs. Zuck

Released Friday, 25th April 2025
Episode Transcript

Transcripts are displayed as originally observed. Some content, including advertisements may have changed.


0:00

How Anthropic says its Claude AI

0:02

bot may be conscious. Robots

0:04

run a half marathon in China. Will

0:07

you soon be able to

0:09

shop directly in ChatGPT? An

0:11

Instagram founder puts Mark Zuckerberg

0:13

on blast. That's coming up

0:15

right after this. Hi,

0:18

I'm Kwame Christian, CEO of the

0:20

American Negotiation Institute, and I have

0:22

a quick question for you. When

0:24

was the last time you had

0:26

a difficult conversation? These conversations happen

0:28

all the time, and that's exactly

0:30

why you should listen to Negotiate

0:32

Anything, the number one negotiation podcast

0:34

in the world. We produce episodes

0:37

every single day to help you

0:39

lead, persuade, and resolve conflicts both

0:41

at work and at home. So

0:43

level up your negotiation skills by

0:45

making Negotiate Anything part of your

0:47

daily routine. The

0:49

HP AI PC breaks language

0:51

barriers instantly by translating up

0:54

to 44 languages in real

0:56

time, powered by the Intel

0:58

Core Ultra processor. With

1:00

the right tools, work doesn't

1:02

have to feel like work.

1:04

Learn more at hp.com slash

1:06

ai-pc. Welcome

1:11

to Big Technology Podcast, Friday edition, where we

1:13

break down the news and our traditional

1:15

cool-headed and nuanced format. We've got a

1:17

great show for you today. We're going to

1:19

talk about whether AI systems are conscious

1:21

already. Could they become conscious? What does that

1:23

feel like? Because research houses like Anthropic

1:25

have started to take it seriously. We're

1:28

also going to talk about this

1:30

robot half marathon in China. Whether you

1:32

can shop directly in ChatGPT,

1:34

and Instagram founder Kevin Systrom testifying against

1:37

Meta in the big FTC trial.

1:39

Joining us as always on Fridays is

1:41

Ranjan Roy of Margins. Ranjan, great to

1:43

see you. Welcome to the show. Now the

1:45

robots are going to make me feel guilty about

1:47

not going for a run this week. Thank

1:49

you, robots. And they're becoming conscious as well. So

1:52

they're both going to have feelings and kick

1:54

our butts. What is left for humanity? I

1:56

mean, just shopping on ChatGPT.

1:58

That's all we got right now. Exactly.

2:00

So by the way, for those tuning in

2:02

on video, I am in Washington, DC. So I've

2:04

got a kind of a funky background here

2:06

in the hotel and I'm talking into like a

2:08

TikToker mic, but we're going to make

2:10

it work. And then we're going to talk a

2:12

little bit about some of my observations from

2:14

being in DC when we get to the antitrust

2:16

stuff at the end. But first, let's talk

2:18

about this New York Times article by

2:21

Kevin Roose. It says, the headline

2:23

is, if AI systems become conscious,

2:25

should they have rights? And the

2:27

story is very interesting. It's

2:29

about this AI welfare researcher

2:31

named Kyle Fish that Anthropic

2:33

has hired. Some ties to

2:35

effective altruism, which is interesting.

2:38

Roose says he's focused on two basic

2:40

questions. First, is it possible that

2:42

Claude or other AI systems will become

2:44

conscious in the near future.

2:46

And second, if that happens, what should

2:49

Anthropic do about it? Obviously, the debate

2:51

about whether AI is conscious or whether

2:53

it's sentient has been off limits for

2:55

a while. Blake Lemoine, who's been on

2:57

the show, first said that

2:59

Google's LaMDA chatbot, which came

3:01

out before ChatGPT, was sentient. He

3:04

actually got fired right before he

3:06

started to record. And at Big

3:08

Technology, we were able to break that news,

3:10

which was an interesting moment. But we haven't

3:12

heard much about it up until this point.

3:14

I'll just read the quote from Fish and

3:16

then turn it over to you, Ranjan,

3:18

to get your reaction. He

3:32

says, What's your reaction to this?

3:45

I have a hard time with this. I

3:47

mean, and I'm glad we haven't been hearing about

3:49

this for a while. Listen,

3:51

if you're an AI welfare researcher,

3:53

you have to believe that AI

3:55

is going to become sentient. Like,

3:57

I mean, that's your job. If

3:59

you're anthropic, it's in your interest

4:02

to push this kind of narrative

4:04

that this technology is so grand

4:06

that it might be sentient. That

4:08

said, I mean, we

4:10

get to these points where with

4:12

the large language model, like

4:15

the newest model from

4:17

OpenAI, people can feel

4:19

that ChatGPT has gotten a little

4:21

bit friendlier, a little bit less AI-y

4:23

and a little bit more conversational.

4:25

I think like everyone in the entire

4:27

industry is saying and feeling that, but

4:30

still that's pre-programmed, that's

4:32

built into the model. The

4:34

idea that these models or

4:36

any of these chat interactions

4:38

are actually having their own

4:40

feelings separately from whatever you're

4:42

asking them and whatever they've

4:44

been trained on, I

4:47

don't know. Do you believe

4:49

they're evolving? They're feeling we're

4:51

in Westworld right now? Well,

4:53

Fish, he says that there's

4:55

a 15% chance that Claude

4:57

is sentient. How do you get

4:59

that percentage of 15%?

5:01

You have to run that simulation

5:03

in your mind times. You run the

5:06

numbers. And if 15% come out

5:08

sentient, then you give it, no,

5:10

there's no, it's gobbledygook. It doesn't make

5:12

any sense. But here's to me

5:14

what is interesting about this. I

5:16

think that the question is, it

5:19

becomes less relevant if it's sentient.

5:21

I think the bigger question is,

5:23

What happens if people believe it's

5:25

sentient? What if it gets so

5:28

good at mimicking something with human

5:30

feeling that we start to believe

5:32

it? So this is from

5:34

Anil Seth. He's a neuroscientist. He's been

5:36

on the show. First

5:38

of all, a very interesting caveat here. He

5:40

says that Kevin Roose quotes Fish

5:43

associating consciousness, what we just read,

5:45

with problem solving, planning,

5:47

communicating, and reasoning. But

5:50

this is to conflate consciousness

5:52

with intelligence. Consciousness arguably

5:54

is about feeling and being

5:56

rather than doing and thinking. So

5:59

to me, I thought that that was a very

6:01

interesting caveat and basically shoots the entire assertion right in

6:03

the face. But

6:05

then he goes on to the

6:07

implications. He says, is this all

6:09

crazy talk? First of all, nobody

6:11

should be explicitly trying to create

6:13

conscious AI because to succeed would

6:15

be to inaugurate an ethical catastrophe

6:17

of enormous proportions given the potential

6:19

for industrial scale, new forms of

6:21

suffering. But even AI that

6:23

seems conscious could be very bad

6:25

for us, exploiting our vulnerabilities, distorting

6:27

our moral priorities, and brutalizing

6:29

our minds. Remember Westworld? Spot on, Ranjan.

6:32

And we might not be

6:34

able to think our way out of

6:36

an AI -based illusion of consciousness. So

6:39

I think it's interesting that people are,

6:41

if AI can fake consciousness, if it

6:43

can even fake these AI researchers or

6:45

these Google researchers into thinking it's conscious,

6:48

that to me is, I guess like, it

6:51

is an issue. Because we already

6:53

have people saying that the number one

6:55

use case for AI is friendship,

6:57

companionship, and therapy. And if

6:59

they're going to believe that

7:02

it's conscious itself, if it's

7:04

impossible to tell the difference

7:06

between an AI that's conscious

7:08

and an AI that's not,

7:10

I think that does introduce a

7:12

new category of problems. And I

7:14

don't know. It just shows the

7:16

technology is quite powerful. So does

7:19

it even matter is the question,

7:21

I guess. Yeah, no, no, okay.

7:23

I'll go with you that if

7:25

people get convinced that it's... Do

7:27

you say sentient or sentient? I'm

7:29

curious. I think you say sentient if you're

7:31

normal and sentient if you're trying to sound

7:33

really smart. So if

7:36

AI becomes sentient... Nailed it.

7:39

Well, okay,

7:41

I like this idea that if

7:43

it becomes sentient, if people believe

7:45

that... it makes sense that that

7:47

causes a whole host of problems

7:49

because right now we have this

7:51

very good divide where people, if

7:54

they know they're talking to an

7:56

AI, you have like an

7:58

entire way of approaching it. If

8:00

you believe you're talking to a feeling,

8:02

and then potentially you could be

8:04

tricked into thinking you're talking to a

8:06

human and that's its own issue. But

8:08

if you believe that AI

8:11

is sentient and has feelings and

8:13

it like, you know, it

8:15

completely changes every one of those

8:17

companionship and therapeutic interactions in

8:19

ways that, good God, I can't

8:21

even begin to imagine which

8:23

direction that could go. So yeah,

8:25

separate from the 15% chance

8:27

that he ran the numbers,

8:29

it was 15%. I agree that

8:31

puts us in a world

8:34

of weirdness that I haven't even

8:36

really been able to think about

8:38

that. I'm still working on the...

8:40

good enough to trick people

8:42

into thinking it's humans and worrying

8:44

about that side of it.

8:46

So, yeah. Right. If you

8:48

believe that AI is sentient, your capacity

8:50

to be manipulated is much higher. Oh,

8:53

I mean, infinitely. Exactly. That's it. That

8:55

you almost are okay with it, like

8:57

the manipulation side of it, because it's

9:00

no longer, damn it, I was tricked

9:02

and I thought that was in a

9:04

human and I turned out to be

9:06

an AI. It's just you're

9:08

talking to the AI and you

9:10

are treating it in a completely

9:12

different way. But maybe people will

9:14

be more polite with Alexa. Maybe

9:17

that's the one upside of this. So

9:19

I do think that's an upside. And this

9:22

is something that Anthropic is actually thinking about. This

9:24

is from the Roose story. Mr.

9:26

Fish acknowledged that there probably wasn't a

9:28

single litmus test for AI consciousness. But

9:30

he said there were things AI companies

9:32

could do to take their models' welfare

9:34

into account in case they become conscious

9:36

someday. We got to be careful here

9:38

because this is going on the internet.

9:40

And so if the models do become

9:42

sentient, they might not be happy with

9:44

our skepticism of them. But I do

9:46

like this remedy here. One question Anthropic

9:48

is exploring is whether future AI models

9:50

should be given the ability to stop

9:52

chatting with an annoying or abusive user

9:54

if they find the user's request too

9:56

distressing. This is

9:58

going to go into the free speech

10:00

question, but I wonder if we

10:03

should just program these bots. I mean,

10:05

if we're already relating to them

10:07

as if they're people, even if

10:09

they're not people, shouldn't we

10:11

just program these bots to shut down

10:13

if people are becoming abusive towards

10:15

them? Because then if they accept it

10:17

and they tolerate it, doesn't that

10:19

just condition human users to do that

10:21

to other people? Yes,

10:23

I actually do think so. Does

10:27

Alexa have like a please mode?

10:29

I think I remember at some point

10:31

I remember hearing like one of

10:33

the voice assistants would add like you

10:35

have to actually say please and

10:37

thank you, which I kind of liked.

10:39

But yeah, I think it's a

10:41

good idea that a model should be

10:43

trained or could and it certainly

10:46

could. I mean, that makes sense that

10:48

under certain definitions of abusive behavior

10:50

to just be like, I'm sorry, I

10:52

will no longer speak with you

10:54

because of your behavior. But of course,

10:56

yeah, I mean that. That gets

10:58

into a whole other world of what

11:00

is quantified as abusive. But

11:02

I think that should be,

11:04

I mean, already there's certain

11:06

copyright related, profanity related restrictions,

11:08

certainly in most of these

11:10

chatbots. But I don't think

11:13

there's really, it's more about

11:15

what kind of information are

11:17

you querying as opposed to

11:19

how you're speaking to the

11:21

chatbot. Have you ever seen

11:23

any examples or heard of anything

11:25

where just by the way of

11:27

speaking to the chatbot it wouldn't

11:29

answer? I've never heard

11:31

of the refusal but I do know that

11:33

sometimes you can get meaner to these

11:35

things and they understand the urgency of your

11:38

request and get better. I'll give you

11:40

one example and I'm kind of embarrassed to

11:42

talk about it but it is a

11:44

real example and it happens. Those are my

11:46

favorite. I was trying to get Claude

11:48

to give the YouTube chapters for a video,

11:51

that video podcast that I just published this week, the

11:53

one with Dylan Patel. And it

11:55

kept giving me an hour

11:57

of time codes for a

11:59

40 minute video. And

12:01

I was like, no, do it again. But

12:03

remember, the video is just 40 minutes. And

12:05

it wouldn't do it. And then I was

12:08

like, what is wrong with you? This is

12:10

a 40 minute video. Give me the right

12:12

time codes. And it did it. Bro. Bro.

12:14

I bullied it into giving the right

12:16

answer. But I guess

12:18

it just goes, so I think that

12:21

can work. And, but, this is why

12:23

I do think there is a case

12:25

to be made to refuse that. Well, I

12:27

don't know, to refuse. If it takes,

12:29

I think what I said was fine. But

12:31

if it goes a step further, it's

12:33

not a stretch to think conscious or not

12:35

right and probably I mean I'm definitely

12:37

the side of these aren't conscious it's not

12:39

a stretch to think that people are

12:41

going to view these bots as co-workers

12:43

or employees in the not too distant future

12:45

and if you are abusive to your

12:47

AI bots in chat? Is

12:49

there any compelling reason to think that

12:51

you're going to draw a line when you're

12:53

speaking with your human coworkers and they're

12:55

not getting things done? That you're going to

12:58

be like, oh, because this is a

13:00

human in the chat interface as opposed to

13:02

a bot, now I'm going to be nice.

13:04

I don't know, it gets into really weird. And by

13:06

the way, this is why there's so much soft power

13:08

involved in creating these models. You

13:11

really can condition human behavior and

13:13

thought when you make AI bots that

13:15

are good enough to fake consciousness, because

13:18

they will change the way that

13:20

you'll relate to other humans. So

13:22

much of our interaction with humans

13:24

is digital anyway, so it gets

13:26

into very weird territory. That,

13:29

again, gets

13:31

terrifying. Someone

13:33

will marry a robot probably in our

13:35

lifetime. I think that's a pretty... Do

13:37

you think if you were to take a bet? It's

13:39

already happening. It's already happening.

13:42

No, Eugenia Kuyda, the CEO of Replika,

13:44

says she gets invited to marriages

13:46

between people and their AI bots. When

13:48

she came on the show, she

13:50

said that straight up. All

13:52

right, listeners cannot see, but if

13:54

you're watching the video, my

13:57

facial reaction here is part laughing,

13:59

part terrified. I kind of

14:01

got to go with Gary Marcus here. This

14:04

is what it really, my

14:06

mind goes to whenever we see

14:08

one of these, like, big, profound announcements.

14:11

And we've been hearing this about

14:13

AGI and robotic takeovers from Sam Altman.

14:15

And like he says, Anthropic is

14:17

a business which incidentally neglects to respect

14:19

the rights of artists and writers

14:21

whose work they nick. I

14:24

suspect the real move here is simply

14:26

as it so often is to hype

14:28

the product. Basically by saying, hey, look

14:30

how smart our product is. It's so

14:32

smart, we need to give it rights.

14:38

I'm not trying to be too cynical,

14:40

but I would love to see

14:42

some kind of graph of utilization of

14:44

an app or platform for one

14:46

of these companies. And when

14:48

these announcements come out, because I

14:50

get it like, and again,

14:52

Sam Altman has been brilliant at

14:54

this. And from a product

14:56

marketing perspective, like when you make

14:58

people think that these models

15:00

are so potentially powerful and profound.

15:03

I mean, as a business, you're like, well, obviously

15:05

it's going to be able to. do

15:07

this task for me and improve

15:09

my supply chain automation and operations. If

15:12

it's going to take over

15:14

the world, it should be able

15:16

to do this. So there's

15:19

such a vested economic interest for

15:21

these companies to make us

15:23

all think of sentience and world

15:25

domination and AGI and ASI

15:27

and all these things, because that

15:29

will sell more services and

15:31

subscriptions. I

15:34

think it's a great point. We've gotten pretty

15:36

woo-woo on this podcast the last couple

15:38

weeks. Last week, we were talking about whether

15:40

O3— Bring it down. Bring it down to earth.

15:42

It's important, though, because last week, we were

15:44

talking about whether O3 was AGI. We were

15:46

just addressing the question, and we did get

15:48

some emails after that. And I appreciate listeners

15:50

writing in and talking about the nuance here.

15:52

And now this week, we're talking about whether

15:54

Claude is conscious. So I think

15:56

let's just take a moment to

15:58

state, I think, what we both believe,

16:00

which is that the term AGI

16:02

and this idea of AI consciousness, I

16:05

think they're real terms and they

16:07

are worth taking seriously, but right now

16:09

what they are is marketing. Agreed.

16:13

Agreed on that one. Exactly. I

16:15

think that's the clear distinction. And

16:17

these are real

16:19

concepts, terms. If you're

16:21

like an academic researcher, it makes

16:23

total sense that there are people

16:25

out there who should be exploring

16:28

this. Just maybe they

16:30

shouldn't be at the companies

16:32

who have incredibly high valuations that

16:34

they have to live into

16:36

those valuations and deliver those

16:38

revenue expectations. Maybe they should be

16:40

in academia. And just

16:42

to punctuate this whole conversation, I think

16:45

one of the things that I

16:47

read from Anil Seth really sticks with

16:49

me here, which is that consciousness

16:51

is a feeling and intelligence is not

16:53

feeling. So no matter how

16:55

intelligent they get, it doesn't mean that

16:57

they're conscious. That said, as we

16:59

end this segment, let me quickly

17:01

say to the 15% of

17:04

Claude that is conscious. I'm

17:06

sorry, and I hope you're feeling great today. I

17:08

also hope you have a great

17:11

weekend to the 15%. And thank

17:13

you for all that you do.

17:15

Thank you. Thank you. So

17:17

speaking of Anthropic, Dario Amodei had

17:20

a very interesting post this week talking

17:22

about model interpretability. We don't have

17:24

to spend a lot of time on

17:26

it, but I thought it was

17:28

interesting again for the head of a

17:30

very important research house to talk

17:33

a little bit about how little we

17:35

know about how these AI bots

17:37

work. We

17:39

just should take a minute to just sit on this

17:41

and talk about it. I'm going to read a little

17:43

bit from his post. He says,

17:45

the progress of the underlying technology

17:47

is inexorable, driven by forces

17:49

too powerful to stop. But

17:52

the way in which it happens,

17:54

the order in which things are built,

17:56

the applications we choose, the details

17:58

of how it's rolled out to

18:00

society are eminently possible to change.

18:02

And it's possible to have great impact

18:04

by doing so. We can't stop

18:06

the bus, but we can steer it.

18:08

And one of the ways that he

18:11

thinks these models can be steered

18:13

is interpretability, that is, understanding the

18:15

inner workings of AI systems before the

18:17

models reach an overwhelming level of

18:19

power. People outside the field are

18:21

often surprised and alarmed to learn

18:23

that we do not understand how our

18:25

own AI creations work. They

18:27

are right to be concerned. This

18:30

lack of understanding is essentially unprecedented

18:32

in the history of technology. So

18:34

he says basically, Anthropic is going

18:36

to work on this. And other

18:38

companies like Google DeepMind and OpenAI

18:40

have some interpretability efforts trying to

18:42

figure out how these models work.

18:44

But he encourages them to allocate

18:46

more resources. Anthropic will be

18:48

trying to apply interpretability commercially to create

18:50

a unique advantage. And his call to

18:52

action is basically like, if you don't

18:54

want to be left behind here, you

18:56

should work on interpretability too. I

18:59

think it's an interesting post. I mean, part

19:01

of it, again, might be marketing. Our models

19:03

are so powerful. We don't understand

19:05

how they work. But I do

19:07

think the question of how these models

19:09

actually operate and the way that

19:11

they come to their conclusions is quite

19:13

interesting. And I do agree

19:15

with Dario that we need more

19:17

work on interpretability because as they get

19:19

more powerful, conscious or not, again, they're

19:22

getting more intelligent. it's

19:24

important to understand how they work.

19:26

And the field just doesn't have an

19:28

understanding yet, and everybody admits it. Yeah,

19:31

no, no. See, I agree with this

19:33

completely. Interpretability

19:35

is like a grounded,

19:37

real thing that could be

19:39

worked on and should

19:42

be worked on. Because large

19:44

language models, again, at

19:46

the core, the idea of next

19:48

word or next token prediction, that based

19:50

on some statistical analysis, it will

19:52

predict what that next character or token

19:54

or word should be was kind

19:56

of at the heart of all of

19:58

this. But as these models have

20:00

gotten more and more powerful, we've obviously

20:02

gotten to even like grander scale

20:04

of what actually is happening under the

20:06

hood. But anyone who has interacted

20:08

with an LLM at like any kind

20:10

of deeper level, you don't know

20:13

exactly how it works. And you have

20:15

to keep re-prompting and re-prompting.

20:17

And like, it's not like there's a

20:19

playbook that gets you from

20:21

point A to point B. And

20:23

that is, that's true. And it is

20:25

kind of weird. And I actually kind

20:27

of like that, that in the history

20:29

of technology, usually there's a very, very

20:32

clear like flow of what is happening

20:34

and everyone understands it. And then you

20:36

work off of that. Whereas here, it's

20:38

kind of like, let's see what

20:40

happens. That didn't work. Let's see what happens

20:42

again. So I think. The

20:44

idea, we should know what's going on

20:47

under the hood in a better way, especially

20:49

as these get more powerful. So Dario,

20:51

I'm with you on this one. Right.

20:53

And as we talk about Anthropic, I

20:55

just give them credit for talking about this

20:58

stuff. I mean, even if some of

21:00

it is marketing, it is nice that they're

21:02

putting this all out in the open

21:04

and talking about like where things need to

21:06

improve and pushing the other research houses

21:08

to improve. So credit to Anthropic on that

21:10

front, at least. Now. I

21:13

don't know if you saw, but

21:15

there was a bunch of humanoid robots

21:17

that ran this half marathon in

21:19

China, and it was pretty hilarious, but

21:21

also interesting. And when we talk

21:23

about AI, like embodied AI,

21:25

like Grace Xiao was talking about a

21:27

couple weeks ago, is going to be

21:29

something that is going to become increasingly

21:32

more important as people put the advances

21:34

that have happened in the AI world

21:36

into robots and then take what the

21:38

robots know about the physical world and

21:40

bake that in to AI models. Because,

21:42

like Yann LeCun was saying a couple of

21:44

weeks ago, if you don't have an

21:46

understanding of the world, your AI

21:48

isn't complete. And one of the

21:50

ways this is going to happen

21:52

is through these humanoid robots. And

21:55

we know that there are efforts,

21:57

like NVIDIA's GR00T effort, which is

21:59

a set of foundational models for

22:01

people who want to develop these

22:03

bots. That's out there. We've

22:06

seen a little bit of movement with

22:08

Optimus, although it's not quite clear how

22:10

far that program is going within Tesla.

22:13

But in China, where there's like

22:15

a seemingly viral video every week

22:17

about a new capability that a

22:19

humanoid robot has obtained, the

22:22

country ran a

22:24

half marathon with humans and

22:26

robots. And the robots

22:28

on a whole weren't entirely

22:30

impressive. They

22:32

really did some weird stuff. Many of them

22:34

crashed out at the beginning of the race. There

22:37

was one that had like propellers

22:39

on all of its limbs that kind

22:41

of did an abrupt 90 degree

22:43

turn and crashed into the boundary and

22:45

fell apart. And you see its

22:47

trainer holding on by a rope and

22:49

getting flung out of frame, which

22:51

is quite hilarious. But it's worth the

22:53

whole thing. Oh my God. Do

22:55

it just for that. But you know,

22:57

we might make fun, but There

22:59

were 21 robots that ran the race

23:01

and six crossed the finish line,

23:03

including one that crossed. This is the

23:05

one that crossed is called the

23:07

Tiangong Ultra. It finished the race

23:09

in two hours and 40 minutes,

23:11

which I would say is respectable. It's

23:13

not fast, but it's a respectable

23:15

finish time. So, Ranjan, I'm curious

23:17

if you watched this race. I'm about

23:19

to write about it in big technology as

23:21

a signifier that China is

23:23

a very serious competitor or a very

23:25

serious player here. And

23:27

so I'm curious if you

23:29

watched it and what your reaction

23:32

is to what's going on

23:34

here. I definitely watched it. As

23:36

you said, there's some amazing

23:38

photos, video clips from it. I

23:40

highly recommend just look this up. But

23:42

I think these kind of things

23:44

are important. Like I do think

23:46

this is good marketing for where

23:49

we are going because I agree

23:51

there's no doubt that some kind

23:53

of humanoid robot will be part

23:55

of people's daily lives or at

23:57

least I believe this and maybe

23:59

10 20 years from now. It's

24:01

like Rosie from the Jetsons, some

24:03

kind of situation. I

24:05

think that's probably where we're going.

24:07

So to show the progress in

24:09

it in this kind of format

24:11

of robots running and some falling

24:13

and my favorite was like one

24:16

designed with a woman's body and

24:18

face collapsed moments after getting started,

24:20

sending a group of engineers rushing

24:22

to its side with laptops. And

24:24

then another that was mounted to a platform

24:26

with propellers crashed into a barrier. Like, this

24:28

stuff is kind of fun. And this is

24:30

how we should be thinking of all this

24:32

kind of technology, especially as we try to

24:34

move forward with it. But I

24:36

think, yeah, this is going to be a big

24:38

battle. My only... Okay, I have two qualms.

24:40

Maybe, I don't know, this week I'm just feeling

24:42

a bit cynical on all this stuff. So

24:45

first, to me, the

24:47

idea that it necessarily has

24:49

to have a humanoid form is

24:51

a bit, I think it's

24:53

called like anthropocentric, the idea that

24:55

humans are like the highest

24:57

life form. Like to me, robots

24:59

should have functional form. Like you see

25:01

these little food delivery robots.

25:04

I don't need a humanoid robot

25:06

form to deliver something like a

25:08

little box that moves and looks

25:10

like a, I don't know, like

25:12

a small car or van or

25:14

something. That makes more sense in

25:16

warehouses to actually move around packages.

25:18

You don't need humanoid robots. And

25:20

this is something like Tesla's done

25:22

with Optimus a lot. They keep

25:24

showing a humanoid robot picking up

25:26

a box and moving it. That

25:28

doesn't make sense to me. And

25:30

there's plenty of automation, robotic automation

25:32

in all types of warehouse and

25:34

fulfillment centers. So, so I think,

25:36

well, I guess on that side

25:38

first to you, do you think

25:40

the humanoid robot is the all-in-one

25:42

form that will be dominant

25:44

for robotics? Or do you think

25:46

this is just to make people

25:48

a little more excited and fascinated

25:50

about the whole thing? It's

25:53

such an interesting point, and I didn't

25:55

think you were going to go here, but

25:57

it is definitely worth talking about. This

25:59

week, I had a very brief meeting with

26:01

the co-founder of a company called Cobot,

26:03

and this guy spent more than a

26:05

decade in Amazon fulfillment centers working on the

26:07

bots that are moving things here and

26:09

there. What Cobot

26:11

is doing is really fascinating.

26:13

They basically are making mover robots.

26:15

So they look like a box

26:18

just with two pincers that

26:20

you could basically use as the

26:22

hands that would typically be

26:24

on things that we would move

26:26

with human hands. And so

26:28

they're working in places like container

26:30

terminals, moving cargo around on

26:32

carts that humans would typically move.

26:34

So you're totally right in

26:36

that. We don't need a humanoid

26:38

robot to all of a

26:40

sudden do a lot of work

26:42

and be extremely productive. You

26:45

can just have some aspects of

26:47

the human form and basically have

26:49

the robot form do the rest

26:51

of the work. That

26:53

being said, I think, you know, I'm

26:56

a fan of evolution, right? I think

26:58

that we're obviously like, there's a lot

27:00

of problems with humans. We don't last

27:02

very long. We need to sleep. But

27:05

the form is pretty good. We're

27:07

agile. We're nimble. We can do a

27:09

lot of things. We can use

27:11

tools. And

27:13

I just think that if

27:15

you basically create a robot

27:18

that replicates that form, the

27:20

number of applications becomes not unlimited,

27:22

but close to it. Because if

27:24

you think about this Cobot example,

27:26

that Cobot does one thing well.

27:29

A humanoid robot can do many,

27:31

many things well. It can

27:33

cook. It can organize your house.

27:35

It can go on a

27:38

run with you. It can run

27:40

errands for you. It's

27:42

just very tough to find a robot in

27:44

a different form that is able to

27:46

do all these things. And

27:48

maybe we'll invent a better form than

27:50

a humanoid. But until we do,

27:52

I think the humanoid will be the

27:55

North Star. OK. In

27:58

one way, I guess I'm thinking

28:00

that, yes, then we don't have to

28:02

rebuild and restructure, rewire the world,

28:04

because a humanoid robot can kind of

28:06

work its way directly into it.

28:08

But still, on the other hand, again,

28:10

that idea, do I need

28:12

a humanoid robot running with me? Or maybe

28:14

you want like a pace tracker. Maybe

28:17

a little box in front of

28:19

you kind of moving, like one of

28:21

those rabbits at a dog track

28:23

running around. Like, I think to me,

28:25

still the idea that it

28:28

needs to be humanoid so it

28:30

can fit into the existing infrastructure of

28:32

the world. I still

28:34

think that's more of a, again,

28:36

like it makes us relate more to

28:38

it and it makes it more real

28:40

to us. Because again, you see, like

28:42

you said, a box with two pincers.

28:45

No one's getting too excited about that.

28:47

They see a robot running a

28:49

half marathon. Suddenly, it's kind of fun.

28:52

I was just thinking about

28:54

different uses for robots around the house

28:56

and just that hilarious image in my mind

28:58

of needing to change the light bulb

29:00

on the ceiling and the robot just giving

29:02

me a boost. Well,

29:04

see, but there, I'm

29:06

picturing like, I don't know,

29:09

that could literally be like a pole that

29:11

just like extends itself and goes up. But then

29:13

think about how many robots you're going to

29:15

need versus one that's able to do a

29:17

lot of things. Yeah,

29:19

I have a feeling

29:21

everything I do this weekend, I'm

29:23

going to be thinking about what would

29:25

be the robotic form that would

29:27

be most optimal to actually execute this

29:29

task. Okay, so let me ask

29:31

a couple of questions as we round out this

29:33

segment. First of all, we both

29:35

run marathons. Let's do a little humble brag

29:37

here. Two hours and 40

29:39

minutes for the half. Not bad. He's

29:42

getting there. He's getting there. But

29:44

you know what? With a good training plan, good

29:46

robotic diet, he could

29:49

definitely cut, I mean, at least

29:51

cut that down to 2:15, 2:10, I

29:53

think. I think so. Well, there

29:55

was time for three battery changes there.

29:57

Well, yeah, yeah, yeah. That's true.

29:59

He's got to carb load a bit

30:01

more the night before, I think. And

30:03

then he or she will be...

30:05

they've got it. Yeah. Okay. Now,

30:07

does this mean... it's obviously Chinese

30:09

propaganda, but does this mean China has

30:11

the lead in humanoid robotics? We haven't

30:13

seen a similar spectacle in the

30:15

US. Yeah, I mean,

30:17

I actually think that's the biggest question

30:19

in all of this. Or the most

30:21

important thing today is what this means in

30:23

terms of like US, China and technology. And

30:26

I mean, I got to say

30:28

like the first time, like I

30:30

have a couple of DJI drones,

30:33

the technology in those things

30:35

is out of this world.

30:37

Like I still could

30:39

not believe just how well

30:42

for the price, like

30:44

how incredibly they operate. And

30:46

I mean, that's kind of

30:48

like, you know, V1 of

30:50

this entire move towards movable

30:52

robotics that can see around

30:55

them and sense things and

30:57

follow you as drones have

30:59

a follow mode. So

31:01

yeah, I think this

31:03

definitely makes things, I don't

31:05

know, Boston Dynamics

31:07

and others gotta step up

31:09

their spectacles, I think. And

31:12

Grace Xiao was saying that China has an advantage

31:14

here when she was on the show, a Hong

31:16

Kong-based analyst and writer. Definitely encourage everybody to

31:18

check out that episode. She was saying China has

31:20

an advantage because they are a country that makes

31:22

stuff. You know, they have the

31:24

engineering that they've been using for like

31:27

microwaves and scanners and phones and cars and

31:29

they're able to bake it into the

31:31

building of robots and they also have the

31:33

supply chain advantage. And then

31:35

I was thinking, well, you know, it's very

31:37

interesting because the U .S. is in this

31:39

moment of trying to reshore and make

31:41

things and maybe that helps close the gap.

31:44

But then Tesla earnings rolled around

31:46

and what did Elon Musk

31:48

say? He said that,

31:50

I'm just going to read this from

31:53

CNBC, Tesla CEO Elon

31:55

Musk says, China's new trade restrictions

31:57

on rare earth magnets have

31:59

affected the production of the company's

32:01

Optimus humanoid robots, which rely

32:03

on the exports. He

32:05

said, China wants some assurances that

32:08

these aren't used for military purposes,

32:10

which obviously they're not. They're just

32:12

going into a humanoid robot. But

32:14

it is interesting, again, like thinking

32:16

back at this big trade picture

32:18

that the US is trying to

32:21

solve or whatever it's trying to

32:23

do. It's not as easy as

32:25

flipping a switch and saying, let's

32:27

make things here because the country

32:29

has grown so reliant on things

32:31

like rare earth magnets from other

32:34

countries, including China, that it's not

32:36

going to be a matter of,

32:38

OK, just build it in the

32:40

US, however desirable that that effort

32:42

might be from the country's leaders. What

32:45

do you think? Yeah, I mean, this

32:47

whole thing has been a tough

32:49

one for me because the idea that

32:51

we need to take more control

32:53

over our own supply chain and be

32:55

able to manufacture especially high -tech things

32:58

is something that's been like core

33:00

for me for maybe

33:02

a decade now. So

33:05

it's something that I've wanted and

33:07

believed in for a long time,

33:09

how it's happening right now, I don't

33:11

necessarily agree with. But I

33:13

do think that that's actually a

33:15

very good and key point that it's

33:17

not just the humanoid robot. It's

33:20

like the knowledge and the components

33:22

and the expertise that all underlie it.

33:24

As you said, even a microwave,

33:27

even I have some pretty fancy

33:29

kitchen gadgets, I'm sure

33:32

they're all made in China. Like

33:34

those components and the expertise behind

33:36

that are going to

33:38

power the more fancy, crazy

33:40

robots running half marathons. So I

33:42

agree. I think it's important. Not

33:44

sure we're taking the right approach

33:46

to it, but something should be

33:48

done. Remember that clip we played

33:50

from Tim Cook about China and

33:52

tooling? It's really showing up here.

33:55

Yeah. Yeah. Yeah, the

33:57

days of cheap

33:59

China, I think, are

34:01

long gone. And

34:03

I mean, we're seeing it right here. It's

34:05

a different fight right now. Definitely.

34:07

So let's talk about Tesla very quickly. Good

34:10

and bad news for Tesla, I would say.

34:12

They reported earnings this week. This is

34:14

from the Wall Street Journal. Tesla

34:16

profits sink, hurt by

34:18

backlash over Elon Musk's

34:20

political role. So Tesla

34:23

net income slid 71 % in

34:25

the first quarter. Not

34:27

good. It does seem like a

34:29

lot of this was a

34:31

result of backlash over Elon's involvement

34:33

in the White House and

34:35

unpopularity among, let's say, half the

34:37

population and in some countries

34:39

outside of the US who didn't

34:41

like this and felt that

34:43

Tesla was now politicized. That

34:45

being said, Musk did make an

34:47

announcement that Tesla shareholders really liked, which

34:50

is, and this is again

34:52

from the Journal: Musk said he would

34:54

be devoting significantly less time to

34:56

his federal cost -cutting work at the

34:58

Department of Government Efficiency starting next month,

35:00

but he struck a defiant tone

35:02

against the critics and said, I

35:05

believe the right thing to do

35:07

is fight the waste and fraud and

35:09

try to get the country back

35:11

on the right track. So terrible earnings

35:13

for Tesla, but if you are

35:15

a Tesla fan, a sensible move from

35:17

Elon. He is going to step

35:20

back and focus more on the company,

35:22

which if you are a Tesla

35:24

owner, or if you are a potential

35:26

Tesla buyer, or if you are

35:28

a shareholder in particular, you

35:30

really like. It was an effort that

35:32

Elon Musk made in the first 100

35:34

days of the Trump administration, but it

35:37

does seem like it didn't work. And

35:39

he's recognizing that and going

35:41

back to Tesla. What

35:43

do you think, Ranjan? I just love

35:45

that this week kind of

35:47

captured in this earnings the

35:49

Tesla stock as perfectly as

35:51

one can.

35:53

Net

35:56

income down 71%, revenue

35:58

down 9%, vehicle deliveries at

36:00

the lowest since Q2

36:02

2022. This was a growth

36:05

company. I mean, the

36:07

stock obviously became a little bit

36:09

disentangled from the actual underlying numbers,

36:11

but it was a growing company

36:13

for a long time, and now

36:15

it's not growing. My favorite

36:18

part about all this in terms

36:20

of waste and fraud and government

36:22

overspending is, they would have been

36:24

operating at a loss. The net

36:26

income of $409 million is only

36:28

because, again, they got $595 million

36:30

in regulatory credit sales. So

36:33

overall, the company is

36:35

in pretty rough shape. I

36:37

mean, electric vehicle market, everyone,

36:41

I mean, other carmakers are still

36:43

going after it. The whole

36:45

conversation around BYD and what Chinese

36:47

EVs can look like. I

36:49

was in Europe a month and

36:51

a half ago, I saw a bunch

36:53

of BYDs. I kind of want one. They

36:55

looked amazing. Overall,

36:59

the company is not... If you're

37:01

just doing a very cold

37:03

and financial analysis of the company,

37:06

it's not going great. It's decelerating growth,

37:08

especially on a

37:10

company that's that expensive on a

37:12

price to sales or price

37:14

to earnings ratio. You would be

37:16

like, this company's in trouble,

37:18

yet the stock jumped 5% after

37:20

this because now Musk said he

37:22

might be leaving DOGE. I

37:25

mean, it doesn't get anything better

37:27

than that. Right.

37:29

Well, I guess going back to

37:31

our conversation to start this whole

37:33

segment is that it's never really

37:35

been about the fundamentals. For Tesla,

37:37

it's always been about the future

37:39

promise. And it does

37:41

seem like Tesla has now, I mean,

37:43

Tesla's story was always more than

37:46

just an EV producer. That's what the

37:48

valuation has reflected. For a

37:50

while, it was, it's going to do battery and charging

37:52

stations and be a platform. And that's why you invest.

37:54

And that's part of the story. But

37:56

now we're also seeing robot taxis

37:58

in the picture and humanoid robots. So

38:01

it's much bigger than, you know,

38:03

can they sell, you know, the

38:05

Model Ys? However, there's

38:08

extraordinary pressure now on the company to

38:10

be able to deliver that future and

38:12

deliver it fast. And I think anyone

38:14

who's been in a self-driving Tesla

38:16

has said the self-driving features are much

38:18

better. But the question is,

38:20

can it get from really good

38:23

to perfect? And we still don't know

38:25

that. Well, I

38:27

mean, speaking of self-driving, I

38:29

was in San Francisco last week

38:31

again and rode another Waymo, my

38:33

second ride, and Waymo announced that

38:35

they just surpassed a quarter million

38:38

paid rides per week. I mean, the

38:40

craziest part about this time, the first

38:42

time I took it maybe like six

38:44

months ago, it was like really exciting

38:46

for me. It was like, this time

38:48

it was a bit normal. It was

38:50

just kind of routine. It was still

38:52

fun and I... FaceTimed my parents

38:54

this time just to kind of like

38:56

show them, and they were blown

38:58

away by it. But the number of

39:00

Waymo's on the street in San Francisco

39:03

is wild like one after another were

39:05

passing them They're pulling over to the

39:07

side to pick up passengers It's a

39:09

they got they they announced it's a

39:11

5x increase from a year ago, 50

39:13

,000 more per week than it was

39:15

just two months ago. It's

39:17

normal behavior, and Tesla

39:19

is still... I think in June they're

39:22

supposed to start a RoboTaxi fleet in

39:24

Austin. It

39:26

still blows my mind that

39:28

it's here. It's not

39:30

just here. It's normal. And

39:33

yet, it's still this promise in

39:35

the future for Tesla. In

39:38

some cities. Waymo is in some

39:40

cities. And I'm as big a fan

39:42

of Waymo as they come. I'm

39:45

waiting for the New York rollout.

39:47

So, Waymo's riding in New

39:49

York, I'm calling

39:52

AGI. As soon as

39:54

that happens, it's AGI. It's

39:56

Robot AI Consciousness. You have to

39:58

say sorry to your Waymo

40:00

when you ride in it if

40:02

it's able to tackle the

40:04

mean streets of New York. I

40:06

1,000% agree. You get

40:08

Waymo in New York. AGI, ASI,

40:10

Consciousness. Check them all off.

40:12

We're there. We're there. So

40:15

we also had another earnings report

40:17

come in. Speaking of Waymo, Google reported

40:19

earnings this week and there was

40:21

a bit of a contradiction like these

40:23

earnings can be dry, but they're

40:25

also times where you can really

40:27

get a sense as to where

40:29

a company is heading and check in

40:31

on narratives and bust narratives by

40:33

looking at the numbers and the numbers

40:35

are really interesting. So, on

40:37

one hand right now we have

40:39

chat GPT growing like a

40:41

couple percentage points a week, it

40:43

seems like, right? They've gone...

40:45

the latest rumored number is 800

40:47

million weekly users of chat

40:49

GPT, which is insane. It's never

40:51

happened before, this type of

40:53

growth. Up from 500 million just

40:55

a couple months ago. So

40:57

what's happening to Google? Because Google,

41:00

you would imagine that people are in chat,

41:02

like we were talking last week about how

41:04

we're searching in chat and chat GPT and

41:07

not in Google anymore. Well, the

41:09

numbers are insane. So

41:11

Google revenue was $90.2

41:13

billion last quarter, in Q1,

41:15

up 12%

41:17

year over year. Net

41:20

income, $34.5 billion,

41:22

up 46% year

41:24

over year. AI

41:26

Overviews is now at

41:28

1.5 billion users per month, up

41:30

from 1 billion in October, which

41:33

leads us to this question from Sebastian

41:35

Siemiatkowski, the CEO of Klarna, which I think

41:37

puts it all in perspective. He goes,

41:40

okay, help me, what am I missing? And

41:42

he's quoting from one of

41:44

the articles covering earnings. Google search

41:46

business grew 10%, surpassing estimates,

41:49

which are figures that gave comfort

41:51

to investors who have been

41:53

watching for any softness in search

41:55

because AI chatbots like OpenAI's

41:57

ChatGPT are growing. So basically, we

42:00

have a massive increase of usage

42:02

within ChatGPT, but search revenue still

42:04

grew 10%. How does that make

42:06

sense? I'm with you. I'm with

42:08

you, Seb. Okay, so

42:11

help me. What am I missing? How is this

42:13

possible? I agree. I

42:15

don't get it. I

42:17

mean, obviously my own personal

42:19

behavior, I've like completely moved

42:21

away from Google search. I

42:24

moved towards Perplexity, Chat

42:26

GPT even, and even like

42:28

Gemini itself separate from Google's regular

42:31

search that has a heavy

42:33

ad load, I

42:35

moved away. But obviously the

42:37

average normie is probably using Google

42:39

search, but slowly moving away.

42:42

But to me, the interesting part

42:44

of this is, the

42:46

search revenue grew. And

42:48

still, these numbers sometimes I

42:50

have to stop and

42:52

just process a $50 billion

42:54

business growing at 10 %

42:56

in terms of search

42:58

and advertising, a $90 billion

43:00

business at a 12 %

43:02

growth rate. I mean,

43:04

with $35 billion in profit,

43:07

these numbers are just, I mean, it's

43:09

the greatest business model in history. But

43:12

what's interesting to me is

43:14

anyone who uses Google search sees

43:16

the number of ads injected

43:18

has exponentially grown. You can have

43:20

like an entire first page

43:22

that's essentially ads. So they basically

43:24

are turning the act of

43:26

a Google search into a fully

43:28

monetized page and

43:30

results product, versus ads being just

43:32

kind of a small part

43:34

of the experience and the rest

43:36

of it directs you to

43:38

the web. So to me, they

43:40

don't disclose total search

43:42

volume. So search

43:45

volume could be declining and they,

43:47

you know, milk it for whatever

43:49

they can, stick in more ads,

43:51

create more, just like monetize components

43:53

on the search results. But we

43:55

don't know that people are searching

43:57

more. We know that search revenue

43:59

is growing more. Right

44:01

and, for all the

44:03

Gemini heads out there, we also got

44:05

the first disclosure of user numbers for

44:08

Gemini. So 350 million monthly active

44:10

users.

44:12

It's behind ChatGPT, but

44:14

it is significant So for all the

44:16

folks in our Discord who say

44:18

what about Gemini? There's your stats. I've

44:20

become a bit of a Gemini head.

44:22

I actually... Gemini Deep Research, which is

44:25

free, is incredible.

44:28

It actually holds up versus my first few

44:30

ChatGPT Deep Research runs when we were both

44:32

paying, what was it, 200 bucks for that

44:34

one glorious... It was. Yep. Uh,

44:38

that one glorious month where our

44:40

$200 directly led to their

44:42

fundraise with Masa Son. So you're

44:44

welcome. But, uh, yeah, Google,

44:46

Google again, I'm not taking anything

44:48

away. Gemini is good. Deep

44:50

research within Gemini is fantastic and

44:52

free. So everyone go try

44:55

it out. But still, the

44:57

search business, the numbers look

44:59

good right now, but the experience

45:01

has gotten so bad. And

45:03

I feel like everyone in tech

45:05

seems to agree that search

45:07

is bad now. I don't know,

45:09

do you or? Yeah,

45:11

I mean, I said last week that

45:13

I've moved my searching over to chat GPT

45:16

in a real way, in a way

45:18

that surprised me. So I do think that

45:20

this is definitely a moment where AI

45:22

is showing its strength against search. The one

45:24

thing I would say, we

45:26

often, thanks to the defaults that Google

45:28

has, and we're going to get to antitrust

45:30

in a moment, we

45:32

are so accustomed to typing things into

45:34

Chrome and into the search bar in

45:36

our Android, and that pulls up Google

45:38

searches, that as long as they're able

45:40

to keep those defaults, they're going to

45:43

be fine. But they may not. And

45:45

that's where things get interesting. Yep.

45:47

No, no, I think that's a

45:49

good point. And again, last week we

45:51

said the web is dead and

45:53

then toned it down to the web

45:55

is in secular decline. But

45:57

I still believe the way

45:59

search works on the internet overall

46:01

and specifically for Google and

46:04

the way it drives traffic to

46:06

websites is forever changed.

46:08

And I think those interactions,

46:10

it's already been

46:12

dead for a while in my mind.

46:14

And I think we're seeing how

46:16

it's changing constantly. By

46:18

the way, I mentioned the Discord. So

46:20

for those who are interested, I'm going to

46:23

drop a discounted link to Big Technology's

46:25

paid subscriptions. If you're a paid subscriber,

46:27

you're welcome to join the Discord and speak

46:29

with me and Ranjan. We talk

46:31

about AI all the time. It's

46:34

a running daily conversation and I think it's

46:36

gotten really good. A lot of really smart people

46:38

talking about where AI is heading. So I'll

46:40

put a discount link in the show notes. Please

46:42

do sign up if you're interested in joining.

46:44

It would be great to have you there. If

46:48

you sign up as a paid subscriber,

46:50

I'll send an email out early next week

46:52

with a Discord invite. So please consider

46:54

doing that and help support the show. Speaking

46:56

of which, let's take a break

46:58

to hear from one of our advertisers.

47:00

And then when we come back,

47:02

we're going to talk about this very

47:05

interesting integration between Shopify and ChatGPT, and

47:07

then the latest in Big Tech antitrust.

47:09

Back right after this. Introducing

47:12

the HP AI PC,

47:14

your powerful AI assistant.

47:16

Easily search through personal

47:18

files, gain valuable insights,

47:20

and make smarter, more

47:23

informed business decisions. Unlock

47:25

the future of work today

47:27

with the HP AI PC,

47:29

powered by the Intel Core

47:31

Ultra Processor. With the right

47:33

tools, work doesn't have to feel like work.

47:36

Learn more at

47:38

hp.com slash ai .pc.

47:44

And we're back here on Big Technology,

47:46

podcast Friday edition, breaking down all

47:48

of the week's news. Something under the

47:50

radar that's worth discussing is that

47:52

it looks like Shopify is going to

47:54

do some embedding within ChatGPT. Now,

47:56

I don't know if this is confirmed,

47:58

but it was reported all over

48:00

that there are some new, this is

48:02

from Twitter user, Aaron Rubin,

48:04

or X user, Aaron Rubin. There

48:07

are new code strings in ChatGPT's

48:09

public web bundle, including by now,

48:11

price and shipping fields, product offer

48:13

ratings, and a Shopify checkout URL,

48:15

which indicate that OpenAI is wiring

48:17

a native purchase flow within the

48:20

Assistant. So you could basically buy

48:22

directly within ChatGPT as opposed to

48:24

having it send you out to

48:26

a website. This seems natural. I

48:28

wrote to Shopify to try to

48:30

get some confirmation. I did

48:32

not hear back. Let's speculate. What

48:34

does this mean, Ranjan? I

48:36

think it's important. We've

48:39

already seen, though, that Perplexity has,

48:41

if you're a Perplexity Pro subscriber, they

48:44

have a checkout-within-app

48:46

shopping feature where you can

48:48

go through the entire flow.

48:51

That moving into chat GPT, I think,

48:53

is definitely important. I think the

48:55

fact that Shopify seems to be trying

48:57

to take a first mover advantage

48:59

in this is also important from their

49:01

side. I really wonder,

49:04

though, Do you,

49:06

are people going to shop within a

49:08

chatbot? And I think the entire retail

49:10

industry has been wondering this for a

49:12

while as well. Like, is that experience

49:14

of asking a question and being shown

49:16

a few products and then maybe asking

49:18

more questions about the product? Is that

49:20

how people want to shop? Because we've

49:23

been so conditioned to browsing

49:25

and like scrolling through and clicking

49:27

through products to product pages and

49:29

then going back. And like that

49:31

is how people shop. And it's

49:33

not such a

49:35

targeted, direct thing like

49:37

buying toilet paper on Amazon or

49:39

something like that; it's more of

49:41

an experience. So I guess the

49:43

way I would think about it is

49:45

like, it's one thing to like

49:47

go to a mall and walk around and browse

49:49

versus having a personal shopper that you just

49:51

talked to while you're sitting at your desk and

49:53

they go out and buy stuff for you. But

49:56

yeah, I think it's there is

49:58

definitely a large universe of transactions that

50:01

will work in this way. And

50:03

this is going to happen. I do

50:05

believe that. Just, whether it is

50:05

the predominant way people shop, I

50:09

don't know. I think it

50:11

could be. And it's going to sound crazy.

50:13

But let me give you a couple data

50:15

points here. So first of all, when you're

50:18

shopping on Amazon Prime, people

50:20

have become conditioned to just basically take Amazon's

50:22

choice and buy it. And that is because

50:24

they've had enough trust and enough positive experiences

50:26

within Amazon that they believe that they're going

50:28

to get the best deal on the internet

50:30

when they're on Prime and they don't need

50:32

to go to too many sites. I think

50:34

it's become a natural behavior. Now,

50:37

when you trust ChatGPT, when you're,

50:39

let's say you're married to ChatGPT,

50:41

no, just kidding. But

50:44

let's say you're talking to, I mean, maybe you are.

50:47

When you're talking to ChatGPT...

50:49

No judgment. Do your thing. Get

50:51

married to ChatGPT and then go shopping with

50:53

it. Buy nice things. Because the

50:56

joke is going to be on the rest

50:58

of us. But

51:00

when you have such a deep relationship with ChatGPT,

51:02

what are you going to do? You're going to

51:04

trust what it says the same way that you

51:06

trust Amazon Prime. and that trust

51:09

is going to make you want to, instead

51:11

of going to other websites, just buy

51:13

right within ChatGPT, that is going to

51:15

become a default behavior for lots of

51:17

people. It does look like they're building this

51:19

and all of a sudden shopping on

51:21

the web in the way that you describe

51:23

where you go page to page and

51:25

then make a decision after reading the reviews,

51:28

it's going to seem archaic. ChatGPT is

51:30

going to bring everything within the chatbot, show

51:32

you the reviews, show you the different

51:34

customer experiences, maybe even show you a video,

51:36

show you how the product looks in

51:38

your house, show you how the clothes looks

51:40

on your body, show you how the

51:42

watch looks on your wrist, show you how

51:44

the appliance looks in your kitchen, and

51:47

you will trust it and you will buy

51:49

from it. End of story. Take

51:51

it to the bank. Good God, I'm

51:53

sold. My God, are you? Do

51:56

you have a side startup going on and

51:58

running this? Because that was the greatest pitch I've

52:00

heard on this topic. I think

52:02

I'm bought in. I'm in. As

52:05

you can see, everything that has

52:07

been displayed on my virtual shelves

52:09

comes directly from ChatGPT. I'm just

52:11

kidding. No financial stake here, but

52:13

it does seem to me like

52:15

it's going to be a thing.

52:19

And I am curious what that means for Amazon.

52:21

I am going to have the head of Amazon

52:23

Prime on the show in a couple of weeks.

52:25

So maybe that's a question for him. It's a good

52:27

topic. And then how you

52:29

get into that conversation is

52:31

becoming a bigger and bigger topic,

52:33

I think, for all retailers.

52:36

Because again, SEO or search engine

52:38

optimization was how people got

52:40

their products discovered for the last

52:42

20, 25 years and became

52:44

like the most mature industry,

52:46

and now this changes everything. Like

52:49

how do I show up in Perplexity

52:51

results? How do I show up

52:53

in, uh, in chat GPT

52:55

results? My favorite part of this...

52:57

I'm going to throw

52:59

a couple of names by you because in

53:01

the space that I'm like pretty deeply

53:03

in right now, no one has agreed

53:05

on what this new world is called.

53:07

We have SEO as the classic term, but...

53:10

A couple of different options. GEO,

53:12

Generative Engine Optimization.

53:15

GAIO, Generative AI

53:17

Optimization. AAO,

53:20

AI Agent Optimization.

53:22

SGE, Search Generative

53:24

Experience. AIO, AI

53:26

Overview Optimization. And

53:28

last, LLMO, Large

53:30

Language Model Optimization. What

53:32

are you going with, Alex? I'm

53:34

going with: I'm angry at the

53:37

fact that some of these have even

53:39

been advanced in this conversation.

53:41

Let me start by striking

53:43

the ones that I find hideous.

53:45

Let's go. L-L-M-O,

53:47

take a hike, you're gone. You're

53:49

gone. It sounds terrible. It

53:51

sounds like a... I agree.

53:54

G-A-I-O. Gone.

53:56

Awful. Yeah. Awful.

54:00

AIO sounds like an insurance company. You're

54:02

gone. It might be. It actually

54:04

might be. Yeah. Are you protected for

54:06

anything that might happen to your

54:08

family? Try AIO. Oh, AI

54:10

overview optimization. All right, that's a no.

54:12

So what do you like?

54:14

I'm into GEO. It's like

54:17

SEO. It's gonna stick.

54:19

It's one letter off: generative engine optimization.

54:21

Now engine is a little

54:23

weird because we don't really say

54:25

anything like a generative engine

54:27

like we say search engine But

54:29

it's close to SEO, people

54:31

get it. It's gonna be GEO.

54:33

And I

54:36

think if what I talked

54:38

about with retail becomes a

54:40

thing that you shop within

54:42

ChatGPT, then GEO is gonna

54:44

be a massive field. You

54:46

gotta figure out your GEO

54:48

strategy ASAP because you gotta

54:50

get those results when we're

54:52

all married to ChatGPT and

54:54

shopping with it. For

54:57

it as well, potentially. Yeah,

54:59

exactly. The way to get AI on

55:01

your side: buy it nice things. Buy

55:03

nice things. It's time to

55:06

improve your strategy. Would

55:08

you get an AI that you're

55:10

in love with just like its own

55:12

set of GPUs? You'll never be

55:14

tired again. You'll never feel

55:16

exhausted. Showing my love for

55:18

you, I'm buying you this network

55:20

server rack from NVIDIA. That's

55:24

NVIDIA's new market. At

55:27

the Valentine's Day, say I love you

55:29

to the robot in your life with

55:31

an Nvidia server rack. It's like a

55:33

little decked out. It's like

55:35

the chips and the wiring are

55:38

a little nicer. Yeah, I

55:40

think, I mean, what else?

55:42

I don't know. That seems to be

55:44

the most relevant purchase that would make

55:46

it happy. Crazier things have

55:48

happened. Yeah,

55:50

Nvidia, it's your new growth

55:52

strategy. You're doing pretty well, but just think

55:55

about your five year plan. Jensen,

55:57

I hope you're listening to this. We're serious.

55:59

We're very serious about this. Okay,

56:02

so speaking of chapters of love and

56:04

hate, we had a very interesting moment

56:06

happen here in Washington DC this week.

56:09

So Meta, of course, is on

56:11

trial in an antitrust case

56:13

and who shows up, but Kevin

56:15

Systrom, the co-founder of Instagram,

56:17

who famously sold Instagram for

56:19

a billion dollars to Facebook back

56:21

in the day, he

56:23

comes in and testifies for the

56:25

prosecution. And he says, basically

56:27

Mark was not investing in Instagram

56:29

because he believed it was

56:31

a threat to their growth, their

56:33

meaning, Facebook's

56:35

growth. And Facebook apparently

56:37

had this buy-or-bury strategy,

56:39

which is basically you buy

56:41

the company or you try to

56:43

destroy them. And people

56:46

are saying that what they did to Instagram

56:48

was they bought and buried it. And

56:50

this is what Systrom says: we were by

56:52

far the fastest growing team. We produced

56:54

the most revenue and relative to what we

56:56

should have been at the time. I

56:58

felt like we should have been much larger.

57:01

And so, oh, he also talks about

57:03

Zuckerberg's emotion. He says, as the

57:05

founder of Facebook, he felt a lot

57:08

of emotion around which one was

57:10

better, meaning Instagram or Facebook. And I

57:12

think there were real human emotional

57:14

things going on there. Basically, Zuckerberg was

57:16

so tied to Facebook that he

57:18

hurt Instagram in service of trying to

57:20

make Facebook better. Let me

57:22

put out the counterargument here and

57:24

get your reaction. I

57:27

get you, Kevin. I

57:29

hear what you're saying, but if you look at

57:31

who ended up winning, Instagram ended

57:33

up winning. Instagram is the

57:35

app. Whatever Facebook did worked.

57:37

It's massive. It is,

57:39

I think, more used, maybe

57:41

not in sheer user numbers, but

57:44

certainly it's more culturally relevant

57:46

than the Blue app, and

57:48

it will outlast Facebook despite

57:50

Mark Zuckerberg's emotional attachment to

57:52

the latter. And so therefore,

57:55

I hear your testimony. However,

57:57

to me, it is not meaningful here,

57:59

even though Facebook may lose. It

58:02

was interesting to see your

58:04

perspective, but ultimately, I don't

58:06

think it really changes what

58:08

the court is going to rule

58:10

because it doesn't hold water

58:12

when you look at the results.

58:14

What do you think, Ranjan?

58:17

I actually completely agree. I'm a

58:19

strong believer that A

58:21

lot of what Meta has

58:23

done and has become is

58:25

definitely from an antitrust perspective

58:27

problematic. However, this

58:29

specific example, it

58:31

probably started that way. Well, if we separate

58:33

it, it could have definitely started that way.

58:35

There's been a lot of communication

58:37

that makes it feel that it

58:39

was a buy-or-bury type

58:41

action at the time. But

58:44

yeah, by 2018, Facebook

58:47

had so deeply integrated

58:49

Instagram into the Facebook experience

58:51

to grow it. I

58:54

remember vividly, like 2015-ish, starting

58:56

to see a lot of

58:59

non-tech or social media

59:01

forward friends all showing up

59:03

suddenly because they were getting

59:05

Facebook notifications or accidentally cross-posting.

59:08

Like they, I mean, he had brought

59:10

up how it was growing yet

59:12

they only had a thousand employees compared

59:14

to 35,000 employees at Facebook. But

59:16

you don't need that many

59:18

employees because it was the engine of

59:20

Facebook that was driving the growth.

59:22

So yeah, on this one, I

59:24

do not agree that that is

59:26

the thing that's going

59:28

to move the needle in terms

59:31

of like, has Facebook behaved problematically?

59:33

I do love that he goes

59:35

after, like, Zuckerberg's emotion

59:37

here. I mean I'm feeling a bit

59:39

cage-matchy between these two, Kevin and

59:41

Mark on this one because like

59:43

to be like you were just jealous

59:45

that we were growing and you

59:47

weren't, so you didn't give us resources.

59:49

Especially because that's not what was

59:51

happening. So to still call them out

59:53

on that, I

59:55

kind of want to see if we

59:57

get a reaction from Zuck on

59:59

if I get a Threads notification on

1:00:01

this one. You could. And also,

1:00:03

Just thinking about this a little bit

1:00:05

more deeply, you look

1:00:07

at Facebook's marquee acquisitions, Instagram

1:00:09

and WhatsApp, they're doing great. I

1:00:12

mean, they're doing better than Facebook Blue. WhatsApp

1:00:14

and Instagram are the future of this company. I

1:00:17

think at a certain point,

1:00:19

maybe I could be totally

1:00:21

wrong on this, but it does

1:00:23

feel like from a product

1:00:25

development standpoint, from just like

1:00:27

a quality of utility standpoint, I

1:00:30

don't want to say they gave up on Blue, but

1:00:32

like they're just kind of like, yeah, whatever. People

1:00:34

are going to still stick around some

1:00:36

number of people and it'll just kind of

1:00:39

degrade, content and all. And they're going

1:00:41

to stick around there. But to make beautiful

1:00:43

products that get more interesting and better,

1:00:45

let's work on Instagram and WhatsApp. That's what

1:00:47

it feels like from the outside, at

1:00:49

least. Definitely. Now

1:00:51

I'm going to drop the "however." Here it

1:00:53

is. However. However. So

1:00:55

I'm in DC this week for Semaphore's

1:00:57

World Economic Summit. I was able to

1:01:00

interview the CEO of Altice USA, Dennis

1:01:02

Mathew. It was an interesting conversation. We're

1:01:04

gonna put it up on YouTube

1:01:06

just about 15 minutes or

1:01:08

so, so brief, but being here

1:01:10

enabled me to get a

1:01:12

chance to spend time with the

1:01:14

Washington, DC creatures. And the

1:01:16

vibe here is that we're gonna

1:01:18

see breakups very likely of

1:01:20

Google and potentially of Facebook. And

1:01:22

the difference between Facebook

1:01:25

and Google is that, I

1:01:27

mean, Google's lost its antitrust cases,

1:01:29

but Google knew antitrust was

1:01:31

coming and was pretty buttoned up

1:01:33

in terms of its disclosures

1:01:36

and basically didn't have, like,

1:01:38

damning emails, you know, come

1:01:40

out in the case. Whereas Facebook

1:01:42

had no idea that this

1:01:44

would happen to it. And you're

1:01:46

seeing all these emails from

1:01:49

Zuckerberg spelling out this

1:01:51

buy-or-bury strategy, and he got caught.

1:01:53

So even if you could

1:01:55

say that the acquisitions haven't been

1:01:57

bad for competition. It's pretty

1:01:59

rough to see all this really

1:02:02

damning information about the way

1:02:04

that Facebook operated come out in

1:02:06

court. And when you're in a court,

1:02:09

sometimes those emails can sway a judge. And

1:02:12

Facebook could very well lose this

1:02:14

case the same way that Google

1:02:16

lost its cases. Google

1:02:19

for one is running out of

1:02:21

appeals. I think Google can appeal

1:02:23

the first case to the Supreme

1:02:25

Court, and that's it. And then

1:02:27

we go to

1:02:29

the remedy phase. So, very interesting

1:02:32

moment for big tech. They don't

1:02:34

have a lot of friends in

1:02:36

DC despite the money they've spent.

1:02:38

From what I understand, the administration

1:02:40

hates Facebook, really, really hates Facebook.

1:02:42

And despite Zuckerberg going to see

1:02:44

Trump. It doesn't seem like Trump

1:02:46

is gonna back off the

1:02:48

heat at all here. So

1:02:50

could be a very interesting moment. Like,

1:02:52

regulation has been on the back burner

1:02:55

for us. But could we see

1:02:57

breakups? I think the chances

1:02:59

are higher than I would

1:03:01

have ever imagined even a

1:03:03

couple months ago. You don't get

1:03:05

many bipartisan efforts or

1:03:07

beliefs and this certainly seems to

1:03:09

be the one. I think

1:03:11

the interesting part, from like

1:03:13

the legal perspective, and

1:03:16

like related to Kevin's testimony, is

1:03:18

it intent? Cause there's no doubt

1:03:20

in my mind. And I think the

1:03:22

emails all show that very clearly.

1:03:24

The goal was to remove competition from

1:03:26

the market. Like that was the

1:03:28

goal. What you do with

1:03:30

it after, do you integrate it

1:03:33

tightly with your existing product and make

1:03:35

it potentially your marquee product? Or

1:03:37

do you just sunset it and

1:03:39

kill it off? That's after the fact.

1:03:41

Like the goal was to remove

1:03:43

competition. But

1:03:45

the fact that they did not

1:03:47

end up killing Instagram, and

1:03:50

now it's a huge,

1:03:52

gigantic, influential product, is

1:03:54

that enough to say like, yeah, I said

1:03:56

buy and bury at the time, but look,

1:03:58

we didn't bury it. We bought it and

1:04:00

it's flourishing. Is that enough?

1:04:03

I'm not a lawyer, so I will

1:04:06

not be able to understand that. Yeah,

1:04:09

and I think one last point about

1:04:11

this. The earth is changing

1:04:13

beneath these companies' feet. It's like this

1:04:15

is the last battle, like we spoke about

1:04:17

last week. And now some

1:04:19

of the things that you would do in

1:04:21

these apps, you're going to spend time talking

1:04:23

to AIs instead of your friends. And

1:04:26

so even if it had

1:04:28

given the company a short term

1:04:30

competitive advantage, or even let's

1:04:32

say the Department of Justice ends

1:04:34

up splitting DoubleClick or

1:04:36

Google's ad network off of Google,

1:04:39

it's not going to make a big difference, I

1:04:41

think. What matters now is the battle

1:04:43

of today and that battle is artificial intelligence.

1:04:45

Thank you to the conscious

1:04:47

robots and large language models

1:04:50

that we cannot interpret for

1:04:52

bringing competition to the market

1:04:54

after about 12, 13 years,

1:04:56

maybe 20. 100%.

1:04:58

Well, thank you everybody for listening. Remember,

1:05:00

if you want your AI to love

1:05:02

you back, buy it some server racks. That's

1:05:05

all they want. Man.

1:05:07

If that happens and we put

1:05:09

the product links and some affiliate

1:05:11

codes, 5% of

1:05:13

$100,000 Valentine's Day presents,

1:05:16

not bad. Not a

1:05:18

bad model. That's our future business model here. I

1:05:20

think we're finding it on the fly. All

1:05:23

right, Ron, John, great to see you. Thanks so much for coming on.

1:05:25

All right, see you next week. See you

1:05:27

next week. And thank you, everybody, for listening. We'll

1:05:29

see you next time on Big Technology Podcast.
