Why Mark Zuckerberg wants to end the smartphone era

Released Wednesday, 25th September 2024

Episode Transcript

Transcripts are displayed as originally observed. Some content, including advertisements, may have changed.

6:00

We have lived through one

6:02

major platform transition already because we started

6:04

on web, right? Not on mobile. Mobile

6:07

phones and smartphones kind of got started

6:09

around the same time as Facebook

6:12

and kind of early social media was getting

6:14

started. So it didn't really get to play

6:16

any role in that platform transition, but going

6:19

through it, where we weren't born on mobile,

6:21

we kind of had this

6:23

awareness that, okay, web was a thing,

6:26

mobile is a thing, it is different. There

6:28

are strengths and weaknesses of it. There's

6:30

this continuum of computing where now you

6:33

have a mobile device that you can take with you

6:35

all the time and that's amazing, but

6:38

it's small, kind of

6:40

pulls you away from other interactions. Those things are not great.

6:43

But there was sort of this recognition that just

6:46

like there was the transition from computers to

6:48

mobile, mobile was not gonna be

6:50

the end of the line. So as soon as we

6:52

started becoming like a

6:55

more, I don't know, I

6:57

guess it's a stable company. Like once we found our

6:59

footing on mobile and we weren't like clearly gonna

7:01

go out of business or something like that, I was like, okay, let's

7:04

start planting some

7:06

seeds for what we think could be

7:09

the future, right? It's like mobile is already kind of

7:11

getting defined, you know, by 2012, 2014-ish, it was generally

7:13

too late to

7:17

really shape that platform in a meaningful way. I

7:19

mean, we had some experiments, I mean, they didn't

7:21

succeed or go anywhere. So pretty quickly, I was

7:23

like, okay, we should focus on the future because

7:25

just like there's a shift from desktop to mobile,

7:28

new things are gonna be possible in the future. So

7:30

what is that? I think the kind

7:33

of simplest version of it is

7:36

basically what you started seeing with Orion, right?

7:38

It's like the vision is a normal pair

7:40

of glasses that

7:42

can do two really fundamental things. One

7:45

is put holograms in the

7:47

world to deliver this realistic sense of presence,

7:49

like you were there with another person or

7:52

in another place, or maybe you're

7:54

physically with a person, just like we did, you can

7:56

pull up a virtual Pong game or, you know,

7:58

whatever. You can work on things together,

8:00

you can sit at a coffee shop, pull up your

8:02

whole workstation of different monitors, you know, you can be

8:05

on a flight or in the backseat of

8:07

a car and pull up a full screen

8:09

movie theater and like, okay, all these things, great

8:11

computing, full sense of presence, like you're there

8:13

with people no matter where they are. The

8:15

second thing is

8:18

it's the ideal device

8:21

for AI. And the

8:23

reason for that is because glasses are

8:25

sort of uniquely positioned for

8:28

you to be able to let them see what

8:30

you see and hear what you hear and

8:32

give you very subtle feedback back to you

8:34

where they can speak in your ear or

8:36

they can have silent input that kind of

8:38

shows up on the glasses that other people

8:40

can't see and doesn't take you away from

8:42

the world around you. And I think that

8:44

that is all going to be really profound.

8:47

Now, when we got started with this, I

8:49

had thought that kind of the hologram part

8:51

of this was going to be possible before

8:53

AI. So it's sort of an

8:55

interesting twist of fate that the AI part

8:57

is actually possible before the holograms are really

8:59

able to be mass produced at kind of

9:01

an affordable price. But that was sort

9:03

of the vision. I think that it's pretty easy to wrap your head around.

9:06

And there's already a billion to 2 billion people

9:08

who wear glasses on a daily basis. Just

9:10

like everyone who didn't have smartphones

9:12

were kind of the first people to upgrade to

9:14

smartphones. I think everyone who has glasses is pretty

9:17

quickly going to upgrade to smart glasses over the

9:19

next decade. And then I think it's going to

9:21

start being really valuable. And a lot of other

9:23

people who aren't wearing glasses today are going to

9:25

end up wearing them too. That's kind of the

9:27

simple version. And then I think

9:29

as we've developed this out, there

9:32

are all these sort of

9:34

more nuanced directions that have emerged too. So

9:36

we've started, while that was kind of the

9:38

full version of what we wanted to build,

9:41

there are all these things that we

9:44

said, okay, well, maybe it's really hard to

9:46

build normal looking glasses that can do holograms

9:48

at an affordable price point. So what parts of that

9:50

can we take on? And that's where we did the

9:53

partnership with EssilorLuxottica. So it's like, okay, before you

9:55

have a display, you can get

9:57

normal looking glasses that can have a camera, they can

9:59

have a microphone, great audio, can

10:01

capture content, you can stream video at this

10:03

point, but the most important feature at this

10:05

point is the ability to access

10:07

Meta AI and just kind of have kind

10:09

of a full AI there, multimodal, because it

10:11

has a camera. And I mean, that product is starting

10:14

at $300. And initially

10:16

I kind of thought, hey, this is sort

10:18

of on the technology path to building full

10:20

holographic glasses. At this

10:22

point, I actually just think both are gonna exist long-term. Right,

10:24

I think that there are gonna be people who

10:27

want the full holographic glasses. And I think that there are

10:29

gonna be people who prefer kind

10:31

of the superior form factor or lower

10:33

price of a device where they

10:35

are primarily optimizing for getting AI. I

10:38

also think there's gonna be a range of things in between,

10:40

right? So there's the full kind of field of view that

10:42

you just saw, right? 70 degrees, really wide

10:44

field of view for glasses. But I think

10:46

that there are other products in between that too. There's

10:49

like a heads up display version. For

10:51

that, you probably just need 20, 30 degrees. You

10:55

can't do kind of the full

10:58

kind of world holograms where

11:00

you're interacting with things. Like you're not gonna play

11:02

ping pong in a 30-degree field of view, but

11:05

you can communicate with AI. You

11:07

can text your friends. You can get directions.

11:09

You can see the content that

11:11

you're capturing. So I think that there's

11:13

a lot there that's gonna be compelling. And that's, I think, gonna

11:16

be at each step

11:18

along this continuum from display-less to small

11:20

display to kind of full holographic, you're

11:22

packing more technology in. So each step up is gonna

11:25

be a little more expensive,

11:28

is gonna have a little more constraints on the form

11:30

factor, even though I think we'll get them all to

11:32

be attractive. You'll be able to do the kind of

11:34

simpler ones in much smaller form factors permanently. And

11:36

then of course, there's the mixed reality headsets, which kind

11:39

of took a different direction, which is going towards the

11:41

same vision. But on

11:43

that, we said, okay, well, we're not gonna try to

11:45

fit into a glasses form factor for that one. We're

11:47

going to say, okay, we're gonna really go for

11:50

all the compute that we want. And we're

11:52

gonna say, okay, this is gonna be more of like a headset

11:54

or goggles form factor. And my guess is that that's

11:56

gonna be a thing long-term too, because there are a bunch of

11:58

uses where people want the full…

30:00

of like R and D and talking about it, but like these,

30:03

the Ray-Bans are kind of a

30:05

signifier of that. And I'm wondering if you agree with that. I

30:07

agree. I mean, I still think it's early. I

30:09

think you really want to be able to not

30:12

just ask the AI

30:15

questions, but ask it

30:17

to do things. Yeah. And

30:19

know that it's going to reliably

30:21

go do it. And we're

30:24

starting with simple things, right? So voice control

30:26

of your glasses, although you can do that

30:28

on phones too, things like reminders, although you

30:30

can generally do that on phones too. But

30:34

I think as the model capabilities grow over the

30:36

next couple of generations and you get more of

30:39

what people call these agentic

30:41

capabilities, I think it's going

30:43

to start to get pretty exciting.

30:46

For what it's worth, I also think that all the AI

30:48

work is going to make phones a lot more exciting. It's

30:50

the most exciting thing I think that has

30:52

happened to our family of apps

30:54

roadmap in a long time, is all of the different

30:57

AI things that we're building. So if I were at

30:59

any of the other companies, I think that if I

31:01

were trying to design what the next few versions of

31:03

iPhone or Google's phone should be, I

31:05

mean, I think that there's a long and

31:07

interesting roadmap of things that they

31:10

can go do with AI that as

31:12

an app developer, we can't. So I think that that's

31:14

like a pretty exciting and interesting thing for them to

31:16

go do, which I assume that they will. On the

31:18

AI social media piece, one of the wilder

31:20

things that your team told me you guys are going to

31:22

start doing is showing people

31:24

AI generated imagery personalized to

31:26

them in feed. I think it's

31:29

starting as an experiment, but like

31:31

if you're a photographer, you would

31:33

see Meta AI generating content that's maybe

31:35

personalized for you alongside content from

31:37

the people you follow. And

31:40

that's this idea that I've been thinking about of AI

31:42

kind of invading social media, so to

31:44

speak. I mean, maybe you don't like the word invading, but

31:46

you know what I mean. And what does that

31:48

do to how we relate to

31:51

each other as humans? Like how much AI stuff

31:54

and AI generated stuff is going to be filling

31:56

feeds in the near future in your view. Well,

31:58

here's how I come at this. So

32:00

in the history of running the company where

32:02

we've been building these apps for 20 years

32:05

every, call it, three to five

32:07

years there's some new major

32:10

format that comes along that is

32:14

typically additive to the experience, right? So, you know

32:17

initially people kind of updated their profiles. Then

32:19

they were able to post statuses that were

32:21

text and then links and you got photos

32:23

early on, then you added videos, then

32:26

with mobile, basically,

32:28

you know, Snap invented stories, the first version

32:30

of that and that became a pretty

32:33

kind of widely used format. The

32:36

whole version of short form videos

32:38

I think is sort of a still

32:40

ascendant format, but at

32:43

each step along the way you

32:45

want to, right?

32:48

It's like you keep on making the system richer

32:50

by having more different types of content that people

32:52

can share, and different ways to express themselves.

32:54

And when you look out for the next 10

32:56

years of, okay, if this trend

32:59

seems to happen where every three, five

33:01

years, whatever the pace is,

33:03

that there are new formats. I

33:05

think given the pace of change in the tech industry

33:07

I think you'd bet that that continues or accelerates and

33:10

I think you'd bet that probably

33:12

most of the new formats are going to be

33:14

kind of AI connected in some way given that

33:16

that's the kind of driving theme for the industry

33:19

at this point. So given

33:21

that kind of set of assumptions, we're

33:23

sort of trying to understand what

33:26

are the things that are most useful to people within

33:28

that? There's one vein of this which

33:30

is helping people and creators

33:32

make better content using AI that I

33:34

think is gonna be pretty clear, right?

33:36

Just make it like super easy for

33:39

like aspiring creators or advanced creators to make

33:41

much better stuff than they would be able

33:43

to otherwise. That can take the form

33:45

of, like, alright. Like my

33:48

daughter is writing a book and she

33:51

wants it illustrated and like we sit down

33:53

together and work with meta AI and imagine

33:55

to help her come up with images to

33:57

illustrate it. It's like, okay, that's like a

33:59

thing. Like, she didn't have the capability to

34:01

do that before. She's not a graphic designer, but now

34:03

she kind of has that ability. I

34:06

think that that's going to be pretty cool. Then I

34:08

think that there's a version where you have just

34:11

this great diversity of

34:13

AI agents that are a part of this

34:15

system. And this, I think is a big difference between

34:18

our vision of AI and most of the other companies

34:20

is, yeah, we're building Meta AI as

34:22

kind of the main assistant that you can

34:24

build. That's sort of equivalent to, you know,

34:26

the singular assistant that maybe, like, a

34:28

Google or an OpenAI or different folks

34:30

are building, but it's

34:32

not really the main thing that we're doing. You

34:35

know, our main vision is that we think that

34:37

there are going to be a lot of these,

34:39

right? It's every business, all the hundreds of millions

34:41

of small businesses, you know, just like they have

34:43

a website and an email

34:45

address and a social media account today, I

34:48

think that they're all going to have an AI that helps

34:50

them interact with their customers in the future that does some

34:52

combination of sales and customer support and all that. I

34:54

think all the creators are basically going to want some

34:57

version of this that basically helps them

34:59

interact with their community when they're just limited by their

35:01

own amount of hours in the day to interact with

35:03

all the messages that are coming in and they want

35:05

to make sure that they can show some love to

35:07

people in their community. And

35:09

those I think are just the two most obvious

35:11

ones that even if we just did those, that's

35:14

many hundreds of millions, but then there's going to

35:16

be all this more creative stuff that's UGC that

35:18

people create for better kind of wilder use cases

35:20

that they want. And our view is, okay, these

35:22

are all going to like live across these social

35:24

networks and beyond. I don't think

35:26

that they should just be constrained

35:28

to waiting until someone messages them. Right.

35:31

I think that they're going to have

35:33

their own profiles. They're going to

35:35

be creating content. People will be able to follow them if they

35:37

want. You'll be able to comment on their stuff. They

35:40

may be able to comment on your stuff if you're connected with

35:42

them. I mean, there will obviously be different, different logic and rules,

35:44

but that's one way that there's

35:46

going to be just a lot more

35:48

kind of AI participants in the kind

35:50

of broader social construct that we have.

35:53

And then I think you get to the test that you mentioned, which

35:55

is maybe the most

35:58

abstract, which is just having kind

36:00

of the central Meta

36:02

AI system directly

36:05

generate content for you

36:08

based on what we think is going to be

36:10

interesting to you and putting that in your feed.

36:12

On that, there's been this trend over time where

36:14

the feed started off as primarily

36:17

and exclusively content for

36:19

people you followed, friends. I

36:21

guess it was friends early on, then it kind

36:23

of broadened out to, okay, you followed a set

36:26

of friends and creators. And

36:28

then it got to a

36:30

point where the algorithm was good enough where we're actually

36:32

showing you a lot of stuff that you're not following

36:34

directly because in some ways that's a better

36:36

way to show you more interesting stuff than only constraining it to

36:38

things that you've chosen to follow. And I

36:42

think the next logical jump on that is like, okay,

36:44

we're showing you content from your friends and

36:46

creators that you're following and creators that you're

36:48

not following that are generating interesting things. And

36:51

you just add onto that a layer of, okay, and we're also

36:53

going to show you content that's generated

36:55

by an AI system that might be something

36:57

that you're interested in. Now, how big do

36:59

any of these segments get? I

37:01

think it's really hard to know until you build them out over

37:03

time, but it feels like it is

37:06

a category in the world that's going to exist and

37:08

how big it gets is kind of dependent on

37:11

the execution and how good it is. Why do

37:13

you think it needs to exist as a new

37:15

category? I'm still wrestling with why people

37:18

want this. I get the companionship stuff that

37:20

Character AI and some startups have already shown

37:22

there's like a market for, and you've talked

37:25

about how Meta AI is already being used for

37:27

role-playing. But the

37:29

big idea is that AI has been used to intermediate,

37:32

in feed, how humans reach

37:34

each other. And now all of a

37:36

sudden, AIs are going to be in feeds with us.

37:39

Well, I think the main difference. And that feels big.

37:42

Yeah, but in a lot of ways, the

37:44

big change already happened, which is people getting

37:46

content that they weren't following. The

37:49

definition of feeds in social interaction has

37:52

changed very fundamentally in the last 10 years. Now,

37:55

in social systems, most

37:58

of the direct interaction is happening

38:00

in more private forums, in messaging or

38:02

groups. This is one of the

38:04

reasons I think why we were late with Reels initially

38:07

to competing with TikTok is because we hadn't made this

38:09

mental shift where we kind of felt like, no, feed

38:11

is where you interact with people. Actually

38:14

increasingly feed is becoming a

38:16

place where you discover content that you then take

38:18

to your private forums and interact with people there.

38:20

So a lot of the way that I interact

38:22

with people, it's like, yeah, I'll still have the

38:24

thing where a friend

38:26

will post something and I'll comment on it

38:29

and engage directly in feed. Again, this is

38:31

additive, you're adding more over time. But

38:33

the main way that you engage with Reels isn't

38:36

necessarily that you go into the Reels comments and

38:38

comment and talk to people you don't know. It's

38:40

like, you see something funny and you send it

38:43

to friends in a group chat. I

38:45

think that that paradigm will absolutely continue

38:47

with AI and all kinds of interesting

38:49

content. It is

38:51

facilitating connections with people, but

38:55

I think already we're in this mode where

38:58

our connections through social media are shifting to

39:00

more private places and

39:02

the role of feed in the ecosystem is

39:04

more as a, I call it a discovery

39:06

engine of content, kind of, of icebreakers

39:09

or interesting kind of topic starters

39:11

for the conversations that you're having

39:13

across this broader spectrum of places

39:16

where you're interacting. Do you worry

39:18

about people interacting with AIs like

39:20

this, making people less

39:22

likely to talk to other people, like it

39:24

reducing the engagement that we have with humans?

39:28

I mean, the sociology that I've

39:30

seen on this is that most

39:32

people have way fewer friends physically than they

39:34

would like to have. I think people cherish

39:36

the kind of human connection that they have

39:38

and the more we can do to make

39:41

that feel more real and give you more

39:43

reasons to connect, whether it's through something funny

39:45

that shows up so you can message someone

39:47

or a pair of glasses that lets your

39:50

sister show up as a hologram in your living room when

39:52

she lives across the country and you wouldn't be able to

39:54

see her otherwise. That's always kind of

39:56

our main bread and butter in the thing that we're doing. In

40:00

addition to that, I mean,

40:03

if the average person, I think, maybe

40:05

they'd like to have 10 friends and I

40:08

mean, there's the stat that it's sort of

40:10

sad, but I think the average

40:13

American feels like they have fewer than three

40:15

real kind of close friends. So does

40:18

this take away from that? My guess is

40:20

no. I think that what's

40:22

gonna happen is it's going to help

40:25

give people more of the support that

40:27

they need and give people

40:29

more kind of reasons and ability to connect

40:31

with either a broader range of people or

40:33

more deeply with the people that they care

40:35

about. We

40:38

need to take another quick break. We'll be right back. What

40:49

is AI actually for? Everybody is

40:52

talking about AI. Wall Street is

40:54

obsessed with AI. AI will save

40:56

us. AI will kill us. It's just

40:58

AI, AI, AI, everywhere you turn. But

41:01

what does any of that actually mean, like

41:03

in your life right now? That's

41:05

what we're currently exploring on the VergeCast. We're

41:08

looking for places and products where AI might

41:10

actually matter in your day-to-day life. And I

41:12

don't just mean for making your emails less

41:14

mean, though I guess that's good too. Lots

41:17

of big new ideas about AI this month on

41:19

the VergeCast, wherever you get podcasts. I'm

41:23

Anu Subramanian from Vox Media. While

41:25

I see them all around the city, I've

41:27

never ridden in an autonomous vehicle myself. I

41:30

do have some questions about the tech. You may

41:32

as well. Hello,

41:34

from Waymo. This experience may feel

41:36

futuristic. This is so cool. Vox

41:41

and Waymo teamed up for an in-depth

41:43

study about AV perception. And what they

41:45

found was that as people learned more

41:47

about Waymo, their interest in choosing one

41:49

over a human-driven vehicle almost doubled. Person

41:52

approaching. Waymo can see 360 degrees and

41:54

up to 300 meters away, which

41:57

helps it obey traffic laws and get you where you're

41:59

going safely. Swiss

42:01

Re found that compared to human drivers, Waymo

42:04

reported 100% fewer injury claims and 76%

42:08

fewer property damage claims. And

42:10

speaking of safety, folks identifying as

42:12

LGBTQIA and non-binary showed the highest

42:15

interest in AVs, and women showed

42:17

the greatest increase in interest after

42:19

learning more. Arriving

42:22

shortly at your destination. So

42:25

that actually felt totally normal. AVs

42:28

are here, and the more you know, the more exciting

42:30

this tech becomes. You

42:32

can learn more about Waymo, the world's

42:34

most experienced driver, by heading to waymo.com.

42:40

When it comes to business, you know this

42:42

podcast has you covered. But who do you

42:44

turn to when you need smart financial decisions?

42:47

If your answer is our sponsor, NerdWallet, then

42:49

you're absolutely right. And if it's not,

42:51

allow us to change your mind. Not

42:54

only have the nerds over at NerdWallet spent thousands

42:56

of hours researching and reviewing over 1,300 financial

42:59

products, but they have the tools you need

43:01

to make smarter decisions. Looking

43:03

for a credit card? Go beyond the basic

43:05

comparisons. At NerdWallet, you can filter for the

43:08

features that matter to you and read in-depth

43:10

reviews. Ready to choose

43:12

a high-yield savings account? Get access to

43:14

exclusive deals and compare rates, bonuses, and

43:16

more. House hunting? View

43:18

today's top mortgage rates for your home sweet

43:20

home. Make the nerds

43:23

your go-to resource for smart financial decisions.

43:25

Head to nerdwallet.com/learn more.

43:29

NerdWallet. Finance smarter. NerdWallet Compare

43:31

Incorporated. NMLS 1617539. We're

43:42

back with Meta CEO Mark Zuckerberg, talking about the

43:44

current state of Threads and why the company

43:46

is trying to back out of politics. How

43:52

are you feeling about how Threads is doing

43:54

these days? I mean, Threads is on fire. It's

43:56

great. I mean, these things, it's

43:58

like there's only so quickly that something can get to a billion people. So

44:00

it's going to, you know, we'll kind of keep

44:02

on pushing on it over time. I've heard it's still

44:04

using Instagram a lot for growth. Like I'm, I

44:07

guess I'm wondering when do you see it getting

44:09

to like a standalone growth driver on its

44:12

own? I mean, I think that these things

44:14

all connect to each other. I mean, I think Threads

44:16

helps Instagram. I think Instagram helps Threads. I don't know

44:18

that we have some strategic goal, which is like make

44:21

it so that Threads is completely disconnected from Instagram or Facebook.

44:23

I actually think we're going the other direction. It started off

44:25

just connected to Instagram and now we also connected it so

44:27

that content can show up. You know, taking a step back,

44:30

I mean we just talked about how for

44:32

most people they're interacting in more private forums. If

44:34

you're a creator, what you want to do is

44:36

have your content show up everywhere, right? Because you're

44:39

trying to build the biggest community that you can

44:41

in these different places. So it's this huge value

44:44

for people. If they can generate a Reel

44:46

or a video or some text-based content,

44:48

and now you can post it in Threads,

44:50

Instagram, Facebook, um, and more places

44:52

over time. So I think the direction there is

44:55

generally kind of more flow, not

44:57

less and kind of more interoperability. And that's,

44:59

that's why I've been kind of pushing on

45:01

that as a theme over time. I'm not

45:03

even sure what X is anymore, but I

45:05

think what it used to be and what

45:07

Twitter used to be was a place where

45:09

you went when news was happening. I know

45:11

you and the company seem to be distancing

45:13

yourself from recommending news, but with

45:15

Threads, it feels like that's

45:18

what people want and people thought Threads might

45:20

be, but it seems like you all are

45:22

intentionally saying we don't want Threads to actually

45:25

be that. Yeah. I mean, it's

45:27

interesting. There are different ways to look at

45:29

this. I always looked at Twitter, not as

45:31

primarily about real time news, but

45:33

as a kind of short form,

45:36

primarily text discussion-oriented app. To

45:38

me, the fundamental defining aspect of

45:40

that format is that you

45:42

don't, when you make a post, the

45:45

comments aren't subordinate to the post. The

45:47

comments are kind of at a peer level, and

45:50

that is a very different

45:52

architecture than every other type

45:54

of social network that's out there. It's

45:56

a subtle difference, but within these

45:58

systems, these subtle differences lead to very

46:01

different emerging behaviors. So because of that, people

46:03

can take, they can fork discussions, and

46:06

it makes it a very good discussion-oriented platform. Now, news

46:08

is one thing that people like discussing, but it's not

46:10

the only thing. And I always looked at Twitter and

46:12

I was like, hey, this

46:14

is such a wasted opportunity. Like this is

46:17

clearly a billion person app. You

46:19

know, maybe in the modern day when you have

46:21

multiple, like many billions of people using social apps,

46:23

it should be multiple billions of people. For whatever

46:25

reason, I mean, there are a lot of things

46:27

that have been complicated about Twitter and the corporate

46:30

structure and all that, but they just weren't quite

46:32

getting there. And eventually, I kind

46:36

of thought, hey, I think we can

46:38

do this, right? I think we can get this,

46:40

like build out the discussion platform in

46:43

a way that can get to a billion people and

46:45

be more of a ubiquitous social platform

46:47

that I think achieves the, like

46:50

its full potential. But our

46:52

version of this is we want it to be a kinder place.

46:55

We don't want it to start with kind of the

46:57

like direct kind of head

46:59

to head combat of news and especially

47:01

politics. And do you feel

47:05

like that constrains the growth of the product at all? I mean,

47:07

I think we'll see. Or that that needs to exist in the

47:09

world. Because I feel like with X's

47:11

seeming implosion, it's not really existing anymore.

47:13

Maybe I'm biased to someone in the

47:15

media, but I do think people want,

47:18

when something big happens in the world, they want an

47:20

app that they can go to and see everyone that

47:22

they follow talking about it immediately. Yeah,

47:24

well, we're not the only company. And

47:27

there's like a ton of different competitors

47:29

and different companies doing things. And I

47:32

mean, I think that there's a talented team over at Twitter

47:34

and X and I wouldn't write them off.

47:37

And then obviously there's all these other folks. There's a

47:39

lot of startups that are doing stuff. So I don't

47:41

feel like we have

47:44

to go at that first. I think that

47:46

like maybe we get there over time or maybe

47:48

we decide that it's enough

47:50

of a zero sum trade or maybe even a

47:52

negative sum trade where like that use case should

47:54

exist somewhere. But maybe that use

47:56

case prevents a lot more

47:59

usage, kind of a lot more

48:01

value in other places because it makes it a somewhat

48:03

less friendly place. I don't think we

48:05

know the answer to that yet, but I do think the

48:07

last 10 years, eight years

48:10

of our experience has been that

48:12

the political discourse, it's

48:14

tricky, right? It's on the one hand, it's

48:16

obviously a very important thing in society. And

48:19

on the other hand, I don't think it

48:21

leaves people feeling good. So I'm torn between

48:23

these two values. On the one

48:25

hand, I think like people should be able to

48:27

have this kind of open discourse and that's good.

48:29

On the other hand, I don't want to design

48:32

a product that makes people angry. Right?

48:34

It's like, I mean, there's an informational lens

48:36

for looking at this and there's

48:38

kind of a, you're designing a product and like, what's

48:40

the feel of the product, right? It's like, I think

48:43

anyone who's designing a product cares a lot about how

48:45

the thing feels and- But

48:48

you recognize the importance of that discussion happening

48:50

in the world. I think it's useful. And

48:52

look, we don't block it. We just

48:54

make it so that, for the content

48:56

where you're following people, if you want to talk to your

48:58

friends about it, if you want to talk to them about

49:00

it in messaging, there can be groups about it. If

49:03

you follow people, it can show up in your feed. But we

49:05

don't go out of our way to recommend

49:08

that content when you're not following it. And

49:10

I think that that

49:12

has been a healthy balance for us and

49:14

for getting our products to generally feel the

49:17

way that we want. And

49:19

culture changes over time. Maybe this stuff will

49:22

be like a little bit less polarized and

49:24

anger inducing at some point. And maybe it'll

49:27

be possible to have more of that while

49:29

also at the same time having a product

49:31

where we're proud of how it feels. But

49:34

until then, I think we want to design a product

49:36

that, yeah, people can get the things that they want,

49:39

but fundamentally I care a lot about

49:41

how people feel coming away from the

49:44

products. Do you see this decision to

49:46

downrank political content for people who aren't

49:48

being followed in feed as a political

49:50

decision, I guess? Because I don't

49:53

know. You're also at the same time, not

49:56

really saying much about the election this year. You're not

49:58

donating. You've said you... kind of want to stay out

50:00

of it now. And I see

50:02

the way the company is acting and it

50:04

reflects your personal kind of way you're operating

50:07

right now. And I'm wondering like how

50:09

much more of it is also about what you and the

50:11

company have gone through and the

50:14

political environment and not necessarily just what users

50:16

are telling you. Like, is there

50:18

a through line there? I mean, I'm sure it's all

50:20

connected. I think in this case, it wasn't a trade-off

50:22

between those two things because this actually was what our

50:24

community was telling us. And people were saying generally, we

50:27

don't want so much politics. Like

50:30

this isn't, you know, like we don't feel good. Like we want

50:32

content, we

50:35

want more stuff from our friends and family. We want more stuff

50:37

from our interests. That was kind

50:39

of the primary driver. But I think

50:41

it's definitely the case that our corporate

50:43

experience on this shaped this. And

50:45

I think there's something, there's a

50:48

big difference between something being political and being partisan.

50:51

And the main thing that I care

50:53

about is making sure that we

50:55

can be seen as a

50:57

nonpartisan and as much as something

50:59

can in the world in 2024 be

51:02

sort of like a trusted institution by

51:04

as many people as possible. And I

51:07

just think that the partisan politics is so

51:09

tough in the world right now that

51:12

I've made the decision that I kind of feel like for

51:14

me and for the company, the best thing

51:16

to do is to try to be as nonpartisan as

51:18

possible in all of this and kind

51:20

of be as neutral and distance ourselves as much

51:23

as possible. And it's not just the substance. I

51:25

also think the perception matters. So

51:27

that's why maybe it doesn't matter

51:29

on our platforms, whether I endorse a candidate or

51:31

not, but I don't wanna go anywhere near that.

51:35

And yeah, sure. I mean, you

51:37

could say that's a political strategy, but

51:39

I think for where we are

51:41

in the world today, it's very

51:43

hard. Almost every institution has

51:46

become partisan in some way. And

51:48

we are just trying

51:51

to resist that. And maybe I'm too

51:53

naive and maybe that's impossible, but we're

51:55

gonna try to do that. On the

51:57

acquired podcast recently, you said that the

51:59

political miscalculation was a 20-year mistake.

52:01

Yeah, from a brand. From a brand perspective.

52:04

And that it was gonna take another 10

52:06

or so for you to fully work through

52:08

that cycle. Yeah, yeah. What makes you think

52:10

it's such a lasting thing? Because you look

52:13

at like how you personally have kind

52:15

of evolved over the last couple of years and I think

52:17

perception of the company has evolved and I'm wondering

52:19

like what you meant by saying it's gonna take another

52:21

10 years. I'm just talking about

52:23

where our brand is and

52:25

our reputation are compared to where I

52:28

think they would have been. There's no

52:30

doubt that even now here and okay,

52:32

yeah, sure. Maybe things have improved somewhat over the

52:34

last few years. You can feel the trend, but

52:37

it's still significantly worse than it was in 2016. You

52:40

know, it's I mean the internet industry overall

52:43

and I think our company in particular

52:46

just were seen way more positively.

52:49

And now look, there

52:51

were real issues, right? So I

52:54

think that it's always very difficult to talk about this stuff in

52:56

a nuanced way because I think

52:58

to some degree before 2016, everyone was sort of too

53:00

rosy about the internet overall and didn't talk enough about

53:02

the issues and then the pendulum sort of swung and

53:04

people only talked about the issues and didn't talk about

53:06

the stuff that was positive and it was all both

53:09

there the whole time. So when I talk about this,

53:11

I don't mean to come

53:13

across as simplistic or like

53:15

you guys didn't do anything wrong or anything like that.

53:18

There weren't issues with the internet or things like that.

53:20

I mean, obviously every year, whether it's politics or other

53:22

things, there are always things that you look back at

53:24

and you're like, Hey, yeah, like if I were playing

53:26

this perfectly, I would have done these things differently. And

53:29

but I do think it's the case that I

53:32

didn't really know how to react to something as big

53:34

of sort of a shift in the world as what

53:36

happened. And it took me a while to

53:38

find my footing. And I do

53:40

think that it's tricky when

53:42

you're caught up in these kind of big debates

53:44

and you're not kind of

53:46

experienced or sophisticated in engaging with that. I

53:49

think you can make some big missteps.

53:51

And I do think that some of the things that

53:53

we were accused of over time, it just, you know,

53:55

I think it's just been pretty clear at this point,

53:57

you know, now that all the investigations have been done

53:59

that they

54:01

weren't true. And- You're

54:03

talking about Cambridge Analytica. I think Cambridge Analytica

54:06

is a good example of something that it's

54:08

like people thought that all this data had

54:10

been taken and that it had been used

54:12

in this campaign. And I think it turns

54:14

out it wasn't. It turns out it wasn't.

54:16

Yeah, so it's all the stuff. Okay, and

54:18

the data wasn't even accessible to

54:20

the developer. And we'd fixed the issue five years

54:22

ago. So in the moment, it was

54:24

really hard for us to have

54:26

a rational discussion about that. And

54:28

I think part of the

54:31

challenge is that for the general

54:33

population, I think a lot of people, they read the

54:35

initial headlines and they don't necessarily

54:37

read the, and

54:41

frankly, a lot of the media, I don't think

54:43

was like as loud to write about when all

54:45

the investigations concluded that said that like a lot

54:47

of the initial allegations were just completely wrong. So

54:49

I think that's like a, that's a real thing.

54:52

So, okay, so you take these hits. I

54:54

didn't really know how to kind of

54:56

push back on that. And maybe some of

54:58

it you can't, but I like

55:01

to think that I think we could have played some of

55:03

the stuff differently. And I do think

55:05

it was certainly the case that when you

55:07

take responsibility for things that are not your

55:09

fault, you become sort

55:11

of a weak target for people

55:14

who are looking to blame other things and

55:16

find a target for them. It's sort of

55:18

like, this is a different part of, it's

55:21

somewhat related to this, but when you think about like, like

55:24

litigation strategy for the company. One of the

55:26

reasons why I hate settling lawsuits is

55:28

that it basically sends a signal to

55:30

people that, hey, this is a company that

55:33

settles lawsuits. So maybe like we can sue

55:35

them and they'll settle lawsuits. So- So

55:38

you wouldn't write a blank check to the government

55:40

like Google did for its antitrust case. No, I

55:42

think the right kind of way

55:44

to approach this is when you believe in something,

55:47

you fight really hard for it. And I think

55:49

this is a repeat game. This isn't like, it's

55:52

not like there's a single issue and we're

55:54

going to be around for a long time. And I

55:56

think it's really important that people know that

55:59

we're a company that has conviction and that

56:01

we believe in what we're doing and we're

56:03

gonna back that up and defend ourselves. And

56:05

I think that that kind of sets the

56:07

right tone. Now, I

56:10

think over the next 10 years, I think we're

56:12

sort of digging ourselves back to neutral on this,

56:14

but I like to think that if we hadn't had a lot

56:16

of these issues, we would have made progress over the last 10

56:18

years too. So I sort of give it this timeframe, maybe 20

56:20

years is too long, maybe it's 15, but it's

56:23

hard to know with politics. It feels like mental health

56:25

and youth mental health may be the next wave of

56:27

this. That I think is the next big fight. And

56:29

on that, I think a lot of the data on

56:31

this, I think is just not

56:34

where the narrative is. The narrative, yeah, I

56:36

think the narrative is a lot of people

56:38

sort of take it as if it's like

56:40

an assumed thing that there's some link. And

56:42

like, I think the majority of the

56:45

high quality research that's out there suggests that there's

56:47

no causal connection, like, at a kind

56:49

of a broad scale between these things. So, no,

56:51

look, I mean, I think that that's different from

56:53

saying like in any given issue, like is someone

56:55

bullied? Should we try to stop bullying? Yeah, of

56:57

course. But yeah, overall,

56:59

I think that this is one

57:01

where there

57:04

are a bunch of these cases. I think that there will be a

57:06

lot of litigation around them. And it's

57:08

one where we're trying to make sure

57:10

that the academic research that shows something

57:13

that I think is, to me, it sort of

57:15

fits more with

57:19

what I've seen of how the platforms operate, but

57:21

it's counter to what a lot of people think.

57:23

And I think that that's gonna be a reckoning

57:25

that we'll have to have is basically as

57:28

the kind of the majority of the high

57:30

quality academic research gets shown is

57:32

like, okay, can people accept this? And I think that's

57:35

gonna be a really important set of debates over the

57:37

next few years. At the same time, you guys have

57:39

acknowledged there are affordances in the product, like the teen rollout

57:41

with Instagram recently that you can make to make

57:43

the product a better experience for young

57:45

people. Yeah, and I think this is an interesting

57:47

part of the balance is you

57:51

can play a role in trying to make something

57:53

better, even if the thing wasn't

57:55

caused by you in the first place. There's

57:57

no doubt that being a parent is

58:00

really hard. And there's a big

58:02

question of in

58:05

this internet age where we have

58:07

phones, what are the right

58:09

tools that parents need in order

58:11

to be able to raise their

58:13

kids? And like, I

58:16

think that we can play a role in

58:18

giving people controls, parental controls, over the

58:20

apps. I think that parental controls are

58:22

also really important because parents have different ways that they wanna

58:24

raise their kids. Or just like schooling

58:27

and education, people have very significantly

58:29

different local preferences for how they wanna raise

58:31

their kids. I don't think that

58:33

most people want some internet

58:36

company setting all the rules for this

58:38

either. So obviously, when there

58:40

are laws passed, we'll kind of

58:42

follow the government's direction and

58:44

the laws on that. But I actually think

58:47

the right approach for us is

58:49

to primarily kind of

58:52

align with parents to give them the tools that they want

58:54

to be able to raise their kids in the way that

58:56

they want. And some

58:58

people are gonna think that more technology use is good.

59:00

That's sort of how my parents raised

59:03

me growing up. I think it worked pretty well. Some

59:05

people are gonna be kind of more, you

59:08

know, wanna limit it more. And we wanna give them the tools to be able

59:10

to do that. But I don't think that this is primarily

59:12

or only a social media thing. Even

59:16

the parts of this that are technology. Age

59:18

verification. I think phones, like the phone platforms have

59:21

a huge part of this. I mean, it's, yeah,

59:23

there's this big question of how do you do

59:25

age verification? I mean, I

59:27

can tell you what the easiest way is, which is

59:29

like, all right, like every time you go do a

59:31

payment on your phone, I mean, there already is child,

59:33

you know, basically like child age verification. So I

59:36

don't really understand, well, I guess I

59:38

understand, but I think it's not very,

59:40

you know, excusable from my perspective why

59:43

Apple and I guess to

59:45

some extent, Google don't wanna just

59:47

extend the age verification that they already have

59:49

on their phones to be a parental control

59:51

for parents to basically be able to say,

59:53

you know, what apps can my kids use?

59:55

It's hard for me to not see the

59:57

logic in it either. I don't, I don't.

1:00:00

I think they don't wanna take responsibility. But

1:00:03

maybe that's on Congress then to pass who

1:00:05

has to take responsibility. Yeah. Yeah.

1:00:07

And we're gonna do our part and we're

1:00:09

gonna build the tools that we can for

1:00:11

parents and for teens. But

1:00:14

at the end of the day, and

1:00:16

look, I'm not saying it's all the phone's fault

1:00:18

either. Although I would say that like the ability

1:00:20

to get push notifications and get kind of distracted

1:00:23

is from my perspective seems like

1:00:25

a much greater contributor to mental health issues than

1:00:28

kind of a lot of the specific apps. But

1:00:30

there are things that I think everyone should kind

1:00:32

of try to improve and work on. But

1:00:35

yeah, I mean, that's sort of my view on all of that.

1:00:38

I guess on the regulation piece, as it

1:00:41

relates to AI, you've been very

1:00:43

vocal about what's happening in the EU. And

1:00:45

you recently signed an open letter and

1:00:47

I believe it was saying basically that

1:00:49

you guys just don't have clarity on

1:00:52

consent for training, how it's supposed to

1:00:54

work. And I'm wondering

1:00:57

what you think needs to happen there for things to

1:00:59

move forward because, like, Meta AI is not available in

1:01:01

Europe, new Llama models are not. Is

1:01:04

that something you see getting resolved

1:01:06

at all, I guess, and

1:01:09

what would it take? Yeah, I

1:01:11

don't know. It's a little hard for

1:01:13

me to parse the European politics. I

1:01:15

have a hard time enough with American

1:01:18

politics. I mean, it's, and I'm American,

1:01:20

but in theory, my

1:01:22

understanding of the way this is supposed to work is

1:01:24

they kind of passed this

1:01:26

GDPR regulation. And

1:01:30

you're supposed to have this

1:01:32

idea of sort of a one-stop

1:01:35

shop, like home regulator,

1:01:37

who can basically on behalf of the whole EU interpret

1:01:41

and enforce the rules. We

1:01:43

have our European headquarters and

1:01:46

we work with that regulator and I

1:01:48

think they're like, okay, they're pretty tough on

1:01:50

us and pretty firm. But at least

1:01:52

when you're working with one regulator, you can kind of understand

1:01:55

how are they thinking about things and you

1:01:57

can make progress. And the thing that

1:01:59

I think has been... tricky is

1:02:01

there have been, from my perspective,

1:02:03

a little bit of a backslide where

1:02:06

now you get all these other DPAs

1:02:08

across the continent, sort of

1:02:10

also intervening and trying to do things and

1:02:12

it just seems like more of a kind

1:02:14

of internal EU political thing, which is like,

1:02:16

okay, do they want to have this one

1:02:19

stop shop and have clarity for companies so

1:02:21

companies can kind of like can execute or

1:02:24

do they just want it to be this kind

1:02:26

of very complicated regulatory system? And

1:02:28

look, I think that's for them to sort out, but I think

1:02:30

that there's no doubt that when you have like dozens

1:02:33

of different regulators that can ask you kind

1:02:35

of the same questions about different things, it

1:02:37

makes it a much more difficult environment to

1:02:39

build things. Do you understand

1:02:41

that that's just us? I think that that's all

1:02:43

the companies. But do you understand the concern people

1:02:45

have about training data and how

1:02:47

it's used and this idea

1:02:49

that their data is being used for

1:02:52

these models, they're not getting compensated and

1:02:54

the models are creating a lot of value. And

1:02:56

I know you're giving away llama, but you're

1:02:58

still, you've got med AI and I

1:03:00

understand the frustration that people have about that. I

1:03:03

think it's a naturally bad feeling to be like,

1:03:05

oh, my data is now being used in a

1:03:07

new way that I have no control or compensation

1:03:09

over. Do you sympathize with that? Yeah. I

1:03:12

mean, I think that in

1:03:14

any new medium in technology, there's

1:03:16

like the concepts around fair use

1:03:18

and like where the boundary is between

1:03:21

what you have control over, but when you put

1:03:23

something out in the world, to what degree do

1:03:25

you still get to control it and kind of

1:03:27

own it and license it? I think

1:03:29

that all these things are basically going to need to

1:03:31

get relitigated and

1:03:34

rediscussed in the AI era.

1:03:36

So, I get it. I think that

1:03:38

these are important questions. I think this is not

1:03:40

like a completely novel thing to AI in the

1:03:43

grand scheme of things. I think it was a

1:03:45

lot of them, there were questions about it with

1:03:47

the internet overall too and with different technologies over

1:03:49

time. But I think getting to

1:03:51

clarity on that is going to be important

1:03:53

so that way the things that society

1:03:56

wants people to build, they can go

1:03:58

build. What does that look like to you there?

1:04:01

I mean, I think it starts with having some framework

1:04:03

of like, okay, what's the process going to be if

1:04:06

we're working through that? But

1:04:08

you don't see a scenario where creators get

1:04:10

like directly compensated for the use of their

1:04:12

content in models? I think that there's a lot

1:04:14

of different possibilities for how stuff goes in

1:04:17

the future. Now I do think

1:04:19

that there's this issue, which is a lot of,

1:04:21

like, while psychologically I understand what you're saying, I

1:04:25

think individual creators or

1:04:27

publishers tend to

1:04:29

overestimate the value of

1:04:32

their specific content. Right, so it's

1:04:34

like, okay, maybe like, in

1:04:36

the grand scheme of this, right? So

1:04:39

we have this set of challenges with, you know, news

1:04:41

publishers around the world, which is like, okay, they're

1:04:43

like a lot of folks are constantly kind

1:04:45

of asking to be paid for the

1:04:47

content. And on the other hand, we have our community, which

1:04:50

is asking us to show less news because it makes them

1:04:52

feel bad, right? And I mean, we talked about that. So

1:04:54

it's like, there's this issue, which is, okay,

1:04:56

it's like, actually, we're showing some amount of

1:04:58

the news that we're showing because we think

1:05:00

it's socially important against what our

1:05:03

community wants. Like if we were actually just following what

1:05:05

our community wants, we'd show even less than we're showing.

1:05:08

And you see that in the data that people just

1:05:10

don't like to engage with this stuff. We've had these

1:05:12

issues where sometimes, like, publishers

1:05:14

say, okay, if you're not gonna pay us, then

1:05:16

pull our content down. And it's just like, yeah,

1:05:18

sure, fine, pull your content down. I mean, that

1:05:21

sucks. I'd rather people be able to share it,

1:05:23

but to some degree, some

1:05:25

of these questions have to

1:05:27

get tested by their

1:05:30

negotiations and they have to get tested

1:05:32

by people walking. And then

1:05:34

at the end, once people walk, you

1:05:36

figure out where the value

1:05:39

really is. If it really is

1:05:41

the case that news was a big thing

1:05:43

that the community wanted, then, I mean, look,

1:05:45

we're a big company. We could probably,

1:05:48

we pay for content when it's valuable to

1:05:51

people. We're just not gonna pay for content when it's not valuable

1:05:53

to people. So I think

1:05:55

that you'll probably see a

1:05:57

similar dynamic with AI, which is...

1:05:59

is my guess is that there are

1:06:01

going to be certain partnerships that get made when content

1:06:03

is really important and valuable. And

1:06:05

I guess that there's probably a lot of people who

1:06:08

kind of have a concern about

1:06:10

like the feel of it, like you're saying, but

1:06:13

then when push comes to shove, if

1:06:15

they demanded that we don't use their content,

1:06:18

then we just wouldn't use their content. And

1:06:20

it's not like, you know, that's going to

1:06:22

change the outcome of the stuff that much.

1:06:24

To bring this full circle where we started

1:06:26

as you're building augmented reality glasses and

1:06:28

what you've learned over just the societal implications of

1:06:30

the stuff you've built over the last decade.

1:06:34

How are you thinking about this as

1:06:36

it relates to glasses at scale, because you're

1:06:38

literally going to be augmenting reality, which is

1:06:40

it's a responsibility. I think that's going to

1:06:42

be another platform too. And you're going to

1:06:44

have a lot of these questions as well.

1:06:46

I mean, I think the interesting thing about

1:06:49

holograms and augmented reality is it's going to

1:06:51

be this intermingling of the physical and digital

1:06:53

much more than we've had in other platforms

1:06:56

or on your phone. It's like, okay, yeah, we

1:06:58

live a primarily physical world, but then you have

1:07:00

the small window into this digital world. And I

1:07:02

think we're going to basically have this world in

1:07:04

the future that is increasingly,

1:07:07

you know, call it half physical, half digital,

1:07:10

or I don't know, 60% physical, 40% digital.

1:07:13

And it's like going to be blended together. And I think that

1:07:15

there are going to be a lot

1:07:17

of interesting governance questions around that, right? In

1:07:19

terms of is kind of

1:07:21

all of the digital stuff that's overlaid

1:07:24

physically going to fit within sort

1:07:26

of a physical kind of national

1:07:29

regulation perspective? Or

1:07:32

is it sort of, is it actually coming from

1:07:34

a different world or something? You know, and I

1:07:36

think that these will all be very interesting questions

1:07:38

that we will have a perspective. I'm sure we're

1:07:40

not going to be right about every single thing.

1:07:43

I think like the world will kind of need to sort out

1:07:45

where it wants to be. Different countries will have different values and

1:07:47

take somewhat different approaches on things. And I think that that's, it's

1:07:50

part of the interesting process of this, that

1:07:52

the tapestry of how it all gets built

1:07:54

is like, you need to work through so

1:07:56

that it ends up being positive for as

1:07:59

many of the stakeholders as possible. More

1:08:02

to come. A lot to come. Thanks, Mark. I'd

1:08:08

like to thank Mark Zuckerberg for joining Decoder and

1:08:10

thank Alex Heath for guest hosting. I'd also like to

1:08:12

thank you for listening. I hope you enjoyed it. You

1:08:14

should subscribe to Alex's newsletter, Command Line, which comes out

1:08:16

every week. It is absolutely jam-packed with industry insight, scoops,

1:08:18

and smart analysis. It's a must-read. If you'd like to

1:08:20

let us know what you thought about this episode or

1:08:23

really anything else, drop us a line. You can email

1:08:25

us at decoder at the verge.com. We really do read

1:08:27

all the emails. Or you can hit me up directly

1:08:29

on Threads, a Meta product. I'm @reckless1280. We

1:08:31

also have a TikTok, as long as TikTok lasts,

1:08:34

it's @decoderpod. It's a lot of fun. If

1:08:36

you like Decoder, please share it with your friends

1:08:38

and subscribe wherever you get your podcasts. Decoder is a

1:08:40

production of The Verge and part of the Vox Media

1:08:42

Podcast Network. Our producers are Kate Cox and Nick Statt.

1:08:44

Our editor is Callie Wright. This episode was additionally produced

1:08:46

by Brett Putman and Vjeran Pavic. Our supervising producer is

1:08:48

Liam James. Decoder music is by Breakmaster Cylinder. We'll see

1:08:50

you next time. What

1:08:57

does impactful marketing look like in today's day and

1:08:59

age? If you're a marketing professional, what decisions can

1:09:01

you make today that will have a lasting impact

1:09:04

for your business tomorrow? We've

1:09:06

got answers to all of the above and

1:09:08

more on the PropG Podcast. Right now, we're

1:09:10

running a two-part special series all about the

1:09:13

future of marketing. It's time

1:09:15

to cut through the noise and make a

1:09:17

real impact. Tune into the future of marketing,

1:09:19

a special series on the PropG Podcast sponsored

1:09:21

by Canva. You can find it

1:09:23

on the PropG feed wherever you get your podcasts.

1:09:28

This episode is brought to you

1:09:30

by Choiceology, an original podcast from

1:09:32

Charles Schwab. Hosted by

1:09:34

Katie Milkman, an award-winning behavioral scientist

1:09:37

and author of the best-selling book

1:09:39

How to Change, Choiceology is a

1:09:41

show about the psychology and economics

1:09:43

behind our decisions. Hear true

1:09:45

stories from Nobel laureates, authors, athletes,

1:09:47

and everyday people about why we

1:09:50

do the things we do. Tune

1:09:53

into Choiceology at schwab.com/podcast

1:09:55

or wherever you listen.
