Episode Transcript
0:03
Welcome to The Vergecast, the flagship podcast
0:05
of bot farms. I'm your friend David
0:07
Pierce, and this is the second episode
0:09
in our mini-series all about AI in
0:12
the real world. AI is
0:14
so abstract, and it's a term that
0:16
we now use in so many ways
0:18
that honestly it can feel sort of
0:20
meaningless. So we've been
0:22
on a quest to find actual
0:24
examples of actual AI showing up
0:26
and being useful or at the
0:28
very least interesting in our actual
0:30
lives. For this episode,
0:33
the last one in our little series for now, I
0:35
have a feeling we'll come back to this subject, but
0:37
last one for now. I'm talking to
0:39
Michael Sayman, who recently launched an app
0:41
called SocialAI that has become kind
0:43
of a viral phenomenon on the internet.
0:46
We'll get into what it is and how it
0:48
works in pretty serious detail here, but
0:51
basically I'd explain SocialAI this way.
0:53
Imagine a social network, Twitter or threads
0:56
or whatever, but every user
0:58
other than you, every single one
1:00
other than you is a bot. Does
1:03
that sound interesting? Pointless, terrible,
1:06
dystopian, amazing? Maybe all
1:08
of those things? I wasn't
1:10
sure where I fell on that line
1:12
when I started talking to Michael, but
1:14
we ended up having frankly one of
1:17
the most fun conversations I've had in
1:19
a while, all about how AI works
1:21
and how we're actually supposed to use
1:24
it. Spoiler alert, he
1:26
thinks about this less as a
1:28
network and more as an interface,
1:30
and I find that fascinating. We
1:32
happen to agree, actually, Michael and
1:34
I, that a chatbot cannot
1:36
possibly be the future of everything
1:38
in technology, and Michael has some
1:41
big ideas about what else we might be able
1:43
to do. All that is coming
1:45
up in just a second, but first I
1:47
have to tell my bots what's going on. They
1:49
worry when I'm gone for too long. This is The Vergecast. We'll
1:51
be right back.
2:08
Welcome
back. Let's get into my conversation
2:10
with Michael Sayman from SocialAI. Michael's
2:13
had kind of a fascinating history in the tech industry,
2:15
by the way. He got a job at Facebook when
2:17
he was 17 after Mark
2:19
Zuckerberg discovered an app he'd written and just offered
2:21
him a job. I think it was an internship,
2:23
but he ended up working there for a while.
2:26
After that, he went to Google to work on Google
2:28
Assistant. Then he went to Roblox and then Twitter. He's
2:31
been through a surprising number of the
2:33
biggest and most interesting companies in tech.
2:35
And in particular, he's seen a huge
2:37
part of the evolution of social media
2:39
and social networks. He worked on Stories
2:41
at Instagram. He worked on Status at
2:43
WhatsApp. He worked on Shorts at YouTube.
2:46
Like I said, he worked at Twitter
2:48
and a whole bunch of other things.
2:50
And now he's on his own. He's
2:52
building apps through his one-man startup
2:54
that he calls Friendly Apps. At
2:56
the beginning of our conversation, Michael told
2:59
me he had been thinking about building
3:01
an AI social network through much of
3:03
that time. The idea for
3:05
what would become SocialAI has been in
3:07
his head for a really long time. It's
3:09
just that until now, he couldn't actually pull
3:11
it off. I actually tried building a version
3:13
of SocialAI like five years ago, and
3:16
the tech just wasn't there. And it was
3:18
really bad. What was it five years ago?
3:20
It was, I mean, I
3:22
called it influencer. And the idea was
3:24
that anyone could be an influencer. So
3:26
like, I've been trying to do this
3:28
for a while, but it just
3:30
wasn't quite there. I
3:32
originally, you know, because we didn't have the language
3:35
models, we tried to build it. And by we,
3:37
I mean just me, but trying
3:39
to build this to kind
3:42
of like give people the
3:44
feeling of a social media app, but not
3:46
really having to deal with all of it.
3:49
The idea was like, okay, if someone's
3:51
addicted to cigarettes, how do you get them off
3:53
of it? Well, you can't just tell them to
3:55
stop, right? Like, maybe it's like giving them something
3:58
maybe like a nicotine patch or something. Right.
4:00
So like, what is the way that you get somebody
4:02
to like, be able to get that
4:05
experience out the way, but maybe not, you
4:08
know, harm themselves or feel bad. So
4:10
anyway, I built it, and it didn't
4:12
really look quite right and it didn't work
4:14
well, so I didn't
4:16
ship it. What did it feel like
4:18
actually, before we get to the ChatGPT thing of
4:21
it all, because there's an interesting sort of
4:23
history-of-tech story there. But yes, like when
4:25
you built the thing before
4:27
in the before
4:30
times, what
4:32
didn't work? What didn't feel right? Like
4:34
what wasn't ready? I just could not
4:36
simulate the entire social network in a
4:39
way that felt interesting,
4:41
even mildly. Like
4:43
I took the approach
4:45
that Google did. Like, I used to work at
4:48
Google, I used to work at Facebook. And
4:51
I took a similar approach to all
4:53
of the assistants that there were at
4:55
the time. Giant if-else statements, like
4:57
just massive. And OK,
4:59
like it kind of worked. But like
5:02
everything else before these language models
5:04
kind of took off, it
5:06
was very robotic and very conditional,
5:09
right, depending on what you wrote.
5:11
And it just didn't
5:13
quite feel. It
5:15
just didn't let you forget
5:18
about the technology like you were
5:20
reminded in the app that like you
5:23
had to like do certain
5:25
things to get certain comments. And
5:27
so at that point, it was
5:29
really more of a game. Like I designed
5:31
it more like a game because the
5:34
technology just wasn't there to like
5:37
make a simulated
5:40
social network not feel like a
5:42
game. And so
5:44
I had to go that route. But even then
5:46
it just did not feel right. Users,
5:50
I imagined, if they were to try
5:52
it, they would have felt like this
5:55
app was more like a Farmville
5:57
game and less like a
5:59
social network. And
6:01
honestly, that's not what I was
6:03
trying to build. I was trying to build something that felt
6:05
like a social network, and so I had to wait. Then
6:10
once the early
6:12
versions of GPT-3.5 came
6:14
out, I thought, okay, let
6:17
me give it another shot. And tried to
6:19
design a version of it. The
6:22
model would sometimes say random stuff.
6:25
It was extremely expensive to
6:28
run all of the different
6:30
prompts and things that I needed for it to
6:32
work. And I told myself,
6:34
there is no way I'm going to
6:37
be able to run
6:39
this at this cost, and
6:41
it's completely fuzzy, and the responses are
6:43
no good. And so
6:45
I said, okay, well, I have to wait.
6:47
I have to wait until it gets cheap, and I have to
6:49
wait until it gets more accurate. And
6:52
so every month, for like
6:54
two years, I was building this startup. I would
6:56
just wait. I would just look at the latest
6:59
model, try it out with some
7:01
of my tests, and from
7:04
there, just keep going. I
7:06
would look at the
7:09
outputs that they could
7:11
give and how much I could tune them, and
7:14
then I'd look at the cost. When
7:16
Gemini released theirs and lowered their costs, I said,
7:18
okay, we're getting closer. As
7:21
soon as OpenAI had their models
7:23
dropping in prices, I said, okay, I think
7:25
it's time. So about a month ago,
7:27
I went and built the app,
7:30
and I just told myself, look, this is like the
7:32
last attempt that I'm going to do at building this
7:34
app. Like,
7:37
I've done this too many times. I'm
7:40
just going to go with it. Okay,
7:42
and now here we are. Yeah, and here we
7:44
are. And I launched it, and of
7:46
course, that's the one time that you
7:49
don't think it's going to go a certain way is
7:51
when it does. And I mean,
7:53
it's great. So I'm
7:56
so curious why this idea has been so sticky
7:58
in your head. It's clearly
8:00
been sitting around as a thing you have wanted
8:02
to build for a very long time. What
8:04
is it about this thing that is so sticky
8:07
and enticing to you? Social
8:10
networks are not what they used to be, and
8:12
I think fundamentally the internet has changed. The
8:15
internet used to be a tool of communication
8:17
between people, and frankly, I love that. There's
8:20
a part of me. So I was
8:22
born in Miami, but at 16, I
8:24
flew out to California with my mom
8:26
because Mark Zuckerberg had emailed me when
8:28
I was in high school asking if
8:31
I wanted to meet about working there
8:34
and stuff like that. I
8:36
remember flying out there with my mom, not
8:38
knowing really anything about Zuckerberg, and
8:41
my mom knowing even less about
8:43
him, and just
8:45
thinking, okay, this is such a different world from
8:47
where I come from, but I'm excited. I
8:50
had built apps before, social apps, and
8:52
so I was quite excited. I
8:54
spent a lot of time working at Facebook from that
8:56
point on. I didn't go to college. I spent four
8:59
years there helping them build out Instagram stories and a
9:01
few other features, and it was just
9:03
such a fun time. It was like 2013, 2014. The
9:07
company was in a different era. Social
9:10
media as a whole was in a different era,
9:12
and people were having a lot of fun. I
9:14
think over the past couple years, I think we've
9:16
just seen social media has
9:18
changed, and it's changed because the internet's
9:20
changed, and the technology's changed. Where
9:24
the internet used to be a place where
9:26
you could connect between massive amounts of people,
9:29
the internet, as a communication tool
9:31
in that sense, is
9:33
kind of falling apart. The
9:37
internet now has technology that
9:39
allows itself as
9:42
a data set to simulate
9:45
a human connection. You
9:47
communicate with the internet rather than
9:50
through the internet, and I think
9:52
that change that's happened really puts
9:55
question marks around how social media should work
9:57
because the whole premise of social media... is
10:00
that you're using the internet to communicate through
10:02
it to other people.
10:05
Fair. I just want to say, by the way, that idea
10:08
of communicating with the internet and not
10:10
through the internet is
10:12
as succinctly and well as I have heard anybody
10:15
put this moment that we're in. I'm going to
10:17
steal that from you and that is very good.
10:19
So thank you for that. No
10:21
worries. That makes
10:23
it sound like
10:25
SocialAI has always
10:27
been in your head kind of
10:30
part product, part social
10:33
commentary art project. Is that
10:35
fair? Yeah, I always
10:37
like to poke. I've
10:40
always liked to poke. I mean, does it kind of
10:42
poke fun at the facade
10:46
that a lot of companies are trying to put up?
10:48
Of course it does, right? But
10:51
I think it's also, weirdly enough,
10:53
my attempt at trying to solve
10:55
some of these problems. The
10:57
problems, for example, where you can't discern the
11:00
difference between a human and an AI on
11:02
a social platform. So like, yeah,
11:04
one way is to try and invent
11:06
a detector for humans, but that hasn't
11:08
gone very well. I'll
11:11
say instead, well, how about we
11:13
just come out with a product
11:15
that tells people how
11:18
the internet works now and says, hey, look,
11:20
this is the reality. It kind
11:22
of sucks in some ways. It's kind of
11:24
great in others. And we
11:27
have to, you know, we have
11:29
to embrace it. Like, let's embrace it. And
11:32
let's do that so that we don't
11:34
harm ourselves, right? Because going
11:37
on social media sometimes and seeing comments that
11:39
you think are from people that aren't can
11:42
be harmful. So I'm curious
11:44
kind of what it was that clicked in
11:46
your brain that went from this technology is
11:49
not ready to this technology is ready. I
11:52
think the moment that I got in a
11:54
fight with my boyfriend and I decided to
11:56
open up my app to see if there
11:59
were any ideas for how I could resolve
12:01
the problem. That I think was the moment.
12:03
That's a very good answer. You know,
12:05
that was the moment, the moment I got
12:07
in a fight and I decided I'm going
12:10
to use this app to try and vent
12:12
about my problem, because if I go on
12:14
actual social media, I'm doing some harm. Right?
12:16
Like, and so I think that
12:18
just goes to show, like, the product
12:21
I'm building is not to give people an
12:23
illusion of people, right? I know they're all
12:25
AI. It's so that I don't
12:27
go on social media and use it in
12:29
a harmful way. Like, my
12:31
ideal scenario is
12:34
one where people have people around them
12:36
to listen, to hear them, and to
12:39
help them when they need it, right?
12:41
People to people communication is number one
12:43
for humans. And I don't
12:46
think we should forget that, right? But
12:50
there are a lot of people I've noticed since
12:52
COVID who don't have those people around them. And
12:55
so if they don't have those people around
12:57
them, and they need that conversation, what
12:59
are they doing? And if what
13:02
they're doing is going on public social media, and
13:04
talking about what's going on in their life and
13:06
getting advice from AI bots
13:08
without knowing what they're getting advice from,
13:10
without any understanding of
13:13
the dynamics in these algorithms that encourage
13:15
certain types of content on different spaces,
13:18
then they're harming themselves. So
13:20
is it okay? Well, you know, I'm not
13:23
trying to replace the human to human
13:25
connection. I'm trying to help people find
13:27
a way to have
13:29
a secondary option when that human isn't
13:31
around for them, so that they don't
13:34
have to rush to social media and make a mistake. And so
13:36
when I
13:38
got in a fight, and I didn't go on social
13:40
media, and instead I went on this app, I said,
13:42
Okay, it's ready. Yeah, I
13:44
mean, I can imagine
13:46
that being a very telling moment. But I think
13:49
that distinction is really interesting,
13:51
where what you're saying is kind of one
13:55
is not a replacement for the other, that
13:57
they're actually designed to be, and
13:59
best... And
16:01
the truth is that they're not
16:03
that great at it because we
16:05
still go back to Google, because we
16:07
want multiple answers, you want multiple responses.
16:10
And so what I've built with
16:12
SocialAI is not so much a social
16:14
network, but a new way to
16:16
interact with a language model, where
16:18
you don't get one response, but you
16:20
get multiple. And being
16:22
able to drill down in a thread-like
16:24
interface, in a social interface with
16:27
the language model, it just feels
16:29
more natural.
16:31
I used the app, for example, I was running late to
16:34
a flight. My first
16:36
flight got delayed, my next flight was
16:38
in 45 minutes, I was in Dallas, and
16:41
I didn't know if I was going to make it.
16:43
The flight had just landed. So I opened up
16:45
SocialAI, and I just kind of
16:47
panic-ranted about what happened, right?
16:50
I didn't have to think about, like, oh, I need to
16:52
instruct it to tell me because I need the right answer.
16:55
And what if it's not the right answer? And maybe I
16:57
need to go to Google and maybe I need to go
16:59
to Reddit. No, no, no, no, you know, I simply
17:02
ranted about what happened. I said, my
17:05
flight got delayed. I just landed. I have 45
17:07
minutes to make it to my next flight. I
17:09
don't know if I'm going to make it. I'm
17:11
at Dallas. They're telling me
17:13
I got to go to terminal D, I'm in terminal C. I
17:16
don't know. And I just posted that.
17:18
And immediately, I got
17:20
dozens of responses and
17:23
replies on this social interface that
17:25
gave me all sorts of, like, various
17:29
replies. Some of them would tell me, you're not
17:31
going to make it, you know, go to the
17:33
front desk, just figure out if you could get
17:35
another flight. Another one said, you'll make it if
17:37
you run quickly, you just need to
17:39
look up, see if you can find the SkyTrain. And
17:41
if you go down the SkyTrain, you
17:44
should be able to get there in time. Just make
17:46
sure you're running quickly, though. Another
17:48
person said, are you in the front or the
17:50
back of the airplane? Like, you know, like, different
17:52
questions. And so what's interesting is for a human,
17:54
it's natural to see that I'm going to go
17:56
and look through all the responses and dig in
17:59
on the one. And
20:00
once you do that, you just start posting.
20:02
I'll type, like, I'm
20:05
suddenly tired of all
20:08
the food that I like. Anybody
20:12
have any ideas about
20:14
how to spice things up? Literally.
20:18
And then you press post, and
20:21
a few seconds later, responses start
20:23
appearing. Let me just read you a few. The
20:25
first one is from Sunny Ray,
20:28
who is at Sunshine Fellow, presumably
20:30
an optimistic one. It says,
20:32
try adding some new spices to your meals
20:34
or explore international cuisines for fresh flavors. Sure.
20:38
There are a bunch here that say try
20:40
new cuisines or flavors. Advice Wizard 2023 literally
20:42
just says try new cuisines or flavors. I
20:46
have Fanzone Hero,
20:48
who says try some bold spices like
20:51
za'atar or sumac. It's magic. I like
20:53
that. Fanatic
20:55
follower says, whoa, David, that sounds like a
20:57
challenge. How about trying some bold, exotic
20:59
spices? Anxious
21:01
Nina, presumably an anxious one,
21:04
says, what if you accidentally make something
21:06
that sours your taste forever? That's terrifying.
21:09
Downcast Greg says, spicing things up sounds
21:11
like a recipe for disappointment. So
21:13
you get the idea, right? Different bots,
21:16
different vibes, different kinds of responses. These
21:18
are all my reply guys now. And
21:21
I can respond to one and go
21:23
down a rabbit hole with that particular bot
21:25
tuned to that particular mood. I
21:27
can also favorite bots. I can favorite replies.
21:30
And Michael says that all of that goes
21:32
back into the algorithm
21:34
and into kind of the instructions
21:36
being given to ChatGPT every
21:38
time I try to use it.
21:40
the surface, it all feels
21:42
and sounds like normal human
21:45
social media, except that they're
21:47
all bots but me. And
21:49
I think I'm not the only one who felt
21:51
kind of strange about that fact at first. It
21:54
looks like Twitter and feels like
21:56
Twitter and it super isn't Twitter.
21:58
It just felt... And
22:01
honestly, the reaction to SocialAI was really
22:03
fascinating. So that is what
22:05
Michael and I talked about next. I
22:08
do think the reaction
22:10
to this app has been, in many ways, just
22:13
as fascinating as the app itself. Yes. My
22:17
read of it when it came out was there were basically
22:20
three responses. One
22:22
was like, this is cool
22:25
and interesting and kind of fun,
22:28
both social commentary and an interesting
22:30
idea about the future. One
22:33
was, this is stupid and dystopian.
22:35
And then one was like, this is a
22:37
joke, right? This has to be a bit
22:40
and an art project and not a real product.
22:43
Is that a fair representation of the reaction? Am I missing
22:45
anything? I think what's
22:48
interesting is the loud reactions on social media.
22:51
There's two things that were interesting to me, or
22:54
let's say three. First, the
22:56
loudest reactions from people on social media were
22:58
from those who thought it was either a
23:00
joke or that it was dystopian or
23:04
that they're like, oh my God, the end of the world. That
23:07
kind of reaction, those are the loudest.
23:09
Always. The quietest were actually spending
23:12
10 minutes per session on the
23:14
app. The second thing I noticed
23:16
was that the reaction
23:18
from people was
23:20
one thing, but there were
23:24
bots on social
23:26
media reacting to... Because
23:29
half of social media has bots now. I
23:33
found it quite ironic that
23:36
there were bots reacting to
23:38
an app of bots telling
23:40
humans that an app
23:43
with bots is so
23:45
terrible. And
23:47
I found it interesting that that
23:50
was happening. I
23:52
was like, huh, it seems like some of
23:54
these bots don't want bots around or maybe they don't
23:56
want people to know that they're bots. I don't know.
24:00
You know, and so like there was
24:02
a portion of bots on social media that
24:04
were reacting negatively to bots and I just
24:06
thought that was ironic. Well,
24:09
it kind of proves your whole point, right? If it's
24:11
just a bunch of bots yelling at bots about the
24:13
social network that's all bots.
23:15
I imagine you're sitting there looking at that being
23:17
like, exactly. Yes, you
24:20
know, yes. I think the other
24:22
issue that I found
24:24
was just how many people don't realize that a lot
24:26
of these platforms are filled with bots. And
24:28
that kind of was alarming to me. But
24:31
but ultimately, I think that the last bit
24:33
here of feedback that I've gotten is people
24:36
feel a little liberated. You know, they feel a
24:38
little bit liberated. They don't feel the pressure of
24:40
going on social media to share some thought that
24:43
they might feel embarrassed about. But
24:45
they also feel like they're
24:47
able to hear other perspectives that
24:50
they otherwise wouldn't feel comfortable admitting
24:53
they want to hear. And
24:56
so they don't let their guard down, you know, in public
24:58
conversation online. People keep their
25:00
guard up. And I
25:03
think that keeps an echo chamber. It's interesting
25:05
because people said, oh, echo chamber, echo chamber.
25:07
The number one, you know, number
25:09
one, number two, and number three most selected
25:12
follower types on SocialAI
25:15
are contrarians, debaters, problem
25:18
solvers, thinkers,
25:21
and critics, right? And so people
25:24
are selecting followers on
25:26
SocialAI that challenge them. And
25:29
I think there's something interesting about that. Why would someone
25:31
go out of their way to be
25:33
challenged on an app like this? Can they not
25:35
be challenged on real social media? Is
25:38
there a reason why not? And
25:40
how does this address that? Right? I wonder if
25:42
that goes back to what you were saying about
25:45
how it feels when
25:48
you perceive it to be real people on social
25:50
media, because I think to
25:52
some extent, that fact doesn't surprise
25:54
me, because one thing you hear
25:57
from people who use AI a lot is that
25:59
it's especially useful if what you really
26:01
want to do is beat up
26:03
an idea and brainstorm and get new perspectives
26:05
on things. And I
26:07
think to some extent what you've built
26:10
is just, like, an endless feedback
26:12
mechanism, but with no stakes because
26:14
no one else sees what's happening.
26:17
No one else is human
26:19
on there. So even the part of
26:21
it that feels sort of real, it
26:24
feels like there's still something in your
26:26
brain that is like, this is a
26:28
safe space. I can see a world
26:31
in which, I mean, and I've even found this in using
26:33
it, there is something very powerful in that the
26:35
interface is the same, but the stakes are
26:37
so much lower. Yes. And
26:40
I think it helps put people's guards down. I
26:42
think it helps people, like you
26:44
said, people have been using ChatGPT for a
26:46
lot of this, but how many times have people
26:48
gone on ChatGPT and said, hey, can you
26:50
help me think through this? And it
26:52
gives you one answer and you're like, I don't know about that.
26:54
And then you go, well, what other ideas do you have? And
26:57
then it goes and gives you something else. And then you're like,
26:59
well, what else? You know, and then it gives you something else
27:01
that's kind of similar, but you're like, I don't know. And then
27:03
you keep going, well, what else? And by the time you keep
27:05
asking what else, it's forgotten the context of the thing you were
27:07
talking about in the beginning and just starts saying random stuff. And
27:10
so like the interface just feels wrong
27:12
for the use case. But
27:15
look, I don't blame OpenAI. I
27:17
don't think that it's like, oh, they just weren't capable.
27:19
Like who the hell was going to know? Right. If
27:22
anything, I think they built out a
27:24
chat interface because it just felt like
27:26
the obvious testing ground to prove a
27:28
product. And it became a product
27:31
that they didn't think was going to resonate as
27:33
quickly. So, you know, of course we started a
27:35
chat because of that. And I don't think that
27:37
it's bad. I just think we haven't seen the
27:39
best of it yet. Yeah, I
27:42
think that I think that's totally fair. So speaking
27:44
of that, actually, the edges
27:47
of this technology, I'm
27:49
very curious about. And I suspect you've seen a lot of
27:51
that as people are starting to
27:53
really use and try new stuff
27:56
with SocialAI. Obviously,
27:58
this stuff has gotten a lot better. And
36:00
I think the interface will
36:03
allow it. If you have comparable models
36:05
on both platforms, and one of them
36:07
gives you multiple responses from different points
36:09
of view, and the other one
36:11
just gives you one answer, and
36:14
you're working with a technology that's probabilistic,
36:17
like who has the upper hand? You
36:19
know, like I can give 10 answers, and if
36:21
one of them is good, you're happy. But if
36:23
ChatGPT gives you one answer and it's not
36:26
right, you're frustrated.
36:28
think of the thing people always say about the
36:30
TikTok algorithm, which is that the reason it feels
36:32
like magic is because you don't get annoyed when
36:34
it's wrong, because you just keep swiping. That's
36:36
right. And I think social has, like you're saying, very
36:38
much the same thing. The signal-to-noise ratio is actually horrendous
36:42
on social media, but we're also sort of used
36:44
to it now. That's right. And
36:46
if you just scroll past it and move on,
36:48
and we all kind of understand how to find
36:50
needles and haystacks in a way that when chat
36:52
GPT recommends a movie I don't want to watch,
36:55
it feels bad because it gave me an answer.
36:57
Exactly. That's part of the reason why people still go
36:59
to Google. Because Google
37:02
doesn't have any more accurate stuff these
37:04
days compared
37:06
to what it was because of all the AI that's
37:09
in there too. So it's not
37:11
like Google is any more accurate, but
37:13
it's interesting because Google gives you this
37:15
ChatGPT-style answer at
37:18
the top. And then you
37:20
have all of these various links that
37:22
give you different perspectives. And let's be
37:25
honest, most of these links now have
37:28
so many paywalls and things that
37:30
you can't even get to the answer for
37:32
any of these links. But the original intent
37:34
behind Google and why it worked was
37:37
it gave you options to
37:39
look through. And so it allowed it
37:41
to be wrong and
37:43
it increased its chance of being right
37:45
at least once or twice. And
37:47
we're used to using the internet this way.
37:50
We go through the internet looking for information,
37:52
trying to find which thing is helpful
37:54
to us. And so
37:56
I think it's interesting
37:58
that maybe we