Episode Transcript
0:00
I think the gap between simulations and
0:02
reality is getting closer and closer, right?
0:04
Like the GTA is just like kind
0:06
of one example, but even in like
0:08
a lot of, like I said earlier,
0:11
a lot of like AAA games, they're
0:13
getting closer and closer to like reality,
0:15
right, like graphics level, like fidelity, like
0:17
all of that. I actually think that
0:19
the sim-to-real gap is closing
0:22
and if you are able to
0:24
build and rig up basically all
0:26
the controls that a robot is
0:29
in like a 3D video game
0:31
or 3D simulation and you basically
0:33
have to train the agent to be
0:36
able to do, like, you know, all the
0:38
scenarios that a robot could do
0:40
in real life you can actually
0:42
like, that gap, this simulation-
0:44
to-reality gap, that sim-
0:47
to-real gap, it's actually
0:49
like pretty close and you should
0:51
be able to generalize that to
0:53
the robot in, like, you know, a
0:55
couple of years. Welcome back to the
0:58
freeCodeCamp podcast, your source for
1:00
raw, unedited interviews with developers. This week,
1:02
we're talking with CTO and robotics engineer
1:05
Peggy Wong. We'll learn how she grew
1:07
up a first-generation public school kid from
1:09
Milwaukee who used freeCodeCamp as
1:12
a key resource to build her developer
1:14
chops. Her love of robotics helped her
1:16
get into Stanford, and from there, we'll
1:19
talk about her work on augmented reality.
1:21
at Oculus, self-driving cars at Lyft,
1:23
and AI agents at her Y Combinator-
1:25
funded game dev startup. Support for
1:27
this podcast comes from a grant
1:30
from Wix Studio. Wix Studio provides
1:32
developers tools to rapidly build websites
1:34
with everything out of the box,
1:36
then extend, replace, and break boundaries
1:38
with code. Learn more at wix
1:40
studio.com. Support also comes from the
1:43
11,252 campers who support freeCodeCamp
1:45
through a monthly donation. Join
1:47
these kind folks and help our
1:49
charity's mission by going to donate
1:51
dot freecodecamp.org. For this
1:54
week's musical intro with yours
1:56
truly on drums, guitar, bass,
1:58
and keys we're going to
2:03
1986 with
2:07
arcade
2:10
classic
2:12
OutRun.
2:14
The
2:17
song
2:19
is
2:21
Passing
2:24
Breeze.
2:32
Oh.
3:12
Thank you. Welcome to the Free Code
3:15
Camp podcast. Thanks for having me, Quincy.
3:17
This is super great to be here and
3:19
it's an honor as well. Yeah. Well, it's great
3:22
to talk to somebody who's working on
3:24
the leading edge of AI and like applying
3:26
a lot of these tools because we hear
3:28
so much hype about AI, but like, what
3:30
is it actually being used for? And you
3:33
strike me as somebody who is picking up
3:35
the state of the art and figuring out
3:37
ways to apply it. Oh, thank you. Yeah,
3:39
I mean, I'm happy to talk more
3:42
about it. I'm sure we'll get into
3:44
a lot of this on a podcast,
3:46
but I've been working in
3:48
robotics since high school,
3:50
and then I've been working on
3:53
AI since, you know, freshman year
3:55
of college. And so this is
3:58
like really my life's passion. I'm
4:00
a huge proponent of like
4:02
how like AI is going to
4:04
kind of change the state of
4:07
you know, robotics, agents, what
4:09
we're doing as a company,
4:11
Ego, and also, you know,
4:13
like how that's going
4:15
to change human life for the better
4:17
in the future. But yeah, I'm sure
4:20
Quincy, we'll get into this a lot
4:22
later. I can talk for hours about
4:24
this topic. Awesome. Well, we are going
4:26
to dig deep and learn as much
4:28
as we can from you in terms
4:30
of what the current capabilities are and
4:33
what you're excited about. One thing I
4:35
did want to discuss is, you know,
4:37
CES, the Consumer Electronics Show held in
4:39
Las Vegas every year, just wrapped up,
4:41
and I wanted to see whether... at the
4:44
time of recording, like, it
4:46
literally just finished. So, was
4:48
there anything that was on
4:50
display that you thought was
4:52
like a particularly striking or
4:55
interesting application of AI?
4:57
Gosh, there's so many interesting things,
4:59
but for me personally, I think
5:01
the best two things that struck
5:03
me were NVIDIA's Project DIGITS, which
5:06
Jensen Huang showed, like, I
5:08
think like a $3,000 like... personal
5:11
computer, GPU, that you
5:13
can run. That was super
5:15
interesting because that, if it
5:17
is true that, you know,
5:19
that could be mass produced
5:21
and launched like very soon,
5:23
that would actually change the
5:25
state for the AI costs
5:27
because if you're able to
5:30
run like these AI models like
5:32
locally, instead of like
5:34
using cloud providers like,
5:36
you know, OpenAI and
5:39
Anthropic's Claude. That means that
5:41
you basically don't have to
5:43
pay per token costs, which
5:45
is like, you know, you
5:47
pay like a certain amount
5:49
of money every time you
5:51
run an AI call. And so if
5:53
you make it something that's
5:55
available on device,
5:57
essentially. Using these,
6:00
that will hopefully decrease the
6:02
cost so that an everyday
6:04
person can only like pay like
6:06
a one-time fee to run you
6:08
know as many AI models as
6:10
they want on their personal like
6:12
computers or using this
6:14
like personal hardware like
6:16
the NVIDIA DIGITS device.
6:18
So that's that's something
6:20
I'm really excited about and
6:23
I think that will also enable
6:25
a lot of applications
6:27
new applications in robotics.
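The per-token versus local-hardware tradeoff described above can be sketched as a simple break-even calculation. All the numbers here are assumed, illustrative placeholders, not real provider rates:

```python
# Hedged sketch with made-up numbers: after roughly how many tokens does a
# one-time local-GPU purchase beat paying a cloud API per token?
HARDWARE_COST = 3000.0            # the roughly $3,000 figure quoted for DIGITS
PRICE_PER_MILLION_TOKENS = 10.0   # assumed cloud price; real rates vary widely

def break_even_million_tokens(hardware_cost=HARDWARE_COST,
                              price_per_mtok=PRICE_PER_MILLION_TOKENS):
    """Millions of tokens after which local inference is cheaper, ignoring
    electricity, depreciation, and model-quality differences."""
    return hardware_cost / price_per_mtok

print(break_even_million_tokens())  # -> 300.0 (million tokens at these rates)
```

Under these placeholder rates, a heavy user crosses the break-even point fairly quickly, which is the cost argument being made in the conversation.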
6:30
And I can get into that too, but
6:32
I think yeah, well I think a
6:34
lot like a big question a lot
6:36
of people still have is where we're
6:38
several years post-ChatGPT, like, I
6:40
guess raising awareness of the capabilities
6:42
and the rate at which capabilities
6:44
are improving like how are people
6:46
applying these tools in exciting ways
6:48
like did you see any applications
6:50
at CES where like oh wow
6:53
I never thought of that or
6:55
that's gonna be a big game
6:57
changer in terms of people actually
6:59
using AI in kind of a consumer
7:01
facing way and not just as
7:04
something that's kind of abstracted away.
7:06
Obviously, the price performance of AI
7:08
is shooting up through the roof
7:11
and that's... Yes. But in terms
7:13
of actual applications
7:15
that, like, you as a dev might use.
7:17
Yeah, I think like the biggest
7:19
consumer use case is actually
7:22
still probably ChatGPT. I
7:24
think like, you know, today like I
7:26
was like... talking to my brother, who's
7:28
in college, and they literally use,
7:30
like, ChatGPT, to do all their
7:33
homework assignments and this is kind of
7:35
crazy because I think like one of
7:37
the neat things about AI adoption is
7:39
that the people who like start using
7:41
it and are I guess like instead of
7:44
like digital natives they're now like
7:46
AI natives they're all like younger
7:48
kids they're all students they all
7:50
like use like AI to help
7:52
them finish their homework assignments. And
7:54
they kind of grew up in
7:56
that era and eventually, I think in
7:58
like a couple years, when they get
8:01
to college, when they enter the
8:03
workforce, they're going to be like, because
8:05
they grew up on this technology
8:07
and have used it in their
8:09
school and their work, they're going
8:11
to continue using it and be
8:13
more open to the application of
8:15
AI in the future as they
8:17
grow up. And I guess like
8:19
a concrete example that I'm very
8:21
excited about, especially at CES, is just
8:24
like the cool, especially the
8:26
cool new robots that especially
8:28
like kind of like that
8:30
low-cost, like, manufactured mostly in
8:32
like Asia and China where there's
8:34
like humanoid robots that are
8:37
like basically now like actually way
8:39
cheaper in an order of thousands
8:41
of dollars instead of like tens
8:44
of thousands or hundreds of thousands of
8:46
dollars, which makes it actually
8:48
pretty affordable for consumers and
8:51
then the second thing that
8:53
makes it really interesting is
8:55
that in conjunction with the
8:58
whole, like, NVIDIA GPU, like,
9:00
NVIDIA DIGITS announcement, if you
9:02
add basically local AI on these
9:04
robots, theoretically, we could see something
9:06
very soon where these robots are
9:08
able to do very generalized tasks
9:10
in today's world, such as like
9:12
helping you fold your laundry, wash
9:14
your dishes, do all the household
9:16
chores, and having like one robot
9:18
to do that instead of like
9:20
building like specialized robots to do like
9:23
each of these tasks. So I think
9:25
that's something I'm very excited about.
9:27
And I think like we're finally
9:29
reaching a point where like,
9:31
you know, personal robots and
9:34
personal assistants, like physical
9:36
assistants, can actually become potentially
9:38
affordable for the average consumer in
9:40
a couple years. Yeah well let's like
9:42
if let's say hypothetically like AI just
9:44
becomes like an appliance like as a
9:46
little Rosie the robot like if you're
9:49
familiar with The Jetsons show, and you're
9:51
like hey Rosie can you cook dinner can
9:53
you know wash the clothes can you do
9:55
other kind of like helpful tasks around the
9:57
house like we've had washing machines for like
10:00
nearly a century probably and
10:02
those have been an incredible
10:04
labor-saving device. It's not necessarily
10:07
like I guess we have a robot that
10:09
interfaces with the washing machine to
10:11
put the laundry in it and
10:13
then maybe they fold it things
10:15
like that like I could definitely see how
10:17
that is an improvement being able to
10:19
give more like declarative commands, like, oh, the
10:21
laundry or maybe they just look at
10:24
the hamper and they're like oh I
10:26
better go do the laundry right like
10:28
maybe make those kinds of decisions on their
10:30
own how much of a game changer do you think
10:32
that really is in terms of like saving
10:34
people time like let's say hypothetically you had
10:36
a live-in robot friend that just did stuff
10:38
around the house and you didn't need to
10:40
worry about it anymore how much time do
10:43
you think you could save a week? Anywhere
10:47
from two to 10 hours, I guess. I
10:49
hate doing laundry. So I think like having
10:51
a robot that is able to like empty
10:53
out my dirty clothes, put it in
10:55
a washing machine, stand there for like
10:58
two hours, right? Because like whenever you're
11:00
doing laundry, you kind of have to just
11:02
like be at home, just like stand
11:04
like near the laundry, like switch out
11:07
the clothes, like. Mix and match, right?
11:09
Like, you know, several different types of
11:11
delicates and colors and like blacks and
11:13
whites and, you know, all that crazy
11:15
stuff. And then, like, some of them
11:18
can be dried, some of them can't
11:20
be dried, right? And then, like, folding
11:22
the laundry and, like, putting it back
11:24
in your closet or in your wardrobe
11:26
or something like that. I feel like for
11:29
me personally like laundry is like
11:31
definitely like a game changer but
11:33
also just like keeping things clean
11:35
around the house right like potentially
11:38
a robot that can also cook
11:40
for you too like I feel like that
11:42
would be awesome for sure yeah I don't
11:44
particularly like cooking I think cooking as
11:46
well but I have to I have
11:48
to learn it to you know survive
11:50
in modern society, so I
11:53
think like just like cooking something
11:55
that's like pretty good or like better
11:57
than what I can cook you know
11:59
it's gonna be a game changer as
12:01
well and it also saves on like just
12:03
like food costs, right? Like, I can
12:05
just like buy groceries instead of like going
12:08
out to eat if I'm like you know
12:10
feeling hungry and tired and and don't want
12:12
to cook and I think like what's really
12:14
interesting is that, like, even though we
12:16
have like these sort of appliances for
12:18
ages like humans like people people like
12:21
us still have to use them right
12:23
like, they're a huge timesaver for, like, doing
12:25
the dishes and, like, washing laundry,
12:27
but you still have to
12:29
like spend time like physically
12:31
like, put these objects,
12:33
like, in the places,
12:35
and, like, do the
12:37
errands and I think like you know
12:40
a generalized robot would be able
12:42
to you can have one robot
12:44
that does like all of these
12:46
things but also you know like do
12:49
it in the same way and like
12:51
save you like hours per week. Yeah,
12:53
I mean, you said 10 hours a week. That's
12:55
a lot. I mean, if your hourly rate is,
12:57
as a software developer, I mean, we're talking about
12:59
hundreds of dollars saved a week.
13:02
So, like, hypothetically, if you were
13:04
to take that, let's say, hypothetically,
13:06
they can introduce a humanoid or
13:08
something like, it doesn't necessarily have
13:10
to be humanoid, but it has
13:13
to be able to, you know, reach into
13:15
a dishwasher and get the dishes out and put
13:17
them up on the shelf. So, you know, I
13:19
just thinking, like, the way our spaces are already
13:21
just designed, our houses and our apartments and everything
13:23
are with human form factor in mind. Exactly. I'm
13:25
a layperson, I don't know a lot about robotics,
13:28
but I'm just kind of like imagining that humanoid
13:30
robot would be like an ideal approach considering that
13:32
our environments are already, is that one of the
13:34
arguments for not just having... Yeah, yeah, yeah, yeah, yeah. Like
13:36
they have, like, these clothes-folding
13:38
machines where you just
13:41
dump the clothes in and it folds
13:43
the clothes and it takes a long
13:45
time, and it, you know, takes up
13:47
a lot of space. Yeah. Yeah. And, and
13:50
like you have to fit in your house
13:52
somehow and, like, you know, have a
13:54
place to put it and and
13:56
like, spaces are not very well
14:00
designed, like, they're not really designed
14:00
for this, especially like I live in
14:02
San Francisco and like houses here in
14:04
the city like San Francisco, New York
14:07
are so small that you like literally
14:09
don't have room to like fit another
14:11
like appliance. But if you have a robot like
14:13
well maybe it can replace your vacuum
14:15
cleaner or like you know like it's
14:17
like a humanoid robot that's like relatively
14:19
like small that you can just like
14:22
fit in a corner somewhere and it
14:24
can just like do all the tasks
14:26
for you. Like, I think that would be, you know,
14:28
a huge time saver like it will be a huge
14:30
cost saver as well. I mean if you think
14:32
about like the iPhone and like smartphones in
14:34
general the iPhone was the one that brought
14:36
in the revolution But of course there are
14:39
lots of types of smartphones now, but yeah
14:41
smartphones like there was like this thing that
14:43
really stood out to me somebody was like
14:45
flipping through like a Radio Shack catalog from
14:47
like 15 years earlier, before smartphones,
14:50
and they were like literally all the things
14:52
in this... like practically everything in
14:54
this catalog that would cost me
14:56
thousands and thousands of dollars, take up
14:58
tons of space, would involve tons of
15:00
material that would ultimately be solid waste
15:02
in the landfill somewhere. Like those things
15:04
can be done with an iPhone, like
15:07
flashlights, you know, different ways of measuring
15:09
different things, different ways of recording things,
15:11
different ways of accessing media. Like
15:13
smartphones can now do everything, right? Yeah,
15:15
I mean, they just became kind of
15:18
these Swiss army knives, like technology
15:20
knives that we can carry around in
15:22
our pocket and we can do so
15:24
many things, like almost like humans have
15:26
superpowers because of that. So you think that
15:28
there could be like a single type of
15:31
robot that is essentially kind of
15:33
like the iPhone for, you know, home
15:35
automation? Oh yeah, for sure. I mean
15:37
I think that's that's like literally the
15:39
future is like you basically have like
15:42
whatever it is, like, humanoid robots,
15:44
humanlike agents, whatever, like, kind of
15:46
that new term is these days,
15:48
like that's definitely going to be a
15:51
future I think like the emphasis on
15:53
humanoid is a bit more important
15:55
because like Quincy you said like
15:57
the iPhone is like so general
15:59
and it can do like many
16:01
different tasks that it's like it's
16:03
not just like specific to one
16:05
thing. So, I guess
16:07
like phones before that were actually
16:09
like very specific right so if you
16:11
look at the pre-iPhone era you have
16:13
like kind of like these all these different
16:15
like consumer tech that does different
16:18
things so like you mentioned like
16:20
flashlight like well we have a
16:22
we have an actual physical flashlight
16:24
that people would use or the
16:26
phones before that we're like flip
16:28
phones or BlackBerrys. You had pagers,
16:30
right? You had like walkie-talkies, you
16:32
had like all these like different
16:34
specific forms of technology, and then
16:36
the iPhone kind of combined them
16:38
into all one thing. And so I
16:40
think like this is a very similar
16:42
analogy to what we talked before with
16:44
the whole like, oh, you have these like
16:47
washing machines, you have these like dishwashers,
16:49
and you have these like ovens. But
16:51
if you have a humanoid robot, they're able
16:53
to kind of almost combine them,
16:55
and like be able to do a little
16:57
bit of everything, right? And they're able
16:59
to generalize. You wouldn't even necessarily need
17:02
a washing machine or a, like if a
17:04
robot had all the, you know, abilities that
17:06
a washing machine, they could just use any
17:08
sink to like wash your clothes and wring
17:10
them out and dry them and everything like
17:12
that. And you could reduce all of your
17:15
waste. And then no, no, we're back to
17:17
the medieval ages. Yeah. But like, I
17:19
mean, like little scrubbing board, like,
17:21
the whole reason people don't
17:23
use scrubbing boards outside of
17:25
like, you know, pioneer reenactment
17:27
and stuff like that is
17:29
because it's incredibly time intensive.
17:31
Yeah. And actually I've heard
17:33
that if you try to
17:35
wash dishes by hand, you're gonna end
17:37
up using more water than you would if
17:39
you just use a dishwasher, because
17:41
dishwashers are more efficient. Okay, they reuse that water.
17:44
yeah yeah and like it's possible that a
17:46
robot wouldn't necessarily need to have all those
17:48
different trappings of a washing machine with
17:50
like the cycles and all the motors and
17:52
everything and they could just you know because
17:55
their time is inexpensive and maybe it would
17:57
take a little bit longer for them to
17:59
go through and wash your clothes or you
18:01
know wring dry your clothes but you
18:03
wouldn't need to buy a dryer you
18:06
wouldn't need to buy a washer or
18:08
a dishwasher I mean there are probably
18:10
at least four or five major appliances
18:13
that require maintenance and break down, multi-
18:15
thousand-dollar items that a humanoid robot
18:17
could potentially solve and again when I
18:20
say humanoid I mean like human
18:22
form factor like approximately the size
18:24
of a human and with like
18:26
two arms you know to potentially
18:28
manipulate objects in physical space.
18:30
Yeah, definitely I think humanoid
18:33
form factor is super important because
18:35
as humans we're able to like
18:37
do a variety of things. We're
18:40
not, I mean obviously people have
18:42
like specialized professions in their daily
18:44
jobs, but like you know if we
18:46
take that away and like just like what
18:48
we do in our personal lives like humans
18:51
are actually able to do like a variety
18:53
of different tasks and like different scenarios like
18:55
I mean you can you can run and
18:57
you can play sports you can you can
19:00
do like all these errands you can talk
19:02
to other people you can like do specialized
19:04
tasks in your job, play musical
19:06
instruments, play chess on
19:09
a physical board And you can sit
19:11
in front of a keyboard and type
19:13
and like just kind of effortlessly your
19:15
fingers will move in a way that
19:18
like communicates whatever it is you
19:20
want to a computer, right? And code and
19:22
build anything that you want, especially
19:24
with free code camp, right? Yeah. Yeah.
19:26
So I guess one of the observations I've
19:28
had from this, and I could talk about
19:31
this all day, I imagine you could too.
19:33
It's just, there is a tremendous amount
19:35
of potential in getting robotics
19:37
right and potentially incorporating like
19:39
we've had very rudimentary
19:42
robots for decades I mean there was like
19:44
the robot on Lost in Space like when
19:46
he was a kid so it's like 70
19:48
years old or something like that right like
19:51
we've had that notion of robots and
19:53
we've even had like the notion of humanoid
19:55
like robots like if you've seen like Blade
19:58
Runner or a lot of these movies. But
20:00
the thing that has changed is
20:02
the actual brain like the smarts
20:04
of these robots and their capabilities
20:06
and and that is like the
20:08
big kind of step change we've
20:11
had in AI or in like robotics
20:13
I guess has been the actual
20:15
software side. Yeah. But have there
20:17
been big breakthroughs in hardware
20:19
recently? So it's really interesting
20:21
because I think like the big
20:23
breakthrough in robotics mostly
20:25
actually does come from the software
20:28
and the AI side, especially like
20:30
generalist robots, right? Like specialized robots
20:32
are very, like, they're not easy
20:34
to build, but they're, like, very execution-
20:36
based, right? Like it's like building a washing
20:38
machine. If you just want the robot to
20:41
do one thing, like you can build a
20:43
robot that does one thing. People do it
20:45
all the time in manufacturing. To build
20:47
a generalist robot, especially like
20:49
a generalist humanoid robot, that's
20:51
a very different problem. And
20:53
that actually kind of parallels
20:55
like kind of the advancements
20:57
in AI as well. Like previously a lot
21:00
of AI is like very specific.
21:02
It's very like object detection oriented,
21:04
right? Like you have to identify
21:06
whether a picture is a dog
21:08
or a cat. But, like,
21:10
when you train that AI, it can
21:13
only do that one specific task.
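The closed-set limitation being described here, a classifier that can only ever answer from its fixed training label set, can be sketched like this (the label list and score values are hypothetical, standing in for a real network's output layer):

```python
# Hedged sketch: a closed-set classifier is forced to pick from its
# fixed label list, no matter what it is actually shown.
LABELS = ["dog", "cat"]  # the only categories this model was trained on

def classify(scores):
    """scores: one model output per known label (illustrative stub)."""
    best_index = max(range(len(LABELS)), key=lambda i: scores[i])
    return LABELS[best_index]

# Even an image of a car is answered with "dog" or "cat" -- the model
# cannot say "car", because that label does not exist for it.
print(classify([0.1, 0.4]))  # -> cat
```

This is the contrast being drawn with today's general models: the narrow classifier has no way to express anything outside its label list.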
21:15
It can't like, I don't know,
21:17
like identify a car if you
21:19
know that's not in a training
21:21
data set right it can't like
21:23
identify that that's a house or
21:25
it can't identify that that's you
21:27
know some other object or even
21:30
like a few other things. Yeah, a car
21:32
crossing the street. But today,
21:34
like, and like this is like the
21:36
big shift with AI between kind
21:38
of like these very specific small
21:41
like machine learning, like, trained supervised
21:43
learning models to today's like large
21:45
language models, LLMs, and people
21:47
like Sam Altman are
21:50
talking about oh we're gonna reach
21:52
AGI and I think like artificial
21:54
general intelligence, but I think
21:56
that's actually possible because we're already
21:59
kind of seeing that shift in
22:01
the AI space from like these
22:03
like very specific models that can
22:05
only do one thing really well, right?
22:07
And if they see anything that's
22:09
like outside of the training set
22:11
they completely fail. You can kind
22:13
of like parallel like that whole
22:16
advancement in AI from these specific
22:18
models to these general models with
22:20
LLMs, and you can kind of
22:22
see that same mirrored, that same,
22:24
I guess, advancement in robotics, where you
22:26
have like a machine that does a
22:28
very specific task to like a humanoid
22:31
robot that can do like a variety
22:33
of different tasks. And so I think
22:35
like the advancement in AI is like
22:37
actually like one of the biggest unlocks
22:40
for robotics. I think a secondary
22:42
unlock is actually the cost of
22:44
hardware has like decreased. So I
22:47
mean obviously like Jensen Huang is
22:49
at the forefront of this with NVIDIA,
22:51
like, the NVIDIA founder and CEO.
22:53
Yeah. He is making these GPUs
22:55
better, faster, cheaper, and that is
22:58
allowing a lot of new ability
23:00
to train these large AI models
23:02
that can do all these generalized
23:04
tasks. But at the same time,
23:07
there is also on the robotic
23:09
side, just like hardware, advanced
23:11
manufacturing, all of that has
23:13
gotten cheaper as well. So
23:15
now you have, again, like
23:17
these, like, couple-thousand-dollar
23:20
robots right you can you
23:22
can buy like a robot
23:24
dog for like $2,000 $3,000
23:26
now and then maybe like
23:28
a small humanoid robot for
23:30
like 10k but like before that
23:33
right like these robots will
23:35
cost like, like, 40K, a
23:37
hundred K. ASIMO, I think, was
23:39
oh yeah like there was like
23:41
the Honda robot that had
23:43
like the backpack and could
23:45
oh yeah yeah yeah Boston Dynamics too,
23:48
like all of those
23:50
robots. I mean, obviously they're
23:53
like much more advanced and
23:55
they're like designed for like, you
23:57
know, like harder conditions, but
24:00
I think like in terms of
24:02
just like how like consumer costs
24:04
and hardware has gone down like
24:06
that does open the door for
24:08
a lot of people to actually
24:10
be able to afford to
24:12
buy like some of these robots
24:14
or like train their own AI
24:17
models. Right so it sounds like
24:19
you're almost as excited about just
24:21
like the I guess accessibility in
24:23
terms of like robotics being something
24:25
that people, humans, normal humans, and
24:27
not just, like, nation-states, can make.
24:29
Exactly, yeah. You know, potentially an investment,
24:32
like and that's why we had
24:34
that conversation about like okay if
24:36
it can save 10 hours a
24:38
week and you multiply that, to our
24:40
hourly-rate conversation, like, what's the
24:42
payback period I think that is
24:44
the kind of math that a
24:47
lot of people do when they're
24:49
trying to decide whether a labor
24:51
saving or time-saving invention is worth
24:53
investing in. Like one of the
24:55
ways I can justify having like
24:57
a really nice MacBook Pro is
24:59
I've probably used more than 4,000
25:02
hours. And even though it costs
25:04
$3,000, the hourly, I guess, cost
25:06
of ownership is like 75 cents
25:08
or something like that, right? Oh
25:10
gosh, yeah, yeah. So yeah, it
25:12
improves your productivity too. That's another
25:14
thing. It improved your productivity. I
25:17
think like, other than the fact
25:19
that robots just like save you
25:21
time, but like, maybe, you know,
25:23
it. It saves you time and
25:25
then you can use that saved
25:27
up time to do something else
25:29
that you like really want to
25:32
do whether that's like a new
25:34
hobby whether that's like catching up
25:36
with friends right whether that's like
25:38
you know learning how to code
25:40
code more on free code camp
25:42
right like it just it opens
25:44
up a lot more opportunity than
25:47
just you know the time saving
25:49
and the cost itself yeah well
25:51
I want to dive into like
25:53
your background on how you got
25:55
interested in robotics I mean was
25:57
this something that you were always
25:59
interested in as a kid was
26:02
there like some moment that you
26:04
remember in your childhood that you
26:06
were like whoa I'm like like
26:08
this is what I want to
26:10
be doing yeah so I kind
26:12
of um I definitely credit robotics
26:14
as like getting me into coding
26:16
which is really interesting. So
26:19
I actually can give a little
26:21
bit more about my background. So
26:23
I was born in China. I
26:25
came to the US when I
26:27
was about two years old. And
26:29
then my parents got their master's
26:31
degrees around Milwaukee, Wisconsin. And so
26:34
I actually moved out here to
26:36
Milwaukee, Wisconsin when I was about
26:38
two years old. They ended up
26:40
getting jobs, you know, in around
26:42
the Midwest, mostly in the Chicago
26:44
area, sometimes in like rural Illinois,
26:46
and then like back to Wisconsin,
26:49
also near Milwaukee. And so I've
26:51
always kind of been around the
26:53
Midwest, we spent like maybe like
26:55
five years around like Chicago, like
26:57
rural Illinois, before we moved back
26:59
to Milwaukee. And so yeah, I
27:01
mean, I call like Milwaukee, my
27:04
home. Or I guess like, I
27:06
call San Francisco my home, but
27:08
like Milwaukee is kind of like
27:10
my hometown, I guess. And
27:12
what's really interesting about Milwaukee is
27:14
that it's an old school like
27:16
industrial manufacturing town. So when people
27:19
think of the Midwest, they typically
27:21
think of like flyover states besides
27:23
like maybe like Chicago. But I
27:25
think, you know, even like as
27:27
late as in like the 50s
27:29
to the 80s, like there was
27:31
a huge, I mean, industrial revolution
27:34
in the US, and a lot
27:36
of that actually came from railroads,
27:38
and a lot of that path
27:40
also came through Chicago, which is
27:42
why Chicago became one of the
27:44
major transportation hubs of the United
27:46
States. And that kind of like
27:49
industrial like revolution and that manufacturing
27:51
capability actually like expanded, I mean,
27:53
Milwaukee always kind of, due to
27:55
its close proximity with Chicago had
27:57
a lot of that manufacturing capability
27:59
as well. And there are still
28:01
like a lot of like, I
28:04
guess like old school, like manufacturing.
28:06
Yeah, robotics companies based out of
28:08
Milwaukee, like Johnson Controls, like Rockwell
28:10
Automation, like GE Healthcare. I think
28:12
even GE in general, although I
28:14
can't confirm that. And so you
28:16
kind of just like grow up,
28:18
like growing up in Milwaukee is
28:21
like a lot of your friends'
28:23
parents kind of like work in
28:25
these areas. And whenever you talk
28:27
to them about work, they're always
28:29
like, oh, like, you know, we're
28:31
making this like cool new surgical
28:33
robot to, like, make, you know,
28:36
better surgery or making this, like,
28:38
cool, like, better MRI machine for
28:40
GE, or they're, like, you know,
28:42
making, like, robots, like, much more
28:44
efficient at, like, manufacturing cars in
28:46
the case of, like, Rockwell automation.
28:48
And so we actually have, like,
28:51
a variety of
28:53
really cool like I guess like
28:55
that culture made it very cool
28:57
to kind of have, like,
28:59
robotics clubs, yeah, so there's a
29:01
lot going on, yeah. Right. I mean, but
29:03
you had lots of friends there
29:06
who were in robotics too, so it's not
29:08
like you were just like the
29:10
lone kind of geeky kid who
29:12
was in robotics did you have
29:14
other friends that were interested in,
29:16
you know, actually, like Maker Faire
29:18
type stuff like building things I
29:21
would say so. Actually, I definitely
29:23
found like more, I mean, it
29:25
was almost like the same amount
29:27
of people who are interested in
29:29
that and like the people who
29:31
are interested in that like out
29:33
in San Francisco, which was kind
29:36
of surprising. But I would say
29:38
like a lot of like some
29:40
of my friends, especially like as
29:42
we grew older into high school,
29:44
were very interested in like manufacturing
29:46
in general, like whether that's like
29:48
robotics or whether that's like just
29:51
like cars, car manufacturing, welding, like
29:53
a lot of these kind of
29:55
industrial applications, all of
29:57
them were pretty interested in that.
29:59
And yeah, and I think like,
30:01
you know, I got, I got
30:03
pretty interested in that too through
30:06
all like these stories. And I
30:08
ended up joining my school's, my
30:10
high school robotics team. And that
30:12
actually was super interesting because it
30:14
eventually, like, I think like this
30:16
kind of goes back into like
30:18
what we were talking about before,
30:21
like what is like kind of
30:23
the cool, like newest thing to
30:25
do. Like what is the newest
30:27
innovation in robotics? And to me
30:29
robotics has always been a combination
30:31
of like hardware, the electrical boards
30:33
and stuff, and also the brain
30:35
and the computer. And when I
30:38
joined these, the robotics club, they
30:40
basically asked me, there's like three
30:42
main teams on the robotics team.
30:44
There's the mechanical manufacturing team, there's
30:46
the electrical team, and then there's
30:48
the computer team. And I was
30:50
like, trying to choose, and then
30:53
you know what was really interesting
30:55
was that they always like brought
30:57
this up they're like oh like
30:59
we can definitely build like any
31:01
type of robot to do like
31:03
a specific task but like what
31:05
actually makes the robot work is
31:08
actually the brain and a computer
31:10
and so that kind of I
31:12
feel like that line like still
31:14
stuck with me like from from
31:16
that day like all the way
31:18
to today as well and I
31:20
was like Oh, like, that's really
31:23
interesting. Like, if you compare that
31:25
to humans, like, what are humans
31:27
most valued for? Obviously, like, you
31:29
know, they could, they could do,
31:31
like, specific, like, physical tasks, but,
31:33
like, a lot of, like, the
31:35
GDP growth and, like, the knowledge
31:38
work actually comes from the brain.
31:40
Yeah, I mean, like, a forklift
31:42
is way stronger than a human and
31:44
way more efficient at, like... there are
31:46
machines that are, like, way more
31:48
efficient than the human form. What
31:50
makes humans useful is the
31:53
thinking. There's this great scene in
31:55
Star Trek Voyager, I believe. That's
31:57
the one with the doctor who's
31:59
like a hologram. He's like stuck
32:01
on the holodeck and he's a
32:03
very competent doctor and everything and
32:05
they're like, I think at one
32:08
point they like lost the doctor
32:10
or something like his, he went
32:12
off the ship because he had
32:14
like this hologram thing. Sorry spoilers
32:16
for you know. Basically he gets
32:18
this 29th century piece of technology
32:20
that allows him to, like, basically
32:23
leave the holodeck. And just... oh
32:25
yeah but they needed a doctor
32:27
and they're like oh well just
32:29
build one and like they just
32:31
kind of like took his form
32:33
factor and everything it looked like
32:35
him but it just didn't have
32:37
his capability and it didn't matter
32:40
that he had hands that could
32:42
like you know steadily you know
32:44
hold the scalpel and all this
32:46
stuff it just wasn't the same
32:48
right because he didn't have that
32:50
that medical knowledge and
32:52
that, I guess, that experience.
32:55
The brains of that robot, if
32:57
you will, are what made him capable. It's
32:59
a very funny show. But yeah,
33:01
I feel like there's something deeper
33:03
than just like watching the robot
33:05
recite his anatomy verbatim. Oh my
33:07
gosh, which, you know, it's a
33:10
very interesting thing to do. And,
33:12
you know, something that, you know,
33:14
AI can do today even, which
33:16
is, which is kind of crazy.
33:18
When you see like these science
33:20
fiction movies, like, like basically come
33:22
to life. Right, like within like
33:25
the last couple of years, that's,
33:27
yeah, it's... So you joined
33:29
the software part of the robot.
33:31
Yeah, so I learned how to
33:33
code basically on my high school
33:35
robotics team and you know, a
33:37
lot of the older students were
33:40
very, very kind and they kind
33:42
of mentored me and got me
33:44
started. And what's really interesting was
33:46
like while I was like kind
33:48
of learning how to code, I
33:50
came across your freeCodeCamp,
33:52
you know, website, and that's
33:55
one reason like how I kind
33:57
of like, you know, like kind
33:59
of, tried to learn how to
34:01
code on my own. And then
34:03
obviously like I was only like
34:05
like yeah I was I was
34:07
like pretty involved with my robotics
34:10
club like all four years of
34:12
high school and I think like
34:14
I was like pretty excited about
34:16
like kind of the future applications
34:18
of robotics even like back then
34:20
and I really wanted to do
34:22
more of that in college and
34:25
so I ended up you know
34:27
graduating and going to Stanford and
34:29
yeah kind of like pursuing more
34:31
like research like AI research robotics
34:33
research at Stanford and yeah I
34:35
mean I can I can talk
34:37
more about that but also wanted
34:39
to as Quincy if there's like
34:42
anything specific you want me to
34:44
focus on. One thing that I'm
34:46
really interested in learning a little
34:48
bit more about is what the
34:50
experience at Stanford was like for
34:52
people not everybody gets into Stanford
34:54
it's a very selective school is
34:57
expensive to attend. You were able
34:59
to get in with you know
35:01
just your test scores and your
35:03
extracurriculars, like working really
35:05
hard. My understanding is you didn't
35:07
have like this you know a
35:09
smooth path into there, you had
35:12
to work really hard to get
35:14
into Stanford. Yeah, yeah, yeah. So,
35:16
like, I'm probably one of, like,
35:18
10 people from Wisconsin in my
35:20
year at Stanford. Again, and I
35:22
think, like, five of those people
35:24
got in because they were athletes.
35:27
And then, not all of them
35:29
were from Milwaukee either. So
35:31
I'm, I think, like, from Milwaukee.
35:33
I'm probably one of, like, three
35:35
people from Milwaukee that year that
35:37
got into Stanford. So it was
35:39
and I went to a public
35:42
high school. So it wasn't like
35:44
a private school or anything. And
35:46
yeah. I mean, getting into Stanford
35:48
was kind of a culture shock
35:50
because it seemed like a lot
35:52
of the students who are there
35:54
come from the East Coast or
35:57
West Coast and they went to
35:59
like very very good high schools.
36:01
Sometimes they went to like private
36:03
high schools and they had a
36:05
lot of peers who also got
36:07
into like Stanford or other like
36:09
Ivy League institutions and I was
36:12
like, oh, I really can't relate
36:14
to that because I'm like, I
36:16
think I was the first person
36:18
in like 10 years in my
36:20
high school who had gotten into
36:22
Stanford and like again, Stanford just
36:24
like doesn't really accept people from
36:27
Wisconsin. There's only like 10 of
36:29
us maybe every year. And so
36:31
yeah, I was actually quite pleasantly
36:33
surprised when I got in because
36:35
I just like didn't think I
36:37
was going to get in just
36:39
like because they just didn't accept
36:42
people like like me. And I
36:44
think like what, like, obviously everybody
36:46
works hard, right? Everybody who gets
36:48
into Stanford and Ivy League institutions
36:50
and everybody works hard. So the
36:52
biggest question is like, how you
36:54
like differentiate yourself. And I think
36:56
like. for me specifically, I talked
36:59
a lot about my passion for
37:01
robotics, getting into Stanford and talking
37:03
about how I want to kind
37:05
of bring this technology into the
37:07
world in the form of a
37:09
business or a startup. And I
37:11
think like that actually kind of
37:14
relates to what I've been
37:16
working on today with my company
37:18
ego. But it's kind of interesting
37:20
like how that's like. It's more
37:22
about kind of like that story
37:24
telling, and like what motivates you
37:26
in addition to like all those
37:29
like high test scores that are
37:31
almost like a baseline necessity. Yeah
37:33
and I want to talk a
37:35
little bit more about that because
37:37
a lot of people listening to
37:39
this may be in high school
37:41
themselves but more likely maybe they
37:44
have kids that they would like
37:46
to eventually go to a really
37:48
good school like a really good
37:50
engineering program. Stanford one of the
37:52
best in the world that you
37:54
know many many people from all
37:56
over the world try very hard
37:59
like I don't know the exact
38:01
figures for applications but they're extremely
38:03
selective and it is not trivial
38:05
one does not simply get into
38:07
Stanford I want to talk about
38:09
like what you had to do
38:11
to get into Stanford in terms
38:14
of like test scores and obviously
38:16
your personal narrative extracurriculars like if
38:18
you don't mind like just spending
38:20
a minute or two talking about
38:22
that for the benefit of people
38:24
who are considering applying to an
38:26
elite institution like Stanford or who
38:29
want their kids to be able
38:31
to maybe their kids are still
38:33
young like my kids are young
38:35
but I would be thrilled if
38:37
they could get into a school
38:39
like Stanford you know 10 years
38:41
from now so like what should
38:44
parents encourage their kids to start
38:46
doing? I think a lot of
38:48
it is honestly like personal motivation
38:50
like I'd say like one of
38:52
the biggest things I see among
38:54
my friends at Stanford is that
38:56
like a lot of them are
38:58
very, like, personally motivated and like
39:01
and they have like a particular
39:03
passion or like a specific thing
39:05
that they're very excited about and
39:07
I think that actually shows a
39:09
lot and like all these like
39:11
applications or like what you do
39:13
obviously like test scores are kind
39:16
of a necessity like yeah if
39:18
you don't mind how you did
39:20
on standardized test scores like which
39:22
I understand some universities don't really
39:24
require those anymore but like they
39:26
may come back I don't know
39:28
but like how hard do you
39:31
have to work to prepare for
39:33
each? I took the SAT
39:35
and ACT. My SAT
39:37
was worse than my ACT. I
39:39
got a 35 out of 36
39:41
on my ACT. And I was
39:43
valedictorian of my high school class.
39:46
So I think those things definitely
39:48
helped, but those things are not
39:50
like the differentiated factor. Like you
39:52
don't have to be like valedictorian
39:54
or you don't have to get
39:56
like a 35 on your ACT.
39:58
But you should probably get like,
40:01
you know. above like a 32
40:03
or 33 and should probably be
40:05
in like the top like I
40:07
don't know like 10 to 20
40:09
students of your high school and
40:11
just just a you know have
40:13
like a like a baseline kind
40:16
of where like academically where you
40:18
where you need to be. But
40:20
yeah, I would say like, but
40:22
then there's, you know, the opposite
40:24
side, which is like a lot
40:26
of people in the top 10
40:28
to 20 of their high school
40:31
and have like a 36 on their
40:33
ACT don't end up getting into,
40:35
you know, Stanford and Ivy leagues.
40:37
And I think like the reason
40:39
is because they couldn't tell a
40:41
good story about like what they're
40:43
personally very motivated by and also
40:46
like what they're passionate about. And
40:48
so, a lot
40:50
of them may not really be
40:52
that differentiated from one another. And
40:54
again, I don't mean to slight
40:56
anybody, but like I met, like
40:58
kids whose parents are software engineers
41:00
at Intel who grew up like
41:03
with, you know, half a million
41:05
dollars in household income and stuff
41:07
like that, like there are a
41:09
dime a dozen, there are tons of
41:11
people like that in Palo Alto
41:13
in San Jose and stuff like
41:15
that. There are far fewer people
41:18
who are, you know, first generation.
41:20
or second generation Americans like yourself
41:22
who, you know, like one of
41:24
the things you told me before
41:26
we started talking was like for
41:28
the first two years, you didn't
41:30
even get to see your parents
41:33
when you were living in the
41:35
state. They were busy working and
41:37
finishing graduate school and stuff like
41:39
that. And then you're living in
41:41
Milwaukee, which is not even Chicago
41:43
in terms of like yes. That's
41:45
something like coastal elites say about
41:48
anything that's not touching the Pacific
41:50
or the Atlantic, right? So I
41:52
do think that the fact that
41:54
you had that interesting background maybe
41:56
helped you differentiate yourself from the
41:58
children of elites in New York
42:00
City. in San Francisco and stuff
42:03
like that, right? Yeah, I'm hoping
42:05
that's probably the reason. We'll never
42:07
know. I mean, like, it is
42:09
so competitive that there are people
42:11
who have perfect SAT scores that
42:13
don't get into these schools, right?
42:15
And that's the thing that you
42:18
do need to go above and
42:20
beyond merely being academically, you know,
42:22
excellent and you need to be
42:24
excellent in other areas that are
42:26
distinct and interesting. And it sounds
42:28
like for you, programming and robotics
42:30
was that kind of key difference
42:33
here. Yeah. And it is definitely
42:35
really interesting because that actually like,
42:37
I feel like I am a
42:39
planner and I feel like I
42:41
plan, like, several years in advance,
42:43
like what I want to be
42:45
doing in the next couple years.
42:48
And once I commit to something,
42:50
I'm pretty locked in and focused.
42:52
And so I think like, you
42:54
know, once like I started, you
42:56
know, high school in robotics, I
42:58
did it all four years. I
43:00
was pretty committed. I did like
43:02
robotics research at Stanford. I ended
43:05
up getting a bunch of internships.
43:07
So I guess like I can
43:09
talk a little bit more about
43:11
like Stanford and how that got
43:13
me into like ego which is
43:15
what I'm currently working on right
43:17
now. So I guess, oof, yeah,
43:20
at Stanford, I decided to basically,
43:22
well, one thing, I've never really
43:24
realized that AI was like that
43:26
big of a thing until I
43:28
got to Stanford and then everybody
43:30
was like talking about AI and
43:32
computer science and how it's going
43:35
to be like the next big
43:37
thing. And so that was actually
43:39
really interesting kind of being at
43:41
the forefront of that innovation. And
43:43
at the time I had already
43:45
kind of decided that I want
43:47
to focus more on the computer
43:50
and like the brain side of
43:52
robotics. And so a lot of
43:54
like the robotics research I was
43:56
doing was also more focused on
43:58
kind of AI. and like how
44:00
to use the brain to basically
44:02
control the robot. And it was
44:04
really interesting because actually
44:07
one of the first kind
44:09
of like large scale applications
44:11
of robotics at the time,
44:13
and this is before we
44:15
have humanoid robots, which is
44:17
why I'm like so excited
44:19
about humanoid robots, is like
44:21
pre-humanoid robots. You couldn't actually
44:23
make a humanoid robot that does
44:25
like a variety of tasks like you
44:27
can today, or are at the edge of,
44:29
like the cusp of, a breakthrough. You made
44:31
very specific robots for very physical
44:34
tasks. And the most general robot
44:36
at the time was actually in
44:38
self-driving cars and autonomous driving. And
44:41
I was like, oh, like that's
44:43
really interesting. And a lot of
44:45
the developments actually that enable
44:47
self-driving cars are mostly kind
44:49
of the brain side and the AI
44:52
side, in addition to like some level
44:54
of algorithms, like sensor fusion and sensors.
44:56
And so that was like something I
44:58
saw and I was like, oh, I
45:01
really, really want to get into that
45:03
and learn more about like what is
45:05
that new technology to actually make
45:08
self-driving cars possible. And
45:10
so obviously I ended up doing
45:12
computer science with the focus in
45:15
the AI track and I actually
45:17
ended up. doing a lot of
45:19
like just like talking to a
45:21
lot of people in the self-driving
45:24
car space just like about their
45:26
thoughts and like where they think
45:28
the direction of the industry is
45:30
going and I ended up getting
45:33
an internship at Lyft Level 5,
45:35
so Lyft is obviously, along with
45:37
Uber one of the two largest
45:39
ride-sharing companies, yeah. And it's crazy
45:41
because my parents had not heard of
45:44
what Lyft was when I got
45:46
the internship because again Wisconsin everyone has
45:48
a car right so it's kind of
45:50
really crazy how big the disconnect is
45:52
between like you know like everybody who
45:55
uses like Lyft and Uber in like
45:57
San Francisco and New York and then
45:59
like places like Wisconsin where
46:01
it's like everybody has a car
46:03
so you don't really need to use like
46:05
Uber and Lyft. Yeah, so I ended
46:08
up doing a bunch of research on
46:10
the behavior planning team. So the behavior
46:12
planning team is essentially planning the
46:15
behavior, like it's exactly what it
46:17
sounds like, how you tell the
46:19
car, like how to drive in
46:21
like different scenarios and how do
46:24
you generalize that across like multiple
46:26
scenarios? And I was on that
46:28
team and I wrote some
46:30
mildly interesting algorithms, which
46:33
is basically telling the
46:35
car how to like
46:37
move and stop at
46:40
like different stop
46:42
signs. And yeah, and then
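The stop-sign behavior she describes can be sketched as a tiny state machine. This is a minimal illustrative toy; every name and threshold below is invented, and it is not Lyft Level 5's actual planner or API:

```python
# Toy stop-sign behavior rule, the kind of logic a behavior-planning
# module encodes. All names and thresholds here are illustrative.
from enum import Enum, auto

class StopState(Enum):
    APPROACHING = auto()
    STOPPING = auto()
    STOPPED = auto()
    PROCEEDING = auto()

def next_state(state, dist_to_sign_m, speed_mps, stopped_time_s,
               stop_line_m=0.5, min_stop_s=2.0):
    """Advance the stop-sign state machine by one tick."""
    if state is StopState.APPROACHING and dist_to_sign_m <= 15.0:
        return StopState.STOPPING      # begin braking near the sign
    if (state is StopState.STOPPING
            and dist_to_sign_m <= stop_line_m and speed_mps < 0.1):
        return StopState.STOPPED       # fully halted at the line
    if state is StopState.STOPPED and stopped_time_s >= min_stop_s:
        return StopState.PROCEEDING    # waited long enough, go
    return state
```

Generalizing such hand-written rules across many scenarios is exactly what makes behavior planning hard.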
46:44
that was like something that
46:46
really was super
46:49
interesting. And then what
46:51
happened was that COVID
46:53
hit. And then all
46:55
like you can't really
46:57
work on hardware anymore
47:00
because the two years
47:02
of service and stuff,
47:04
right? Exactly. And
47:06
yeah, so I ended up
47:09
actually switching away
47:11
from self driving
47:13
cars and into
47:15
Oculus and the story
47:17
behind. Yeah, our headset,
47:19
founded by... Yeah, yeah,
47:21
yeah. Yeah, yeah. So I did
47:24
an Oculus internship in
47:26
college at Stanford because
47:28
I wanted to try
47:30
like different applications of
47:32
robotics outside of just
47:35
self-driving cars. So that
47:37
was kind of like
47:39
an experimental phase for
47:41
me. And I ended up
47:43
doing an internship at Oculus
47:46
where I did like
47:48
train like ground truth,
47:50
depth sensing algorithms. And... Exactly.
47:53
Yep. So I did a ton of work
47:55
on that and my work actually ended
47:57
up being shipped as a part of
48:00
the Oculus Quest. And yeah,
48:02
so that actually has like
48:04
huge applications. And interestingly,
48:06
all of these themes
48:08
kind of revolve around
48:11
robotics as well, because
48:13
basically the way that
48:15
you perceive a 3D world,
48:17
like that perception system is
48:20
very similar, whether that's using,
48:22
you know, basically, essentially, trying
48:24
to do 3D reconstruction or
48:27
depth sensing using like AR/VR
48:29
technologies versus using like self-driving
48:32
cars and like robotics because in
48:34
all of these cases you still have
48:36
to figure out what are the
48:38
different like things around you and how far
48:40
they are. In the case of a robot
48:43
it's like if they're that far like
48:45
you have to be able to grasp
48:47
and like pick it up very accurately
48:49
to be able to track that. In
48:51
the case of AR/VR it's definitely much
48:54
more of like, oh, how can I
48:56
warn the user who's like playing a
48:58
VR game to not hit like that
49:00
table or that couch that comes
49:03
super close to me? So the work
49:05
that I've done ended up being
49:07
a part of the Oculus Guardian
49:09
system, which warns you that things
49:12
are too close. I haven't seen
49:14
it in action, but my understanding
49:16
is, like, you don't actually see things
49:19
in the periphery until it's relevant for
49:21
you to see it like, oh, your
49:23
hand is swinging very close to this
49:25
window. Like that, right? So it is
49:27
a way of like, because it would
49:29
break immersion if you could just constantly
49:31
see the living room around you. So they
49:34
figure out how to selectively kind of
49:36
hide and show things to keep you
49:38
safe while you're swinging your lightsaber
49:40
around or whatever it is. Exactly. And
49:42
part of that. Figuring out where
49:45
how close things are is that depth
49:47
sensing right? It's like figuring out exactly
49:49
how close things are because Oculus doesn't
49:51
have any type of 3D sensors like
49:54
when I say 3D sensors I
49:56
mostly mean like LiDAR. But yeah,
49:58
Oculus only has cameras, so like
50:00
predicting how far things
50:02
are from cameras is
50:04
basically an AI task.
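A toy illustration of why camera-only depth is a prediction problem: with two cameras, depth follows directly from stereo geometry, but a learned model has to estimate the disparity itself from images. Everything below (numbers, the warning threshold, function names) is invented for illustration and is not the actual Quest or Guardian pipeline:

```python
# Stereo geometry: depth = focal_length * baseline / disparity.
# A headset without LiDAR must *predict* disparity with a model,
# which is what makes depth sensing an AI task.

def depth_from_disparity(focal_px, baseline_m, disparity_px):
    """Depth in meters from focal length (px), baseline (m), disparity (px)."""
    return focal_px * baseline_m / disparity_px

def proximity_warning(depths_m, warn_at_m=0.4):
    """Guardian-style check: warn if any estimated point is too close."""
    return min(depths_m) < warn_at_m
```

With a 1000 px focal length and a 6.4 cm baseline, a 64 px disparity corresponds to a point 1 m away; the warning then fires only when some depth estimate drops under the threshold.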
50:06
Yeah, so and then I
50:09
obviously after graduation I
50:11
went back to Oculus on
50:13
more on the AR side
50:16
of things at this point
50:18
and I worked on AR
50:20
avatars which is basically like
50:23
face tracking for AR avatars,
50:25
which is basically how
50:27
Like if you do something with your facial
50:30
emotion or show some emotion right now, like
50:32
when we're talking to each other, how do
50:34
you mirror that in like an AR or
50:36
a VR setting in like a with
50:38
a 3D avatar? Yeah, because the 3D
50:41
avatar, by the way, AR is augmented
50:43
reality, if we failed to define that
50:45
earlier. I like to always define acronyms.
50:48
So like, for example, if I'm using
50:50
some sort of app, they're not going
50:52
to try to reproduce every
50:55
pore of my skin exactly
50:57
here on my head and
50:59
like you know the the
51:01
exact amount of gray hair
51:03
oh no, you don't have any
51:05
gray hair, Quincy, you
51:07
look really young. So what
51:10
it will do instead is
51:12
it'll just kind of like
51:14
this Nintendo Wii version of
51:16
yeah, yeah, Miis is what they're
51:16
called, Mii with two i's
51:20
oh my gosh that is such a
51:23
Yeah, exactly. I mean, Quincy explained
51:25
it. I couldn't have explained it any
51:27
better than Quincy had. So basically, yeah,
51:29
it's basically trying your best to
51:31
mirror whatever expression your face has on
51:34
a 3D character with potentially like less
51:36
fidelity, right? A 3D virtual character
51:38
that doesn't look exactly like you in
51:40
kind of like almost like low fidelity,
51:42
like kind of me setting, like both
51:44
in a augmented reality and in a
51:46
me setting, like both in augmented reality
51:48
and in a virtual reality and
51:50
in virtual reality. So that was
51:53
a lot of, you know, traditional
51:55
with the mix of like new
51:57
kind of AI technology and machines.
52:00
learning to collect massive
52:02
amounts of training data
52:04
and do 3D reconstruction
52:07
on humans to be
52:09
able to train that
52:11
pipeline which is eventually
52:14
released on like basically
52:16
phones and VR headsets
52:19
worldwide, in real
52:21
time. So that was super
52:24
interesting and I
52:26
actually met my co-founder you
52:28
know, at Oculus, and he
52:30
actually worked on the Horizon
52:33
Worlds part of things. So
52:35
Horizon is the like kind
52:37
of the VR like social
52:39
like social space like UGC
52:41
platform where I think they're
52:43
trying to do something very
52:45
similar to Roblox where they
52:48
can have like people like
52:50
hang out in the 3D space
52:52
and like play like video games
52:54
together in that setting. And I
52:57
was thinking about leaving just
52:59
for personal reasons and some
53:02
of the politics in the
53:04
org and we actually hit
53:07
it off really well and
53:09
we decided to go ahead
53:11
and start a company together
53:14
and it's called ego. We're
53:16
building human-like AI agents in
53:18
games and what's really interesting
53:21
is that we've kind of
53:23
come a full circle because
53:26
robotics is basically how like
53:28
AI like embodied I would
53:30
say embodied agents right embodied
53:32
AI agents can do things
53:35
and do a variety of
53:37
different things with a body
53:39
in the physical world but
53:41
what's really interesting is that
53:43
that is essentially the same
53:45
technology stack that where you
53:47
can have agents that do
53:49
everything a human could do
53:52
in a virtual 3D space
53:54
in things like games. So
53:56
if I understand correctly, you're
53:58
kind of giving, like, the AI
54:00
a form factor like having a physical human
54:02
body like if they need to go somewhere
54:05
they like computers can instantaneously
54:07
transport themselves anywhere right and
54:09
but they don't they're not
54:11
corporeal they don't have some
54:13
physical form they're just you know
54:15
there right exactly I don't know how to
54:17
articulate it because there's no such concept in
54:19
normal natural language to articulate that sort of
54:22
stuff I'm sure there's yeah technical terms for
54:24
it but but they are kind of like
54:26
everywhere all at once wherever they need to
54:28
be and stuff like that but once you
54:31
apply like like you take like like I'll
54:33
use Grand Theft Auto as an analogy because
54:35
yeah, oh, we love GTA. A lot
54:37
of self-driving car training was actually done
54:39
in Grand Theft Auto. I'm not sure if they
54:42
do like the serious industrial training but
54:44
that's a common thing to use as
54:46
like a robotics simulator, is my understanding,
54:49
is to like yeah kind of
54:51
generate some sort of like being
54:53
yeah in GTA like whether that's
54:55
a goat that just walks around
54:57
and like walk right through cars
54:59
and destroy everything or whatever I
55:01
think it was like a deer
55:03
I saw some demo like this
55:05
deer that was like like it's
55:08
very chaotic yeah but some crocodiles
55:10
in GTA 6 is the new
55:12
thing I think yeah so so I
55:14
guess my question then is so a lot
55:16
of what you're doing with ego
55:18
is kind of giving an AI a
55:20
body so it's like wow exactly like
55:23
the first thing an AI ever does
55:25
in a movie whenever it like takes
55:27
over a body or something like picks
55:29
up their hands and goes whoa you
55:31
know like that kind of thing
55:34
it's it's exactly that's exactly
55:36
what we're doing we think
55:38
that robotics obviously like a
55:40
lot of very cool people
55:42
are working on that and obviously
55:44
I've had huge interest in robotics for
55:46
a very long time. But I think
55:48
it's just so interesting how that can
55:50
be applied to like different industries like
55:53
gaming. So I guess like I didn't
55:55
like mention this at all during this
55:57
podcast because it was talking about robotics.
55:59
But I actually, like, partly,
56:01
like, this all kind of
56:03
fits together because I also
56:05
partly got into coding because
56:08
of gaming. I am, I
56:10
obviously played Pokemon, like,
56:12
Emerald, I'm showing my age
56:14
here. But yeah, Pokemon Emerald.
56:17
And- The old Game Boy Advance
56:19
ones, probably. Yeah, Game Boys, Game
56:21
Boy Advance. I was not a
56:24
part of the, you know, Fire
56:26
Red, Leaf Green Generation. I'm not
56:28
that old. Yeah, and then obviously
56:31
I had the Wii, right? I
56:33
played like all the Nintendo devices
56:35
as I was a kid. And
56:37
I actually, like, during the time
56:39
when I was learning how to
56:42
code in robotics, I, when I
56:44
was like going online, I was
56:46
like literally looking up, like, how
56:49
do I like mod the Pokemon
56:51
games on emulators so that I
56:53
can, like, get cheats, right?
56:55
Get unlimited Master Balls. And
56:58
that requires some like assembly coding
57:00
and that, you know, got me
57:03
pretty interested in like coding beyond
57:05
the fact that it can just
57:07
be used for robotics. And it
57:10
was like kind of an intellectual
57:12
exercise itself. Yeah, and then, I
57:14
mean, obviously, I still game, I
57:17
play a lot of games. I
57:19
can't play too many games because
57:21
then I wouldn't have time to
57:23
do anything else. And so it's
57:25
kind of interesting how like my
57:28
personal life and like my professional
57:30
life kind of converged in the
57:32
sense of ego because games are
57:34
essentially simulations for robots in some
57:36
sense. GTA is a great example.
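The standard trick behind "games as simulations for robots" is domain randomization: vary the simulator's physics and visual parameters every training episode so a policy learned in-game generalizes to the real world, closing the sim-to-real gap she mentions. A minimal sketch, with parameter names and ranges invented purely for illustration:

```python
# Domain randomization sketch: sample fresh simulator settings per
# episode so the trained agent never overfits one exact "world".
import random

def randomized_episode_params(rng):
    """Sample one training episode's simulator settings from wide ranges."""
    return {
        "friction": rng.uniform(0.4, 1.2),       # surface friction coefficient
        "sensor_noise": rng.uniform(0.0, 0.05),  # additive camera noise level
        "lighting": rng.choice(["day", "dusk", "night"]),
    }

rng = random.Random(0)  # seeded for reproducible training runs
episodes = [randomized_episode_params(rng) for _ in range(100)]
```

Each episode then runs the agent in a slightly different world, so the real world looks like just one more sample from the training distribution.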
57:39
And then, but you know, also
57:41
like, you know, like a lot
57:43
of AAA games are
57:45
almost indistinguishable from reality
57:47
these days, right. Red Dead
57:50
Redemption, I think is
57:52
a classic example. Baldur's Gate,
57:55
Baldur's Gate 3 is also
57:58
like pretty cool, the
58:00
graphics, even like Final Fantasy, like
58:02
these days, like look more and
58:04
more realistic. Yeah, and usually
58:06
they're stylized, but they could
58:08
be somewhat photo realistic if
58:11
they wanted to be, but usually
58:13
because of the uncanny valley,
58:15
they don't try to make it
58:17
too, like, like photo realistic because
58:19
then it could be creepy, I
58:21
guess. Yeah. Too similar, but not
58:23
exactly human, is creepy. It's also
58:25
like less of an art form
58:27
when it's like more realistic in
58:30
some weird sense because then you're
58:32
like, oh, it's just real life,
58:34
right? It's like that weird thing
58:36
between like, oh, what is art?
58:38
Is it is photography art or is
58:41
photography not art? Like, you know, yeah.
58:43
So, so with ego, let's just talk
58:45
about what it, what it does.
58:47
Like you use this term, I
58:49
think it's like endless games or
58:51
something like that. Infinite games.
58:53
Yeah. What exactly is an
58:56
infinite game? So the vision of ego
58:58
started when my co-founder and I were
59:00
like, oh, we want to build an
59:02
infinite game, which is a game that
59:04
you can play forever. It's a, it's
59:07
basically what's explained in like
59:09
Sword Art
59:11
Online or like The Matrix. Okay, yeah,
59:13
I've watched at least one episode of
59:15
it. My kids didn't like it. Yeah,
59:17
it's a... Sometimes we tell people
59:19
that we're building Sword Art Online
59:22
and people are like, oh, what?
59:24
Like, what is that? But yeah,
59:26
that's actually the vision of ego
59:28
is we were building sort of
59:31
like, you know, infinite games, Sword Art
59:33
Online, where people, like, you
59:35
can like, essentially play like
59:37
any game that you're interested
59:40
in because the world and
59:42
the agent will just like generate
59:44
that for you. while you're playing
59:46
it. And based on your own
59:48
personal interest, based on what style,
59:50
like of art you want, based
59:52
on what type of game play
59:54
you want, the game will generate it
59:56
with AI based on what you're interested
59:59
in doing. Now, we realize
1:00:01
that that's like a very
1:00:03
ambitious and huge, huge project.
1:00:06
And we actually don't think
1:00:08
that the technology and the
1:00:10
infrastructure is there yet. So
1:00:13
we kind of have to build
1:00:15
all the building blocks to get
1:00:18
to there eventually. And what is
1:00:20
most interesting about the rise
1:00:22
of like AI and like
1:00:24
ChatGPT and large language
1:00:26
models, like you know, like
1:00:28
all these other other models
1:00:31
like Llama and Claude and
1:00:33
whatnot, is that you can actually
1:00:35
make human like agents
1:00:38
in games. So basically,
1:00:40
like, some might call it
1:00:42
like AGI or sentience, or
1:00:44
whatever, there's like lots
1:00:47
of hypey terms being
1:00:49
thrown around on the internet.
1:00:51
But I think it's
1:00:53
actually like a real thing
1:00:55
because sometimes even if you're just talking
1:00:57
to, like, ChatGPT, like,
1:00:59
and you pretend like it's your therapist
1:01:02
or your girlfriend or your boyfriend and
1:01:04
you like talk to it the way
1:01:06
that you would talk to a human
1:01:08
it feels like it actually has like
1:01:11
real emotions yeah and so
1:01:13
when you give these AI agents
1:01:15
a body in a video game
1:01:17
like a virtual like 3D body
1:01:19
where it can actually move around
1:01:21
and it can perform actions and
1:01:23
it can like you know, maybe
1:01:25
shake your hand or like give
1:01:27
you a high five, right? They
1:01:29
actually look and feel and behave
1:01:31
as if they were like real
1:01:34
humans. And that's actually super interesting
1:01:36
to us. I think that enables
1:01:38
a variety of new applications
1:01:40
and games. And some of
1:01:42
the ones that we're focused
1:01:44
on is human-like NPCs, right?
1:01:47
NPCs that you can talk
1:01:49
to, that... Non-player characters.
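Since the conversation keeps coming back to giving a language-model agent a body that can talk, wave, and move around, here is a rough sketch of what that loop could look like. Everything here is hypothetical and not Ego's actual architecture: the `NPC` class is illustrative, and `fake_llm` is a stand-in for a real hosted language-model call.

```python
# Rough sketch of the "LLM-driven NPC with a body" idea discussed here.
# All names are hypothetical; `fake_llm` stands in for a real model call,
# and a real engine would replace the action branches with animations.
from dataclasses import dataclass, field

@dataclass
class NPC:
    name: str
    position: tuple = (0, 0)
    history: list = field(default_factory=list)  # running dialogue memory

    def step(self, player_utterance, llm):
        """Ask the model for a spoken line plus an embodied action."""
        self.history.append(("player", player_utterance))
        reply, action = llm(self.history)
        self.history.append((self.name, reply))
        if action == "wave":
            pass  # here the engine would trigger a wave animation
        elif action == "approach":
            # move one tile toward the player
            self.position = (self.position[0] + 1, self.position[1])
        return reply, action

def fake_llm(history):
    # Hypothetical stand-in for a hosted LLM: keyword-match the last line.
    last_line = history[-1][1].lower()
    if "hello" in last_line:
        return "Hi there, traveler!", "wave"
    return "Follow me.", "approach"

npc = NPC("Guide")
print(npc.step("Hello!", fake_llm))     # the NPC greets and waves
print(npc.step("Where to?", fake_llm))  # the NPC replies and steps closer
```

The point of the sketch is the shape of the loop: dialogue memory goes in, a line of speech and a body action come out, and the action feeds the game engine.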
1:01:52
Yeah. So, so, man, there's so
1:01:54
much impact there. One, one
1:01:56
thing is like, uh, I...
1:01:58
this phenomenon
1:02:01
of humans kind of perceiving
1:02:03
AI agents and like characters
1:02:05
that aren't even real as kind
1:02:07
of real and building like a
1:02:09
kind of a relationship with them.
1:02:11
I mean, like, Hatsune Miku, we
1:02:13
got, like, a lot of Hatsune Miku. And
1:02:16
it was pre-AI, right? It's just
1:02:18
like a bunch of... It's just
1:02:20
human engineers and musicians. Yeah. Bringing her
1:02:22
to life, right. If you've ever seen
1:02:25
the long blue... twin-tails
1:02:27
anime character, she's everywhere. And it's basically
1:02:29
like this company that makes, like, a
1:02:31
voice... A Vocaloid. Yeah. Well, like
1:02:34
the original product was you could just
1:02:36
like program what you wanted her to
1:02:38
sing and then you could control the
1:02:40
pitch and everything and they had like
1:02:43
all these really high quality samples and
1:02:45
they stitched them together. So it seamlessly
1:02:47
sounded like a human woman was
1:02:49
singing. Yes. Singing whatever you
1:02:51
want, you know, whatever words, whatever
1:02:53
notes, and whatever sequence and all
1:02:55
that stuff. So it gave you
1:02:58
the control of basically having your
1:03:00
own programmable vocalist, just like you
1:03:02
could program a drum machine to
1:03:04
kind of act like a drummer,
1:03:06
right? So that, but that suspension
1:03:08
of disbelief, if you will, in the
1:03:10
human experience, is an interesting one
1:03:12
because as long as you know
1:03:15
that you're actually interacting with an
1:03:17
AI agent and it's not somebody
1:03:19
trying to scam you like at scale
1:03:21
with like you know I think
1:03:23
it's called like pig butchering scams
1:03:25
or something like all those romance
1:03:27
scams, like Tinder scams, like
1:03:29
that, where it's computationally inexpensive to potentially
1:03:31
scam millions of people and most
1:03:33
of them will know what's going
1:03:35
on but some people will fall
1:03:37
for it and that'll pay off
1:03:39
the cost of the compute and
1:03:42
all that stuff so it's it's
1:03:44
even positive to keep going if you
1:03:46
have absolutely no morals and you're
1:03:48
just an amoral bastard. That's very
1:03:50
sad. But like as long as there's
1:03:52
consensual interaction with an AI
1:03:54
agent and you know what's happening a
1:03:56
lot of people may be creeped out about
1:03:58
this, but there's another human phenomenon that
1:04:01
is very important to the way
1:04:03
pretty much everything works in society
1:04:05
and that is the human brain
1:04:07
will perceive the static images that
1:04:09
are in rapid succession as like kind
1:04:11
of like a video type phenomenon
1:04:13
right, like... what's it called, I think...
1:04:15
right, yeah, I'm not sure what the
1:04:17
exact term is, but basically, like, if
1:04:19
I'm staring at a movie and it's 24
1:04:21
frames per second I don't in a movie
1:04:23
theater it's not like okay there's an image
1:04:25
of a guy, he's standing there. Oh, okay, here's
1:04:27
a new image. What is this?
1:04:30
Oh, this guy is standing like
1:04:32
slightly farther to the right. Oh,
1:04:34
look, here's another image. He's standing
1:04:36
even farther to the right. Like,
1:04:38
that's not how the human brain
1:04:40
works. It kind of interprets the,
1:04:42
you know, even 24 frames per
1:04:44
second as being a fluid kind
1:04:46
of like visual experience. And the
1:04:48
human brain could easily have not
1:04:50
worked like that. And then
1:04:52
movies just wouldn't be possible. Exactly.
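The frame-timing arithmetic behind this is easy to check. A quick illustrative sketch, using only the frame rates that come up in the conversation:

```python
# How long each still image stays on screen, assuming evenly spaced frames.
# At 24 fps (film) each frame lasts ~41.7 ms, which the brain fuses into
# continuous motion; at 120 Hz it's ~8.3 ms per frame.

def frame_interval_ms(fps):
    """Time each frame is displayed, in milliseconds."""
    return 1000.0 / fps

for fps in (24, 60, 120):
    print(f"{fps:>3} fps -> {frame_interval_ms(fps):.1f} ms per frame")
```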
1:04:55
Yeah, it's actually... I think, like,
1:04:57
there's the human, like, frame loop,
1:04:59
and then there's also the... I
1:05:02
mean, 60 frames per second is
1:05:04
like, already indistinguishable from, like, reality,
1:05:06
but I think we actually tested
1:05:08
this. The top pro gamers', like,
1:05:11
reaction speed is somewhere between 100
1:05:13
to 300 milliseconds. Yeah So even if you
1:05:15
see an image like on the screen, like for you
1:05:17
to be able to react it will take at least
1:05:19
a tenth of a second. And if you go to
1:05:21
a science museum, they'll often have this thing
1:05:24
that will randomly drop a ruler and you
1:05:26
catch it and where you caught it tells
1:05:28
you your reaction speed to being able to
1:05:30
catch like the ruler getting dropped. And
1:05:32
I think like most people will
1:05:34
have a reaction speed of like...
1:05:36
a quarter of a second, so 250 milliseconds,
1:05:38
whereas a pro gamer might have
1:05:40
half that, which is phenomenal. And
1:05:43
unfortunately, they're going to lose that
1:05:45
as they get older because everybody
1:05:47
gets older. But to get back
1:05:49
to it: so, to some extent, like,
1:05:51
the phenomenon of humans being able
1:05:54
to build relationships with characters
1:05:56
that are not real, that are like
1:05:58
AI, essentially, is, like, a positive quirk;
1:06:00
it can be used in a positive way
1:06:03
to create these kind of like agents
1:06:05
that people can build relationships with. Whereas
1:06:07
if that phenomenon didn't exist, people would
1:06:09
just be the whole time, oh, it's
1:06:11
not a real human being, whatever, just
1:06:14
walk away. But because that quirk of
1:06:16
humanity exists, there is space for these
1:06:18
infinite games where you can have like
1:06:20
extremely esoteric characters. Like let's say I
1:06:22
want to bring back from the dead
1:06:25
like, some very, very specific musician from,
1:06:27
like, the Baroque period that very
1:06:29
few people are interested in, but I
1:06:31
want to jam with that person right
1:06:34
like, AI could make something like that
1:06:36
possible and it's so specific that
1:06:38
there would not be like a market
1:06:40
for creating, like, a Hatsune Miku
1:06:42
version of that specific composer from the
1:06:45
Baroque period, you know, like, it just
1:06:47
wouldn't be viable from an economic perspective
1:06:49
but with AI in the mix,
1:06:52
suddenly things that were you know not feasible
1:06:54
previously, yeah, can be
1:06:56
done kind of on the
1:06:58
fly inexpensively and it's and
1:07:01
it's super interesting because obviously
1:07:03
like there's this whole generation
1:07:05
of like AI therapist like
1:07:07
AI boyfriend girlfriend which is
1:07:09
interesting but like a lot
1:07:11
of these apps are still
1:07:13
like pretty, pretty chat-based. So
1:07:16
During the pandemic, I played a lot
1:07:18
of Animal Crossing, and a lot of
1:07:20
my friends also played a lot of
1:07:23
Animal Crossing. And a lot of people
1:07:25
have like really fallen off of playing
1:07:27
Animal Crossing after the pandemic ended and
1:07:29
people started going back to work. But
1:07:32
one thing that I've noticed across like
1:07:34
all of my friends who continue playing
1:07:36
Animal Crossing even after the pandemic ended
1:07:38
and would continue to spend like hundreds
1:07:41
of hours in this game are people
1:07:43
who actually really develop like a personal
1:07:45
relationship with the characters, right? They're playing
1:07:48
because they want to interact with the
1:07:50
characters more, they want to like, you
1:07:52
know, build their relationship with their characters,
1:07:54
they want to like give them gifts
1:07:56
and things like that. And already you
1:07:59
can kind of like see like how building
1:08:01
a relationship with a virtual character
1:08:03
in a 3D space is already
1:08:05
like a huge phenomenon especially amongst
1:08:07
like today I think like people
1:08:09
are going like spending more and
1:08:11
more their time online and like
1:08:13
less of their time in real
1:08:15
life not sure if that's a
1:08:17
good thing or not, but
1:08:19
that is kind of
1:08:21
the reality. And virtual characters that
1:08:23
only talk to you via text
1:08:25
or like voices only get you
1:08:27
so far but virtual characters that
1:08:29
can, you know, behave like humans
1:08:31
and have a body in like
1:08:33
a 3D like virtual space like
1:08:35
that's actually super super interesting and
1:08:37
that has applications actually beyond NPCs
1:08:39
and like building relationships with these
1:08:41
virtual characters. Obviously that also
1:08:43
has applications where they can like
1:08:45
help you, for example, like train
1:08:47
a player up and coach them
1:08:49
for more competitive games that has
1:08:51
applications where these AI companions can
1:08:53
play with you as like a
1:08:55
character across like games, like both
1:08:57
single player and multiplayer games when
1:08:59
your friends are not online, has
1:09:01
applications in terms of just like
1:09:03
play testing games, right? And like
1:09:06
trying to find every bug and
1:09:08
like, filing that report, and, you
1:09:10
know, having, like, pinging the engineering
1:09:12
team to fix it, or in
1:09:14
some cases, maybe the AI can
1:09:16
do code gen and, like, fix
1:09:18
the bugs themselves. Yeah, that's pretty
1:09:20
exciting. The notion that, like, something's
1:09:22
broken and you'd be like, hey,
1:09:24
like, do you see that? Why
1:09:26
is that tree, like, floating above
1:09:28
the ground? Oh, let me fix
1:09:30
that real quick. And then the
1:09:32
AI agent puts in a pull
1:09:34
request. I mean, that would be
1:09:36
pretty remarkable. One thing that you
1:09:38
said there about, like, Animal Crossing
1:09:40
and the characters keeping people come
1:09:42
back. I'm convinced that's why, like,
1:09:44
like, World of Warcraft... if World
1:09:46
of Warcraft, games like that, MMOs
1:09:48
or MMORPGs, where they have, like,
1:09:50
a physically instantiated human-like body whether
1:09:52
that's like a dwarf or an
1:09:54
elf or something like that. But
1:09:56
like they're running around, they're doing
1:09:58
stuff together, they're going on raids
1:10:00
together. Imagine that you have all
1:10:02
these friends and you know they're
1:10:04
interesting people that are living in
1:10:06
like, you know, Omaha or wherever
1:10:08
that you're getting on and you're
1:10:10
grabbing your doctor pepper and you're
1:10:12
sitting down in your play with
1:10:14
them for a few hours and
1:10:16
going into some dungeon or going
1:10:18
in fighting some other... guilds or
1:10:20
something like that and it is
1:10:22
the people that keep you coming
1:10:24
back. The gameplay is not that competitive;
1:10:26
it's, like, you know, kill,
1:10:28
loot, you know. It's also, like,
1:10:30
very old though which is kind
1:10:32
of crazy yeah but the people
1:10:34
are what keep people interested right
1:10:36
like the conversations, and, like,
1:10:38
that feeling of camaraderie.
1:10:40
And yes, you can't really achieve
1:10:42
that if you know that the AI
1:10:44
is, like, not a real person,
1:10:47
and that they don't have a
1:10:49
life outside of this game; if
1:10:51
they do, that backstory is fabricated,
1:10:53
because they're not real. They don't
1:10:55
have to make rent. But you
1:10:57
know, it's what's really crazy is
1:10:59
now if you if you make
1:11:01
an AI that has like a
1:11:03
virtual body, right, like exactly the
1:11:05
way that a human player would
1:11:07
have. What if you can't tell
1:11:09
the difference whether that player is
1:11:11
an AI or they're human? right
1:11:13
that's, I mean, that crosses
1:11:15
kind of like an ethical boundary
1:11:17
like I would be disappointed and
1:11:19
upset if like somebody I built
1:11:21
up a long personal relationship with, and I
1:11:23
found out they were an AI
1:11:25
now if I know like characters
1:11:27
in Animal Crossing, you know, they're
1:11:29
not real people yeah right you
1:11:31
know so you can build up
1:11:33
a relationship with them and you
1:11:35
know oh that's cute you know
1:11:37
like like in Mario 64, like
1:11:39
I would pick up the little
1:11:41
baby penguin and I'd carry it
1:11:43
over to the mother penguin and
1:11:45
I thought I had a personal
1:11:47
relationship. But nowhere in that process
1:11:49
did I feel like I was
1:11:51
interacting with a real, you know,
1:11:53
penguin that like had the, you
1:11:55
know, mortal fear of dying and
1:11:57
stuff like that looming over it.
1:11:59
I'll be so sad. Yeah, exactly.
1:12:01
Hollywood loves to do movies about like
1:12:03
these kinds of things like, oh, what
1:12:05
if so and so didn't realize they
1:12:08
were a character in a novel or
1:12:10
a character in a video game or
1:12:12
something like that, right? But like,
1:12:14
I think there has to be consent.
1:12:16
Like there's like, like a disclaimer, okay,
1:12:18
you know, so and so is a
1:12:20
character in this game. They are not
1:12:22
like logging off and going to work
1:12:24
at 7-Eleven and then going home and
1:12:26
fighting with their parents, you know? But
1:12:28
the beauty of AI is now you
1:12:31
can have both, right? So like
1:12:33
if you're human players or like
1:12:35
if you're human friends are not
1:12:37
online because they have work and
1:12:39
they have school and they have
1:12:42
like all these like other personal
1:12:44
obligations and they have to
1:12:46
make friends, right? You now
1:12:48
theoretically, right, now have 24-7
1:12:51
available AI companions, or AI
1:12:53
players that you can play with whenever you
1:12:55
want just so you're not the only one
1:12:57
online. Yeah and that's not really fundamentally
1:12:59
different from like oh my friends aren't
1:13:02
available to play chess with me so
1:13:04
I guess I'll play against the
1:13:06
computer. That's true yeah yeah but
1:13:08
I guess my point is the computer would be
1:13:10
smarter. No, yeah, you can always just,
1:13:12
you can, like, set it easier, then...
1:13:15
yeah, it's all... yeah, anyway. I think that
1:13:17
it needs to be like illegal or something
1:13:19
like that. So I think that is like
1:13:21
one little thing that I will opine
1:13:23
upon: I don't think it's
1:13:25
healthy for people to get like
1:13:27
catfished so to speak into like
1:13:29
thinking they're talking with a human when they're
1:13:31
actually talking with an AI agent. Like,
1:13:34
I think that it needs to
1:13:36
be like illegal or something like
1:13:38
that like there needs to be
1:13:40
some sort of like required disclosure
1:13:42
whenever you're interacting with an
1:13:45
AI because I feel like it just feels
1:13:47
extremely violating when you get bait-and-switched
1:13:49
and you're like oh you know I really
1:13:51
have a strong you know sentiment toward this
1:13:53
person and like I love checking in with
1:13:55
them and then you find out they're not
1:13:57
real like that's like I mean just you know
1:13:59
like pardon anybody who's listening with
1:14:01
kids around, but, like, it's,
1:14:03
like, it's almost kind of like
1:14:06
that stab in the heart
1:14:08
when you realize, oh, so-and-so isn't
1:14:11
real, the one the holidays are based around,
1:14:13
right? So I won't get too
1:14:15
explicit. Really? Not real? All these
1:14:18
years? Sad. Don't destroy your childhood. Anyway,
1:14:20
so I just want to
1:14:22
ask you a couple quick questions
1:14:25
about that, yeah, because I'm very
1:14:27
excited to learn your perspective.
1:14:30
You've worked in self-driving. And
1:14:32
you've worked in AR/VR
1:14:35
and stuff like that. How close
1:14:37
are we to, like, I guess, true,
1:14:39
you know, automated full
1:14:41
self-driving in your opinion
1:14:44
where I can get in my car
1:14:46
in Dallas and I can say,
1:14:48
take me to Peggy's place
1:14:50
in San Francisco because
1:14:52
we're going to go eat some... What
1:14:55
is something people love to eat in SF? We're
1:14:57
gonna eat some... tacos? Mission,
1:14:59
Mission burritos. The Mission has the best burritos.
1:15:01
It has the best, I know. Oh
1:15:03
my gosh. Okay, just drive me there. I'm
1:15:06
gonna, I'm gonna...
1:15:08
I'm gonna sleep, I'm gonna,
1:15:10
you know, play some video games on my
1:15:12
Game Boy Advance that I've modified to have,
1:15:15
like, better battery life or something like that,
1:15:17
and I'm just gonna hang out, right? And
1:15:19
maybe my cat will be at my side, and
1:15:21
we're gonna arrive in approximately 36 hours in
1:15:23
the Mission, and we're going to be able to
1:15:26
get some burritos. Like, how far are we? And
1:15:28
Quincy, now you have to do this now that
1:15:30
the, you know, once the technology becomes available, we
1:15:32
have to, we have to, you have to schedule
1:15:34
some time for that. That would be like an
1:15:36
entire week of my life if I'm really interested.
1:15:38
Let's go. You can live stream it
1:15:40
or, like, record the YouTube video
1:15:43
of that too. Like the Desert Bus
1:15:45
Challenge. Like, okay, we're still looking at
1:15:47
flat ground. Well, look, there's...
1:15:50
what, what Pokemon game
1:15:52
am I on now? I'm going to play, like,
1:15:54
all the generations. You watch the road.
1:15:56
Well, we can, well, mount some
1:15:58
cameras. Okay. All right. So, so let's...
1:16:00
that hypothetical goal of me
1:16:03
just being able to sit
1:16:05
down, you know, turn the
1:16:07
key in the ignition or
1:16:09
press the button and then
1:16:11
just the car figures out
1:16:13
everything that needs to happen
1:16:16
between there and then to
1:16:18
safely get me to San
1:16:20
Francisco. I think I'm an
1:16:23
optimist. I think we are about
1:16:25
three to five years away from
1:16:27
that. Three to five years. Yeah.
1:16:29
Actually, if you come to
1:16:32
San Francisco right now or I think
1:16:34
in a couple other cities like LA
1:16:36
even though there's like fires right now
1:16:38
like in like Phoenix or something like
1:16:40
Arizona is very flat and has
1:16:43
very yeah like roads and so it's
1:16:45
a common testing ground. Yeah yeah yeah.
1:16:47
Phoenix. But LA is carefully mapped
1:16:49
out and they have lots of training
1:16:52
data for all the different roads and
1:16:54
stuff like that. Yes. It's not the
1:16:56
same as driving, like, you know, in
1:16:58
highway conditions, like, and if
1:17:01
it starts raining it's like a
1:17:03
lot of different things can basically
1:17:05
like if the car doesn't feel
1:17:08
it's safe then it will stop
1:17:10
operating basically yeah yeah so the
1:17:12
reason why I say this it's like
1:17:15
for like people who don't know Waymo
1:17:17
has been operational in San
1:17:19
Francisco for I think like
1:17:21
the last two years. Yeah,
1:17:24
the last two years. And then Cruise,
1:17:26
which unfortunately recently got shut down
1:17:28
by GM had also been operational
1:17:31
in San Francisco for about the
1:17:33
same amount of time that that
1:17:35
Waymo had Waymo has been working
1:17:37
on this problem I believe since
1:17:39
like 2008 or like 2010 so
1:17:42
they've they've been working on and
1:17:44
this research on self-driving cars for
1:17:46
a very very long time and the
1:17:48
reason why I think it'll happen in the
1:17:50
next three to five years is
1:17:52
actually... I think the technology
1:17:55
has actually gotten
1:17:57
there. I think it's a matter
1:18:00
of engineering and productionizing the
1:18:02
technology. And the reason I
1:18:04
say that is yes, because
1:18:06
they do do like a
1:18:09
lot of like the manual
1:18:11
mapping, and the, uh, you
1:18:13
know, they do have, like, a
1:18:15
lot of fail-safe systems
1:18:17
to ensure that these cars like
1:18:19
don't go rogue and you know,
1:18:21
start like crashing or whatever.
1:18:24
And it's interesting
1:18:26
because the safety standards for
1:18:28
like self-driving cars is actually
1:18:30
like, way higher than for
1:18:32
human drivers, and so, like,
1:18:34
Waymo hasn't like I think it
1:18:36
had like a couple like minor accidents
1:18:38
but none of them was its
1:18:41
fault actually and it's usually the fault
1:18:43
of the human driver and the fact
1:18:45
that Waymo has been like operating a
1:18:47
fleet of cars in San Francisco for
1:18:50
the last two years and had like
1:18:52
zero, nearly zero accidents, right,
1:18:54
is something insane. And
1:18:56
like, the approach of
1:18:58
kind of like generalization
1:19:01
is definitely like a hard
1:19:03
problem to work on. But
1:19:05
I actually think that we
1:19:08
are there already in terms of
1:19:10
like, just, like, um, capability parity
1:19:13
with human drivers. I
1:19:15
think something that self-driving cars have
1:19:17
to show is that they're
1:19:19
actually better than human drivers
1:19:21
and that's like, especially with,
1:19:23
you know, the regulations and
1:19:25
like, just like how people,
1:19:28
how safe people feel like being
1:19:30
in them. So the bar for them
1:19:32
to reach that level of quality is
1:19:34
especially much higher than for, like, a human,
1:19:37
like, a human driving a
1:19:39
car, or a car manufacturer.
1:19:41
And I think the technology to do
1:19:43
that actually does already exist, right?
1:19:45
So Waymo has been doing tests
1:19:47
for highway driving. They're opening up
1:19:49
highway driving very soon. They're available
1:19:52
across a variety of environments. They
1:19:54
have tested it in bad weather
1:19:56
conditions, such as rain or snow.
1:19:58
Waymo actually drives fine in the
1:20:00
rain, if you've ridden one in
1:20:03
San Francisco. And I think the
1:20:05
main, and it's actually trivial for
1:20:07
Google to map out every city
1:20:09
because they own Google Maps. And
1:20:11
they either can... so, in
1:20:13
terms of, like, for any other
1:20:15
company, I'd be a little bit
1:20:18
more worried about the whole mapping
1:20:20
process and like them updating the
1:20:22
maps like for every city that
1:20:25
they launch in, but for Google,
1:20:27
that's kind of a trivial problem.
1:20:29
Yeah, I actually think like three to
1:20:32
five years if not sooner.
1:20:34
That's very bullish. One question
1:20:36
I have is like, are
1:20:38
there any big engineering breakthroughs
1:20:41
that you think would accelerate
1:20:43
that? Fast large language models
1:20:45
that are able to generalize
1:20:48
because one of the cool
1:20:50
things I think like people
1:20:53
talk about artificial general intelligence,
1:20:55
AI like robotics that are
1:20:57
able to do a variety of
1:20:59
tasks. In some ways, if you
1:21:01
can think of it as an
1:21:04
approximation for human reasoning and the human
1:21:06
brain: if you enable, like,
1:21:08
large language models to like
1:21:10
make decisions at a very,
1:21:12
very fast pace, like almost
1:21:14
like a human driver would,
1:21:16
right, in like accident prone
1:21:18
scenarios, you can actually like
1:21:21
help mitigate a lot of these
1:21:23
edge cases that like Waymo is
1:21:25
going to see on the road, right?
1:21:27
And then... so I think that's, like...
1:21:29
they already exist in some sense; they
1:21:31
need to get better and they need
1:21:33
to get faster. And if they're able
1:21:35
to do that, then I think like
1:21:37
self-driving cars that are able to generalize,
1:21:40
like minus, you know, kind of the
1:21:42
whole engineering effort will be able to
1:21:44
scale very, very, very quickly. Self-driving
1:21:46
cars and robotics in general. Awesome.
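One way to make this latency point concrete: during the decision-maker's latency the car keeps moving, and the distance covered is simply speed times latency. A rough sketch; the highway speed and the model latency are illustrative assumptions, while the human figures echo the reaction times mentioned earlier in the conversation:

```python
# Distance a car travels before any reaction can take effect.
# distance = speed * latency; e.g. 30 m/s with 250 ms latency -> 7.5 m.

def distance_before_reaction_m(speed_mps, latency_s):
    """Meters traveled while the decision-maker is still 'thinking'."""
    return speed_mps * latency_s

highway_speed = 30.0  # m/s, roughly 108 km/h (illustrative)
for label, latency in [("typical human", 0.25),
                       ("pro-gamer human", 0.12),
                       ("hypothetical fast model", 0.05)]:
    d = distance_before_reaction_m(highway_speed, latency)
    print(f"{label}: {latency * 1000:.0f} ms -> {d:.1f} m traveled")
```

The takeaway is just that shaving decision latency from hundreds of milliseconds toward tens of milliseconds directly shrinks the blind distance in accident-prone scenarios.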
1:21:49
And on a related question, like, how far
1:21:51
do you think we are from, like, Ready
1:21:53
Player One? Like, I don't know if you've...
1:21:55
Read the novel? Like... oh yeah, read the
1:21:58
novel. It's set in Oklahoma City, yeah. Oh, Oklahoma, that's
1:22:00
your base. Yeah, but, like, how far
1:22:02
are we from having this like
1:22:04
obviously there's like the the treadmill
1:22:07
you know the 3d treadmill that
1:22:09
helps like with my understanding is
1:22:11
for VR there are like some
1:22:14
fundamental limitations in how humans perceive
1:22:16
that make it disorienting and
1:22:18
nauseating to like run around without
1:22:21
actually having the body run around
1:22:23
but, like, let's assume that the
1:22:25
eight-direction or multi-directional
1:22:27
treadmills existed just like in in the
1:22:29
book or in the movie where you
1:22:31
can like be walking around and yeah
1:22:33
you can be in a stationary place
1:22:36
you don't have to worry about reaching
1:22:38
the edge of your room and you
1:22:40
can do things and it could be
1:22:42
like you're walking around in you know World
1:22:44
of Warcraft type environment how far
1:22:46
are we from that from not
1:22:48
like the hardware is associated with
1:22:51
like the the treadmill type things
1:22:53
but in terms of other aspects of
1:22:55
VR that could get us to where
1:22:57
it feels like a compelling experience and
1:22:59
it's it's not just like a kind
1:23:01
of a simplified like you know
1:23:03
Nintendo we me type experience but
1:23:06
like it is actually like it
1:23:08
feels like you're in World of
1:23:10
Warcraft because my understanding is it's
1:23:12
a lot harder to have World of Warcraft render,
1:23:14
like, on two different screens, and, like, at
1:23:16
high enough resolution. Yeah. And at a high enough
1:23:19
frame rate and all that stuff to make
1:23:21
it feel real than it is to just
1:23:23
look at a monitor that's, you know, 120
1:23:25
hertz or something like that. That's, that's actually
1:23:27
a hardware limitation. So actually in
1:23:29
terms of like software capability and
1:23:31
and, actually, somebody should, like, build
1:23:33
this and prove that it's actually
1:23:35
possible because I feel like that
1:23:37
would actually be super inspiring to
1:23:39
the whole field of VR, is that, like,
1:23:42
you can actually, like, as a one-time
1:23:44
thing, and Apple Vision Pro kind of
1:23:46
proved, like, some aspects of this, you
1:23:49
can build a high, a super, super
1:23:51
high Fidelity VR headset for, like, a
1:23:53
very specific use case, and that is
1:23:56
to basically what exactly what you're talking
1:23:58
about: render World of Warcraft, like,
1:24:00
super fast, I think it's, like, more
1:24:02
than 120 hertz, in
1:24:05
like full like 360 degree
1:24:07
view, with like decent quality
1:24:09
graphics. I think that is
1:24:11
actually possible to build today,
1:24:14
but it's not possible for
1:24:16
it to be economical and
1:24:18
like, be able to be mass-
1:24:20
produced, because, you know, it
1:24:22
will have to be higher-level
1:24:24
quality than the Apple Vision
1:24:26
Pro. And the Apple Vision
1:24:29
Pro is like $4,000 and like, you
1:24:31
know, like most people don't and
1:24:33
like don't use it because
1:24:35
there's like not enough content.
1:24:37
So if somebody like, I don't
1:24:40
know, like Apple or like a
1:24:42
Meta or, like, another, like, billion-
1:24:44
dollar company would want to, like,
1:24:46
take on this research endeavor and
1:24:49
basically build a super
1:24:51
high-fidelity, like, VR
1:24:53
hardware that can render things
1:24:55
in full 360 at like
1:24:57
150 hertz or whatever and
1:25:00
somebody actually builds a game
1:25:02
right with that level of
1:25:04
graphics and quality and and
1:25:06
in that like 360 degree view
1:25:08
frame I think that that's
1:25:10
actually possible I just think
1:25:13
that it will cost a
1:25:15
lot of money in terms of research
1:25:17
and just like hardware
1:25:19
costs. But yeah, I mean, I'm
1:25:21
bullish. I think we'll get there
1:25:23
like pretty soon. But again,
1:25:25
it's like, no, like very
1:25:27
few companies are pushing the
1:25:29
forefront of VR today. And
1:25:31
so that's always kind of
1:25:33
a, a sad state of
1:25:35
affairs. Like, I'm not sure like
1:25:38
how much more money Apple is
1:25:40
investing in VR after the Vision
1:25:42
Pro like didn't quite take off.
1:25:45
So. One question I have related
1:25:47
to that is, does it need
1:25:49
to take off? Do we need
1:25:52
VR or can we continue to
1:25:54
suspend disbelief by looking at 2D
1:25:56
screens and still have really compelling
1:25:58
video game experiences? Like 3D TVs
1:26:00
didn't take off. People still watch
1:26:03
movies. It's just they don't bother with
1:26:05
the 3D aspect because it turns out
1:26:07
that it's immersive enough to watch a
1:26:09
really good movie on like, you know,
1:26:12
a 4K monitor or something like that.
1:26:14
And as I think Sergei Brin pointed
1:26:16
out, like, if you have like an...
1:26:19
you know, a smartphone and you hold
1:26:21
it a few inches from your face
1:26:23
and you watch it, it's like you're
1:26:26
watching an IMAX theater. Basically, like, Google
1:26:28
even had that little cardboard thing where
1:26:30
they... Do you think there's like some limit
1:26:32
to how immersive something can be if it's
1:26:35
just on a 2D screen? Because I can
1:26:37
immerse myself in a game of like chess
1:26:39
or dominion or something like that that I'm
1:26:41
playing in a browser and that's totally sufficient
1:26:44
because of the way the human brain works.
1:26:46
Are there phenomena like that where you don't
1:26:48
necessarily like how I was talking about you
1:26:50
don't necessarily need to have, you know, 150
1:26:53
hertz to make a, like, what seems to
1:26:55
be like a continuous video, because the way
1:26:57
the brain works, 24 frames per second is
1:26:59
enough to, like, help someone feel like
1:27:02
this old Al Pacino movie from the
1:27:04
1970s is sufficiently, you know... Yeah,
1:27:06
I mean, I think in VR, it's, uh,
1:27:08
the technology barrier, I guess, is a
1:27:11
lot higher, just because, uh, part of the
1:27:13
reason the refresh rate has to be so
1:27:15
fast and it has to be, like,
1:27:18
super high quality in terms of
1:27:20
whenever you move your head, like
1:27:22
the scene also, like, like, the
1:27:24
perception that the scene also moves,
1:27:27
has to move like that with you,
1:27:29
and that all has to be like
1:27:31
synced up. And then if you, if
1:27:33
it doesn't sync up at, like,
1:27:36
the correct frame rate, you feel
1:27:38
like really nauseated, right? And so
1:27:40
that's like the biggest like kind
1:27:42
of like tech blocker is like,
1:27:44
whenever like you move your head,
1:27:46
like the scene also moves,
1:27:48
right. and refreshes like really
1:27:51
fast. But in terms of
1:27:53
like the immersivity question,
1:27:56
I think it actually, well
1:27:58
one, I don't know, because I feel
1:28:00
like if we reach that point of,
1:28:03
like, Ready Player One VR,
1:28:05
that's actually gonna look if
1:28:07
it truly looks and feels
1:28:09
indistinguishable from reality I think
1:28:12
a lot of people who have
1:28:14
escapist tendencies, right,
1:28:16
people who watch movies who read
1:28:19
fantasy novels who like blitz through
1:28:21
like 12 seasons of Game of
1:28:23
Thrones right like they're gonna want
1:28:25
that right and I don't know how
1:28:28
Big of a portion of the
1:28:30
population that's going to be but
1:28:32
I think that a good amount of
1:28:34
people would probably want that. Now they
1:28:36
might, they might be like, you know,
1:28:38
like even like gamers, right like
1:28:41
people who are like I'd say
1:28:43
like casual gamers or people who
1:28:45
are more like hardcore gamers, right
1:28:47
like you never like hardcore gamers
1:28:49
are always gaming. They're like they're
1:28:51
like playing league like 24-7, right
1:28:54
you never see them out of
1:28:56
their basement and I feel like those
1:28:58
are the types of people who
1:29:00
would be down to kind
1:29:02
of be in a more
1:29:04
fully immersive world. Whether the
1:29:06
general population wants that, my guess
1:29:09
is probably no because you're right,
1:29:11
like a lot of people
1:29:13
are totally okay with just
1:29:15
like watching a movie at a
1:29:17
movie or just like a living
1:29:19
room TV. or are just
1:29:21
okay with like going out
1:29:23
and like taking a walk
1:29:25
in the park and seeing
1:29:27
the sunlight without all this
1:29:29
like AR, VR
1:29:31
stuff in real life. But
1:29:33
I think like given that
1:29:35
a lot of people do
1:29:38
game very very heavily and
1:29:40
who are willing to spend
1:29:42
a lot of time and
1:29:44
money and resources on gaming,
1:29:47
I think. There's a good
1:29:49
amount of the population who would be
1:29:51
very very into this sort of thing.
1:29:53
Yeah, let's talk about the role of
1:29:55
these AI agents in making games more
1:29:57
compelling. We talked a little bit about
1:29:59
it. But what does the feature look
1:30:02
like? Can you paint us a
1:30:04
picture of, let's say hypothetically, I
1:30:06
wanted to go back and live
1:30:08
in like kind of a Baroque
1:30:10
composer meta, where it's just like a
1:30:12
bunch of composers like trying to one
1:30:14
up each other and impress, you know,
1:30:17
the king or the kaiser, wherever
1:30:19
they are in the world and they're
1:30:21
just trying to and everybody's wearing these
1:30:24
fancy like clothes and it's like you're
1:30:26
if you watch the movie Amadeus amazing
1:30:28
movie it's like it's like that and
1:30:31
I just want to go to that
1:30:33
world and it's not cost effective for
1:30:35
a AAA game studio to create a
1:30:38
Baroque simulator type world, but we have
1:30:40
enough historical documentation about how people talk,
1:30:42
how people act then, that we
1:30:44
could potentially create a bunch of
1:30:46
AI agents, whom I could interact
1:30:48
with, so I could live out
1:30:50
my fantasies of, you know, being
1:30:52
a composer and one-upping Mozart, Handel,
1:30:54
all of them, or like that, right?
1:30:56
So, so it's not, it's a
1:30:58
game experience just specifically for what
1:31:00
I want to have.
1:31:02
You have a really unique vision of
1:31:05
what exactly what you want. Yeah, like
1:31:07
maybe maybe there are hundreds of
1:31:09
thousands of people that would be
1:31:11
interested in that, but there aren't
1:31:13
necessarily 10 million people that would
1:31:15
rather be playing Call of Duty or
1:31:17
something, right? Yeah. Yeah. So, um, yeah. I think that's
1:31:20
actually actually super compelling. That's actually
1:31:22
one of the use cases that
1:31:24
we do want to enable with
1:31:26
ego is like you can basically
1:31:29
create your own like personal simulation.
1:31:31
of like exactly what you like
1:31:33
want, right? Whether that's a
1:31:35
Baroque period, like, you know, style,
1:31:37
composer, like, right, or whether that's
1:31:40
like an Animal Crossing style,
1:31:42
like, cozy villager, or whether that's,
1:31:45
I don't know, like a Ready
1:31:47
Player One-esque landscape where you're
1:31:49
in like a dystopian world
1:31:52
and you're trying to save the
1:31:54
world and all of the characters
1:31:56
in that world all feel, like,
1:31:59
realistic for you. Like, yeah, I
1:32:01
think like I think that's that's
1:32:03
effectively the vision of what we
1:32:05
want to create with ego. I
1:32:08
think the biggest blocker to that
1:32:10
vision is not actually the characters
1:32:12
part, it is actually the art
1:32:14
part of like how that's going
1:32:17
to be generated because I think
1:32:19
like the big big thing like
1:32:21
blocking a lot of this from
1:32:23
existing and why game studios are,
1:32:25
you know, so... they spend a
1:32:28
lot of like big budget on
1:32:30
games is because you have to
1:32:32
budget out like where the production
1:32:34
cost goes and that's usually more
1:32:37
in the case of building these
1:32:39
like immersive environments and building these
1:32:41
these art assets. So yeah, I
1:32:43
think We'll get there, but we
1:32:46
have to kind of build all
1:32:48
the infra that gets us there
1:32:50
first before that vision becomes a
1:32:52
reality. So essentially like tooling. Just
1:32:54
creating the tools that allow for
1:32:57
game designers to sit down and
1:32:59
just have, I guess, more powerful
1:33:01
primitives that they're working with. I
1:33:03
don't know if that's a correct
1:33:06
way of putting it. I would
1:33:08
say we're specifically focused on agents
1:33:10
and human-like agents. We do see
1:33:12
the opportunity for a lot of
1:33:14
game designers to kind of like
1:33:17
design their own scenarios, whether that's
1:33:19
like scenes or characters or characters
1:33:21
that have different motivations, different memories,
1:33:23
different ways of interacting with the
1:33:26
world. I think that is something
1:33:28
that's like super compelling and what
1:33:30
we're working towards, but I think
1:33:32
in terms, like... there's been
1:33:34
a lot of, like, discussions
1:33:37
on, like, you know, AI, and
1:33:39
I don't really want to get
1:33:41
into, like, the philosophical and ethical
1:33:43
quandaries about that, but I think,
1:33:46
like, yeah, that is probably
1:33:48
like the huge kind of
1:33:50
limitation there. But for people
1:33:52
to be able to create
1:33:54
their scenarios on the fly,
1:33:57
and to be able to
1:33:59
procedurally generate worlds on the
1:34:01
fly with different characters, that
1:34:03
could already start to be
1:34:06
pretty compelling for people. And
1:34:08
I think like obviously like the
1:34:10
vision is you know exactly what
1:34:12
you describe like be able to
1:34:15
create any scene and then for
1:34:17
it to generate whole like simulations,
1:34:19
whole worlds, basically generate the game
1:34:21
as you're like thinking about what
1:34:23
to play next, not even like
1:34:26
typing what's about to play next,
1:34:28
and then the AI will generate
1:34:30
the world and the characters for
1:34:32
you, and you can like build
1:34:35
relationships with the characters, and yeah,
1:34:37
and you can like maybe romance
1:34:39
them or like, like make them
1:34:41
your best friend, right? And I
1:34:43
think that's actually... like that's actually
1:34:46
super super interesting. Yeah, or making
1:34:48
them extremely adversarial. Like a lot
1:34:50
of animes are, like, based around
1:34:52
creating great, you know, great passionate
1:34:55
friendships and stuff like that, or,
1:34:57
yeah, romance. But I think, like,
1:34:59
the notion of creating like a
1:35:01
nemesis who's constantly against you you
1:35:03
know, yeah, it kind of reminds you
1:35:06
of, like, kind of a
1:35:08
Moriarty to your Sherlock I think
1:35:10
that could be a really cool
1:35:12
use for you too and oh
1:35:15
yeah like we're not we're not
1:35:17
gonna talk about the AI art
1:35:19
you know there are a lot
1:35:21
of ongoing lawsuits and stuff like
1:35:24
that and of course I think
1:35:26
the artists have plenty of reason
1:35:28
to be aggrieved, musicians, everybody who's
1:35:30
creating anything, freeCodeCamp authors, of
1:35:32
which, you know, we have
1:35:35
more than, I think, like 700,000
1:35:37
or 800,000 forum threads that were
1:35:39
most likely, as part of training
1:35:41
data. We get more bot traffic
1:35:44
now than we've ever gotten, like,
1:35:46
just people training LLMs and stuff
1:35:48
like that, scraping, you know.
1:35:50
So, but I
1:35:52
will say there is a lot
1:35:55
of public domain books, lots of
1:35:57
transcripts that precede any of this
1:35:59
stuff. All the people are long
1:36:01
since dead and no royalties are
1:36:04
necessary. No intellectual property will be
1:36:06
trounced upon if you just want
1:36:08
to use AI to create period
1:36:10
pieces. If, like, what you're doing is training
1:36:12
now on just, like,
1:36:15
publicly available information and creating games
1:36:17
using that, let me know, because
1:36:19
I'd like to talk to you.
1:36:21
But one thing I
1:36:24
will say is that I'm really
1:36:26
excited about the possibility like we
1:36:28
built like this visual novel game
1:36:30
a while back and even visual
1:36:33
novels... love those, I love
1:36:35
those games. And I mean, like,
1:36:37
it's, it's the production value, it's
1:36:39
like the true indie game dev.
1:36:41
Oh okay, let's talk about indie
1:36:44
game development currently. There are people,
1:36:46
like, I think Derek Yu created
1:36:48
Spelunky. He did everything himself, I
1:36:50
believe, including the music, and
1:36:53
it's just like a passion project,
1:36:55
one man's vision for what a
1:36:57
game can be. And my kids love that
1:36:59
game and they probably watch me
1:37:01
play it for like 20 hours.
1:37:04
It's a great game. Yeah,
1:37:06
like do you think tools like
1:37:08
ego, like, I mean, assuming
1:37:10
ego isn't just like a
1:37:13
standalone game, but that it is,
1:37:15
like, packaged kind of like Unreal
1:37:17
Engine was actually based off of
1:37:19
Unreal the game. Yeah, Unreal Tournament.
1:37:21
yeah like a lot of the
1:37:24
you know like everybody can say
1:37:26
wow this game looks amazing okay
1:37:28
well how would you like to
1:37:30
license this engine use it to
1:37:33
build similar games and that really
1:37:35
became like a much bigger business
1:37:37
than creating the game itself like
1:37:39
are you all interested in potentially
1:37:42
going in that tooling direction and
1:37:44
potentially licensing out like the capabilities
1:37:46
I think so I think one
1:37:48
of the... We actually went through
1:37:53
Y Combinator, which is a startup
1:37:53
accelerator, and we got the chance
1:37:55
to talk to Paul Graham, PG.
1:37:57
And we told him about our
1:37:59
vision of creating an infinite game
1:38:02
and the way that he pitched
1:38:04
our idea back to us and
1:38:06
he's like phenomenal at this sort
1:38:08
of thing, and he basically said
1:38:10
you're building a game that's also
1:38:13
a game engine. And the
1:38:15
reason it is like that is because,
1:38:17
while you're playing the
1:38:19
infinite game, you basically have
1:38:22
to have like some sort of
1:38:24
game engine to be able to
1:38:26
build out like all these like
1:38:28
different scenes all these different simulations
1:38:30
all these different scenarios and characters
1:38:33
So you already have effectively a
1:38:35
game engine that's running in the
1:38:37
background I see no reason for
1:38:39
us to like not like give
1:38:42
this technology to other people,
1:38:44
like game designers and game
1:38:46
developers, who also want to build
1:38:48
games like that. But
1:38:51
I do think that there might
1:38:53
be like some limitations on our
1:38:55
part like you know as we
1:38:57
get there is like oh potentially
1:38:59
we would want them to kind
1:39:02
of build it on our platform
1:39:04
or like you know build it
1:39:06
build it on ego right but
1:39:08
that's that's a pretty big I
1:39:11
guess like that's a little bit
1:39:13
long term, and then obviously we
1:39:15
can talk about that, like, later
1:39:17
on. Well, I mean, there there
1:39:19
are plenty of like analogs and
1:39:22
examples like the unreal engine like
1:39:24
Unity 3D, which is, like, open source, but,
1:39:26
like, and it's even free until
1:39:28
you hit a certain point, and
1:39:31
then you... Exactly. And I
1:39:33
think that's a very egalitarian because
1:39:35
it ensures that like people hobbyists
1:39:37
and people that are just creating
1:39:39
extremely niche experiences don't have to
1:39:42
pay a bunch of money up
1:39:44
front because that would restrict creativity
1:39:46
and I think like indie developers
1:39:48
are like I think like even
1:39:51
it's really interesting because even like
1:39:53
tools like unity and unreal They're
1:39:55
not actually that indie friendly if
1:39:57
you think about it. Like they're
1:40:00
like way more. friendly than they
1:40:02
were in the past for sure.
1:40:04
But like, it's, if
1:40:06
it's your first time getting into
1:40:08
like game development, it's actually still
1:40:11
like quite hard to like wrap
1:40:13
your head around it and really,
1:40:15
like, just like kind of
1:40:17
build things out of the box.
1:40:20
And we actually think that, especially
1:40:22
like on, on like the coding
1:40:24
side and just like even in
1:40:26
terms of like the UI and
1:40:28
like making it more streamlined, there's
1:40:31
a lot you can do to
1:40:33
make it easier for, like, indie
1:40:35
game devs and kids, or
1:40:37
like people just beginning to learn
1:40:40
how to code or make games
1:40:42
better. And actually one good example
1:40:44
of that is Roblox, right? Roblox
1:40:46
Studio is actually way easier to
1:40:48
use than Unity or Unreal. It's
1:40:51
obviously not as powerful, but again,
1:40:53
you know, it's way easier and
1:40:55
that's kind of the tradeoff, right?
1:40:57
And so, yeah, I think there's
1:41:00
like definitely a lot of opportunity
1:41:02
there and I can't wait to,
1:41:04
you know, see what people create
1:41:06
more in the future with, especially
1:41:09
with better tooling, with better AI,
1:41:11
and potentially more time on their
1:41:13
hands with human-like robots. Yeah, 100%
1:41:15
I have two more closing questions.
1:41:17
First, let's say I have that
1:41:20
just like we were talking about
1:41:22
earlier, you know, developers and researchers
1:41:24
using Grand Theft Auto as an
1:42:26
environment in which they could inexpensively
1:41:29
test out like self-driving car algorithms
1:41:31
and stuff like that. How far
1:41:33
do you think we are from,
1:41:35
you know, humanoid robots that are
1:41:37
empowered with like the kind of
1:41:40
AI agents you're using in your
1:41:42
game from actually being like embodied
1:41:44
and able to walk around the
1:41:46
world and actually do things. Like,
1:41:49
that's an extremely good question. There's
1:41:51
a lot of assumptions. Yeah. How,
1:41:53
I guess, just very big, like,
1:41:55
how many decades do you think
1:41:57
we are from that? Do you
1:42:00
think that's like... a 2070s thing
1:42:02
or a 2050s thing? So I
1:42:04
think it would be sooner, but
1:42:06
I also am an optimist. I'm
1:42:09
a technology optimist. I think things
1:42:11
will be happening sooner than people
1:42:13
expect. I think the gap between
1:42:15
simulations and reality is getting closer
1:42:17
and closer, right? Like the GTA
1:42:20
is just like kind of one
1:42:22
example, but even in like a
1:42:24
lot of, like I said earlier,
1:42:26
a lot of like AAA games,
1:42:29
they're getting closer and closer to
1:42:31
like reality, right? Like graphics level,
1:42:33
like fidelity, like all of that.
1:42:35
I actually think that the Sim
1:42:38
to Real Gap is closing, and
1:42:40
if you are able to like
1:42:42
build and rig up, basically, all
1:42:44
the controls that a robot has,
1:42:46
in like a... 3D video game
1:42:49
or 3D simulation, and you basically
1:42:51
train the agent to
1:42:53
do, like, you
1:42:55
know, all the scenarios that a
1:42:58
robot could do in real life
1:43:00
you can actually... like, that gap,
1:43:02
this simulation-to-reality gap, that
1:43:04
sim-to-real gap, is actually
1:43:06
like pretty close and you should
1:43:09
be able to generalize that to
1:43:11
the robot in like you know
1:43:13
a couple years. So... gosh,
1:43:16
I think if we were to
1:43:18
like want to build this in
1:43:20
simulations and video games, it'll probably
1:43:22
take like, I'd say like three
1:43:24
to five years for it to
1:43:26
like work pretty well. And then
1:43:28
maybe it'll take like one to
1:43:30
two years to transfer that onto
1:43:32
a robot. So maybe I would
1:43:34
say this decade, hopefully, you know.
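[Editor's note: the sim-to-real approach described above is commonly implemented with domain randomization, where the simulator's physics parameters are re-sampled every training episode so that the real robot's dynamics fall inside the range the agent has already seen. A minimal Python sketch of just the sampling structure; all parameter names and ranges here are invented for illustration and are not from the conversation.]

```python
import random

def randomized_sim_params(rng):
    """Sample one set of physics parameters for a training episode.

    Varying these between episodes (domain randomization) forces the
    policy to cope with a whole range of conditions, so the real
    robot's true parameters look like just another sample.
    """
    return {
        "friction": rng.uniform(0.5, 1.5),
        "link_mass_scale": rng.uniform(0.8, 1.2),
        "sensor_noise_std": rng.uniform(0.0, 0.05),
        "actuator_delay_ms": rng.uniform(0.0, 20.0),
    }

def train_in_sim(num_episodes, seed=0):
    """Toy training loop: one fresh parameter draw per simulated episode."""
    rng = random.Random(seed)
    draws = [randomized_sim_params(rng) for _ in range(num_episodes)]
    # A real loop would roll out the policy in a physics simulator under
    # each draw and update it on the collected experience; here we only
    # return the draws to show the sampling structure.
    return draws

draws = train_in_sim(1000)
print(len(draws))  # one parameter draw per episode
```

[A real pipeline would feed each draw into a physics engine and train a policy on the rollouts; this sketch shows only the randomization step that makes the later sim-to-real transfer plausible.]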
1:43:36
Yeah, I would. I would love
1:43:38
for that to happen. Yeah, that's
1:43:40
really cool. I love your enthusiasm
1:43:42
and your passion and I think
1:43:45
that's like a Silicon Valley like
1:43:47
San Francisco type thing. Like I've
1:43:49
certainly experienced that when I go
1:43:51
to China as well. Like a
1:43:53
lot of... people in China are
1:43:55
like very optimistic in India as
1:43:57
well and like people are just
1:43:59
like really optimistic about technology and
1:44:01
you know a lot of people
1:44:03
in the United States are like
1:44:05
guarded they're like oh no but
1:44:07
what if, you know, Terminator, what
1:44:09
if, what if... Are there things to worry
1:44:12
about with Terminator? I feel like, as
1:44:14
this AI you know revolution gets
1:44:16
closer and closer to like AGI
1:44:18
or whatever happens. I feel like
1:44:20
we're beginning to realize the very
1:44:22
like human nature of these like
1:44:24
conflicts and the human nature of
1:44:26
a political nature of these situations
1:44:28
and I feel like at the
1:44:30
end of the day it's still
1:44:32
humans making the decision. It's not
1:44:34
the AI making a decision. Like
1:44:36
humans would still want to control
1:44:39
and, like, subjugate the AI
1:44:41
to their will, right? However that
1:44:43
may be. And I think like...
1:44:45
At the end of the day,
1:44:47
we don't have to worry about
1:44:49
Terminator. We have to worry about,
1:44:51
you know, like Putin or whatever,
1:44:53
right? So... I think we have
1:44:55
to worry about, like, a few
1:44:57
people making decisions, thinking that they're
1:44:59
making decisions on behalf of all
1:45:01
humanity. Exactly. It's kind of like
1:45:03
the agency problem, right? At the
1:45:06
end of the day, like, I
1:45:08
could release, you know, a self-driving
1:45:10
car on the road that I'm
1:45:12
like, this is perfectly trained, you
1:45:14
know, and then maybe it's sort of driving
1:45:16
through some cow pasture and threatening
1:45:18
all the cows or something. And
1:45:20
like I could impose my will
1:45:22
upon the world but like there's
1:45:24
not like a reciprocal kind of
1:45:26
like countervailing force that stops me
1:45:28
from like it's much easier for
1:45:30
me to do something dangerous or
1:45:33
put a bunch of people in
1:45:35
peril than it is for somebody
1:45:37
to offset that. If that makes
1:45:39
sense. Kind of like it's much
1:45:41
easier for me to put misinformation
1:45:43
out there than it is for
1:45:45
somebody else to come and correct
1:45:47
that misinformation. The old Mark Twain
1:45:49
thing, like a lie can get
1:45:51
halfway around the world while the
1:45:53
truth is still putting its boots
1:45:55
on. That's true. Yeah. And so
1:45:57
because of that. fundamental asymmetry I
1:45:59
think that's the main concern that
1:46:02
people have yeah because in terminator
1:46:04
it's just like some corporation that
1:46:06
thinks doing the profit maximizing thing
1:46:08
oh this will be great for
1:46:10
military applications and next thing you
1:46:12
know you're self-aware and decides that
1:46:14
it wants to do something that
1:46:16
is divorced from the I guess interest
1:46:18
of humanity but it wasn't like every single
1:46:20
human like rubber stamped out or gave the
1:46:22
thumbs up for that AI to be launched
1:46:25
it was just like an extremely small, you
1:46:27
know, somewhat inbred, you know,
1:46:29
academic or, like, corporate
1:46:32
group of people. Right. Right.
1:46:34
And they thought they knew what
1:46:36
was best. I think
1:46:38
that's the parable that people
1:46:40
are afraid of. Yes. But
1:46:42
I totally agree. I'm heartened
1:46:44
to hear that, like, you're not
1:46:47
sweating it. Yeah. I mean,
1:46:49
I think like whoever ends
1:46:51
up making these decisions
1:46:53
will be, like you said, a
1:46:55
small group of people making these decisions.
1:46:57
So instead of like us, like worrying
1:47:00
about AI, maybe we should really worry
1:47:02
about who are the people right at
1:47:04
the forefront of AI or who are
1:47:06
the people controlling this type of technology.
1:47:09
Right. Like I think our focus should
1:47:11
be less on the technology itself and
1:47:13
more about the human nature and the
1:47:16
political nature of these problems.
1:47:18
And maybe we should, you
1:47:20
could, one could argue we should
1:47:22
have been doing that. Anyways, like,
1:47:24
you know, given kind of the state
1:47:26
of, like, the government inefficiency
1:47:28
in the United States, right? So
1:47:31
there's... We'll say that technology moves
1:47:33
a lot faster than human organizational
1:47:35
structures, decision making these things are
1:47:38
slow for a reason because we
1:47:40
don't want them making snap decisions that
1:47:42
aren't necessarily what we want in
1:47:45
the long run to happen. So
1:47:47
I definitely think that there's like
1:47:49
bureaucracy and there are like, you
1:47:52
know, guardrails and there are like
1:47:54
brakes and stuff like that in
1:47:56
there. We'll be fine. But we
1:47:58
can talk more about the philosophical implications.
1:48:01
But my other big question for
1:48:03
you that I wanted to ask
1:48:05
is like, let's transport, let's say
1:48:07
you have a cousin and she
1:48:09
is in like ninth grade. If
1:48:11
she wants to be where you
1:48:13
are or maybe well beyond where
1:48:15
you are 10 years from like,
1:48:17
I don't know, maybe 10 years
1:48:19
ago, I don't know your exact
1:48:21
age, you don't have to say
1:48:24
it. Let's go back to you
1:48:26
as a freshman, but you have
1:48:28
the benefit of everything you know
1:48:30
now, and you're in 2025 instead
1:48:32
of 10 years ago. And like,
1:48:34
what kind of decisions do you
1:48:36
think you would make if you,
1:48:38
like, with that benefit? And I
1:48:40
always, like, if I had a
1:48:42
superpower, it would be able to
1:48:44
see, like, 10 years into the
1:48:46
future or something like that. Like,
1:48:49
even, like, 15 minutes into the
1:48:51
future would be incalculable wealth on the
1:48:53
stock market, but, like, that aside,
1:48:55
but like you can make decisions
1:48:57
based on, because you, one of
1:48:59
the things you said that really
1:49:01
struck me early on in this
1:49:03
conversation is that you are a
1:49:05
planner, you plan things out, like,
1:49:07
years in advance, and then you
1:49:09
stick to the plan. With that
1:49:12
perspective, what would you do if
1:49:14
you were a high school freshman
1:49:16
in 2025? Ooh. That is very,
1:49:18
very tough. I
1:49:22
think I would, I know the
1:49:24
most about AI and robotics, and
1:49:27
I think we still have a
1:49:29
good 10 years. Like I said,
1:49:31
it will happen this decade, but
1:49:33
I don't know exactly when. And
1:49:36
there's still a lot of opportunities,
1:49:38
like even after the technology breakthrough
1:49:40
of like different industries and applications.
1:49:42
So if we're looking at a
1:49:45
10 year horizon. of these things
1:49:47
getting deployed and just getting newly
1:49:49
deployed, you can probably catch the
1:49:51
early end of that deployment by
1:49:54
the time you graduate college. So
1:49:56
I would say just like knowing
1:49:58
the industry that I know best,
1:50:00
it's still going to be AI
1:50:03
and robotics, kind of like focus
1:50:05
on computer science, focus on AI
1:50:07
training, focus on robotics. I think
1:50:09
that for sure, like something will
1:50:12
happen in this decade and that
1:50:14
would be a very good spot
1:50:16
to be in by the time
1:50:18
my cousin, my future cousin or
1:50:21
my... invisible cousin graduates college. But
1:50:23
I think like in general anything
1:50:25
in the space of AI is
1:50:27
like super worth going into. Obviously
1:50:30
I'm biased. Anything in the space
1:50:32
of AR/VR, like augmented/virtual reality...
1:50:34
though I will say AR/VR is
1:50:36
a little bit more of a
1:50:39
question mark just because a lot
1:50:41
of that hinges on like big
1:50:43
corporate like company sponsorship effectively. But
1:50:45
I think I do think that
1:50:48
AI and robotics are a little
1:50:50
bit more democratized. So that could
1:50:52
help with that. Okay. So robotics.
1:50:54
Okay. One last question that came
1:50:57
up as. You were answering that
1:50:59
question. So I talked with a
1:51:01
gentleman a while back, Bruno Hade,
1:51:03
I think, was his name. He's
1:51:06
on the podcast. I'll find the
1:51:08
episode. I'll link it below. But
1:51:10
one of the things he talked
1:51:12
about, he's big on internet of
1:51:15
things. He's big on like just
1:51:17
like leveraging like microcontrollers and things
1:51:19
like that, like sensors, things like
1:51:21
that. And he built like a
1:51:24
fridge. For lack of a better word, it
1:51:26
is kind of like a fridge
1:51:28
that like simulates different climates. So
1:51:30
you can have like, you can
1:51:33
grow different crops inside of it
1:51:35
with like light and like you
1:51:37
can simulate like the level of
1:51:40
humidity and you know air pressure
1:51:42
and like all these different considerations
1:51:44
to grow crops as though you
1:51:46
were back in Sicily. Personally, that's
1:51:49
amazing. Yeah, cool. Yeah, it's like,
1:51:51
yeah. He sells these to restaurants.
1:51:53
So, like really high end restaurants
1:51:55
that care that much about it. Oh
1:51:58
my gosh, amazing. One of the
1:52:00
things he said is that like
1:52:02
over the past few years, thanks
1:52:04
to, like, you know, production improvements
1:52:07
and stuff like that. A lot
1:52:09
of the innovation happening around Shenzhen
1:52:11
in China, like the ability to
1:52:13
just quickly create custom hardware for
1:52:16
pretty much any use and the
1:52:18
specificity with which you can order
1:52:20
just a few units where previously
1:52:22
you would have to do an
1:52:25
entire shipping container full of this
1:52:27
thing and you didn't have the
1:52:29
prototyping, you didn't have the turnaround
1:52:31
time that you have now. I
1:52:34
don't know how... closely you follow
1:52:36
the hardware aspect now that you're
1:52:38
mostly focused on the software. But
1:52:40
have you noticed any big improvements
1:52:43
in like how things changed since
1:52:45
you were in high school robotics
1:52:47
club? Oh yes, definitely. 3D printing
1:52:49
has gotten like so much better.
1:52:52
So when I was in high
1:52:54
school like 3D printer was either
1:52:56
really expensive or just like was
1:52:58
not a thing. And now apparently
1:53:01
you can like 3D print like
1:53:03
whole rockets with metal and
1:53:05
that's that's kind of like insane.
1:53:07
And yeah, I think like, like,
1:53:10
I still kind of dabble in
1:53:12
hardware, obviously not as much now
1:53:14
with my focus in software, but
1:53:16
I've been a tinkerer for a
1:53:19
long time, I've built, you know,
1:53:21
physical robots and stuff, and I've
1:53:23
thought about just like getting a
1:53:25
3D printer, and I actually like
1:53:28
looked into buying one, and apparently
1:53:30
these 3D printers are are just
1:53:32
like 200-300 dollars and you can
1:53:34
also get like a small like
1:53:37
laser cutter right which you can
1:53:39
use to like make a lot
1:53:41
of like crafts and like custom
1:53:43
crafts and things like that you
1:53:46
can get a laser cutter for
1:53:48
like 500 dollars. And, like,
1:52:50
before, right, when I was in
1:53:53
high school these things would cost
1:53:55
like thousands of dollars at the
1:53:57
bare minimum, and now
1:53:59
they're like, you know, a fifth
1:54:02
of the price, right? And they're
1:54:04
only getting cheaper as well. So
1:54:06
I think, like, the cool thing
1:54:08
about 3D printing, and especially, like,
1:54:11
you know, bigger, larger industrial scale
1:54:13
of 3D printers that aren't just,
1:54:15
like, printing plastic parts, but they're
1:54:17
printing, like, more metal parts, such
1:54:20
as the parts for rockets and
1:54:22
whatnot. I talked to some of
1:54:24
my friends in the aerospace industry,
1:54:26
and they are saying that it's
1:54:29
just, like, you know, easier and
1:54:31
more efficient now to like build
1:54:33
custom parts. And, and especially like
1:54:35
custom like metal parts, both through
1:54:38
a mixture of 3D printing and
1:54:40
also like CNC machining. Yeah. And
1:54:42
yeah, I think that's actually like
1:54:44
really interesting as well. I think
1:54:47
the biggest bottleneck there is actually
1:54:49
the supply chain and like not
1:54:51
the technology. Obviously a lot of
1:54:53
that does come from China. And
1:54:56
I'm not sure if there's a
1:54:58
big like supply chain like outside
1:55:00
of China or like big kind
1:55:02
of like human cost labor and
1:55:05
like parts supplier in the US
1:55:07
or like close to the US.
1:55:09
So that that would probably be
1:55:11
my biggest like question mark. What
1:55:14
about like PCB like printed circuit
1:55:16
boards and things like that you
1:55:18
can get like really custom? Oh,
1:55:20
that'll be cool, yeah. Yeah, and
1:55:23
like with specific sensors and things
1:55:25
like that, like, you can basically
1:55:27
have kind of like a salad
1:55:29
bar of like thousands of different
1:55:32
components that you can incorporate into
1:55:34
your own custom circuit board, which
1:55:36
you could then put into your electronic,
1:55:38
like your robot. Do you think
1:55:41
that is, is that a limitation?
1:55:43
like it's pretty cheap to make
1:55:45
your own like breadboard or PCB
1:55:47
or whatever these days so to
1:55:50
make any like robot prototype is
1:55:52
actually like super affordable to do
1:55:54
that. So like a Honda ASIMO that
1:55:56
was built, you know, 30 years
1:55:59
ago, 25 years ago or something
1:56:01
like that, the tiny robot that
1:56:03
we were talking about,
1:56:06
like four feet tall, or about
1:56:08
a meter tall. But like, if
1:56:10
you were to try to build
1:56:12
like a replica of that, not
1:56:15
exact, but like if you were
1:56:17
to just order parts from like
1:56:19
Chinese websites or something like that
1:56:21
and assemble that. How much money
1:56:24
do you think it would cost
1:56:26
to build something that had the
1:56:28
capabilities and like the general form
1:56:30
factor in everything that he has?
1:56:33
Like $5,000 to $10,000? Okay, that's
1:56:35
not bad. Yeah. I'm not sure
1:56:37
what the actual cost of the
1:56:39
original, let's see, Honda ASIMO robot
1:56:42
cost. Uh, 2.5 million dollars. What
1:56:44
they were planning. So this is
1:56:46
orders of magnitude cheaper, yeah. Yeah,
1:56:48
and it was released in 2000,
1:56:51
so it was 25 years ago.
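[Editor's note: the cost gap quoted here, roughly $2.5M for the original ASIMO versus an estimated $5,000 to $10,000 replica, works out to two or three orders of magnitude rather than one. A quick back-of-the-envelope check, using only the rough figures from the conversation:]

```python
import math

# Rough figures quoted in the conversation (not official pricing).
asimo_cost = 2_500_000                     # reported cost of the original ASIMO
replica_low, replica_high = 5_000, 10_000  # guessed hobbyist replica range

ratio_conservative = asimo_cost / replica_high  # priciest-replica case
ratio_generous = asimo_cost / replica_low       # cheapest-replica case

print(ratio_conservative)                    # 250.0
print(round(math.log10(ratio_generous), 1))  # 2.7 (orders of magnitude)
```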
1:56:53
Wow. Yeah, so, and the Boston
1:56:55
Dynamics Atlas was 1.6 million, but
1:56:57
it sounds like things have moved
1:57:00
quite a bit since then. Well,
1:57:02
okay, so I think like, yeah,
1:57:04
I think Boston Dynamics, I don't
1:57:06
know too much about the Honda
1:57:09
robot, but the Boston Dynamics robot,
1:57:11
yeah. The
1:57:13
Boston Dynamics is like much more
1:57:15
capable like in terms of like
1:57:18
hardware and capability and sensors and
1:57:20
whatnot. So I think like to
1:57:22
reach Boston Dynamics level, that would
1:57:24
probably cost like tens of
1:57:27
thousands if not like hundreds of
1:57:29
thousands of dollars in terms of
1:57:31
hardware. But I think like once
1:57:33
you basically get out of the
1:57:36
prototyping phase and you start putting
1:57:38
things in production, especially with the
1:57:40
supply chain and kind of like
1:57:42
Chinese manufacturing factories, you can get
1:57:45
the costs of these robots down,
1:57:47
like the hardware for these robots and
1:57:49
custom parts, down to like really
1:57:51
really cheap. And that's partly why
1:57:54
you can get like really cheap
1:57:56
like Unitree robots, right, like
1:57:58
robot dogs for like $2,000 or
1:58:00
humanoid robots for like $10,000. So
1:58:03
that's yeah. Awesome. That's really exciting.
1:58:05
Well, yeah. Let's end on
1:58:07
the fact that even though it's
1:58:09
expensive in the prototyping phase, once
1:58:12
you get, you know, economies of
1:58:14
scale and scope from going into
1:58:16
production, you can bring costs
1:58:19
way down. And would you join
1:58:21
me in encouraging people to pursue
1:58:23
robotics? Do you still think it's
1:58:25
a field to go into, based
1:58:28
on your answer? Like, not everything is
1:58:30
software. There is still a
1:58:32
lot of innovation to be done
1:58:34
in the physical world, with atoms and
1:58:37
not just bits. Yes, I think
1:58:39
there's a lot. There's a lot
1:58:41
going on both in terms of
1:58:43
software and hardware and that students
1:58:46
or people just kind of getting
1:58:48
into computer programming and computer engineering
1:58:50
should definitely look into that because
1:58:52
people think like, oh, all
1:58:55
the research has been like done
1:58:57
already, everything's set, but I was
1:58:59
like, no, there's like so much
1:59:01
we can do and there's like
1:59:04
so many cool things that we
1:59:06
can do, and that will
1:59:08
probably last for at least another
1:59:10
10 years if not more. So
1:59:13
I'm actually very excited to see
1:59:15
how the world will change, like,
1:59:17
this next decade. And I'm excited
1:59:19
for you know robots that can
1:59:22
help do everything for me
1:59:24
very soon. Final, final question. What
1:59:26
should people do in terms of
1:59:28
their information diet if they want
1:59:31
to like keep up on these
1:59:33
things? Are there any like Twitter?
1:59:35
Okay, are there any like podcasts
1:59:37
or YouTube channels or anything that
1:59:40
you're like oh I'm a huge
1:59:42
fan of this like they they
1:59:44
do like the really good hardcore
1:59:46
robotics anything that you would recommend?
1:59:49
I'd say Twitter is like my
1:59:51
main source of information these days
1:59:53
for better or for worse. I
1:59:55
mean, it's kind of weird because
1:59:58
I'm like on a podcast. Right
2:00:00
now, but I don't actually listen
2:00:02
to that many podcasts because I
2:00:04
just don't have time. Like,
2:00:07
I don't... a lot of
2:00:09
people listen to podcasts while
2:00:11
they're driving from like one
2:00:14
place to another, or like... You
2:00:16
don't need to drive. Exactly. Or
2:00:18
if you ride a Cruise, you can
2:00:20
just take your laptop and
2:00:22
work. Yeah, wait a moment. Sorry,
2:00:25
not Cruise. Cruise is dead. Sadly.
2:00:27
But yeah, I think. Yeah, most of
2:00:29
my information is from Twitter.
2:00:32
I do watch like YouTube
2:00:34
videos on specific topics You
2:00:36
know, so that's that's my
2:00:39
other source of information as
2:00:41
well I have heard that Like TikTok
2:00:43
could be good if you
2:00:46
use it right, but it's
2:00:48
also getting banned pretty soon.
2:00:50
So maybe not But yeah, I
2:00:52
think like Twitter Twitter is
2:00:54
my main main source of information
2:00:56
these days. Awesome. And Substack?
2:00:59
Substack too. Substack
2:01:01
is, like, people's newsletters. Yeah. Yeah.
2:01:03
But I think like there aren't that
2:01:05
many like more technical Substacks;
2:01:08
a lot are more philosophical. So again,
2:01:10
like Twitter is probably the
2:01:12
better information source for more technical
2:01:15
stuff. Awesome. That's super helpful.
2:01:17
Well, it's been an absolute blast talking
2:01:19
with you, Peggy. Yeah, thanks for having
2:01:21
me. I'm thrilled that freeCodeCamp could
2:01:24
play a part in your ascent as
2:01:26
a robotics engineer and now a CTO
2:01:28
at a YC company. That's super chill.
2:01:30
So, yeah, I'm excited to learn more
2:01:32
from you in the future and eventually
2:01:34
meet up with you again in person
2:01:36
and hopefully get some Mission burritos.
2:01:39
Yes, and please ride the Waymo
2:01:41
when you do that. Awesome. All
2:01:43
right, thank you so much for
2:01:45
having me, Quincy. This is an
2:01:47
honor, absolute honor, absolute blast. Like,
2:01:49
you're the best podcast host,
2:01:52
so this is super great.
2:01:54
Thanks so much for saying that
2:01:56
before we stop recording. Awesome.
2:01:58
Until next week, everybody. Happy
2:02:00
coding. See you. Bye
2:02:03
bye.