Episode Transcript
Transcripts are displayed as originally observed. Some content, including advertisements, may have changed.
0:00
Whether you're on a cross-country drive or on your
0:02
daily commute, time in the car is perfect
0:04
for listening to podcasts. T-Mobile's network
0:06
can help keep you connected to all your favorite
0:08
podcasts when you're out and about. T-Mobile
0:10
covers more highway miles with 5G
0:12
than anyone. So if you need great coverage,
0:15
especially when you're on the go, check out T-Mobile.
0:17
They're the largest and fastest 5G
0:20
network. Find out more at t-mobile.com
0:23
slash see why. That's S-E-E-W-H-Y.
0:27
Pack lightly? Not a chance. In
0:29
the all-new 2024
0:30
Chevy Trax, you and your squad have the space
0:32
and versatility you deserve and
0:36
need for your next big adventure. With large,
0:38
modern display screens, wireless phone
0:40
connectivity, and affordable pricing, this
0:42
small SUV is kind of a big deal. And with the option to
0:59
choose between stylish designs like
1:01
the cool, competent Active or the ultra-sporty
1:04
RS, you're sure to find the all-new Chevy
1:06
Trax fits your budget and your brand.
1:09
Oh, hey, it's the bunny that you swear
1:11
you saw on the lawn, even if no one else believes
1:13
you, Alie Ward. And here's all the
1:16
Gs. Hey, am I a real person? Unfortunately,
1:18
I am. Am I intelligent?
1:21
That's up for debate. But this week, we
1:23
are taking a dive into artificial intelligence
1:26
and brain data with a scholar
1:28
in the matter. So listen,
1:30
the past few months have been a little surreal.
1:33
Photoshop's out there generating backgrounds
1:36
to cut your cousin's ex-girlfriend out of your wedding
1:38
photos. ChatGPT is
1:40
writing obituaries, and frankly,
1:43
a lot of horse pucky. There's
1:45
also groundbreaking labor strikes in the
1:47
arts, which we covered in the field trip
1:50
episode from the WGA Strike Lines.
1:52
If you haven't heard it, I'll link it in the show notes. But
1:55
I heard about this guest's work, and
1:57
I said, please, please, please talk to me about
1:59
how to feel about AI. Are
2:02
we farting around the portal to
2:04
a new and potentially shittier
2:06
way of living? Or will AI
2:09
say, hey, dipshits, I ran
2:11
some simulations and here's what we have to do
2:13
to unextinct you in the next century.
2:15
We're going to find out. So this guest has studied
2:18
law at Dartmouth, Harvard, and Duke,
2:20
and been a professor at Vanderbilt University
2:22
and is now at Duke's Institute for
2:25
Genome Sciences and Policy. She
2:28
recently delivered a TED Talk called
2:29
Your Right to Mental Privacy in the
2:32
Age of Brain Sensing Tech,
2:34
and just authored a new book called The Battle
2:37
for Your Brain: Defending the Right to Think
2:39
Freely in the Age of Neurotechnology.
2:42
But before we chat with her, a quick thank you
2:44
to patrons of the show who support at patreon.com
2:47
slash ologies for a buck or more a month
2:49
and submit their questions for the second half. And
2:51
thank you to everyone in ologiesmerch.com
2:54
shirts and hats and such. Of course,
2:57
you can also support the show just by leaving a review
2:59
and
2:59
I may delight you by
3:02
reading it, such as this one left this week by
3:04
environmental lawyer, Harrison, Harrison,
3:06
Harrison, who wrote a review calling
3:08
Ologies an ooey gooey
3:11
ratatouille rip-roaring good
3:13
time. So yeah, I read them all. Thank you,
3:15
Harrison, for that. Okay. Neurotechnology.
3:18
Let's get into this. How the brain interacts
3:20
with technology and also techno
3:22
neurology, how tech is striving
3:25
to replicate and surpass human
3:27
intelligence and what that means for us all.
3:30
So let's bebop our way into a talk about
3:33
texting, scrolling, cheating, brain
3:35
implants, mental health, doomsday
3:38
scenarios, congressional hearings,
3:40
apocalypse potential, medical
3:42
advances, biometric mining,
3:45
and why suddenly artificial intelligence
3:48
is
3:48
on our minds with law
3:50
professor and neurotechnologist,
3:53
Dr. Nita Farahany.
4:01
Nita Farahany,
4:04
it's she, her.
4:13
So
4:15
good to meet you. Terrifying to
4:17
meet you. Are you the scariest person at a dinner party because
4:19
of how much you know? I'm not. I'm
4:21
not a scary person. And I
4:24
find that people think that
4:26
it's equal parts fascinating and terrifying. So
4:28
if anything, I think I'm a great dinner guest, right? Because
4:30
they're fascinated. I definitely should clarify that.
4:33
You are, there's nothing scary about you.
4:35
The information that you hold is
4:37
like, no, I know. Do I want
4:39
to look? Do I not want to look? Do I want
4:42
to look? It's thrilling, like a horror
4:43
film. Yeah. It's like people can't
4:45
look away. Yes. Right? I
4:48
want them to know. But at the same time, what I usually get is like,
4:50
wait, this is real? Like what you're talking about is
4:53
it actually exists and people are really using it and
4:55
employers are really using it and governments are really
4:57
using it. And
4:58
wait, what? Yeah. Do you spend a lot
5:00
of your time chatting with people trying
5:03
to warn them or calm them down? Yes.
5:08
So on
5:10
the one hand, I am trying
5:12
to raise the alarm and
5:14
to help people understand that this whole area
5:17
of being able to decode and really
5:20
hack and track the brain is a new
5:22
frontier and the final frontier of
5:25
what it means to be human and privacy
5:28
and freedom. And at the same
5:30
time, I don't want to make people
5:33
have the reactionary approach to technology,
5:36
which is like, okay, then let's ban it because
5:38
the promise is also extraordinary.
5:41
And so I am very much equal
5:44
parts. Like let me help you understand
5:46
not only what the promise is and
5:48
why you're likely to adopt it, but
5:50
why before you do so and before we
5:52
as a society at scale adopt
5:55
this technology that we make some really important choices
5:57
that will actually make it good
5:58
for us and not
5:59
the most Orwellian, frightening,
6:02
scary thing possible. I feel like there's few
6:04
topics that have this much true ambivalence
6:07
of so much good and so
6:09
much potential for misuse. Did
6:11
your brain become a lawyer brain because
6:14
of those sort of like philosophical conundrums?
6:17
What drew you to this kind of deep, deep
6:19
thought? Yeah, I've always been driven to the questions
6:22
that are at the intersection of philosophy
6:25
and science. In high school,
6:27
I was really interested in the science, but I was a policy
6:29
debater.
6:29
In college, I was a government
6:32
minor and science major. I
6:35
did in lab stuff, but largely
6:37
things that were policy. So
6:39
Nita got several graduate degrees studying
6:42
law and science, behavioral genetics
6:44
and neuroscience, the philosophy of mind, neuroethics,
6:47
bioethics, and even reproductive rights
6:50
and policy in Kenya. And she said
6:53
all her work seems to gravitate toward
6:55
this intersection of philosophy and law
6:58
and science because she had
6:59
fundamental questions like, do
7:02
we have free will? And
7:04
do we have fundamental autonomy
7:07
and freedom? And how do we put into place
7:09
the protections? But I've always been fascinated
7:11
and really interested in the science
7:13
and the technology itself. I've never been a
7:15
Luddite. I've always been somebody who's an early
7:17
tech adopter, but clearly see what the downsides are
7:19
at the same time. Where was tech at when
7:21
you were getting that roster of
7:24
graduate degrees? Where were we at? Were
7:26
we at emails? Were we at video calls? Yeah.
7:28
So we were not at video calls.
7:29
We were at emails. The internet
7:32
existed. We used it. We all had computers,
7:34
but we didn't have cell phones. I
7:36
got my first cell phone after I graduated from
7:39
college, like the year after, and I had
7:41
a flip phone. And I
7:43
thought that was super cool. I could
7:45
type out a text message one character
7:48
at a time. Oh, T9? Yeah. I
7:50
was a little bit metal in T9. Nice. I could
7:52
do it without even looking at
7:55
the phone, where I found it harder when we
7:57
had a keyboard. Yeah. And
7:59
then I had a Palm Pilot.
7:59
like as the precursor to the iPhone.
8:02
And then I stood in line the first day that the iPhone
8:05
was being sold and got one of the first
8:07
iPhones in my hand. So I've seen
8:09
the evolution of tech, I guess, as I was
8:11
getting all of those degrees. And what about
8:13
in
8:14
terms of neurotechnology? Have you seen
8:16
kind of an exponential growth pattern in
8:19
terms of technology? Is that growth
8:22
pattern still valid or have we
8:24
surpassed it? Slowly over
8:26
the
8:27
past decade or two,
8:29
neurotechnology has been getting better. And the ways
8:31
in which neurotech has been getting better has
8:33
largely been kind of hardware-based, which
8:36
is the sensors are getting better. Sometimes
8:38
the software has been getting better to be able to filter
8:41
out noise, the algorithms
8:43
to be able to pick up brain activity
8:46
without having muscle twitches
8:48
or eye blinks or interference
8:50
from the environment to pick up different information.
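For anyone who wants to see what that kind of cleanup looks like in practice, here is a minimal, purely illustrative Python sketch: a synthetic signal stands in for raw EEG, and a band-pass plus notch filter strips out slow eye-blink-style drift and 60 Hz line noise so the rhythm of interest is easier to see. The sampling rate, band edges, and fake signal are all assumptions made for the example, not anyone's actual pipeline.

```python
# Illustrative only: clean a synthetic "EEG" trace with standard filters.
import numpy as np
from scipy.signal import butter, iirnotch, filtfilt

fs = 256  # assumed sampling rate in Hz

# Fake 10 seconds of "EEG": a 10 Hz alpha rhythm buried in blink drift and line noise
t = np.arange(0, 10, 1 / fs)
alpha = 20e-6 * np.sin(2 * np.pi * 10 * t)       # the brain signal we want
blinks = 150e-6 * np.sin(2 * np.pi * 0.3 * t)    # slow eye-blink-like drift
line_noise = 30e-6 * np.sin(2 * np.pi * 60 * t)  # mains interference
raw = alpha + blinks + line_noise

# Band-pass 1-40 Hz to drop the slow drift, then notch out 60 Hz
b_bp, a_bp = butter(4, [1, 40], btype="bandpass", fs=fs)
b_notch, a_notch = iirnotch(60, Q=30, fs=fs)
cleaned = filtfilt(b_notch, a_notch, filtfilt(b_bp, a_bp, raw))

print(f"raw peak-to-peak: {np.ptp(raw)*1e6:.0f} uV, cleaned: {np.ptp(cleaned)*1e6:.0f} uV")
```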
8:52
All of that's been getting better. But suddenly
8:54
we've gone from what was
8:57
improvements to just the past
8:59
five years, seeing much more rapid advances.
9:02
Generative AI is making things move
9:05
and these seismic shifts, like where you suddenly
9:07
have just a massive leap in capabilities.
9:10
Just real quick, before we descend into
9:12
the abyss of ethics and possible
9:15
scenarios, what is generative
9:17
AI? What is AI and what's just
9:19
a computer computing? Okay, I looked
9:21
this up for us and then I took a nap because
9:24
it was confusing and then I tried again. And here's what I sussed out.
9:27
So artificial means it's coming
9:29
from a machine or software and intelligence,
9:31
fuck, I mean, that depends on who you ask,
9:34
but broadly it means a capacity
9:36
for logic, understanding, learning,
9:39
reasoning, problem solving and retaining
9:41
facts. So some examples of
9:44
AI are Googling or search engines,
9:46
the software that recommends other things
9:48
you might like to purchase, navigating
9:51
via self-driving cars, your Alexa
9:54
understanding when you scream, Alexa
9:56
stop
9:57
because she tried to get you to subscribe to Amazon
9:59
Prime
9:59
again. It also includes
10:02
computers being chess nerds, that's AI,
10:05
and generating artwork. And according
10:07
to some experts, AI can
10:10
be separated into a few categories including
10:12
on the base level, reactive machines,
10:14
and those use existing information
10:16
but they don't store or learn anything. Then
10:19
there's limited memory AI that can
10:21
use precedent to learn what choices
10:23
to make. There's something called theory of mind
10:26
AI and that can try to figure out
10:28
the intentions of a user or even
10:30
acknowledge their feelings like if
10:32
you've ever told Alexa to get bent
10:35
in a lot of other words and then she sasses
10:37
you back. There's also a type called self-aware AI that
10:39
reflects on its own actions
10:47
and then fully autonomous is
10:49
kind of the deluxe model of AI and
10:53
that does its own thing. That sets its own goals,
10:55
set it and forget it if you can. So when
10:57
did things start speeding up? When did they
10:59
start careening toward the future like this? When
11:02
computers got faster and smaller and
11:05
better in the last 10 but
11:07
really kind of 2 or 3 years. So
11:09
better hardware means more processing
11:11
power. There's also cloud storage
11:14
and that adds up to something called deep learning
11:17
which kind of sounds creepy like a hypervigilant
11:20
mannequin but deep refers
11:23
to many layers of networks
11:25
that use what look like these complicated
11:27
flowcharts to decide what actions
11:29
to take based on previous learning. So that's
11:31
kind of what led up to these startlingly
11:34
human-like generative AI outputs
11:36
and deep fakes where they
11:38
can just straight up put Keanu Reeves
11:40
face on your mom and then confuse
11:43
the bejesus out of me on TikTok or
11:46
ChatGPT, which is one language
11:48
model chat bot. Computers are
11:50
starting to pass bar exams.
11:52
Maybe they're writing the
11:55
quippy flirtations on your dating
11:57
app. Who knows? Meanwhile, less than 100
11:59
years ago, a lot of the U.S. didn't
12:02
have flush toilets in case you
12:04
feel weird about how weird this feels because
12:06
it is weird. Evolutionarily, our
12:08
flabby, beautiful little brains can barely
12:11
handle the shock of a clean river
12:13
coming out of a garden hose, let alone
12:16
some metal and rocks that are computers
12:19
that we're training to potentially kill
12:21
us.
12:21
We don't know how to deal with that. So pattern
12:24
recognition using machine learning algorithms
12:26
has really pushed things forward rapidly.
12:29
A lot of brain data happens in characteristic
12:31
patterns and those associations
12:33
between what is a person seeing or hearing
12:36
or thinking, how are they feeling,
12:38
are they tired, are they happy, are they sad, are
12:40
they stressed, those things have been
12:42
correlated with huge data sets and
12:44
processed using machine learning algorithms
12:46
in ways that weren't possible before.
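As a toy illustration of that correlating-patterns-with-states idea, here is a small Python sketch: fabricated band-power numbers stand in for real brain recordings, and an ordinary logistic-regression classifier learns to tell "relaxed" from "stressed" samples. Every number and label here is invented for the example; real studies use far richer data and far more careful validation.

```python
# Illustrative only: learn a mapping from made-up brain-signal features to a state label.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 400

# Pretend features per sample: [alpha power, beta power] in arbitrary units.
# "Relaxed" samples lean toward higher alpha; "stressed" toward higher beta.
relaxed = rng.normal(loc=[1.5, 0.8], scale=0.4, size=(n // 2, 2))
stressed = rng.normal(loc=[0.8, 1.5], scale=0.4, size=(n // 2, 2))
X = np.vstack([relaxed, stressed])
y = np.array([0] * (n // 2) + [1] * (n // 2))  # 0 = relaxed, 1 = stressed

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
clf = LogisticRegression().fit(X_train, y_train)
print(f"held-out accuracy: {clf.score(X_test, y_test):.2f}")
```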
12:48
I can read your mind. Then you
12:50
have generative AI and ChatGPT
12:52
that enters the scene in November.
12:55
All of a sudden, the
12:58
papers that are coming out are jaw-dropping.
13:00
Data that's being processed by generative
13:02
AI to reconstruct what a person is thinking
13:05
or hearing or imagining or
13:07
seeing is next level.
13:09
My book came out March 14th, 2023. All
13:11
of a sudden, what was happening was continuous
13:14
language decoding from
13:18
the brain in really,
13:19
really high resolution
13:22
using GPT-1, not even the most advanced
13:24
GPT-4. Visual reconstruction
13:26
of images that a person is seeing in ways
13:29
that were much more precise than anything
13:31
that we had seen previously. That's
13:34
happening at this clip that is just,
13:37
I think, extraordinary. It's just so much
13:39
faster than even I would have imagined. Even
13:41
I could have anticipated even
13:43
having written a book about the topic. That was
13:46
literally going to be my next question because
13:48
when a person writes a book, that doesn't happen overnight.
13:51
Even working on this book probably for a couple
13:53
of years, did you have any
13:55
idea that your book would be so
13:58
closely timed to such a giant
14:00
leap in terms of public
14:03
perception and awareness of AI. I
14:05
mean, it couldn't have timed it better. Well,
14:07
I mean, of course I'm a futurist. I
14:10
was predicting it perfectly, right? No. No.
14:12
I mean, I wish, right? In truth, my book
14:15
is like a year and a half late from when I
14:17
was supposed to turn it into the editor, into
14:19
the publisher, but there was a global
14:21
pandemic that got in the way and a bunch of other
14:23
things. But I'm grateful that it didn't
14:25
happen sooner because I was both able to
14:28
be part of what is a growing conversation
14:31
about the capabilities
14:31
of AI and to see when
14:33
you say to a person like, oh yeah, also AI
14:36
can decode your brain. You
14:38
know, that really puts a fine point on it for people
14:40
to understand how quickly these advances are coming
14:43
and to see how it's changing everything
14:45
in society, not just how people are writing essays
14:48
or writing emails, but fundamentally unlocking
14:51
the mysteries of the mind
14:53
that people never thought before possible and
14:56
the risks that that opens up, and the possibilities
14:58
of mental manipulation and hacking and
15:00
tracking, those are dangers
15:02
that I think a year ago,
15:05
before people really woke up
15:07
to the risks of
15:08
AI, they would not have been having the
15:10
conversation in the same way that they are around the book
15:12
and now they are having that conversation, seeing
15:14
the broader context and seeing the
15:16
alarm bells everywhere, right? Like, oh
15:19
wait, we really do need to regulate or recognize
15:21
some rights or do something. So futurists
15:23
are urging some foresight. Congressional
15:26
panels have aired on C-SPAN and there
15:28
seems to be this kind of collective side-eye
15:31
and like a hope that someone's
15:34
on top of this, right? So I mean, I think
15:36
people are looking for some guidance
15:38
and to have somebody
15:40
come at it from a balanced perspective, like, wait
15:42
a minute, there's a lot of good here and
15:44
there's some serious risks and here's a potential
15:46
pathway forward. I think instead
15:49
of like pause, which everybody says like, of course,
15:51
we can't just pause or a doomsday
15:53
scenario without any positive, like,
15:56
oh, let's regulate AI. I think we need voices
15:58
at the table who are thinking about
15:59
it both in a balanced way but also are coming forward
16:02
with like here are some concrete things we could
16:04
do right now that would actually help
16:06
the problem. So we know a few types
16:08
of AI from Googling a source
16:11
for a research paper or digitally
16:14
removing your cousin's ex from
16:17
your wedding photos, but what about technology
16:19
that's gathering data from our
16:21
brains? Let me give you the spectrum. There's
16:25
medical grade neurotechnology.
16:28
This is technology that people
16:30
might imagine in a doctor's office
16:33
where somebody puts on an EEG, electroencephalography
16:36
cap that has a bunch
16:38
of different wires coming out of it and a bunch of
16:40
gel that's applied to their head and a bunch of sensors.
16:43
That's picking up electrical activity, which we'll get back to in a minute.
16:46
Then there's the clunky giant machine,
16:48
a functional magnetic resonance imaging machine,
16:50
which can peer deeply into
16:52
the brain and somebody
16:55
might have already undergone an fMRI test
16:57
for something like a brain tumor to kind of
16:59
look more deeply into the brain. What that's picking
17:02
up is changes in blood flow
17:04
across the brain, which tells us something
17:06
about different areas that are activated
17:08
at any particular time and what those patterns
17:10
might mean. So if you've never had an MRI,
17:13
I guess congratulations, that's probably good, but
17:15
this is magnetic resonance imaging.
17:18
It's pretty exciting how these strong
17:21
ass magnets all line up the
17:23
hydrogen atoms in your body to go one direction
17:25
and then they release them and
17:27
from that they can see inside of your body.
17:30
Now, an fMRI is a
17:32
functional MRI and to put it in super
17:34
simple terms, it's kind of like animation
17:37
instead of a still picture, but it's of your
17:39
brain. So when you see imaging
17:41
examples of how someone's melon
17:44
illuminates like a Christmas tree
17:46
to certain stimuli, that's fMRI
17:49
technology tracking blood flow to
17:51
different regions of the brain. This fMRI
17:53
technology is used in a lot of
17:55
neuro and psychology research.
17:57
And then there's something like functional near infrared spectroscopy,
17:59
which is more portable
18:02
and it's also measuring changes in the
18:04
brain, but it's using optical and
18:06
infrared lights in order to do so. And
18:09
that functional near infrared
18:11
spectroscopy looks for changes
18:13
in oxyhemoglobin and deoxyhemoglobin
18:17
in the brain. These words might not matter to
18:19
you right now as you're cleaning your
18:21
shower grout or you're carpooling. But in clinical
18:23
settings it comes in handy for patients
18:26
with strokes or learning about Alzheimer's
18:28
or Parkinson's or even anxiety
18:29
or a traumatic brain injury,
18:32
which my brain would like you to know I've had
18:35
and I will link the traumatic
18:37
brain injury or the neuropathology episode
18:39
about my hella gnar-gnar concussion I
18:41
got last year in the show notes. But yes, there
18:43
are a lot of ways to get data from a brain
18:46
including CT scans and PET scans
18:48
with radioactive tracers. But what
18:51
about non-medical uses? Do
18:53
they exist? Oh boy howdy, do they?
18:55
If you then look at what's happening in the consumer
18:58
space, you take
18:59
the 64 or 120 electrodes
19:03
that are in a big cap and then you have a couple of them
19:05
that are put into a forehead band
19:07
or a baseball cap or increasingly
19:10
what's coming is brain sensors that are embedded
19:12
in everyday technology. So you and I are
19:14
both wearing headphones and
19:17
the soft cups that go around our ears are being
19:19
packed with sensors that can pick up brain activity
19:22
by reading the electrical activity through our
19:24
scalp. You want my tinfoil hat? Or
19:27
if we were wearing earbuds inside of our ears
19:29
instead embedding brain sensors
19:31
inside of those that can pick up electrical activity
19:34
in our brain as we're thinking or doing
19:36
anything and those
19:39
become much more familiar and much more commonplace
19:41
very quickly. So there's just a few of those products
19:43
that are on the market but that's where most
19:45
of the big tech companies are going is to embed
19:48
brain sensors into everyday devices like
19:50
earbuds and headphones and even watches
19:52
that pick up brain activity from
19:54
your brain down your arm to your wrist and picks
19:57
up your intention to move or to type or to
19:59
swipe or something like that.
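To make the earbud idea a little more concrete, here is a rough, hypothetical sketch of how raw electrical activity might get boiled down to a single drowsiness-style number by comparing alpha-band power (roughly 8 to 12 Hz, prominent with relaxed or closed eyes) to beta-band power (roughly 13 to 30 Hz, associated with alert focus). The signal is synthetic and the ratio is a crude stand-in; real consumer devices' exact methods aren't described in the episode.

```python
# Illustrative only: turn a synthetic "ear EEG" trace into a crude drowsiness number.
import numpy as np
from scipy.signal import welch

fs = 256  # assumed sampling rate in Hz
t = np.arange(0, 30, 1 / fs)

# Synthetic "drowsy" signal: strong 10 Hz alpha rhythm plus broadband noise
signal = 30e-6 * np.sin(2 * np.pi * 10 * t) + 5e-6 * np.random.randn(t.size)

def band_power(x, fs, lo, hi):
    """Average power of x between lo and hi Hz, via Welch's method."""
    freqs, psd = welch(x, fs=fs, nperseg=fs * 2)
    mask = (freqs >= lo) & (freqs <= hi)
    return psd[mask].mean()

alpha = band_power(signal, fs, 8, 12)
beta = band_power(signal, fs, 13, 30)
print(f"alpha/beta ratio (higher = drowsier, very roughly): {alpha / beta:.1f}")
```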
19:59
So to use like a medical
20:02
analogy, you know continuous glucose
20:04
monitors. These are a powerful tool
20:07
for diabetics to monitor their blood sugar
20:09
levels and their insulin needs and we covered those in
20:11
the two-part diabetology episode with Dr. Mike Natter.
20:14
But now continuous glucose monitors are
20:16
starting to become available to people without diabetes
20:19
just to better understand their metabolisms
20:21
and their dietary responses, their mood
20:24
and energy. So all of these neuroimaging
20:27
and all this data was just used
20:29
in clinical
20:29
and research settings by people in crisp
20:32
coats carrying metal clipboards
20:35
but it's starting to pop up in the market now. This
20:37
is great news, right? You understanding your brain?
20:39
Yeah,
20:40
yeah, but not
20:41
all the research in consumer applications
20:44
is solid and some make some wild
20:46
claims of efficacy. Others
20:49
argue that if a device can enhance
20:51
our moods and sharpen us
20:54
cognitively but cost some
20:56
serious cash, doesn't that just widen
20:59
a privileged gap even further? But I
21:01
guess so does college. I
21:03
don't know. In the US, you need a GoFundMe
21:06
to pay for chemo. So we've got a lot
21:08
of pretty large systemic
21:10
fish to fry.
21:11
But if you've got money, you
21:14
can buy EEG headsets
21:16
that track your mood and emotions and stress
21:18
for a few grand. There's others
21:20
that track your heart rate and brainwaves
21:23
for sleep and meditation. There are
21:25
VR gaming sets that track
21:27
brainwaves and even a Mattel game
21:29
called MindFlex that you can buy for like 120
21:32
bucks. But Nita says, all
21:35
of those consumer-based technologies pick
21:37
up a little bit of like kind of low resolution
21:40
information right now.
21:41
They pick up if you're stressed, if
21:43
you're happy or if you're sad, if you're
21:45
tired, like it maybe picks up that your
21:47
mind is wandering and you're kind of like
21:49
dozing off and the
21:52
things like fMRI pick up
21:54
much more precise information. Now
21:57
that could just be a matter of time. It could
21:59
be...
21:59
that as machine learning algorithms
22:02
and generative AI get applied to the
22:04
electrical activity in the brain, that it'll
22:06
get better and better and better. And it's
22:08
interesting, because in a way, you could think about AI
22:11
as being the convergence between computer
22:13
science and neuroscience. So
22:15
computer scientists have been designing
22:18
algorithms that can process
22:21
information in very narrow ways, and
22:23
they're very good at doing specific tasks. So
22:25
for example, a doctor or a pathologist
22:28
who's looking at many different samples
22:29
of tissue
22:32
to figure out if it looks cancerous or not
22:35
can only see so many samples
22:37
in a lifetime. And so they've marked them
22:39
and labeled the data. And a machine
22:41
learning algorithm can be trained on that
22:44
data, which is like, here's
22:46
thousands of images that are cancer
22:48
and not cancer. Now, here
22:51
are new images, predict whether or not they
22:53
have cancer. And they become very, very good,
22:55
because they can process millions and millions
22:57
of images and see far more images and get
23:00
much better at being able to do
23:02
that specific task of identify if something
23:04
is cancerous. So those tasks are relatively
23:07
simple for machines to learn and
23:09
execute. Computers are like, child's
23:11
play. But the human brain
23:14
isn't so narrow and task-specific.
23:16
And neuroscience has long understood
23:19
that the connections that the brain makes
23:21
are much more multifaceted.
23:24
They're much more complex. And
23:26
so the modern
23:29
types of AI are
23:31
built on how the brain
23:33
works. They're built on what are called neural
23:35
networks. So this is a deep learning
23:38
model, which is instead of that very
23:40
specific task of like, do this,
23:43
do that, it's meant
23:45
to take a huge amount of information
23:47
and to learn from that information and
23:49
then do what we do, which is to predict
23:52
the next thing or to kind of understand where
23:54
that's going or to make inferences from more
23:56
of a deep learning perspective.
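Here is a compact sketch of the labeled-examples idea from the pathology analogy, using a small multi-layer ("deep-ish") network from scikit-learn on made-up numeric features instead of real tissue images. The layer sizes, features, and labels are arbitrary assumptions for illustration; the only point is that the model stacks several layers of simple, loosely neuron-like units and learns the pattern from examples rather than from a hand-written rule.

```python
# Illustrative only: a tiny layered network trained on fabricated "cancer / not cancer" labels.
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)
n = 1000

# Pretend each sample is 20 measurements extracted from an image (texture, size, etc.).
X = rng.normal(size=(n, 20))
# Fabricated ground truth: "cancerous" if a hidden nonlinear combination is large.
y = ((X[:, 0] * X[:, 1] + X[:, 2] ** 2) > 1.0).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Three hidden layers of simple units stacked on top of each other ("deep").
net = MLPClassifier(hidden_layer_sizes=(32, 16, 8), max_iter=2000, random_state=0)
net.fit(X_train, y_train)
print(f"accuracy on unseen examples: {net.score(X_test, y_test):.2f}")
```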
23:58
So it's more than machine
23:59
learning like the pathology example
24:02
she gave. So remember deep learning. So
24:04
neural networks are modeled after
24:07
biological brains and they have nodes
24:09
like neurons that consume input
24:11
that they learn from and then it's processed
24:13
in several layers or tiers aka
24:16
it's deep to come up with
24:18
a result or an action and
24:21
things like chatbots or facial
24:23
recognition or typing dog
24:25
into your phone's photo album to see what
24:28
goodness comes up or speech to text.
24:29
Those are all done by neural networks and
24:32
AI that we're already using and they seem
24:34
commonplace after having them for just a few years.
24:37
But since late last year we're
24:39
seeing them create more like
24:41
how the human brain might. And those insights
24:43
about the brain and neural networks have
24:45
informed this new class
24:47
of AI, which is generative AI. Generative
24:50
AI is different in
24:52
that it is both built on a different model and
24:55
it has much greater flexibility in what it can
24:57
do and it's trying to not say
24:59
like this is cancer that isn't cancer
25:02
but to take a bunch of information and then be
25:05
asked a question and to respond
25:07
or to generate much more like the human
25:09
brain reasons or thinks or comes up
25:11
with the next solution. And that's exciting
25:14
and terrifying. What about
25:17
the
25:18
information that say the
25:20
artistic AI is getting? Are they scrubbing
25:22
that from existing art and
25:25
in the case of say the writer strike
25:28
where you see writers saying
25:30
you cannot
25:32
take my scripts and write a sequel on something
25:35
without me. And if you're curious about what
25:37
is up with these strikes, what is going on
25:39
in the entertainment industry including
25:42
the WGA or the Writers Guild of America
25:44
strike which started on May Day of this year
25:47
and it was joined in recent weeks on the picket
25:49
lines by SAG-AFTRA, which is a screen
25:52
actors' guild. And again we did a whole episode
25:54
explaining what is going on. It's called field trip
25:56
WGA strike that will be linked to the show notes. So
25:59
if you watch TV or movies
26:01
or you ever have, listen to that episode
26:04
because it affects us all. And
26:06
these entertainment labor unions are known as
26:08
the tip of the spear for other
26:11
labor sectors. Your industry may
26:13
be affected or might be next.
26:15
I'm really interested in what happens in
26:17
this space, not just because of the
26:20
writers themselves and hoping
26:22
that they
26:24
managed to succeed
26:26
and actually getting fair, appropriate
26:30
treatment,
26:31
but also because it's gonna be incredibly telling
26:34
for every industry as what
26:36
happens when workers demand
26:38
better conditions and better
26:41
terms.
26:42
And the result is
26:44
greater experimentation with generative AI to
26:46
replace them. But why is this such a sudden
26:49
concern? Why does it feel like AI has
26:51
just darkened the horizon and thundered
26:54
into view and we're all cowering
26:56
at its advance? Is this the first act
26:58
of a horror film? So where does it come
27:01
from? They're not totally
27:03
transparent. We don't know all of the answers to that,
27:05
right? But we do know that these
27:07
models have been trained, meaning
27:10
there's been billions,
27:12
potentially trillions, we don't know, right, the exact number
27:14
of parameters. That is prior
27:17
data which has been used. Meaning
27:19
the material that the machines learn from.
27:22
And that could be prior scripts, it could be prior
27:24
books. It includes a bunch of self-published
27:27
books, apparently, that are part of it, prior
27:29
music, prior art, potentially
27:32
a whole lot of copyrighted material that
27:34
has been used to inform
27:36
the models. Once the models learn, they're
27:38
not drawing from that information
27:40
anymore, right? That information was used to train
27:43
them. But in the same way
27:45
that you don't retain everything you've ever read
27:47
or listened to, and your creativity
27:50
may be inspired by lots of things that
27:52
you've been exposed to, the models
27:54
are similar in that they've been trained on these prior
27:56
parameters, but they're not storing or drawing
27:58
from or returning to them. It's
28:01
as if they have read and digested
28:03
all of that information. And
28:05
I was talking with an IP scholar who I
28:07
like and respect very much. And his perspective
28:10
was, how is that different than what
28:12
you do? Right? You write a book and
28:14
you read tons of information and
28:16
there's tons of information you cite. And there's also tons of information
28:19
that you learned from, that inspired
28:21
you, that shaped how you write and think that
28:23
you don't actually cite. And is that actually
28:26
unfair or violating
28:29
the intellectual property or somehow
28:31
not giving a fair shake to every
28:34
source that you have ever learned from or every input
28:36
that you've ever learned from? I mean, it's an interesting
28:38
and different perspective, right? I don't have the answers to
28:40
it yet. I'm really
28:41
interested to see how this particular
28:43
debate evolves. What do other people think
28:45
who aren't me? So a recent study reported
28:47
that about 50% of AI experts
28:50
think there's a 10% chance
28:52
of unchecked AI causing
28:54
the extinction of our species
28:57
with AI getting into little sneaky
28:59
elf on the shelf shenanigans like playing
29:01
God or establishing
29:04
a political dictatorship. And the
29:06
Center for AI Safety issued a
29:08
statement. It was signed by dozens of
29:11
leaders in computer science and tech, including
29:13
the CEO of Google's DeepMind
29:15
and Bill Gates and
29:18
the guy who started ChatGPT
29:20
and the director of a center on strategic
29:23
weapons and strategic risks.
29:26
And this statement said very simply, quote,
29:29
mitigating the risk of extinction
29:31
from AI should be a global priority
29:34
alongside other societal scale
29:36
risks, such as pandemics and
29:39
nuclear
29:39
war. So that's a
29:41
pretty big statement. And other experts
29:44
draw parallels between humans and
29:46
chimps, but we're the chimps and AI
29:49
is us. So guess who's making who wear
29:51
diapers and live with Michael Jackson? Yeah.
29:54
Although of course there are computer
29:56
scientists saying that we need to calm our collective
29:59
boobies.
29:59
and that AI isn't advanced
30:02
enough to threaten us. Yet, I
30:04
love yet. Yet
30:07
is so comfy. Yet is the space between
30:09
the alarm clock and the panic
30:12
of racing out the door because you'll be late to a job
30:14
interview. Ah, yet,
30:16
just yummy, just fuck it. I think
30:18
from a governance perspective in society,
30:21
we have near term risk that we need to
30:23
be safeguarding against. And
30:25
this is near term risk like bias
30:28
and discrimination and inaccuracies.
30:31
I don't know if you saw the story recently about
30:33
a lawyer who filed a brief
30:36
in a case before a federal judge that
30:39
the pleading for the case had been entirely
30:42
written by ChatGPT, which included
30:44
a whole bunch of invented cases. And
30:46
the invented cases, like he hadn't
30:48
gone and cite-checked them or
30:51
read them. In fact, he has this dialogue
30:53
where he's asking ChatGPT
30:55
if the cases are real or not, rather
30:57
than like, yes. And
31:00
he was not doing this to prove a point. Just
31:02
a bit of a dumbass. No, no, just straight up dumbass
31:05
just did it. And
31:08
then the other side comes back and
31:10
says, hey, judge, we can't find any
31:12
of these cases. And the judge says, you have to
31:14
produce it. And apparently he produces
31:16
the full citations
31:18
of the made up cases. And anyway,
31:20
it finally goes back with the
31:23
lawyer then admitting, I'm so sorry, this
31:25
is all apparently fabricated
31:28
and it's fabricated not intentionally, but
31:30
it's fabricated because I generated it all using
31:32
ChatGPT. Nita says, who knows
31:35
what will happen if and when where
31:37
people start using bots to kind
31:39
of cut corners and no one fact
31:41
checks it. And around Juneteenth, I
31:43
saw a viral tweet about ChatGPT
31:46
not acknowledging that
31:48
the Texas and Oklahoma border was in
31:50
fact
31:51
influenced by Texas desiring
31:53
to stay a slave state. I told
31:55
my husband, Jarrett, your pod dad, who didn't believe
31:58
it could get things so wrong. And then he...
31:59
proceeded to have like an hour-long fight
32:02
and discussion with ChatGPT, hoping
32:04
to teach ChatGPT that
32:07
it has a responsibility to deliver
32:09
accurate information. I was
32:11
like, dude, you're fighting a good fight and I
32:13
wish you luck. Now, as for this lawyer
32:15
that Nita mentioned, according to a May 2023 New
32:18
York Times piece about it titled, Here's What
32:20
Happens When Your Lawyer Uses ChatGPT,
32:23
the lawyer in question pleaded his own
32:25
case within the case, telling a rightfully
32:28
miffed-off judge that it was his first
32:30
foray with a chatbot and that he
32:32
was, quote, therefore unaware of
32:35
the possibility that its content could
32:37
be false. And the New York Times explains
32:39
that ChatGPT
32:41
generates realistic
32:43
responses by making guesses
32:45
about which fragments of text
32:48
should follow other sequences based
32:50
on a statistical model that has ingested
32:52
billions of examples of text pulled from all over
32:55
the internet. So ChatGPT is
32:57
your friend at the party who knows everything,
33:00
and then you find out that they're full of shit and they're very
33:02
drunk, and maybe they stole your wallet and
33:04
they could kill your dog. Will they shit in the pool?
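That "guessing which fragments of text should follow other sequences" idea can be shown with an almost comically small sketch: count which word tends to follow which in a tiny made-up corpus, then sample a chain of words. Real systems learn from billions of examples with neural networks rather than a word-pair table, but the failure mode is visible even in miniature: the output sounds fluent without being checked against anything true.

```python
# Illustrative only: a toy "predict the next word" generator built from word-pair counts.
import random
from collections import defaultdict

corpus = ("the court held that the claim failed and the court denied the motion "
          "and the claim was dismissed").split()

# Count how often each word follows each other word (a bigram table).
follows = defaultdict(list)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev].append(nxt)

random.seed(1)
word, output = "the", ["the"]
for _ in range(12):
    # Pick a likely next word; if we hit a dead end, pick any word at all.
    word = random.choice(follows[word]) if follows[word] else random.choice(corpus)
    output.append(word)

print(" ".join(output))  # plausible-sounding, not necessarily true -- which is the point
```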
33:06
It's anyone's guess, but wow, they
33:08
are spicing up the vibe. This
33:11
is not a boring party at all. It raises
33:13
this complex question about, you
33:15
know, who is responsible? And we've
33:18
generally said the attorney is responsible, right? The attorney
33:20
is the one who is licensed to practice law. They're responsible
33:23
for making sure all of the work that they certify under
33:25
their name. Is there any liability
33:27
for generative AI models? Now, ChatGPT
33:30
says, like, I'm not here to provide legal advice
33:32
and it's prone to hallucinations. Is
33:34
that enough to disclaim any liability
33:36
for ChatGPT? Just a jacuzzi
33:39
of hallucinating chatbots
33:41
saying whatever sentence they think
33:43
you want to hear, maybe pooping in there too. So
33:45
what happened to that lawyer, though? Did he get
33:47
so disbarred? Did he have to grow a beard
33:50
and move to Greenland? Does he make felted
33:52
hats out of goat fur now? No, no,
33:55
he's fine. He kept his job. He was just fined
33:57
five grand,
33:58
which, if he bills for the research hours
34:00
that a chatbot really did, he maybe still
34:02
turned a profit on that deal. But the lessons,
34:06
those are invaluable. Now, if you appreciate
34:08
nothing else today, I just want you to stare off at
34:10
the horizon for even 30 seconds
34:13
and just say, what a time we're
34:15
living in. Hundreds of thousands
34:17
of years of people getting boners and falling
34:19
in love made me a
34:21
person standing on a planet at
34:24
a time when there's plumbing, antibiotics,
34:27
electricity, there's domesticated cats. And
34:30
I have a front row seat to some real madness.
34:33
What an era. As for what we do, I don't
34:35
know. Aren't we being watched all the time anyway? What
34:37
are the watchers doing about this?
34:40
Well, forgive the patriarchal caricatures,
34:42
but where are Big
34:45
Brother and Uncle Sam? Are
34:47
they working together on this? Is there any
34:49
incentive from like a governance perspective
34:52
to say, to step in and say like, we don't
34:54
know how far this should go, or
34:56
does it just generate
34:58
kind of more income for
35:01
maybe big corporations that can misuse it?
35:03
So like, hard to fight against that. So,
35:06
you know, it's hard to know, right? There have
35:08
been hearings that have been held recently
35:12
by the government to try to look into
35:15
sort of both questions that you're asking, which is Uncle Sam
35:17
and Big Brother, right? So there were hearings
35:19
looking at whether or not to regulate
35:22
private corporation use of generative
35:24
AI models. It was, you know, a very
35:27
public hearing where Sam Altman from
35:29
OpenAI calls for regulation. If
35:31
you're
35:31
wondering why this is a big deal. So Sam
35:34
Altman is the CEO of
35:36
OpenAI, which invented ChatGPT.
35:39
And he spoke at the Senate Judiciary Subcommittee
35:41
on Privacy, Technology, and the Law Hearing,
35:44
which was called Oversight of AI, Rules
35:46
for Artificial Intelligence. That was in May
35:48
of this year. He also signed that statement
35:51
about trying to mitigate the risk of
35:53
extinction. And he told the committee
35:55
that AI could cause
35:58
significant harm to the
35:59
world. Papa ChatGPT himself.
36:02
My worst fears are that we cause significant,
36:05
we the field, the technology, the industry cause
36:07
significant harm to the world. I
36:10
think that could happen in a lot of different
36:12
ways. I think if this technology goes
36:14
wrong, it can go quite wrong. And
36:17
we want to be vocal about that. We want to work
36:19
with the government to prevent that from happening.
36:22
And ultimately, Sam urged
36:24
the committee to help establish a new framework
36:26
for this new technology. It was
36:28
a surprisingly collaborative
36:31
tone for most of the federal officials who were
36:33
questioning him very differently
36:35
than in social media context of the past.
36:38
But meanwhile, in a different building, that
36:40
same day, a different hearing was happening, which most people
36:43
weren't aware of, which was federal use of AI.
36:45
And a lot of the discussion in that context
36:48
was about how the federal government needs to be innovating
36:50
to use more AI
36:52
in a lot of
36:54
what they do and to be modernizing what's
36:56
happening.
36:56
Today, we'll be discussing how AI
36:58
has the potential to help government
37:01
serve, better serve the American
37:03
people. Okay, so tonally, the Senate
37:06
Homeland Security and Governmental Affairs Committee
37:08
hearing, which was called artificial intelligence
37:10
in government, was a little bit more optimistic,
37:13
like, hmm, guy, give me some of that. And
37:14
that would include things like Uncle Sam, like improving
37:17
the IRS system and, you
37:19
know, what does filing of taxes look
37:21
like? And are there ways to ease the burden? Are there
37:23
ways to modernize and have different parts of the
37:25
government talking to each other? And hopefully those conversations
37:28
will converge. We won't be looking at like,
37:30
how do we regulate and limit the risks of
37:32
generative AI and then infuse it throughout
37:34
all of the federal government at the same time, right?
37:37
Like, hopefully, like, you have the left
37:39
hand talking to the right hand so that we actually come
37:41
up with a sensible strategy and a road ahead.
37:44
over
38:00
jobs because it's so smart, but at the same
38:02
time, it's worse at Googling than your 10-year-old
38:05
niece with a book report. And while this
38:07
is going on, the government is holding two
38:10
simultaneous hearings on the same
38:12
day, and one is Oppenheimer
38:15
flavored, and the other is Barbie Land. So
38:17
if you are confused by all of this and
38:19
you don't know how to feel, the answer
38:22
is yes, that's correct.
38:25
But it's happening so
38:27
quickly that it's not going to
38:29
be law alone that does anything to rein
38:32
it in. We're going to need a lot
38:35
of cooperation between governments,
38:37
between tech companies. And if you look in
38:39
the U.S., the U.S. has not been good at
38:41
regulating tech companies, right? I
38:44
mean, it has had lots of talk about
38:46
it, lots of very contentious
38:48
Senate hearings. I
38:50
started Facebook. I run
38:52
it, and I'm responsible for what happens
38:54
here.
38:55
And then they have so much money and so much power and
38:57
so much lobbying influence that the
39:00
result is nothing happens.
39:02
And that just can't be the case now. We can't
39:05
go into this era leaving
39:07
it up to tech companies to decide the
39:09
fate of humanity.
39:10
Right. What do you do if you're mad
39:12
as hell and you're not going to take it anymore? What does an
39:14
average person who does not own a $40 billion
39:17
tech company say
39:18
when they're like, don't scrub my brain
39:21
data through my headphones. And stop simulating
39:24
art so people can make some art. Have
39:26
you seen that meme about how somehow we've gotten to a
39:28
place where human beings are still laboring
39:31
at
39:31
wages that don't increase, that
39:33
are not livable, yet computers
39:36
get to write poetry and make art? No,
39:39
but that sounds right. That's such
39:41
a heartbreaking way to look at
39:43
it, where
39:44
no one can afford to be an artist. So
39:46
the exact words from Twitter user Karl
39:49
Sharro read, humans doing
39:51
the hard jobs on minimum wage while
39:54
the robots write poetry and paint
39:56
is not the future I wanted. So
39:58
that tweet was shared.
39:59
35,000 times because
40:02
it's true and it hurts my soul.
40:05
I haven't seen that meme and now
40:07
I'm reeling from thinking about it, which is like,
40:09
oh my God, that's so true. Suddenly
40:12
we've outsourced all the things that we like and
40:14
we're now doing all of the grunt work still.
40:16
And like, how horrible is that? We're gonna send like
40:19
generative AI to the beach next weekend and
40:21
say, you know, like, what,
40:23
we like stay home and toil and pay
40:25
for it, right? Yeah, I
40:27
mean, you know, the problem is that
40:30
on the one hand we get to like, oh, it's all happening
40:33
so quickly. And so we can't do anything about
40:35
it. On the other hand, that's just the nature of emerging
40:37
tech. It happens quickly. And so it's not
40:39
as if there have not been proposals
40:42
about what agile governance looks like or what
40:44
adaptive regulations look
40:46
like that actually changed based on changes
40:48
in milestones in technology. And it would not be impossible
40:51
to put some of those things into place. And people have been
40:54
thinking about and proposing these models for a long time.
40:56
First off, what does agile governance look
40:59
like? And what does adaptive regulations
41:00
mean? I don't know. I'm
41:03
not a law professor. I'm a podcast
41:05
host who's jealous of a circuit board
41:07
that gets to do watercolor. So I asked my
41:09
robot machine Google and agile governance
41:11
means a process that brings the most
41:14
value by focusing on what matters.
41:16
Okay, but adaptive regulations, I
41:18
think mean like, watch the space, keep
41:21
making laws if shit seems
41:23
like it's getting out of hand. Now in June,
41:25
the European Union overwhelmingly passed
41:27
the EU AI Act, which
41:30
classifies
41:30
different types of AI into risk
41:33
categories. There's unacceptable,
41:35
there's high risk, there's generative
41:37
AI and limited risk. What
41:39
is in these buckets? You're wondering. So
41:42
the unacceptable bucket includes cognitive
41:45
behavioral manipulation and
41:47
social scoring, a la Black Mirror,
41:50
and biometric identification
41:52
like real time public facial recognition.
41:55
High risk involves more biometric
41:57
uses, but after the fact with a few
42:00
exceptions for law enforcement, but it curbs
42:02
AI snitching on employees
42:05
and doing emotional spying, from what
42:07
I gather. Generative AI would have to disclose
42:10
that it's generative and the makers need to come clean
42:12
on what copyrighted material they're using
42:15
to teach generative neural networks.
42:17
Now that's in the EU. As for America,
42:20
we have not gotten that far yet. I mean, that is
42:23
if everyone could even agree on what
42:25
needs to happen, then they'd have to
42:27
agree on voting for that thing
42:29
to
42:29
be actually enacted, which is,
42:32
it's a beautiful dream that
42:34
I'm generating with my human imagination.
42:37
The problem has been, I think, the political will to do
42:39
anything about it and to figure out like, why
42:41
should we care about the cognitive liberty
42:44
of individuals? Why should we care about
42:46
leisure and flourishing
42:49
of humanity? Let's just maximize productivity
42:52
and minimize human enjoyment in life.
42:55
That just can't be
42:56
what the answer is in the digital age anymore. I
42:58
mean, we need an updated understanding of what flourishing
43:01
means, and it can't be that it is generative
43:03
AI making art and writing poetry while
43:05
we toil away. That can't be
43:07
the case. I'm a philosopher. I'm
43:09
going to go back to, we have all
43:11
of these philosophical conceptions,
43:14
lots of perspectives on what flourishing
43:16
is. None of those perspectives,
43:18
if you go back and look at them, contemplated a
43:20
world in which our brains and mental experiences
43:23
could so easily be hacked and manipulated.
43:26
The idea of happiness being
43:29
the primary concept of human flourishing,
43:32
what is synthetic happiness? Is that really happiness?
43:34
If it's generated by dopamine
43:37
hits from being on a predictive
43:39
algorithm that's sending you little notifications,
43:41
it's just the right time to make your brain addicted and
43:43
staying in place, that looks like
43:45
happiness, but I don't think that's happiness,
43:47
right? Even that all of
43:49
these presupposed a world in which we actually
43:52
had cognitive freedom,
43:53
we need to realize we don't anymore, right? If
43:56
we don't anymore, we need to create a
43:58
space in which we do so that human...
43:59
flourishing in the digital age is what we're actually
44:02
after and trying to make happen. That
44:04
we could put some human rights in place for
44:07
it. We could put some systems in place that were actually
44:09
creating incentives to maximize
44:12
cognitive freedom as the precursor
44:14
to all other forms of flourishing. And hopefully
44:16
that cognitive freedom would be the right to
44:19
create art without having it appropriated, the right
44:21
to write scripts and poetry without
44:24
having it used to train models
44:26
without our permission and without
44:29
us being part of it
44:29
that then make us irrelevant
44:32
so that the models can play while
44:34
we work. So in her book, The Battle for Your Brain,
44:36
Nita writes that we must establish the right to
44:38
cognitive liberty to protect our
44:41
freedom of thought and rumination,
44:43
mental privacy and self-determination
44:46
over our brains and mental
44:49
experiences. This is the bundle
44:51
of rights that makes up a new right to cognitive
44:53
liberty, which can and should be recognized
44:56
as part of the Universal Declaration of
44:58
Human Rights, which creates powerful
45:00
norms that guide corporations
45:03
and nations on the ethical use of neurotechnology.
45:06
Neurotechnology has an unprecedented power to either
45:09
empower or oppress us. The
45:12
choice is ours." And
45:15
one liberty I've taken is
45:18
never using ChatGPT, kind
45:20
of like my high school's football
45:23
rallies. I just don't want to participate.
45:26
And I don't like what it's all about, even
45:28
though literally no one cares that
45:30
a stinky drama student with dyed black
45:32
hair and braces is boycotting. Nobody misses
45:35
me. I've always been a little
45:37
bit creeped out and hesitant. Like, I've never
45:39
tried ChatGPT.
45:41
And I have this absolutely
45:44
incorrect illusion that if I don't
45:46
use ChatGPT, it won't get
45:49
smarter. And therefore, I
45:51
single-handedly by abstaining have
45:53
somehow taken down an entire industry
45:55
of AI. It's not true. Well,
45:57
it's not true, but there is something to this idea.
45:59
that we're not helpless and that there
46:02
is a demand side to technology just
46:04
as there is a supply side to technology. And
46:06
there is a sense in which consumers and
46:08
individuals feel like they're helpless. It's
46:11
the same way you see with voting. Well, what's the point
46:13
of voting because my state always goes
46:15
this way or that way or... And that kind
46:17
of apathy means that a lot of times elections
46:19
are decided by everybody else and you
46:22
know that you don't have an effect. But this is even more
46:24
so like collectively if we don't like the terms
46:26
of service, why are we all still on
46:28
the platforms? And you're right, the
46:31
models are going to continue to be trained with or without you.
46:33
Yeah, no, like it's not that radical an act
46:35
from just me to abstain. Well,
46:38
but that idea that collectively we
46:40
could act differently, if we
46:43
could motivate and actually
46:45
work collectively to act differently, we
46:48
could act differently. One individual
46:50
person silently protesting
46:52
against ChatGPT isn't going to do it, right?
46:55
But loudly protesting against it and saying
46:57
like, look, the models train
47:00
based on human interaction and the more
47:02
human interaction there is, the more it is trained. And
47:05
so do you want to continue to feed
47:08
into that model? That's a worthwhile societal conversation
47:10
to have. You know, I was talking to my husband
47:12
this morning about how many brilliant engineers
47:16
end up working for bomb companies because they're
47:18
going to have the best benefits,
47:21
they're going to have the most stable employment, right?
47:24
How many people in the legal field
47:26
do you feel like get kind of scooped up by
47:29
tech companies because it's just an easier
47:31
way to live?
47:32
Do tech companies just have more pull
47:35
to get the
47:36
best lawyers to advocate for
47:38
them instead of for say greater
47:40
humanity? I think it's not just law,
47:43
right? If I look at some of the best tech ethicists,
47:45
many of them have gone in house to a lot of companies
47:48
that are not actually that invested
47:50
in tech ethics. And many of them got
47:52
laid off in the major tech layoffs that
47:54
have happened from 2022 to 2023. Because a lot of tech companies,
47:56
I think, have
47:59
put lip service to being serious
48:02
about ethics, but they haven't as seriously
48:04
grappled with it. And
48:06
the money and the power that these
48:08
corporations have and the influence on society
48:11
they have, I think both
48:13
makes it hard for some people to resist saying no,
48:15
but also this idea that like
48:17
if you're at a tech company where the transformation
48:20
of humanity is happening, maybe you can steer
48:22
it in the ways that you think are better for
48:24
humanity.
48:25
Are there any nonprofits or organizations that
48:27
you feel like are doing a good job? There are a lot. I
48:29
mean, I couldn't even begin to name them all. Like
48:32
I would say, first,
48:35
I admire what UNESCO is
48:37
doing. So UNESCO is the United
48:39
Nations Educational, Scientific, and Cultural
48:42
Organization. And on their Ethics
48:44
of Artificial Intelligence webpage, it
48:46
states, UNESCO has delivered global standards
48:49
to maximize the benefits of scientific
48:51
discoveries while minimizing the
48:54
downside risks,
48:55
ensuring they contribute to a more inclusive,
48:58
sustainable, and peaceful world. And
49:00
it's also identified challenges
49:02
in the ethics of neurotechnology. So
49:04
as a result, their recommendation on
49:06
the ethics of artificial intelligence was
49:09
adopted by 193 member
49:11
states at UNESCO's General Conference
49:14
way back in the olden times
49:16
of November 2021.
49:18
They're really trying to get out ahead of a lot of issues
49:21
and to thoughtfully provide a lot of ethical
49:23
guidance on a lot of different issues.
49:26
I think the OECD is trying to be
49:29
a useful and balanced
49:32
organization to bring important information
49:34
to bear. The OECD, I had to look this
49:36
up, is the Organization for Economic Cooperation
49:39
and Development. And it's headquartered in France,
49:41
but involves 38 countries. So
49:44
what are they doing? The OECD principles
49:46
on artificial intelligence promote AI
49:48
that's
49:48
innovative and trustworthy and that respects
49:51
human rights and democratic values.
49:53
And then of course there's the EU. I think the EU
49:56
is acting in ways that are really pushing
49:58
the conversations forward around
49:59
the regulation of AI and how to
50:02
do it and how to respect everything from
50:04
mental privacy to safeguard against manipulation.
50:06
And, you know, they get lambasted for like
50:08
going too far or not going far
50:10
enough. And those conversations are better than
50:13
putting nothing on the table, which is what's happening a lot
50:15
of times in the US. I think the Biden
50:17
administration has put out a lot of different principles
50:20
that have been helpful and that those kinds of principles
50:22
are things around like an AI Bill of Rights.
50:25
I went and took a gander at this doc and the
50:27
blueprint for an AI Bill of Rights
50:29
sets forth five principles, which I will
50:32
now read to you. You should be protected
50:34
from unsafe or ineffective systems.
50:37
You should not face discrimination by algorithms.
50:40
You should be protected from abusive
50:42
data practices and you should have agency over
50:44
how data about you is used. You
50:47
should know that an automated system
50:49
is being used and understand how and why
50:51
it contributes to outcomes that
50:53
impact you. And finally,
50:55
you should be able to opt out where appropriate
50:58
and have access to a person who
50:59
can quickly consider and remedy
51:02
problems you encounter. I don't know if that means
51:04
a helpline. I have no idea. But that
51:06
five point framework is accompanied
51:08
by a handbook called From Principles
51:10
to Practice, and it's guidance for
51:13
anyone who wants to incorporate those protections
51:15
into policy. So that's what the White House
51:17
has put out. They're like, y'all, we should
51:19
really like be cool and nice about
51:21
all this. And it's so sweet and I appreciate
51:24
it. My grandma had 11 children
51:27
and really just dozens of grandkids and she
51:29
still remembered
51:29
all of our birthdays and would send a letter
51:32
with one dollar in it. And that dollar meant
51:35
a lot, even if it didn't get you far
51:37
in the world. But I appreciated it in
51:39
the same way I appreciate that AI
51:42
Bill of Rights. It's very sweet.
51:44
Don't know what to do with that.
51:46
There's a lot of different people coming at the problem from a lot of
51:48
different perspectives. If anything,
51:51
there are so many voices at the table that it's
51:53
in many ways becoming noisy where
51:55
we're not necessarily like moving ahead in
51:57
a really constructive or productive way
51:59
because of replication of efforts. But that's
52:02
better than having too little activity at the table.
52:04
So yeah. I think that a lot
52:06
of us on the outside of it think there's a tumbleweed
52:08
blowing through
52:09
a boardroom, and nobody cares.
52:11
So it's really good to hear. No, I
52:13
will tell you that I just feel like there are conversations
52:15
happening in every corner you can imagine right
52:18
now. And I'd like to see those
52:20
conversations be turned into useful
52:22
and practical pathways forward,
52:25
like calling for governance if you're
52:27
a major tech company and saying,
52:30
these technologies that I'm creating create
52:32
existential risk for humanity. Please regulate
52:35
it. Or if you think that they present existential
52:37
risk for humanity, don't just
52:39
rush ahead. Come forward with
52:42
something positive rather than saying, my
52:44
job is just to create the technology. Your job is
52:46
to govern it. That's not the pathway forward
52:48
either. I have questions from listeners who
52:50
know you're coming on. Oh, great. Yeah, please.
52:53
But before we do, we'll donate to a relevant
52:55
cause. And this week, it's going to Human
52:57
Rights Watch, which is a group of experts, lawyers,
53:00
and journalists who investigate and
53:02
report on abuses happening in all corners
53:05
of the world. And then they direct their advocacy
53:07
toward governments, armed groups, and
53:09
businesses. And you can find out more at hrw.org.
53:13
And we will link that in the show notes. And thanks to
53:15
sponsors of the show who make that donation possible. Listen,
53:18
there's a lot of reasons to learn a new language.
53:21
Perhaps it makes you more competitive as a job
53:23
applicant. Maybe you're flirting with a stranger
53:25
you otherwise wouldn't have a chance with. Rosetta
53:28
Stone, they're there for you. They're available on desktop.
53:30
They can also be used as an app on your phone.
53:33
And you can choose from 25 languages, including
53:36
Spanish, and French, and Dutch, and Arabic. And
53:38
instead of memorizing and drilling a bunch of vocab
53:40
words, honk shoo, Rosetta Stone
53:43
teaches through immersion, reading stories,
53:45
and participating in dialogues. And it also
53:48
keeps it really interesting.
53:48
I use Rosetta Stone. It's been used by
53:50
millions. And it's been around for 30 years
53:53
because it works. I'm learning to speak Spanish. And
53:55
my neighbor Pablo says that my accent is
53:57
muy bueno. They also have a true accent feature, so
54:00
you get feedback on how well you're pronouncing words in case
54:02
you don't have Pablo to launch a compliment at
54:04
you. You can find lessons as short as 10 minutes
54:06
and you can start thinking in more languages.
54:09
How exciting is that? And I know you might be saying,
54:11
which language would I even choose? Well, start
54:13
learning that new language right now for a very
54:16
limited time. All of these listeners can get Rosetta
54:18
Stone's lifetime membership for 40% off.
54:21
So that's $179 for unlimited access to 25
54:26
language courses for the rest of your
54:28
life. Redeem your 40%
54:29
off at rosettastone.com
54:32
slash ologies today. That's rosettastone.com
54:36
slash ologies. Go for it.
54:38
Summer's here and no one expects you to remember
54:40
everything. Listen, maybe you're hosting a summer
54:42
cookout and then you realize, no marshmallows?
54:45
Do you cancel the party? No, use Instacart.
54:48
Maybe you're out of sunscreen. You can have that stuff
54:50
delivered in as fast as an hour. So with Instacart,
54:52
you can get everything you need for summer in
54:55
one app and you can check out with PayPal using
54:57
one login. So easy. Instacart
54:59
helps deliver the order in as fast
55:01
as an hour, which is giving time back to
55:04
enjoy summer. Hassle-free. And even
55:06
if you're hosting or you need large orders, it's easy,
55:08
you can just stock up and you can save on 1200 retailer
55:10
brands, 75,000 locations
55:13
across the country. I just used Instacart this
55:16
week. Not because I was hosting a party or having
55:18
fun, but I had pneumonia. And guess what? I
55:20
wasn't getting out of bed to go get soup. Nope, Instacart's
55:22
like, here you go. And I was like, ding dong, thank you. Instacart,
55:25
add summer to cart. Visit instacart.com
55:29
to get free delivery on your first three
55:31
orders. New Instacart customers also
55:33
get $25 off your order of $75 with
55:36
code PayPalNew25
55:38
when using PayPal through September
55:40
30th, 2023. Additional terms
55:42
apply. Have a summer, get your stuff.
55:46
This show is sponsored by BetterHelp. Sometimes
55:49
the most exciting time of our life is
55:51
when we're making a big decision, even though
55:53
it can be really scary, whether you're thinking about a career
55:55
change or feeling like your relationship
55:57
needs some TLC or you're going through grief
55:59
or a crossroads, whatever it is, therapy
56:02
can help you map out the future and help
56:04
you understand why you're gravitating
56:06
towards certain decisions. So if you're thinking
56:08
of starting therapy, give BetterHelp a try. It's
56:10
entirely online, it's designed to be convenient,
56:13
flexible, and suited to your schedule. You
56:15
just fill out a brief questionnaire and they'll
56:17
match you with a licensed therapist. It's
56:19
quick, it's easy, if for any reason you're
56:22
not vibing, you can switch therapists anytime
56:24
for no additional charge, no drama. Everyone
56:27
needs a little help sometimes, and those big
56:29
exciting moments in life will change
56:31
where you go in the future. So let therapy be
56:34
your map with BetterHelp. Visit betterhelp.com
56:36
slash ologies today and get 10% off
56:39
your first month. That's betterhelp, H-E-L-P
56:42
dot com slash ologies.
56:44
Here's the deal, when it comes to money, sometimes you're like, I'm
56:46
just not going to think about it. And then other times you're like, I would like
56:49
someone else to think about it for me. Hence,
56:51
rocket money. If you don't know exactly how much you're
56:53
spending every month, you need rocket money. Do
56:56
you know how much your subscriptions really cost?
56:58
Most Americans think they spend around 80 bucks
57:00
a month on subscriptions. The total is
57:02
actually closer to 200. 80% of people
57:05
have subscriptions that they completely forgot about.
57:07
Chances are you're one of them. Rocket money
57:10
quickly and easily finds your
57:12
subscriptions for you. And then for any
57:14
you don't want to pay for anymore, you just hit cancel. Boom,
57:17
rocket money, gone. It's that
57:19
easy. Rocket money also helps you manage
57:21
all your finances in one place, and
57:23
it automatically categorizes your expenses
57:26
so you can track your budget in real time.
57:28
Super helpful. And it'll alert you if
57:31
anything looks off. Maybe I just bought a sweater
57:33
that cost $80 and it has a worm
57:35
on it. That was all me. Over 3 million
57:38
people have used rocket money, saving
57:40
the average person up to $720 a year. That's
57:42
a lot of worm sweaters. So that's rocketmoney.com
57:47
slash ologies. Rocketmoney.com
57:50
slash ologies. You can do this. Well, they
57:52
can do it for you.
57:54
Okay, on to questions written
57:56
by actual human listeners made of
57:58
meat and water. Let's start
58:00
with something optimistic. A ton of people,
58:03
Lena Brodsky, Nina Yvesy, Chris Blackthorn,
58:06
Meg C, Alexandra Kautoule, Adam
58:08
Silk, Katie McAfee, Madison
58:10
Piper, and Will Mac, want to know,
58:13
can we use AI for good? Ry
58:16
of the Tiger wants to know, what will AI's role
58:18
look like in the fight against climate change,
58:21
for example, or should we be using
58:23
AI for the toils like meal
58:25
planning and trip planning and things like that?
58:27
Yeah, so I think we can absolutely
58:30
use AI for good. And first,
58:32
I would say a friend of mine, Orly Lobel, wrote
58:35
a book recently called The Equality
58:37
Machine. And it's all about using
58:40
AI to achieve better
58:42
equality in society, and gives kind of example
58:44
after example of both how it could be done, and
58:47
how it is being done in some context. I
58:49
think recognizing that there is this
58:52
terrifying narrative about AI, but that actually
58:54
AI is already making our lives better,
58:57
in many, many ways, is an important thing
58:59
to look at. And that
59:02
we can put it to solving some of the biggest problems
59:04
in
59:05
society, right from climate
59:07
change and trying to generate novel
59:10
ideas to testing, and
59:12
identifying, and this is already happening, novel
59:15
compounds that could be used to solve
59:17
some of the worst diseases, to being
59:20
used to identify the causes
59:22
of different diseases, to
59:24
identifying better patterns that help
59:26
us to address everything from neurological
59:29
disease and suffering to the existential
59:32
threats to humanity like climate change. So I absolutely
59:34
think it can be used for good. It
59:36
is being used for good. It could be used for
59:39
more good. We have to better align
59:41
the tech companies with the
59:44
overall ways of human
59:46
flourishing. Right, I mean, if you were to
59:49
use AI to improve brain
59:51
health, instead of to addict and diminish
59:53
brains, that would be phenomenal.
59:56
And it could be used to do that. It can be used
59:58
for mental health treatment and for solving neurological disease
1:00:00
and suffering, or it can be used to addict
1:00:02
people and keep them stuck on technology. We
1:00:05
need to figure out a way to align the incentives
1:00:07
of tech companies with these
1:00:09
ideas of AI for good. It'll
1:00:11
be so interesting to see if
1:00:14
they are getting a lot of feedback
1:00:16
from our brains, any
1:00:18
mental health challenges or speaking
1:00:21
as someone who has anxiety and
1:00:23
is neurodivergent. Hello, hi. It's
1:00:26
like ADHD, autism, those have been so
1:00:28
overlooked in some populations.
1:00:31
It would be interesting to see people
1:00:34
getting a better understanding of their own brains that maybe
1:00:36
medicine has overlooked because of demographics
1:00:39
for a long time. Yeah. I have a TED Talk that
1:00:41
just came out, and the first half
1:00:43
of it actually focuses on all of the
1:00:46
positive ways that neurotechnology can be used
1:00:48
and all of the hope that it offers. Stress
1:00:51
tracking our everyday brain activity could
1:00:54
help us better understand what stresses us out.
1:00:57
It could detect the earliest stages of glioblastoma,
1:00:59
the worst and most aggressive form of
1:01:02
brain cancer, the earliest stages of
1:01:04
Parkinson's and Alzheimer's disease, better
1:01:06
solutions for ADHD and trauma,
1:01:10
everything from understanding the impact
1:01:12
of technology on our brains to understanding
1:01:15
the impact of having that glass of wine or that cup of coffee
1:01:17
on the brain and how it reacts to it. Gaining
1:01:21
insight into our own brain activity could
1:01:23
be the key to unlocking much
1:01:26
better mental health and well-being.
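For anyone who wants to picture what "tracking our everyday brain activity" could even look like in code, here is a tiny, made-up sketch: it scores one channel of EEG with a beta-to-alpha band-power ratio, a rough heuristic sometimes marketed as a stress proxy. Nothing here comes from the episode, Nita's work, or any real product; the sampling rate, bands, and random "signal" are all illustrative assumptions.

```python
# Illustrative sketch only: a toy "stress proxy" from one EEG channel using a
# beta/alpha band-power ratio. Not clinically validated, not from the episode.
import numpy as np
from scipy.signal import welch

FS = 256  # assumed sampling rate in Hz

def band_power(freqs, psd, lo, hi):
    """Sum the spectral density between lo and hi Hz (crude band power)."""
    mask = (freqs >= lo) & (freqs < hi)
    return np.sum(psd[mask])

def stress_proxy(eeg_window):
    """Return beta power divided by alpha power for one window of samples."""
    freqs, psd = welch(eeg_window, fs=FS, nperseg=FS * 2)
    alpha = band_power(freqs, psd, 8, 12)   # alpha band, roughly 8-12 Hz
    beta = band_power(freqs, psd, 13, 30)   # beta band, roughly 13-30 Hz
    return beta / alpha

# Fake 10 seconds of "EEG" so the sketch runs end to end.
window = np.random.randn(FS * 10)
print(f"toy stress proxy: {stress_proxy(window):.2f}")
```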
1:01:29
I think if it's put in the hands of individuals
1:01:32
and used to empower them, that
1:01:34
will be tremendous and phenomenal. As long
1:01:37
as we don't overshadow those benefits or
1:01:39
outweigh those benefits with the dystopian misuses
1:01:42
of the technology, which are
1:01:44
very real and very possible, right, of
1:01:46
using it in the same way that companies
1:01:48
are using all kinds of algorithms
1:01:51
to predict our purchasing behavior
1:01:53
or to nudge us to do things like watch
1:01:56
the tenth episode in a row of
1:01:58
a show rather than
1:01:59
you know, breaking free and getting some
1:02:02
sleep, which is important for brain health. If
1:02:04
the companies don't use brain data
1:02:06
to
1:02:07
commodify it, to
1:02:09
inform a more Orwellian
1:02:12
workplace ('get back to work'). If
1:02:14
governments don't use it to try to surveil
1:02:16
brains and to intrude on freedom of thought,
1:02:19
but instead it's used by individuals
1:02:21
to have greater power over their own
1:02:23
health and well-being and their own brains, it
1:02:26
will be tremendous. You just have to really
1:02:28
worry about those misuses and how we safeguard
1:02:30
against them. So the day before this interview,
1:02:32
a TED talk featuring Nita went
1:02:35
live and in it she discusses
1:02:37
the loss of her daughter and the
1:02:39
grief that overwhelmed her. And she tells
1:02:41
of how using biofeedback to
1:02:44
understand her own sorrow
1:02:46
and trauma from the experience helped
1:02:49
her so much, but also how individuals'
1:02:51
brain
1:02:51
data should be protected. And
1:02:53
this wrenching personal story that she tells,
1:02:56
plus her long background in ethics
1:02:58
and science and philosophy make her uniquely
1:03:00
suited to see this issue from a lot of angles.
1:03:03
And a lot of patrons had questions about
1:03:05
surveillance and brain data and even
1:03:08
neural hardware, including Katie
1:03:10
McAfee, Ryan Marlowe, and Sandy Green,
1:03:12
who asked about things like
1:03:14
medical devices like brain implants
1:03:16
being used for surveilling or for
1:03:19
commerce. I was curious, and so are some
1:03:21
listeners. To use
1:03:21
PAVCA34, Aminek, David,
1:03:24
and Alex Ertmann's words: if
1:03:26
we were to implant chips into human brains, what
1:03:28
would they most likely be capable of? Would
1:03:31
they be more in the realm of modulating real inputs
1:03:34
or would they be capable of generating
1:03:37
new thoughts? Alex says it seems
1:03:39
far-fetched, but also the truth can be stranger than fiction.
1:03:41
So is that a really big leap
1:03:44
philosophically and legally and
1:03:46
technologically? I think it might be easier
1:03:48
to interrupt thoughts than to create new thoughts.
1:03:51
However, I guess philosophically that
1:03:53
is creating new thoughts if you're interrupting thoughts, right,
1:03:55
because you're letting other thoughts happen. But implanted
1:03:57
neurotechnology right
1:03:59
now is...
1:03:59
very limited. It's very difficult
1:04:02
to get neurotechnology into
1:04:04
people's brains. And there are 40 people who
1:04:07
are part of clinical trials that
1:04:09
have implanted neurotechnology right now. It's a
1:04:11
tiny number of people. If Neuralink,
1:04:14
you know, and Elon Musk has his way, there will be far
1:04:16
more people who are part of that. But
1:04:19
implanted neurotechnology is limited. What
1:04:21
it primarily is being used to do
1:04:23
is to get signals out of the brain. That
1:04:26
is to listen to intention to
1:04:28
move or to form speech
1:04:29
and to translate that
1:04:32
in ways that then can be used to operate other
1:04:34
technology. If you're like, what is Neuralink
1:04:36
again? It sounds like a commuter
1:04:38
train, but this is actually a side hustle of
1:04:41
Twitter owner and Tesla
1:04:43
guy and tunnel maker Elon
1:04:45
Musk. And he described this
1:04:48
cosmetically undetectable coin
1:04:50
sized brain accessory as a wireless
1:04:53
implanted chip that would enable
1:04:55
someone who is quadriplegic
1:04:58
or tetraplegic to control
1:04:59
a computer or mouse
1:05:02
or their phone or really any device just by
1:05:04
thinking. And he likened it to a Fitbit
1:05:06
in your skull with tiny wires that
1:05:09
go to your brain. So a
1:05:11
robot surgeon, also invented by Neuralink,
1:05:14
sews 64 threads with over
1:05:16
a thousand electrodes into
1:05:19
the brain matter, which allows the
1:05:21
recipient to control devices
1:05:23
or robotic arms or screens using
1:05:26
telepathic typing, which sounds pretty cool.
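To make "getting signals out of the brain" a little more concrete, here is a toy sketch of the general decoding idea: fit a linear map from made-up electrode firing rates to intended cursor velocity, then decode a new sample. This is not Neuralink's algorithm or anyone else's; the 64 channels simply echo the thread count mentioned above, and every number is invented for illustration.

```python
# Toy illustration of decoding movement intention from neural activity.
# Real BCI decoders are far more sophisticated; this is only the concept.
import numpy as np

rng = np.random.default_rng(0)
n_samples, n_channels = 500, 64  # pretend 64 electrode channels

true_map = rng.normal(size=(n_channels, 2))  # hidden channels-to-velocity map
rates = rng.poisson(lam=5.0, size=(n_samples, n_channels)).astype(float)
velocity = rates @ true_map + rng.normal(scale=0.5, size=(n_samples, 2))

# Least-squares "training": estimate the map from recorded rates and intended movement.
decoder, *_ = np.linalg.lstsq(rates, velocity, rcond=None)

new_rates = rng.poisson(lam=5.0, size=(1, n_channels)).astype(float)
vx, vy = (new_rates @ decoder)[0]
print(f"decoded cursor velocity: ({vx:.2f}, {vy:.2f})")
```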
1:05:28
In early 2022, it
1:05:29
came to light that roughly 1500 animals
1:05:33
had been killed in the testing process since 2018.
1:05:36
Some from human errors like incorrect
1:05:38
placement on pig spines or
1:05:41
wrong surgical glue used
1:05:43
in primate test subjects. And some former
1:05:45
employees reported that the work there was
1:05:47
often rushed and that the vibe was
1:05:49
just high key stressful.
1:05:52
But nevertheless, Neuralink announced
1:05:54
just a few months ago that they got the green
1:05:56
light from the FDA to launch their human
1:06:00
trials. So if you're like, hey, I am
1:06:02
always losing the TV remote, so wire
1:06:04
me up, Musk. Please cool your jets, because
1:06:06
they added that recruitment is not yet open
1:06:09
for their first
1:06:09
clinical trial. More on that as it develops.
1:06:12
But I guess when I said that we could become bubbles
1:06:14
of chimp, that was really on the optimistic
1:06:16
side of things. What is possible, though,
1:06:18
and this is one of the things I talk about in my
1:06:20
TED Talk, is it's possible to use
1:06:23
neurostimulation in the brain. So I
1:06:25
describe, for example, the case of Sarah,
1:06:27
where she had intractable depression,
1:06:31
and through the use of implanted electrodes,
1:06:34
was able to reset her brain activity.
1:06:36
This, side note, was conducted at the University
1:06:38
of California at San Francisco, where neuroscientists
1:06:41
implanted what's called a BCI, or
1:06:43
brain computer interface, which was initially
1:06:45
developed for epilepsy patients into
1:06:48
someone with treatment-resistant depression. And
1:06:50
one surgeon on the team said, when we turned
1:06:53
this treatment on, our patient's depression symptoms
1:06:55
dissolved, and in a remarkably short
1:06:58
time she went into remission. And
1:07:00
the patient, Sarah, reported laughing
1:07:02
and having a joyous feeling wash over
1:07:04
her that lasted at least a year after
1:07:06
this implantation. So the specific
1:07:09
pattern of neural activity
1:07:11
that was happening when she was the most symptomatic
1:07:13
was traced using the implanted technology.
1:07:16
And then, like a pacemaker for the brain, those
1:07:18
signals were interrupted and reset
1:07:21
each time she was experiencing them. That
1:07:24
doesn't create a new thought. What
1:07:26
it does is interrupt an existing thought.
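If it helps to see that "pacemaker for the brain" loop spelled out, here is a deliberately tiny sketch of a closed-loop system: compute a biomarker on each window of signal and fire a simulated stimulation pulse when it crosses a threshold. The real UCSF system is far more involved; the biomarker, threshold, and numbers below are all made up for illustration.

```python
# Simplified closed-loop sketch: detect a "symptomatic" pattern, then stimulate.
# Every value here is invented; this is not the clinical implementation.
import numpy as np

rng = np.random.default_rng(1)
THRESHOLD = 2.0  # arbitrary biomarker threshold for this toy example

def biomarker(window):
    """Stand-in for a patient-specific symptomatic pattern: RMS amplitude."""
    return np.sqrt(np.mean(window ** 2))

def stimulate():
    print("stimulation pulse delivered (simulated)")

for second in range(5):
    window = rng.normal(scale=1.0 + second * 0.5, size=250)  # signal drifts upward
    score = biomarker(window)
    print(f"t={second}s biomarker={score:.2f}")
    if score > THRESHOLD:  # symptomatic pattern detected, so interrupt/reset
        stimulate()
```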
1:07:28
But philosophically, you could say that creates a new thought. It
1:07:30
creates, for her, an experience of being
1:07:32
able to have a more typical range of emotions. I
1:07:35
think specific thoughts would
1:07:38
be very hard to encode into the
1:07:40
brain. I won't say
1:07:41
never. So brain hacking
1:07:43
and hacking into your brain may
1:07:46
radically change the way that we think
1:07:48
and feel
1:07:49
if we don't blow up the planet first, which
1:07:51
is not an intelligent thing to do. Speaking
1:07:54
of intelligence, many patrons wanted to know
1:07:56
what is in a name. Alexis Will-Clark,
1:07:59
Zomba,
1:07:59
who proposed the term OI
1:08:02
or organic intelligence for human thinking
1:08:05
and history buff Connie Brooks, they
1:08:07
all had questions about
1:08:09
AI and the term AI. Is
1:08:12
it intelligent? Is it artificial?
1:08:15
Are they ever going to do a rebrand on
1:08:17
that? Does it give people the wrong idea of
1:08:19
what it is? Yeah. So I mean, a lot of
1:08:21
the technologists out there are computer
1:08:24
scientists saying, this isn't artificial
1:08:26
intelligence because that assumes that there's intelligence.
1:08:28
These aren't intelligent.
1:08:30
They are task specific algorithms
1:08:32
that are designed to do particular things. And
1:08:35
that if we ever get to the point where you start to see
1:08:37
more generalized intelligence, then
1:08:40
that's the point at which it makes more sense to talk about
1:08:42
artificial intelligence. But not everyone
1:08:44
is so casual
1:08:45
about that assessment.
1:08:47
Interestingly, Eric Horvitz, who is
1:08:49
the chief scientific officer at Microsoft,
1:08:52
which has partnered with OpenAI for Chat
1:08:55
GPT, just published an
1:08:57
essay in this AI Anthology series.
1:08:59
And he talks about how his
1:09:02
experience with GPT-4 was to see a lot of threads of
1:09:07
intelligence, of what we think of as intelligence.
1:09:10
And you see increasingly a lot of
1:09:13
examples of reasoning more
1:09:15
like humans. I think one of the examples I've
1:09:17
seen out there is giving GPT-4
1:09:19
a question of like, okay,
1:09:22
you have some eggs, a laptop,
1:09:24
it's like five or six items, how would you stack
1:09:27
them? And then comes
1:09:29
out and explains how you would stack them. And like you would
1:09:31
put the book on the bottom, and then you
1:09:34
would put a set of eggs that were spread out so
1:09:36
that they could be stable. And then you would put the
1:09:38
laptop in a particular configuration and
1:09:40
blah, blah, blah. And that
1:09:42
kind of reasoning seems
1:09:45
more like human intelligence
1:09:47
than like an algorithm. And those are
1:09:49
really interesting to think about. Like what is intelligence
1:09:52
is really the fundamental question, I think when somebody
1:09:54
is saying, is it really artificial intelligence? It
1:09:57
is to have a particular perspective on what intelligence
1:09:59
is
1:09:59
and means, and then to say,
1:10:02
well, that isn't intelligence. Or
1:10:04
if a
1:10:06
generative AI model says it's happy,
1:10:08
that it can't really be because that's
1:10:10
not an authentic emotion because it's never
1:10:13
experienced the world and it doesn't have sensory
1:10:15
input and sensory output. Or
1:10:17
if a generative AI model
1:10:20
says, here's what the ratings of wine are
1:10:22
and what an excellent wine is, it can't possibly
1:10:24
know because it's never tasted wine. And
1:10:27
then there's a question of, is that kind of intelligence
1:10:29
what you need, which is experiential
1:10:32
knowledge and not just knowledge built
1:10:34
on knowledge. There are some forms
1:10:37
of intelligence, like emotional intelligence, which you might
1:10:39
think really requires experiencing
1:10:41
the world to authentically have that kind of intelligence.
1:10:45
I don't know shit about wine, and sometimes
1:10:47
I'm bad at my own emotions. Oh, well, we
1:10:49
can learn. Speaking of learning, many
1:10:51
patrons who are students had thoughts
1:10:54
and questions like Handy Dandy Mr.
1:10:56
Mandy, Natalie Jones, Josie Chase,
1:10:58
and Slayer, as well as educators,
1:11:00
including Nina Bratzke, Julie Vollmer,
1:11:02
Leah Anderson, Jenna Cong-Ben Theodorovician,
1:11:05
Hudson Ansley, and Nina Yvesi. There
1:11:08
were several teachers who wrote in with
1:11:10
questions. Katie Bauer says, I'm a middle school
1:11:12
teacher and I just started having students
1:11:14
use AI
1:11:15
tools to write essays for them. Help,
1:11:17
talk me down. How do we embrace
1:11:19
new tech but also teach students how to navigate
1:11:21
this new landscape with solid ethics
1:11:24
and an understanding of the need to develop skills
1:11:26
that don't revolve around AI technology? And
1:11:29
Liz Park, first time question asker, they're a teacher
1:11:31
and they feel that teaching along with a lot of other jobs
1:11:34
just can't be handed off to AI and expected
1:11:36
to have the same impact because machines, no matter
1:11:38
how advanced, won't be able to individualize
1:11:41
education and provide warmth and et cetera.
1:11:43
Well,
1:11:43
you know, it's funny because I hear almost
1:11:45
the same question in both, right? What
1:11:48
is the role of education and human
1:11:50
to human education in a world
1:11:52
of generative AI? And I
1:11:55
think that's a great question to be asking. And I would
1:11:57
say first, I'm so glad
1:11:59
that they were giving their students the assignment
1:12:01
of working with Chat GPT and trying
1:12:03
to understand it because I think there
1:12:06
are skills that you can't
1:12:08
learn from generative AI and if you don't learn
1:12:10
them we will not be able to interact
1:12:13
well with them and use them well. And these are critical
1:12:15
thinking skills and if the same old
1:12:17
assignments are how we're trying to teach students
1:12:20
then yeah students are just going to go
1:12:22
to Chat GPT and say here's the
1:12:25
book generate a
1:12:27
thesis statement for me and write my essay
1:12:30
right but they will have lost out
1:12:32
on the ability to generate a thesis statement
1:12:34
and what that critical thinking skill is and lost
1:12:37
out on the ability to build an argument in how you
1:12:39
do so lost out on the ability
1:12:41
to write and understand what good writing is
1:12:44
and they won't be able to interrogate the systems
1:12:46
well because they won't have any of the skills necessary
1:12:49
to be able to tell fact from fiction and what is good
1:12:51
writing or anything else. So then the question
1:12:53
is what do you do, and
1:12:55
it's that
1:12:56
teachers and higher education and K
1:12:59
through 12 education need to be thinking
1:13:01
about okay what are the fundamental
1:13:03
skills of reasoning and
1:13:05
critical thinking and empathy
1:13:08
and emotional intelligence and
1:13:10
mental agility that we
1:13:12
think are essential and that we have been
1:13:14
teaching all along but we've been teaching by
1:13:18
tasks that now can be outsourced and
1:13:20
then how do we shift our teaching
1:13:22
to be able to teach those skills
1:13:25
and you know if you go back
1:13:26
to like the Socratic dialogues
1:13:29
there's an art to asking the question to seeking
1:13:32
truth and there is an art to asking the question
1:13:34
of generative AI models in
1:13:36
seeking the truth or in seeking good outputs
1:13:38
and we have to be teaching those skills if we
1:13:40
want to move ahead. I wasn't sure
1:13:43
what the Socratic method of questioning was
1:13:46
so I asked the literature via
1:13:48
computer and I found that it involves
1:13:50
a series of focused yet open questions
1:13:53
meant to unravel thoughts as you go
1:13:55
and according to one
1:13:56
article instead of a wise person
1:13:59
lecturing,
1:13:59
the teacher acts as though ignorant of
1:14:02
the subject. And one quote
1:14:04
attributed to Socrates reads, the highest
1:14:06
form of human excellence is to question
1:14:09
oneself and others. So don't
1:14:11
trust my wine recommendations, but do cut
1:14:14
bangs if you want. Text a crush, ask
1:14:17
a smart person a not-smart
1:14:19
question, because worms are
1:14:21
going to eat us all one day. But yeah,
1:14:23
the point of education isn't to
1:14:26
get a good grade, but to develop skills
1:14:28
that in the future are going to
1:14:29
get you out of a jam. So many
1:14:32
jams.
1:14:33
And I think your other person
1:14:35
talking about how they can never replace human
1:14:37
empathy, that's right. But
1:14:40
don't be blind to the fact
1:14:42
that they can make very powerful personal
1:14:44
tutors as well. And they may not
1:14:46
be able to tell when a student is struggling or
1:14:48
when they need emotional support or when they may
1:14:51
be experiencing abuse at home and need
1:14:54
the support of the school to be able to intervene,
1:14:56
for example. But they can
1:14:58
go beyond what a teacher
1:15:00
can. A teacher doesn't have the capability
1:15:02
to sit down with every student for hours
1:15:05
and help them work through 10 different
1:15:07
ways of explaining the same issue
1:15:09
to somebody. And so you help them
1:15:11
learn how to ask the questions. And
1:15:13
then they could spend all night long saying, okay, well, I didn't understand
1:15:16
that explanation. Can you try explaining it to me a different
1:15:18
way? Can you try explaining it to me as if you
1:15:20
were telling my grandmother, I don't understand what that word
1:15:22
means. There's no teacher on earth
1:15:24
who has either the patience for that or
1:15:27
the time or is paid well enough
1:15:29
to do that for every student. And so I
1:15:31
think it can be an extraordinary equalizer,
1:15:34
you know, right now, like wealthier parents
1:15:37
are able to give private tutors to their kids.
1:15:39
Okay, now you can have a generative AI model
1:15:42
serve as a private tutor that can be customized
1:15:44
to every student based on how they learn. However,
1:15:47
that doesn't mean we don't need teachers to be
1:15:49
able to be empathetic and to help students
1:15:51
learn how to engage with the models
1:15:54
and learn critical thinking skills or to create
1:15:56
a social environment to help develop their emotional
1:15:58
intelligence and their digital
1:15:59
intelligence.
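Here is a minimal sketch of that "explain it to me a different way" tutoring loop Nita describes. The ask_model function is a hypothetical placeholder, stubbed out so the example runs on its own; in practice you would swap in whatever chat model you actually have access to.

```python
# Minimal tutoring-loop sketch. ask_model() is a hypothetical stand-in for a
# real chat model; it returns a canned string so this runs with no setup.
def ask_model(prompt: str) -> str:
    return f"[model explanation for: {prompt!r}]"  # placeholder response

def tutor(concept: str, follow_ups: list[str]) -> None:
    """Ask for an explanation, then keep re-asking in different styles."""
    print(ask_model(f"Explain {concept} to a middle schooler."))
    for style in follow_ups:
        # Each follow-up reframes the same concept, mimicking a patient tutor.
        print(ask_model(f"I didn't understand. Explain {concept} {style}."))

tutor(
    "photosynthesis",
    [
        "a different way",
        "as if you were telling my grandmother",
        "using a sports analogy",
    ],
)
```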
1:16:01
But it does mean that there is
1:16:03
this additional tool that could actually be incredibly
1:16:05
beneficial and can augment how we're
1:16:07
teaching. Okay, but outside the classroom
1:16:10
and into your screens, folks had questions
1:16:12
including Michael Heiker, Kevin Glover,
1:16:14
Andrea Devlin, Jenna Congdon, credit
1:16:16
state of mind, Chris Blackthorn, RJ Dorridge,
1:16:19
and... One big question a lot of listeners
1:16:22
had is
1:16:23
Rebecca Newport says, what's your favorite or
1:16:25
least favorite portrayal of AI in media?
1:16:28
Chris Whitman wants to know, what is your favorite
1:16:30
AI storyline based movie and why is it
1:16:32
Ex Machina? Someone else said, Mrs. Davis, should
1:16:35
we turn off Mrs. Davis if we could? How do
1:16:38
we prevent Terminator 2? Whether or not
1:16:40
you watch Black Mirror, anything
1:16:43
that you feel like pop culturally
1:16:46
written by humans that you've loved
1:16:48
or hated? I love Minority Report.
1:16:50
It's an oldie but goodie, but it really
1:16:52
informs a lot
1:16:53
of my work and I think
1:16:55
it's great. I'm placing you under
1:16:57
arrest for the future murder of Sarah Marks. Give the man his hat.
1:17:01
The future can be seen.
1:17:03
I think that some
1:17:06
of the modern shows
1:17:08
that I like, like Severance, Altered
1:17:11
Carbon, I thought was a great series, Black
1:17:14
Mirror, Yes. You know, all of those
1:17:16
I think are terrific and creepy.
1:17:18
I appreciate those stories and really
1:17:20
raising consciousness about some of the existential threats
1:17:23
but I would like to see stories that
1:17:26
give us a more balanced perspective sometimes. I
1:17:28
guess that doesn't make for good film but you know, look,
1:17:31
the fears of like we don't fully
1:17:33
understand consciousness,
1:17:33
let alone how emergent
1:17:37
properties of the human brain happen,
1:17:40
let alone how emergent properties could happen
1:17:42
in an incredibly intelligent system
1:17:45
that we are creating. I
1:17:47
share those fears. Like I don't
1:17:49
know where all of this is going and I worry about
1:17:51
it. I also
1:17:53
don't think anybody has an answer about how to safeguard
1:17:56
against those existential threats
1:17:58
and we should be doing things better
1:17:59
to try to identify
1:18:02
them and to identify the points and identify
1:18:05
what the solutions would be if we actually start
1:18:07
to see those emergent properties and those emergent properties
1:18:09
are threatening. Like we need monitoring systems.
1:18:12
We also in the meantime need to
1:18:14
be looking at the good and figuring out
1:18:16
how to better distribute the good, how
1:18:18
to better educate people, how to change our education
1:18:21
systems to catch up with it, how to
1:18:23
recognize that the right answer for the writers' strike
1:18:26
isn't to outsource it to chat GPT and there's something
1:18:28
uniquely human about the
1:18:29
writing of stories and
1:18:32
the sharing of stories and the creation of art
1:18:34
and that that's part of the beauty of what it means
1:18:36
to be human. And so those
1:18:38
conversations about the role in
1:18:41
our lives and how to put it to uses
1:18:43
that are good and still preserve human
1:18:46
flourishing, like that I feel like is what we need to be doing
1:18:49
in the meantime, right before it actually torches
1:18:52
us all. That is great advice. And
1:18:54
the last question I always ask is always like, what's
1:18:56
the worst part about your job? A lot of people say it
1:18:58
might
1:18:59
be jet lag, meetings, emails, but
1:19:01
I will outsource that to the patrons who wanted to
1:19:03
know, are we fucked? So many wanted
1:19:05
to know, are we fucked? So what is the most
1:19:08
fucked thing about what
1:19:10
you do or learn? So
1:19:12
I mean, we're fucked if we let
1:19:14
ourselves be and I
1:19:18
fear that we will, right? I mean, so
1:19:21
I can tell people until I turn blue
1:19:23
in the face about the potential
1:19:25
promise of AI and certainly the
1:19:27
promise of neurotechnology, if
1:19:29
we put it to good use and if we safeguard
1:19:31
against the Orwellian misuses of it in
1:19:33
society. But like we seem
1:19:35
to always go there. We seem to always like
1:19:37
go to the Orwellian and do the worst thing
1:19:40
and put it to the worst applications and be driven
1:19:42
just by profit and not by human flourishing.
1:19:44
And so if we keep doing that, then yeah,
1:19:48
we're kind of fucked. And if
1:19:50
we actually like heed the wake up call
1:19:52
and do something about it, like put into
1:19:54
place not only a human
1:19:57
right to cognitive liberty, but also
1:19:59
the systems,
1:19:59
the governance, the practices,
1:20:02
the technologies that help cultivate it
1:20:04
in society. I mean, if we invest
1:20:07
in that,
1:20:07
we have a bright and happy future ahead.
1:20:10
If we don't, you know, it's not good. Yeah,
1:20:12
we're fine. We're talking. What
1:20:15
about, to be such a globally
1:20:17
recognized,
1:20:19
trusted voice on this,
1:20:21
obviously, I was so pumped to interview
1:20:23
you. Like, I came straight out of the gate
1:20:25
being like, I'm terrified of talking to you. What
1:20:29
is it about your work that gets
1:20:31
you excited? What kind of keeps you motivated?
1:20:34
I guess I'm also fascinated and terrified,
1:20:37
right? I mean, so like,
1:20:39
it's almost like the horror show where you can't
1:20:42
look away. And so I'm just motivated
1:20:45
to continue to look and to learn
1:20:47
and to research. And then I guess
1:20:49
at the end of the day, I am an eternal optimist.
1:20:52
Like, I just, I believe in humanity.
1:20:54
I believe we can actually find
1:20:56
a pathway forward. And that if
1:20:58
I just
1:20:59
try hard enough, right? If I
1:21:02
just like get
1:21:04
the message out there and work with enough other
1:21:06
incredibly thoughtful people who
1:21:08
care about humanity that
1:21:11
we will find a good pathway forward. So
1:21:13
I'm driven by the hope and the fascination.
1:21:16
I'm driven to continuously learn more. And
1:21:19
I'm just grateful that people
1:21:21
seem to respond. I'm encouraged
1:21:23
that in this moment, people
1:21:26
seem to really get it. They really seem to be
1:21:28
interested in working together collectively
1:21:31
to find a better pathway forward.
1:21:33
I feel like you walking into a room
1:21:35
or a conversation is like, have you ever seen a piece
1:21:37
of chicken thrown into piranhas? All
1:21:39
of us are just like, can you help me out? I
1:21:41
have a question, I have a question, I have a question. Like, I have
1:21:44
a question. The rest of us are like intellectual
1:21:46
piranhas being like, please, give me everything you know. Get it,
1:21:48
get it, get it, get it. And give me a hug while you're at it, thank
1:21:50
you. Well, that's
1:21:52
a good thing is I can give hugs too, right? And
1:21:55
so I'm also a mom at the end of the day.
1:21:57
I have two wonderful little girls
1:21:59
that I own,
1:21:59
who keep me grounded and
1:22:02
see the world full of curiosity and
1:22:04
kind of brilliance of all kinds
1:22:06
of possibility. And I want to help them
1:22:10
continue to see the world as this kind of
1:22:12
magical place. I want it to still be that place for them
1:22:14
as they grow
1:22:14
up. So ask actually
1:22:17
intelligent people some analog questions
1:22:19
because the one thing that we can agree on
1:22:22
is that there is some power in learning, whether
1:22:24
you're a person or a machine. And
1:22:27
now that you know some basics, you can keep up with some of the
1:22:29
headlines, but honestly, take news
1:22:31
breaks, go outside, smell
1:22:34
a tree, play pickleball or
1:22:36
something, or go read Nita's book. It's
1:22:39
called The Battle for Your Brain, Defending the Right to Think Freely
1:22:41
in the Age of Neurotechnology. We'll link that and
1:22:44
her social media handles in the show notes,
1:22:45
as well as so much more on our website at alieward.com
1:22:48
slash ologies slash neurotechnology.
1:22:51
Oh, also smologies are kid-friendly
1:22:53
and shorter episodes. Those are up at alieward.com
1:22:55
slash smologies, linked in the show notes. Thank
1:22:58
you, Zeke Rodriguez-Thomas and Jared Sleper of
1:23:00
Mindshare Media and Mercedes Maitland
1:23:02
of Maitland Audio for working on those. We are
1:23:05
at ologies on Instagram and Twitter, and I'm alieward
1:23:07
on both, just one L in Allie. Thank
1:23:09
you patrons at patreon.com for such
1:23:12
great questions. You can join for a
1:23:14
dollar a month if you like.
1:23:15
Ologies merch is for sale at reasonable
1:23:17
prices at ologiesmerch.com. Thank
1:23:19
you, Susan Hale for handling that among
1:23:22
all of her many responsibilities as managing
1:23:24
producer. Noel Dilworth schedules for
1:23:26
us. Aaron Talbert admins the Ologies Podcast
1:23:29
Facebook group, with assistance from Bonnie Dutch and Shannon Feltes.
1:23:31
Also happy birthday to my sister Celeste, who has
1:23:33
a great brain. Emily White of the Wordery
1:23:36
makes our professional transcripts and those are at
1:23:38
alieward.com slash ologies dash extras
1:23:41
for free. Kelly R. Dwyer does our website.
1:23:43
She can do yours too. Mark David Christensen
1:23:45
assistant
1:23:45
edited this, and lead editor
1:23:48
and alarmingly smart Mercedes Maitland of Maitland
1:23:50
Audio pulls it all together for us each
1:23:52
week. Nick Thorburn wrote and performed the
1:23:54
theme music. If you stick around until the end of the
1:23:56
episode, I tell you a secret and I'm going to treat
1:23:59
this space like a... a confessional booth, if you don't
1:24:01
mind. Okay, so once I ran into this guy
1:24:03
that I had dated who had dumped me and he was
1:24:05
with his lovely new girlfriend and
1:24:07
I pretended like I didn't hear his new
1:24:10
girlfriend's name right. I was like, what is it? As
1:24:12
if I hadn't been six
1:24:14
years deep
1:24:15
in her Facebook like the day they became official
1:24:18
and I still feel guilty about that. But I'm telling you that because
1:24:21
computers, wow, they've changed our lives and also
1:24:23
humans, we're so gooby and flawed.
1:24:26
But you know, everyone's code has bugs and
1:24:28
we just keep upgrading our software until
1:24:31
things work well enough. Okay, go
1:24:33
enjoy the outdoors if you can. Bye bye.