2:00
and migration from rural to urban areas.
2:03
Now, in the face of artificial intelligence,
2:05
the question many are grappling with is
2:07
whether machines will replace us. But
2:10
the more pressing issue may not be
2:12
that these systems are replacing human labor,
2:14
but rather concealing it. Take
2:18
the Mechanical Turk, built in 1770. It
2:21
was touted as an automaton that could
2:23
play chess, and it made the rounds
2:25
facing off against the likes of Benjamin
2:27
Franklin and Napoleon. But
2:30
the technology behind this machine was
2:32
all a charade. It
2:35
was presented to the public, and
2:37
it toured for decades, and it
2:39
delighted crowds because it looked like
2:42
it was a fully operating clockwork
2:45
automaton that could play
2:47
chess. Now, as many
2:49
of us now know, there was a
2:51
person inside who was hidden in a
2:53
secret compartment. This is tech
2:55
critic and writer Joanne McNeil. She
2:58
says the story of the Mechanical Turk over
3:00
two centuries ago is in some
3:02
ways also that of automation today. I
3:06
look around and I can't think of one instance
3:08
of AI where
3:10
you don't see human labor somewhere
3:12
alienated from the actual product or
3:14
the service. Like I
3:16
said, Joanne is a tech critic, but she
3:19
also recently came out with her debut novel.
3:21
It's called Wrong Way, and though it's fiction,
3:23
it has a lot to say about our
3:25
current moment of tech-fueled gig work. It
3:28
centers on a middle-aged woman named Teresa who's
3:30
stuck in a cycle of precarious work. She
3:33
takes a mysterious yet promising job at a
3:35
tech behemoth called AllOver.
3:38
I was really interested in someone who
3:41
had maybe what we might
3:43
consider a traditional working class job
3:45
history emerging in the
3:48
2020s into
3:50
the gig economy that Silicon
3:52
Valley has sort of turned
3:56
so many of these once maybe less...
10:00
and kind of like a little office
10:02
space and watching the role of the remote
10:04
operator, who is kind of taking
10:06
over for navigation. For
10:08
as few cars as Waymo actually has
10:10
on the road, it's a huge labor
10:12
force of remote operators. So
10:14
that was a very, very
10:16
interesting experience for me.
10:19
You are listening to
10:21
Spark. This is Spark.
10:23
This is Spark. From
10:25
CBC. I'm
10:38
Nora Young and today we're talking about Work
10:40
Part 6 in our occasional series Being Human
10:42
Now. Right now my guest
10:45
is Joanne McNeil. Joanne is a tech
10:47
critic and now novelist. Her debut novel
10:49
is called Wrong Way. It explores the
10:51
often hidden human cost of automation and
10:53
gig work. In
10:56
your novel, the CEO of AllOver, who
10:59
has the great name Falconer, he uses
11:01
this very progressive sort of techno-solutionist
11:03
language throughout the book that really
11:05
does feel a lot like the
11:07
manifestos written by Silicon
11:10
Valley giants. Their ethos
11:12
is something called holistic apex.
11:15
So tell me a bit about what you're doing
11:17
with this character. I wanted to show how little
11:20
commitment a Silicon Valley leader would
11:22
need to express in their statements.
11:26
And so I thought it
11:28
would be very funny to have a Silicon
11:30
Valley billionaire who's not just
11:33
claiming he's progressive, but
11:35
claiming he's anti-hierarchical, while
11:37
also giving his money away.
11:39
And it's funny because
11:41
when I have interacted with
11:43
people who are very deep
11:45
in the Silicon Valley mindset, I
11:48
always end up very frustrated. I can't have
11:50
a proper debate with them because they will
11:52
say anything to win. And so when I
11:54
was thinking about writing in his voice, it
11:58
was always the kind of statements that
12:00
are dead ends, that are very difficult
12:02
to argue with, like the rhetoric
12:04
that almost shuts the conversation down, where he
12:06
will say something like, well, antitrust
12:09
is pro-capitalist. Right. But
12:13
is there a power in kind of claiming
12:15
a vision of the future that you then
12:17
kind of own and can sell? Yes,
12:20
because a vision of the
12:22
future is different from the
12:24
vast unknown, and the unknown
12:26
frightens us, uncertainty frightens anyone.
12:29
And when you have these
12:32
kind of leaders who offer
12:34
roadmaps to somewhere,
12:38
that's something you at least can see.
12:40
And in fact, like the other problem
12:42
with that is you're arguing
12:44
on their terms, because if someone were
12:46
to say, you know, Mars
12:49
in five years, that's it, we're doing it,
12:51
we are going to have life on Mars
12:53
in the next five years. Then
12:55
you have
12:58
to argue with why maybe that's not
13:00
the best use of resources,
13:03
because otherwise, there is this unknown. So I
13:05
think it's a power of science fiction, too.
13:07
And it's a power that I'm always thinking
13:10
through when I write science fiction:
13:12
that once you imagine
13:15
something, it's a
13:17
possibility. It's no longer this
13:19
vague space. With
13:22
my writing, I hope it stands
13:24
out as fiction first. I mean, that is what I intended
13:27
to do. But I also am
13:29
thinking about the future. And one thing I
13:31
wanted to express in this book was that it's
13:34
set in the future, but it's set in a future
13:36
that feels very much like now. It's
13:39
set in the future at a moment
13:41
of decline where, in fact, the
13:44
technologies are not getting better. The
13:46
technologies we have today are kind of
13:49
crumbling. The infrastructure is crumbling.
13:53
And this, to me, is the
13:57
reality of the future: that
13:59
many progressive gains that we
14:01
might have seen in our
14:03
recent lifetime, like the acceptance of trans
14:06
people over the past decade,
14:08
are now, in recent
14:11
years, being clawed back
14:14
by policy, and
14:16
that we can't necessarily see
14:18
the future as constant wins,
14:20
constant life getting better, but
14:23
as just ongoing change,
14:26
including decline. You
14:30
started writing Wrong Way in 2018 and
14:32
it came out at the end of 2023. 2023 was sort of dubbed
14:36
as the year of artificial intelligence. Obviously a
14:38
lot changed in that five-year period while
14:41
you were writing it. Could you reflect a little bit
14:43
on the changes that we've seen in that time, especially
14:45
when it comes to AI? Yeah,
14:48
it's an example of
14:51
how swiftly technology can
14:53
be normalized. I want
14:55
to say it was winter of '22
14:58
that people were even hearing about OpenAI
15:00
for the first time or discovering
15:03
what something like Midjourney or
15:05
ChatGPT could do. There
15:08
was this very clear moment when
15:10
AI all of a sudden became
15:12
a very mainstream conversation. While people
15:14
might have followed its developments over
15:16
time, I don't think many
15:18
of my friends who don't follow technology would
15:20
have known who Sam Altman was. And then
15:23
all of a sudden, he's
15:25
everywhere. And a year
15:27
isn't enough time to discover
15:31
what is a reasonable
15:33
way to integrate this
15:35
technology into our lives
15:37
if we want to at all. And one
15:40
company is wealthy and powerful.
15:43
And the dissent, on the other hand,
15:45
is scrappy and without
15:47
resources. Something that does
15:49
give me hope is that the dissent
15:52
is broad. I think a lot of
15:54
people do have that visceral response to
15:56
LLMs that I described before, that it
15:59
feels like you're going through my
16:01
stuff. Like why do you need my life
16:04
to be crumpled up like that and
16:06
shot out? Is there some way we
16:08
could do this without that trade that
16:10
isn't a trade? It's just taking
16:12
it. Yeah. So
16:15
what would you like to see included in
16:17
the conversation about, you
16:19
know, automation, gig work, and
16:21
the human labor at the heart of some of
16:23
these automated systems that we use? What
16:26
are we not talking about that we need to
16:28
talk about? I'd like to
16:30
see more transparency. I mean, the fact
16:32
that they're kind of very hazy about
16:34
what the training data even is that's
16:36
powering various LLMs, in
16:39
the case of OpenAI. And
16:41
transparency about the workers themselves.
16:43
Why are the remote operators
16:45
so hidden from the public?
16:47
I mean, if they're
16:49
integral to the operation
16:51
of a Waymo vehicle, then
16:54
their role should be much more transparent
16:56
to the public. And we
16:58
see this again and again: what the
17:00
company is hiding from us is not
17:02
just the secret sauce, it's exploitation that
17:05
makes the technology possible. And if a
17:07
technology is based on human exploitation, then
17:09
can we safely agree
17:11
that it shouldn't exist? You've
17:15
written about content moderators and the conditions of
17:17
their work in particular in this regard. Yeah,
17:19
I always notice when those stories come up
17:21
because it gets to a personal place. I
17:23
think that like when I was in my
17:25
20s and trying to find a job, that's
17:27
the kind of job that I might have
17:29
landed on. And so when I
17:31
hear about these traumatized workers
17:34
who might have just thought that they
17:36
were stumbling on something to
17:38
do in the daytime and work
17:40
on music or comic books or
17:42
whatever their hobbies or passions
17:44
are at night, and
17:46
to have not just a terrible office
17:48
job, but an office job that exposes
17:51
you to horrors again
17:53
and again, and really
17:55
pushes you beyond that limit. It's
17:57
so confounding. And I feel known
44:00
as scientific management or
44:02
social engineering. When Frederick Taylor,
44:04
working with some of the
44:06
early assembly lines and factories
44:08
in the US and North
44:10
America more broadly, really started
44:13
to look at humans as
44:15
small components of a ginormous
44:17
machine or cockpit, everything
44:19
started to become a vehicle
44:21
or tool in the interest
44:23
of productivity. And in
44:25
the five or six decades that followed,
44:28
actually, there was a big movement towards
44:30
empowering employees, looking after workers' rights, and
44:32
with that came a lot of good
44:34
legislation and regulation. As
44:37
talent management started to become
44:39
really prominent in the 1990s,
44:41
we actually entered a spiritual
44:43
age where things like employee
44:45
engagement and thriving and career
44:48
fit and talent and potential
44:50
all became really important competencies.
44:52
But with the rise of
44:54
big data and data analytics,
44:56
which includes AI and AI
44:58
surveillance, actually what you have
45:01
is both things operating. On the one
45:03
hand, employers all try to provide employees
45:05
with a sense of purpose, ensure that
45:07
they can thrive and experience calling and
45:09
we hear employers and leaders saying they
45:11
want employees to bring their whole self
45:13
to work and to be themselves and
45:15
that they're valuable for their unique characteristics.
45:18
But on the other hand, underneath it,
45:20
we're monitoring and measuring everything. I
45:22
think there's still this idea that people
45:25
are productivity machines and that if you
45:27
measure the performance and you incentivize them
45:29
or create nudges, actually they will
45:31
deliver. And so even when we seem
45:34
to care about engagement and happiness, actually
45:36
the ultimate goal is to squeeze as
45:38
much profitability and productivity out of workers. And
45:41
when you add to this layer the
45:43
fact that people are so dependent on
45:46
technology and interacting with AI and other
45:48
technologies so much, there's a real need,
45:50
I think, for organizations and leaders to
45:52
rehumanize work and actually rediscover some of
45:55
the things that actually made work interesting
45:57
and valuable in the first place. Some
46:00
critics have even argued that the wellness
46:02
movement in the workplace itself is sort
46:05
of inherently tied to this kind of
46:07
level of efficiency and productivity and surveillance.
46:10
There are certainly areas of
46:12
overlap. So if you
46:14
look at the recent rise of
46:16
the so-called self-care movement within the
46:18
wellness industry, this idea that you
46:20
should care for yourself and you
46:23
should look after yourself, which comes
46:25
with good intentions. Go to the gym,
46:27
eat healthy, don't overeat, sleep
46:30
enough, take a power nap. We even have
46:32
nap pods in the office maybe, and walk
46:34
your 10,000 steps and eat your five portions
46:36
of fruits and veg a day. All
46:39
that is good, but if the real intent
46:41
is for you to be really, really productive
46:43
or to want to stick around work and
46:46
be at the office a lot or to
46:48
return these seemingly well-meaning
46:51
recommendations with your hard work
46:53
and loyalty, then it's normal
46:55
that we are a little bit cynical when we
46:57
hear this advice. And also, if
46:59
our solution to the wellness
47:02
issue and the well-being problems that we
47:04
have seen in the industrialized world for
47:06
the last decade or so is to
47:09
just tell people that they should only
47:11
worry about themselves or worry about themselves
47:13
first before they can help others, that
47:16
actually fosters a very selfish and narcissistic
47:18
mindset. Yeah. So what would
47:20
you propose as a solution to this
47:22
phenomenon in order to rehumanize the workplace?
47:25
Well, I think organizations should understand that
47:27
the more people depend
47:29
on technology to do their work and
47:31
be productive, the more they have to
47:33
kind of create cultures that actually provide
47:36
an antidote to that and compensate for
47:38
that. For example, by stimulating analog
47:40
or 3D physical encounters
47:42
between people, by separating out
47:45
activities that might not lead
47:47
to productivity but actually enhance bonding,
47:49
fueling or lubricating the social
47:51
ties that people want with their
47:54
colleagues, irrespective of whether it
47:56
actually makes them more productive or boosts revenues,
47:58
productivity, and profits. I think that
48:00
when it comes to caring, you know,
48:03
we have to remember that one
48:05
of the best and I think
48:07
most pro-social ways we have to
48:10
enhance our own happiness and our own subjective
48:12
well-being is to actually be nice and be
48:14
kind towards others, right? So the less you
48:17
think about your own problems and the more
48:19
you try to solve other people's problems, the
48:21
more your problems go away. But
48:23
if I'm listening to this and I run a department and
48:25
I have a, you know, bottom line that I'm expected to
48:28
meet, are some of those things
48:30
in conflict with my ability to meet my productivity goals?
48:33
Well, there is a tension, right? So I
48:35
think generally speaking, it
48:37
is true that on average, other
48:39
things being equal, the
48:42
more engaged and satisfied and happy
48:44
your team is, and of course,
48:46
the more physically fit and energized
48:48
or energetic they are, the
48:50
more productive they will be. But at the
48:52
same time, that overlap is less than 10%. It's
48:55
a correlation of 0.3, which indicates
48:57
a 9% overlap, which means that you'll have
49:00
a lot of people who are extremely healthy,
49:02
their well-being is great and they're very engaged
49:04
but actually they don't add value in terms
49:06
of productivity. And also that
49:08
some of your most valuable high-performing
49:11
or high-potential employees are going
49:13
to be quite grumpy, quite dissatisfied and
49:15
maybe have poor work-life balance
49:17
and, you know, struggle in other areas of
49:19
life. I mean, let's face it, historically, there
49:22
was a tension between people who devote a
49:24
lot of their energies, focus and skills
49:27
on their careers and because of that,
49:29
neglect other areas of social or personal
49:31
life. So I think you have to
49:34
allow for both things and ultimately worry
49:36
less about short-term results and more about
49:38
the kind of culture and climate you
49:40
create in your organization because ultimately people
49:43
are always going to fluctuate. They're going
49:45
to have good years and bad years
49:47
but it's the long-term commitment to a
49:50
strategic goal that actually gives you the
49:52
results in the long-term. People
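[A quick gloss on the arithmetic behind the "9% overlap" figure above, as a sketch assuming the standard convention that the overlap, or shared variance, between two measures is the square of their correlation coefficient:

$$ r = 0.3 \quad\Rightarrow\quad r^2 = (0.3)^2 = 0.09 \approx 9\%\ \text{shared variance} $$

On that convention, an engagement-productivity correlation of 0.3 leaves roughly 91% of the variance in productivity unexplained by engagement, which is the point being made here.]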
52:00
gravitate towards and they have mass
52:02
organic adoption because actually they take
52:04
care of boring tasks like proofreading
52:07
texts or emailing colleagues or even
52:09
attending meetings that you don't want
52:11
to attend. The people who
52:13
actually opt in to these tools because
52:15
they see that they can basically stop
52:17
doing things that they don't want to
52:19
do aren't automatically motivated to then reinvest
52:21
the time they save on
52:24
new learning experiences or new kind
52:26
of difficult, effortful ideas. I
52:29
think efficiency is a double-edged sword and it
52:31
can be wonderful but at the same time,
52:34
if at some point we no longer need to think
52:37
and we create something that is like a
52:40
microwave for ideas and we stop actually thinking,
52:42
our brains don't think anymore, then we have
52:44
to wonder what the long-term effects might be.
52:47
Just finally, as unpredictable as the future
52:50
is maybe, what would you say are
52:52
the job skills of the future in
52:54
this automated context? Well,
52:56
I don't pretend to have data on
52:58
the future. I'm always a little bit
53:00
perplexed when I see all these very,
53:03
very granular and detailed calculations of what
53:05
will happen to skills or jobs, etc.
53:09
I think that we need to
53:11
be agile to adjust and adapt
53:13
to whatever comes but it seems
53:15
to me that a reasonable expectation
53:17
is that AI will win
53:20
the IQ battle if it
53:22
hasn't won it already. It will always know
53:24
more about things than we do,
53:26
especially if you count the large number of things
53:29
that can be known even if it doesn't understand
53:31
them. When it comes
53:33
to things like empathy, consideration,
53:35
kindness, self-awareness, people skills, emotional
53:38
intelligence rather than intellectual ability, I think
53:40
we have a real chance to still
53:42
compete and to add value. If you
53:45
think about the manager of
53:47
the past versus the manager of the
53:49
future, in the past they were appointed
53:51
into a management position based on what
53:53
they knew, their qualifications, their hard skills,
53:55
their university credentials. In the future, it's
53:57
probably going to be their ability to
53:59
inspire, to connect with others, to understand
54:01
people and to really give them that
54:03
sense of validation and kindness
54:05
and attention that they will crave, especially if
54:07
they can't even tell whether they're interacting with
54:10
a human or a deepfake in other areas
54:12
of life. Tomás, thanks so much
54:14
for your insights on this. It's been a
54:16
real pleasure. Anytime. Tomás Chamorro-Premuzic
54:18
is an organizational psychologist. He's also
54:21
the author of I, Human: AI,
54:23
Automation, and the Quest to Reclaim
54:25
What Makes Us Unique. You've
54:34
been listening to Spark. The show is
54:37
made by Michelle Parise, Samraweet Yohannes, Megan
54:39
Carty, and me, Nora Young. And
54:41
by Joanne McNeil, Allison Pugh, and
54:43
Tomás Chamorro-Premuzic. I'm
54:46
Nora Young. You can check out back issues of
54:48
Spark, so find and follow us wherever you get
54:50
your podcasts. For
54:54
more CBC podcasts,
54:57
go to cbc.ca/podcasts.