Episode Transcript
0:00
At the end of the
0:02
day, that is the
0:04
best route for any
0:07
worker who really worries
0:09
about technology coming and
0:12
being disruptive in their
0:14
workplace. It's just a
0:16
matter of power, right?
0:18
And we've got to
0:21
utilize our power as workers
0:23
together to take this
0:25
on. Welcome to Fast Company's The New
0:27
Way We Work, where we take listeners
0:29
on a journey through the changing landscape
0:32
of our work lives and explain exactly
0:34
what we need to build the future
0:36
we want. I'm Fast Company Deputy
0:38
Editor Kathleen Davis. Throughout
0:43
this series on how AI will change
0:45
how we work, we've covered how this
0:47
emerging tech will impact hiring, our daily
0:49
tasks, and how our job performance is
0:52
evaluated. But we haven't yet
0:54
talked about the biggest concern for
0:56
most people when it comes to AI
0:58
and work. Are robots going to take our
1:01
jobs? And honestly, we're right to
1:03
be concerned. According to McKinsey and
1:05
Company, 45 million jobs or a
1:07
quarter of the workforce could be
1:09
lost to automation by 2030.
1:11
That's just five years from
1:14
now. Of course, the promise is that
1:16
AI will also create jobs,
1:18
and we've already started to
1:20
see emerging roles like prompt
1:22
engineers and AI ethicists crop up.
1:25
But many of us have a
1:27
lot of concerns about how AI
1:29
is being incorporated into our fields.
1:31
Should a bot host a podcast,
1:33
write an article, replace an actor?
1:35
Could AI be a therapist, a tutor,
1:37
build a car? Three out of four employees
1:40
say their organization is not collaborating
1:42
on AI regulation and the same
1:44
share say that their company has
1:46
yet to share guidelines on responsible
1:48
AI use. To help us understand
1:50
what protections exist and how employees
1:52
can fight back to ensure their
1:55
jobs are safe is Lorena Gonzalez.
1:57
She's the president of the
1:59
California Federation of Labor Unions,
2:02
a former California Assemblywoman, and
2:04
has written AI transparency legislation, including
2:06
a law that prevented algorithms from
2:08
denying workers break time. Well, thank
2:10
you so much for being here.
2:12
Well, thank you for having me.
2:14
Can you talk about and explain
2:16
how your job leading labor unions
2:18
in California interacts with national AI
2:20
regulations and trends? So in California,
2:22
I lead the AFL-CIO, and we
2:24
don't deal directly with national issues;
2:26
the national AFL-CIO does. But
2:28
we know that in California we
2:30
have a special responsibility, largely because
2:32
Silicon Valley is here, and so
2:34
much of these tech changes have
2:36
come out of California. We also
2:38
are very aware that little has
2:40
happened or will happen at the
2:42
federal government as far as regulations
2:44
and guidelines. So we want
2:46
to really set the pace. So often,
2:48
within labor laws in general,
2:50
California will pass a law and
2:52
then it becomes more broadly accepted
2:54
first in other what we call
2:56
trifecta states that have pro-labor majorities
2:58
in the House and the Senate
3:00
and in the governor's office. But
3:02
often we'll get picked up as
3:04
well at the national level. So
3:06
although we can't directly or we
3:08
don't really imagine we'll directly affect
3:10
federal regulations, we want to make
3:12
sure that we have strong guidelines
3:14
and regulations in California that other
3:16
people can then copy. I'm so
3:18
glad you mentioned that about California
3:20
because it is such a unique place,
3:22
not only because Silicon Valley is
3:24
there and that's where a lot
3:26
of this tech is coming from,
3:28
but also because the labor
3:30
world in California is huge. You
3:32
know, I don't have to tell
3:34
you: most populous, you know, state,
3:36
and such, you know, a major
3:38
union presence in such diverse fields,
3:40
you know, teachers, screenwriters, nurses, kind
3:42
of everything you can think of.
3:44
Are there common themes that you're
3:46
hearing from all these kind of
3:48
different groups that you work with?
3:50
There are, you know, when we
3:52
started to first look at regulations
3:54
or guidelines or bills that we
3:56
could put forward, we thought the
3:58
best way to do it would
4:00
be industry specific. Because when you
4:02
look at AI in the workplace,
4:04
there are very different challenges that
4:06
come in, whether you're talking about
4:08
a health care setting, whether you're
4:10
talking about somebody's like creative work,
4:12
whether you're talking about somebody who
4:15
drives a truck, a big rig
4:17
truck is what I meant there.
4:19
So these are all different issues,
4:21
especially when it comes to
4:23
safety and privacy. But we realized
4:25
there's some common threads across all
4:27
workplaces as well. One is the
4:29
notion of just robot bosses. We
4:31
did do the first-of-the-nation
4:33
bill probably about three years
4:35
ago now on algorithmic management. It
4:37
applies only to warehouses and that
4:39
was in response to what we
4:41
saw going on in Amazon and
4:43
to a lesser degree Walmart warehouses.
4:45
And we basically wanted to give
4:47
workers the power to question the
4:49
algorithm that was speeding up their
4:51
quota, basically their requirement to go
4:53
faster and faster. It also lined
4:55
up with injuries on the job.
4:57
So we thought, this doesn't make
4:59
a lot of sense. You don't
5:01
even have a human interaction. We're
5:03
expanding that this year with a
5:05
bill that would basically say for
5:07
a lot of those interactions, you
5:09
need a human boss. What we
5:11
started with in the warehouse bill
5:13
we're really seeing expand throughout different
5:15
types of work. When you're dealing
5:17
with an algorithm, even the basic
5:19
experience of having to leave your
5:21
desk becomes a conversation with that computer. So
5:23
this idea also, what if you
5:25
have to leave work all of
5:27
a sudden because you got a
5:29
call from your kid's school and
5:31
your kid is very sick or
5:33
there's some emergency. Taking away the
5:35
human element obviously has a structural
5:37
problem for workers, but it has
5:39
a humanity problem as well. We
5:41
also know that with algorithmic management
5:43
that the computer is making certain
5:45
decisions and relying on certain assumptions
5:47
that we may not even openly
5:49
allow managers to rely on, right?
5:51
Your race, your gender, some assumptions
5:53
that the computer has been programmed
5:55
to rely upon, and there's no
5:57
regulation, there's no liability, there's no
5:59
ability to even
6:01
know what's going into the computer
6:04
to know what the outcome is and
6:06
what the computer is trying to
6:08
do. So having a computer as
6:10
a boss, a robot as a
6:12
boss, if you will, is one
6:14
of the ways we thought in
6:16
every setting, everybody kind of identified
6:18
that as problematic, dealing with the
6:20
idea of algorithmic management. So that's
6:22
one bill we're going to do.
6:24
Privacy is another. I mean, just
6:26
straight out privacy. And we started
6:28
seeing what a lot of companies
6:30
were putting out into the ether
6:32
about what corporations could utilize. So
6:34
we think there are certain spots
6:36
in work, no matter where you
6:38
work, that should be private to
6:40
an individual. That's going to be
6:42
a bathroom. It's going to be
6:44
a break room. In your car
6:46
and in your home, if those
6:48
places aren't places of work, you
6:50
should have a basic sense of
6:52
privacy. And what we used to
6:54
believe is privacy, like, well, they
6:56
can't, they can't watch me. They
6:58
can't listen to me. That,
7:00
you know, allowed me to
7:02
have some privacy. No, we've got
7:04
to go much further than that.
7:06
It has to be heat mapping.
7:08
It has to be all the
7:10
tools that are being marketed to
7:12
these corporations, that we need protections against.
7:14
Your employer should not be able
7:16
to know every person you're talking
7:18
to in the restroom because you're
7:20
both wearing some device that tells
7:23
them that you're talking to each
7:25
other. A boss definitely should not
7:27
know who you're talking to in
7:29
a break room. Of course, as
7:31
a union organizer, we really think
7:33
that's an important feature of work
7:35
is having the right, when
7:37
you're off the clock, if you
7:39
will, to interact with your coworkers
7:41
in a way that may result
7:43
in unionization, and when they're able
7:45
to use surveillance mechanisms to stop
7:47
that, that's dangerous. So we have
7:49
a bill on that as well.
7:51
And the third area that we
7:53
think really applies to everybody is
7:55
just a basic data issue. And
7:57
we haven't released that bill yet.
7:59
We're working on the language of
8:01
that bill. But you know, the
8:03
data that is being taken from
8:05
workers without their knowledge, their personal
8:07
data, their facial features, their whatever
8:09
it is, the right to understand
8:11
what is being taken by a
8:13
computer or by AI as you're
8:15
doing the work, sometimes to replace
8:17
you, sometimes to evaluate you. That
8:19
data goes into forming something else.
8:21
Now this became an issue really
8:23
obviously during the writers' strike. It's
8:25
so much clearer and more obvious to
8:27
people when you're saying you're taking
8:29
their work, feeding it into computers,
8:31
it's coming out somewhere else. But
8:33
this comes in different forms. We've
8:35
heard it from longshoremen who say
8:37
the computer works side by side
8:39
to try to mimic the responses
8:42
that the worker is giving. It
8:44
might be to tide levels or
8:46
to dangerous situations. The workers should
8:48
have the right to know that
8:50
they're being monitored, that their data
8:52
is being taken, and there should
8:54
be some liability involved. So those
8:56
are kind of the broader issues
8:58
we've identified and ones we're going
9:00
to tackle with legislation this year
9:02
or try to tackle. Thank you
9:04
for sharing those. Those are great,
9:06
and that is a great
9:08
kind of overview of the way
9:10
that similar AI can
9:12
impact, you know, different industries. I'd
9:14
love, I'd love to hear more
9:16
from you about kind of more
9:18
specific AI in industry, since,
9:20
as you mentioned, we've
9:22
seen such high-profile labor disputes
9:24
in the last year regarding AI.
9:26
Obviously the screen actors' strike was a
9:28
big one. The auto workers had
9:30
an AI provision in their labor
9:32
dispute. Is there a way to
9:34
regulate AI implications within industries or
9:36
is that kind of a case-by-case
9:38
labor issue and these larger
9:40
bills are kind of the only
9:42
way to tackle it in a
9:44
larger way? There's absolutely ways to
9:46
tackle it industry by industry and
9:48
we've done that some successfully, some
9:50
less successfully. So we got some
9:52
provisions for SAG-AFTRA and for
9:54
our actors in legislation last year.
9:56
That was a positive. One of
9:59
the things we've done in the
10:01
past and we got a bill
10:03
through and signed was basically when
10:05
any new technology is deployed in
10:07
public transit that it becomes a
10:09
mandatory subject of bargaining. So
10:11
that's good where you have a
10:13
union, where the union in a
10:15
workplace, whatever the technology is, can
10:17
have a discussion, and
10:19
management is forced to discuss it
10:21
with workers. That's less protective, but
10:23
it's important. We've run a couple
10:25
of bills where we really are
10:27
talking about human operators, and this
10:29
is something that we have not
10:31
been able to do. We've gotten it through
10:33
the legislature with broad bipartisan support,
10:35
but haven't been able
10:37
to get the governor to sign.
10:39
And that's like, a big rig
10:41
or a school bus or a
10:43
big barge should not be operating
10:45
just simply with AI without a
10:47
human operator. So we want to
10:49
do that, number one, to save
10:51
jobs, but number two for safety
10:53
reasons as well. And I think
10:55
most people don't want to be
10:57
on a highway with a big
10:59
rig that has no human operator
11:01
as a backup. We have some
11:03
pending issues, I think, going on
11:05
with pilots, right, where you have
11:07
airlines that want to go to
11:09
one pilot and a computer rather
11:11
than the two pilots that we
11:13
now have and rely upon to
11:15
make sure that we're safe. These
11:18
are anti-consumer, they're anti-worker, and they're
11:20
things that we think can be
11:22
and should be regulated in legislation.
11:24
There are a lot of things
11:26
in health care that we've been
11:28
looking at and trying to figure
11:30
out how to get at. Health
11:32
care is a special industry in which
11:34
AI has been very helpful. There
11:36
are things that we want AI
11:38
to do. Of course, I'm a
11:40
cancer survivor. I would like the
11:42
computer to view all of my
11:44
images along with my doctor to
11:46
see if they can pick something
11:48
up that my doctor couldn't. Of
11:50
course that's a great thing, but
11:52
we also have seen AI and
11:54
computers go a little rogue in
11:56
the hospital setting where they're insisting
11:58
on certain tests or certain procedures
12:00
that the nurse knows are inappropriate
12:02
and the doctor doesn't have to
12:04
okay. The nurse gets into a
12:06
position where it's easier, in fact
12:08
it's much harder to challenge the
12:10
decision of the computer than to
12:12
simply over-test folks.
12:14
That can be dangerous as well.
12:16
We've also been looking at who
12:18
actually is providing the information. So
12:20
in California, for example, you need
12:22
a medical license to practice medicine,
12:24
but the computer doesn't have a
12:26
medical license, and we don't even
12:28
know who's programming the computer and
12:30
who has a medical license and
12:32
who the person behind the information
12:34
being given is, how credible it is.
12:37
And so these are all questions.
12:39
And last but not least, of
12:41
course, liability. So if you go
12:43
to the doctor and there's some
12:45
really bad screw-up, of course, there's
12:47
medical malpractice and you have the
12:49
right to sue, you have the
12:51
right to some kind of recourse.
12:53
In these situations where the computers
12:55
are making the decision and doing
12:57
the work, who's liable? I mean,
12:59
that's even something that hasn't really
13:01
been established, and that's something that
13:03
continues to be a question. Those
13:05
are all great points. It's also,
13:07
you know, it's really the consumer,
13:09
the customer, the patient, the worker,
13:11
and, as you bring up with
13:13
the liability issue, you know, the
13:15
company itself. And that's something that
13:17
we haven't touched on yet. You
13:19
know, we've been talking a lot
13:21
about legislation and regulation. I opened
13:23
this episode with some stats about
13:25
how the majority of employees say
13:27
that their companies are not collaborating
13:29
on AI regulations, or that their
13:31
companies haven't even kind of shared
13:33
guidelines for responsible AI use. If
13:35
I'm a business leader and I'm
13:37
listening to this, what's your guidance?
13:39
Well, if you have a union
13:41
in your workplace, absolutely, you should
13:43
be getting together, meeting and conferring
13:45
over any new technology. A lot
13:47
of times that's in a contract,
13:49
but even if it's not, it's
13:51
just good practice. But if you're
13:53
an employer without a union, that
13:56
should be a discussion that you're
13:58
having with your employees in an
14:00
open discussion, not a discussion whereby
14:02
you've already made the decision, you're
14:04
going to go in a tech
14:06
route, an AI route, without
14:08
actually talking to the people doing
14:10
the work and seeing how that
14:12
will affect their job, their livelihood,
14:14
and what they're doing. I think
14:16
so often, especially those at the
14:18
very top, get sold on new
14:20
technology as being cool and being,
14:22
you know, innovative and being able
14:24
to do things faster and quicker
14:26
and not really going through the
14:28
entirety of what these jobs are
14:30
and not really imagining what on
14:32
a day-to-day basis that worker has
14:34
to deal with. And so I
14:36
think what is really important is
14:38
to sit down and actually talk
14:40
to the workers and empower the
14:42
workers to have a say in
14:44
the technology. Yeah, I think that's
14:46
true. And I, you know, in
14:48
the first episode of this season,
14:50
I talked to an AI ethicist
14:52
who was talking exactly about that.
14:54
I think the way that AI
14:56
technology is sold is often, you
14:58
know, not only is this an
15:00
amazing development, but is in these
15:02
very technical terms, so people are like,
15:04
well, I don't understand it, but
15:06
I'm sure it's good. Yeah. And
15:08
kind of, you know, having that
15:10
really human skill, right, of critical
15:13
thinking, to ask. Absolutely, and I
15:15
think in public employment we're really
15:17
concerned, right, because so often what
15:19
will happen is these city managers
15:21
or our county clerks, the appointed
15:23
positions or the elected positions, go
15:25
to these conferences, and you have
15:27
some presentation on how this can
15:29
help you reduce the workforce or
15:31
save money, and they buy into
15:33
it. But they buy into it
15:35
and then reduce a workforce that
15:37
actually does that critical work. And
15:39
at the end of the day,
15:41
if that technology doesn't work, if
15:43
it doesn't work out or you
15:45
don't have the ability, it costs
15:47
too much to fix it or
15:49
replace it, or there's problems, you've
15:51
then gotten rid of an entire
15:53
workforce that actually can make sure
15:55
those vital services are happening. You
15:57
know, there should be some guardrails
15:59
about how you change, especially when
16:01
you're replacing people in jobs and
16:03
that institutional knowledge of how to
16:05
get a job done. And we
16:07
talk about vital services, whether it's
16:09
ensuring that individuals can sign up
16:11
for their health care, like Medi-Cal
16:13
in California, or get their food
16:15
stamps, or to make sure that,
16:17
you know, the police come when
16:19
you call 911, all of these, we
16:21
can't risk losing human beings who
16:23
know how to do this job
16:25
because somebody went to some conference
16:27
and saw a really cool computer
16:29
that seems to be able to
16:32
do it without looking at the
16:34
full cost and the applicability
16:36
in any situation. Well, in speaking
16:38
of this technology, again, you know,
16:40
you're in California, a lot of
16:42
this technology is coming out of
16:44
Silicon Valley. What are you seeing
16:46
about how the AI industry is
16:48
interacting with the rest of the
16:50
state, with the unions, with the
16:52
lawmakers? You know, I don't see
16:54
a lot of, I went to
16:56
CES in Las Vegas this year,
16:58
and we did have a little
17:00
tour, it was all, I went
17:02
with union leaders, it was fascinating,
17:04
but I don't see that they're
17:06
actually reaching out. I think they're
17:08
trying to give us a finished
17:10
product and saying this works. But
17:12
I've got to be honest too.
17:14
I've been in government before. In
17:16
between, I was a labor leader
17:18
and then I was a legislator
17:20
and then back to being a
17:22
labor leader. And we were having
17:24
tech problems in California because our
17:26
computer system is on COBOL. You
17:28
know, it's like, are we going
17:30
to have AI? One of the
17:32
problems is you didn't have anyone
17:34
that could fix an old computer
17:36
system. And so we bought a
17:38
new computer system with Oracle, and
17:40
it didn't talk to the old
17:42
computer system. All right, well, can
17:44
AI fix that? Because if not,
17:46
all you're going to do is
17:48
put one more thing on top
17:51
of it. I know people get
17:53
excited and want the newest, best,
17:55
quickest thing, but the reality is
17:57
our infrastructure is dated. And so
17:59
what is the plan there? I
18:01
don't think there's been a lot
18:03
of interaction. Instead, it's just a
18:05
finished product like here. You could
18:07
use this rather than working with
18:09
procurement and with the unions, the
18:11
workers that actually are doing that
18:13
work, to best perform
18:15
efficiencies, to ensure that there's safety,
18:17
and to ensure that there's still
18:19
a human operator. Are there any
18:21
cases, I mean, you mentioned it
18:23
a little bit with your health
18:25
care example, are there cases where
18:27
you've seen that AI can
18:29
be good for workers? I'm sure
18:31
there are, like I think about
18:33
this a lot, I have to
18:35
say. When we think about who
18:37
has the most amount of technology,
18:39
ability to deploy AI, has the
18:41
resources, we often think of Amazon,
18:43
right? Amazon has these massive warehouses,
18:45
they have invested in a lot
18:47
of technology, they understand it. We
18:49
saw during COVID, when they started
18:51
screening people for their body temperature,
18:53
which has now morphed into screening
18:55
body temperatures to see who's talking
18:57
to each other. So we know
18:59
that there's a lot out there
19:01
that can be done. But what
19:03
amazes me is they haven't used
19:05
all that expertise, all that knowledge,
19:07
to figure out how to reduce
19:10
injuries on the job. So you're
19:12
asking like, why if you have
19:14
the ability to make a job
19:16
safer for the actual worker that
19:18
is still being deployed to do
19:20
it, why are you not using
19:22
your technology to do that. And
19:24
I think that's a valid question.
19:26
Obviously, you know, look, when you
19:28
look at an airplane being driven,
19:30
we like that the computer system
19:32
is a backup, but we also
19:34
want the human operators on there
19:36
to have eyes on it as
19:38
well. If you look at some
19:40
of the kind of famous, like,
19:42
saved crashes, it's a human being
19:44
overriding the computer system that saved
19:46
it. On a day-to-day basis, I
19:48
think the computer system, the technology,
19:50
can keep you, especially in bad weather,
19:52
from hitting another plane or
19:54
hitting a mountain. Those are good
19:56
things, right? So I think technology
19:58
can absolutely be good, and it
20:00
has been good when it comes
20:02
to directions for drivers, you know,
20:04
understanding when something's closed and open
20:06
so that they're not wasting time
20:08
going down a street that now
20:10
has been blocked off. These are
20:12
all good things. It's just a
20:14
matter of utilizing the good with
20:16
utilizing the agency of the individual
20:18
workers as well and figuring out
20:20
how to mold those two and
20:22
unfortunately we don't have a lot
20:24
of corporations who care deeply about
20:27
that worker feature. And sometimes
20:29
when we do have corporations that
20:31
care about the safety, it's just
20:33
because they know, for example, nobody's
20:35
gonna get on a plane that
20:37
feels like it's gonna crash. Right.
20:39
It comes down to profits and
20:41
money, and that is the motivation
20:43
of most corporations, and that's what's wrong
20:45
with allowing them to, you know,
20:47
later make these decisions on technology
20:49
in the workplace. So I asked
20:51
your opinion on what business leaders
20:53
should think, and as I've been
20:55
covering this season, AI is
20:57
coming for nearly every workplace, whether
20:59
we like it or not. What
21:01
is your advice to the average
21:03
American worker on how to think
21:05
about AI in their working life
21:07
and what they can do if
21:09
they're, you know, maybe justifiably afraid
21:11
about it? Well, my advice is
21:13
always to form a union. You
21:15
know, we have, you can Google
21:17
how to form a union. We
21:19
have a website, unionizecalifornia.org. The
21:21
bottom line is an individual worker
21:23
is going to have a hard
21:25
time fighting tech changes or protecting
21:27
their job or really taking on
21:29
the boss. But when workers get
21:31
together and make demands and make
21:33
changes, then real change can happen.
21:35
And we've seen protections in workplaces
21:37
that are unionized that far outpace
21:39
legislation, that far outpace any other
21:41
workplace, and that's because they can
21:43
collectively bargain. At the end of
21:46
the day that is the best
21:48
route for any worker who really
21:50
worries about technology coming in and being
21:52
disruptive in their workplace. It's just
21:54
a matter of power, right? And
21:56
we've got to utilize our power as
21:58
workers together to take this on
22:00
because now the boss is doing
22:02
what the boss has always done
22:04
and that's trying to figure out
22:06
how he or she can get
22:08
richer at the expense of their
22:10
workforce and now it's in collaboration
22:12
with a computer, but the power
22:14
of individual workers is still always
22:16
going to be bigger. Well, I
22:18
think that's kind of a perfect
22:20
segue into my last question, which
22:22
is I always like to end
22:24
on a like best and worst
22:26
case scenario. So what are your
22:28
kind of worst case and best
22:30
case scenarios for AI in 2030,
22:32
five years from now? I
22:34
think the worst case would be
22:36
that we don't regulate it. It's
22:38
been interesting to kind of watch
22:40
these tech companies really come
22:42
in and, number one, tell us we
22:44
don't understand it so we can't
22:46
regulate it. We should study it.
22:48
All of this so that they
22:50
can continue to grow and be
22:52
too big to fail, too big
22:54
to take on. And so I
22:56
think if we don't do our
22:58
job and actually regulate the use
23:00
of technology in the next five
23:02
years, then they may be too
23:05
big to take on. And the
23:07
computer can be more powerful than
23:09
actual human beings. That would
23:11
be very dangerous. I think what
23:13
we don't want to do
23:15
is to be scared into paralysis
23:17
and not do anything about it.
23:19
And that's what we're trying to
23:21
do in California is say, how
23:23
do we empower workers both in
23:25
their workplace and through the legislature
23:27
to say this isn't, we don't
23:29
have to accept this. You
23:31
know, we can regulate tech and
23:33
we should. And you don't have
23:35
to know how to code or
23:37
understand how AI is created to
23:39
regulate it. You can say that
23:41
there are guardrails and there is
23:43
safety in the regulations and not
23:45
understand how AI comes about. That
23:47
has been smoke and mirrors and
23:49
so I think that it would
23:51
be the worst case scenario that
23:53
we're just paralyzed, that nothing happens,
23:55
that there are no regulations, that
23:57
there's no good bargaining that goes
23:59
on in workplaces. I know that
24:01
won't happen because we have good
24:03
unions that are working on this
24:05
now. The best case scenario is
24:07
that we actually regulate this, that
24:09
we set up guardrails that make
24:11
sense, that we protect and preserve
24:13
human jobs. That is a noble
24:15
thing. I mean, I don't think
24:17
anybody wants a jobless society or
24:19
to have just certain jobs that
24:21
it makes economic sense only to
24:24
have, right? We know the value
24:26
of humans and humanity, the value
24:28
of having a human operator. I
24:30
think we can introduce the best
24:32
of technology into the workplace with
24:34
a good collective bargaining agreement, with
24:36
laws in place that regulate it,
24:38
that have guardrails, and then with
24:40
institutional collective bargaining in places where
24:42
there are unions to ensure that that's
24:44
all deployed correctly. That would be
24:46
the best case scenario. Well, I
24:48
can't think of a better way
24:50
to end it than on a best
24:52
case scenario. Thank you. This is
24:54
the last episode in our mini-series
24:56
on how AI is changing our
24:58
jobs, but we are by no
25:00
means done with this topic. As
25:02
we've covered, AI is here to
25:04
stay and we are just starting
25:06
to understand how it can be
25:08
both used and misused in all
25:10
areas of our lives. For me,
25:12
an AI skeptic, my biggest takeaway
25:14
throughout this series has been the
25:16
need for human intervention. We know
25:18
that AI is only as good
25:20
as the data it's trained on,
25:22
but its implementation is also only
25:24
as effective as the humans using
25:26
it. For hiring, this means using
25:28
AI to write interview questions, but
25:30
then editing and rewriting them yourself,
25:32
or making sure that the keywords
25:34
that AI is screening for don't
25:36
leave out those with
25:38
transferable skills.
25:40
For implementing AI
25:43
into daily work,
25:45
it means vetting
25:47
systems to make
25:49
sure they actually
25:51
live up to
25:53
their promises.
25:55
For performance reviews,
25:57
it means using
25:59
AI to analyze
26:01
data, but not
26:03
replace, human conversations.
26:05
The bottom
26:07
line is, as I
26:09
see it, AI
26:11
right now feels
26:13
less like a
26:15
total game changer and more
26:17
like a new
26:19
tool that we're
26:21
just learning how
26:23
to use.
26:25
Well,
26:30
this season is done. We
26:32
will be back with new episodes
26:34
soon featuring some of your most pressing
26:36
workplace questions. Be sure to subscribe
26:38
to The New Way We Work wherever
26:40
you listen so you never miss
26:43
an episode. And if you liked this
26:45
episode, leave us a rating or review
26:47
on Apple Podcasts. The New Way We
26:49
Work is hosted by me, Kathleen Davis,
26:51
and produced by Cody Nelson,
26:53
Henry Chardonnay,
26:56
and Joshua Christensen, with mixing
26:58
by Nicholas Torres.