Episode Transcript
0:03
The lack of transparency is a
0:05
rife area of risk for biased
0:07
responses. The whole discussion about AI
0:09
ethics is whether you see it as
0:11
a value or whether you see it
0:13
as compliance. The comparison I use is
0:15
with health and safety. Whatever values
0:17
you have in the analog world should be
0:19
extended to the digital world. Hello
0:25
and welcome to Insight Story, tech trends
0:27
unpacked for business leaders. The podcast that
0:29
gives you the insight you need to
0:31
make the right decisions about how to
0:33
use tech to benefit your organisation. I'm
0:36
Susie O'Neill and this is brought to
0:38
you by Kaspersky, the cyber security specialists.
0:41
This time we're asking, what
0:45
do you need to know to use AI ethically?
0:52
Now we live in a world where algorithms
0:54
can make decisions and data fuels innovation. It
0:57
means that thinking about ethics is more critical than
0:59
ever, especially for anyone in business.
1:03
Organisations need to balance the benefits of
1:05
this incredible new technology with making sure
1:07
they preserve their own integrity and
1:09
protect their customers. How
1:11
does possible AI bias affect businesses?
1:14
What can you do to ensure fairness? And
1:17
what about privacy and the responsible use of
1:19
data in this pursuit of progress? To
1:22
help me answer these and other questions, I'm
1:24
going to be joined by two people who work at
1:26
the centre of these issues. Tomoko
1:29
Yokoi is a researcher based at
1:31
the IMD Business School in Switzerland
1:33
and she writes extensively about digital
1:35
transformation. And Andy Crouch
1:37
is a tech entrepreneur and consultant for
1:40
Acumen, a world leader in ethical use
1:42
of audience data. But
1:44
of course, we need to make sure we're all on the same
1:46
page when it comes to all of this. And
1:49
we have some very ethical help.
1:51
Hello, I am Dolly, your insight
1:53
assistant. Dolly, thank you for coming
1:55
by. Now there's a lot of
1:57
noise about the ethical side of artificial intelligence and
1:59
business. What's the main worry? The
2:02
key concern is around fairness and fears
2:04
of bias in AI algorithms. This
2:07
can affect decision making and deepen
2:09
societal inequalities. Right, I
2:11
can see that's an issue. What kind
2:13
of problems can this lead to? As
2:15
an example, facial recognition systems have
2:18
shown biases against certain groups of
2:20
people which could impact their access
2:22
to services. Biases in
2:24
job hiring algorithms can disadvantage
2:26
some people. Amazon's recruiting tool
2:29
was found to favor male applicants. And
2:31
what other concerns are there? Inaccurate
2:33
data can lead to hallucinations in
2:35
large language models in generative AI
2:38
technologies that power chatbots like
2:41
ChatGPT or Google Bard. There
2:43
are transparency concerns around the data
2:45
sources used in Gen AI, also
2:47
how much human oversight is needed
2:50
in AI-driven systems and whether companies
2:52
prioritize automation over human employment.
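Dolly's point about bias in hiring algorithms can be made concrete with a quick check. Below is a minimal sketch, not from the episode, of one common fairness measure, demographic parity; the groups and decisions are invented for illustration:

```python
# Minimal sketch (illustration only): compare selection rates across
# groups, a basic "demographic parity" check for a hiring pipeline.
# Group labels and outcomes below are invented for the example.
from collections import defaultdict

def selection_rates(decisions):
    """decisions: iterable of (group, hired) pairs; hired is True/False."""
    totals, hired = defaultdict(int), defaultdict(int)
    for group, was_hired in decisions:
        totals[group] += 1
        hired[group] += was_hired
    return {g: hired[g] / totals[g] for g in totals}

decisions = [("A", True), ("A", True), ("A", False),
             ("B", True), ("B", False), ("B", False)]
rates = selection_rates(decisions)

# The "four-fifths rule" flags concern when one group's selection rate
# falls below 80% of the highest group's rate.
ratio = min(rates.values()) / max(rates.values())
print(rates, "disparate impact ratio:", round(ratio, 2))
```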
2:55
Yeah, we need to keep humans in the loop, wouldn't you
2:57
agree? Anything else? It's vital
3:00
that businesses protect users' privacy and
3:02
make sure customer data is secured
3:04
and not accessed without consent.
3:07
When interacting with customers, organizations
3:09
need to be clear about whether
3:11
people are talking to a human or to
3:13
a bot. Yeah, I'm beginning to see what
3:15
all the fuss is about. Are companies
3:17
already using this technology? The
3:20
latest McKinsey Global Survey on the current
3:22
state of AI says one-third of respondents
3:24
are using Gen AI regularly in at
3:27
least one business function. It
3:29
is worth noting that it is only
3:31
one year since generative AI tools became
3:33
widely available. Yay, thank you.
3:36
So, Dolly, should we label you to say
3:38
whether you're a human or a bot or
3:41
are you a prototype of artificial general intelligence?
3:43
Goodbye. Oh, okay, touched a
3:45
nerve again. Unfortunately, AI doesn't
3:47
have all of the knowledge yet, so
3:50
clearly we need to find some answers. Fortunately,
3:53
I'm joined by two people with
3:55
insight into exactly these kinds of
3:57
issues. Firstly, Tomoko Yokoi is a
3:59
senior business executive and now a
4:01
researcher based at the Global Centre
4:03
for Digital Business Transformation at the
4:05
IMD Business School in Switzerland. She
4:08
writes extensively on issues surrounding
4:10
digital transformation including AI and
4:12
ethics in the corporate world.
4:15
Hello Tomoko. Hello. And Andy Crouch
4:17
is a two-time tech co-founder
4:19
and now a business development
4:21
consultant including for Acumen, a
4:23
world leader in textual data
4:25
analysis using software to identify
4:27
emotions, behavioral drivers and topics
4:29
of concern to audiences. Hi
4:31
Andy. Hello. So Tomoko let's
4:34
start with you. What kind of things do companies
4:36
need to be worried about when they're thinking about
4:38
their AI or perhaps buying an AI product for
4:40
the business? So I think we
4:42
have to consider the actual software development
4:45
lifecycle. When we start by
4:47
thinking of developing an AI product, the
4:49
first question in the
4:51
lifecycle is the design. You have to think about
4:53
how the product is designed, and you have
4:56
to look at the data, because the
4:58
data in itself could be inherently biased.
5:01
So that's one. The second one is once
5:03
it's being developed we have some
5:05
statistics that say that although a
5:07
lot of companies say that they're
5:09
actually implementing AI ethics within their
5:12
companies, the people who are actually
5:14
developing these AI driven products they
5:16
don't know how to translate those
5:18
principles into practice. So
5:20
when they're actually starting to code some of these
5:23
products, how are these principles
5:25
actually being translated in the day-to-day
5:27
development of this software? And
5:29
then third the question of course is when
5:31
you go into testing of the products, you
5:34
know it's in a rather controlled environment.
5:36
So once the product gets launched
5:38
out into the world the question then
5:40
is who's monitoring it and how can
5:42
we make sure that what was developed
5:44
within the lab doesn't pick up bias
5:46
that they didn't expect,
5:48
and also that people
5:50
are going to use it in the
5:53
right way as well. So these are some of
5:55
the sort of life cycle approach issues that you
5:57
may have to look at.
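To make the post-launch monitoring step Tomoko describes concrete, here is a minimal sketch, not from the episode, of a recurring drift check; the metric (the disparate impact ratio from the earlier sketch), the lab baseline, and the thresholds are all invented for illustration:

```python
# Minimal sketch (illustration only): watch a live fairness metric and
# alert when it drifts from what was measured in the lab before launch.
LAB_BASELINE = 0.92      # hypothetical ratio measured pre-launch
ALERT_THRESHOLD = 0.80   # e.g. the four-fifths rule

def check_drift(live_ratio, baseline=LAB_BASELINE, threshold=ALERT_THRESHOLD):
    if live_ratio < threshold:
        return f"ALERT: live ratio {live_ratio:.2f} breaches {threshold:.2f}"
    if live_ratio < baseline - 0.05:
        return f"WARN: live ratio {live_ratio:.2f} drifting from lab baseline"
    return "OK"

for weekly_ratio in (0.91, 0.86, 0.78):  # hypothetical weekly measurements
    print(check_drift(weekly_ratio))
```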
5:59
So we need to really be thinking throughout that life cycle
6:01
when we're building, testing and using how does
6:03
the ethical framework fit into every aspect of
6:06
it? That's correct. And Andy, turning to you,
6:08
tell us a bit more about Acumen. What
6:10
was it established to do and what kind
6:12
of services are you providing there? Well
6:15
the founder Paul Howarth about 14 years
6:17
ago realised there was a gap in
6:19
capability to take advantage of
6:21
human feedback in the generation of
6:23
insights. So there's lots of capability
6:25
to understand 'score this out of
6:28
one to five', but when someone
6:30
actually gives a dialogue response,
6:32
like a feedback review on Amazon for
6:34
example, then there wasn't the capability to
6:36
do anything with that. So he developed
6:38
a capability with his team. It's used,
6:41
basically, any time when humans
6:43
are talking, to identify to
6:45
a granular level key aspects of the
6:47
meaning in the context such as topics
6:50
and emotions in addition to sentiment, two
6:52
very different things. Sentiment is basically a
6:54
metric of opinion, positive, negative, neutral, whereas
6:56
emotions, of which there are many, are
6:59
actually one of the key drivers of
7:01
behaviour and it can be used anywhere
7:03
where those conversations are going on. So
7:05
for example in the fast moving consumer
7:07
goods space, lots of
7:10
application in health care, in mental
7:12
health. So yeah it's pretty unlimited
7:14
really. Great, so while this is AI
7:16
that's drawing conclusions about people using that data,
7:18
what you're saying is it's doing it differently
7:20
from the tech that everyone's currently talking about.
7:22
There's a huge amount of hype going on
7:25
at the minute around generative AI, so
7:27
ChatGPT people will know about. The type of
7:29
AI that Acumen has been doing
7:31
for the last 15 years is very different,
7:33
it's rule-based. The simple way of looking at
7:35
it is it's human created and curated and
7:38
rather than being a statistical model
7:40
like large language models are, it's
7:42
completely transparent. So because humans subjectively
7:44
have said well this bit of
7:46
human language means this, for the
7:48
first time the machine, which is immensely
7:50
powerful over numbers, hasn't got a
7:52
clue, bless it, over what humans are
7:54
banging on about normally. So if you can give
7:57
the machine that information it can then use all
7:59
its processing computations.
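As a toy illustration of the rule-based, human-curated approach Andy describes (not Acumen's actual software, and with an invented micro-lexicon), every output below is traceable to a rule a person wrote, which is what makes this kind of model transparent:

```python
# Toy rule-based text analysis (illustration only): humans write the
# rules, so every result can be traced back to the rule that fired.
EMOTION_RULES = {
    "frustration": ["waited", "fed up", "still no reply"],
    "delight": ["brilliant", "exceeded", "thank you so much"],
}
SENTIMENT_RULES = {
    "positive": ["great", "brilliant", "lovely"],
    "negative": ["terrible", "fed up", "waited"],
}

def tag(text):
    """Return matched emotions and sentiment with the phrases that fired."""
    text = text.lower()
    result = {"emotions": {}, "sentiment": {}}
    for emotion, phrases in EMOTION_RULES.items():
        matched = [p for p in phrases if p in text]
        if matched:
            result["emotions"][emotion] = matched
    for polarity, phrases in SENTIMENT_RULES.items():
        matched = [p for p in phrases if p in text]
        if matched:
            result["sentiment"][polarity] = matched
    return result

print(tag("I waited two weeks and I'm fed up; the service was terrible."))
```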
10:01
I think that the lack of transparency over
10:03
the large language models is absolutely a
10:06
rife area of risk for unethical
10:08
and biased responses. The move towards
10:11
retrieval augmented generative AI when you're
10:13
using those more sort of proofed
10:15
data sets is absolutely the way
10:18
forward and that's fundamentally what a
10:20
rule-based model is.
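For readers who want to see what retrieval-augmented generation means in practice, here is a minimal sketch, an illustration rather than anyone's production system: the answer is grounded in a vetted document set, with naive keyword overlap standing in for real vector search and the model call left as a stub:

```python
# Minimal sketch of retrieval-augmented generation (RAG), the pattern
# referred to above: answers are grounded in a vetted ("proofed")
# document set rather than in whatever the model absorbed in training.
# The documents are invented and the LLM call is left as a stub.
import re

VETTED_DOCS = [
    "Our returns policy allows refunds within 30 days of purchase.",
    "Support is available by phone Monday to Friday, 9am to 5pm.",
]

def tokens(text):
    return set(re.findall(r"[a-z0-9]+", text.lower()))

def retrieve(question, docs, top_k=1):
    """Naive keyword-overlap retrieval; real systems use vector search."""
    return sorted(docs, key=lambda d: len(tokens(question) & tokens(d)),
                  reverse=True)[:top_k]

def answer(question):
    context = retrieve(question, VETTED_DOCS)
    # A real system would now call an LLM with this grounded prompt;
    # here we just return the prompt to show the grounding step.
    return f"Answer using ONLY this context: {context}\nQuestion: {question}"

print(answer("How many days do I have for a refund under the returns policy?"))
```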
10:22
So Tomoko, you're saying that suppliers who put ethics
10:24
front and centre are going to become increasingly
10:27
important in this idea of digital responsibility
10:29
for companies. Yes, and I
10:31
think we see it in manufacturers
10:34
first. One company that we
10:36
interviewed was Deutsche Telekom, which has been
10:38
a pioneer at the forefront of the
10:40
AI ethics world. And they
10:42
actually have gone ahead and trained all
10:44
of their company employees, which is a
10:47
rather standard way of making
10:49
sure that AI ethics is distributed
10:51
within the organisation. And
10:54
at the same time though, I think they
10:56
are also going ahead and making sure that
10:58
all of their contractors and their supply chain
11:00
are adhering to their AI ethics policy.
11:03
So it's not only within the boundaries
11:05
of the company itself, but it's going
11:07
beyond. And I think that's what's important
11:09
when it comes to AI ethics, because
11:11
when a software or a product is
11:13
developed, it's not only developed within the
11:15
confines of a company. Absolutely. I think
11:17
you were looking at some study that
11:19
showed that there's many, many charters out
11:22
there. So how do businesses go about navigating
11:24
all the different documents and rules
11:26
that are out there already? That
11:28
is true. I think there are over 250 principles
11:30
and charters out there for AI
11:34
ethics. And now
11:36
the conversation really is getting into
11:38
the operationalisation of AI ethics and
11:41
what companies can do about it.
11:43
I think with any type of
11:45
initiative, it is about execution. I
11:47
think there are these codified types
11:49
of mechanisms that are helpful, but
11:52
at the end of the day, these codified
11:54
mechanisms need to start changing our behaviour. So
11:57
I think the real question is in the long term, how
11:59
can we live these... principles and
12:01
ideals. Many companies
12:03
have trainings on a regular basis. Some
12:06
companies have been looking at AI
12:08
ethics advisory boards. Some
12:10
of these companies don't actually have the
12:12
intelligence or the knowledge about how to
12:15
do AI ethics within the company. So
12:17
external experts can help. But I think
12:19
there is a case to be made
12:21
for each individual taking initiative and responsibility
12:23
themselves to be able to sort of
12:25
guide and coach each other. We need
12:27
to have some core principles that I
12:29
think we stick to and then if everybody is
12:32
able to do that maybe as a collective we'll
12:34
be able to have an impact and to make
12:36
sure that we're responsible as a whole organization. I
12:38
think the whole discussion about AI ethics
12:40
is whether you see it as a
12:43
value or whether you see it as
12:45
compliance. And if you take the
12:47
compliance approach of course many things tend
12:49
to be cost driven or risk driven.
12:51
And one of the things
12:53
that we'd like to emphasize is that
12:55
AI ethics could also be of value
12:57
to companies. It could be of competitive
12:59
advantage and those companies who commit and
13:03
demonstrate that they are taking
13:05
privacy seriously, that they're taking
13:07
digital responsibility seriously. A lot
13:10
of the customers like that and so think of
13:12
it as a value generating approach rather than
13:14
a risk or a compliance approach. The
13:17
comparison I use is with the health and safety
13:19
industry or within a company. You have
13:21
the health and safety director or health
13:23
and safety manager, one person that's their
13:25
responsibility. In my experience that's not going
13:27
to make a real change to the
13:29
company unless everyone in the company is
13:31
educated about the importance of health and
13:33
safety but also more importantly they get
13:35
it. They're actively looking, they understand why
13:38
it's important and then as Tomoko was
13:40
saying, it absolutely does drive productivity
13:42
and therefore revenue and your
13:44
bottom line. Absolutely a lot of risk
13:46
and reward for businesses. And Andy I recently
13:48
read a piece and it was about how
13:51
AI automatic translation was jeopardizing
13:53
people's asylum claims so before they would
13:55
have struggled to get someone to translate
13:57
but because there was misreading and misinformation
13:59
in the... translation, it was really not
14:01
helping their cases. But
14:03
you said that your approach allows for this
14:05
genuine adjustment to biases. Tell us a bit
14:08
more about that. So ChatGPT,
14:10
for example, when being asked how many
14:12
emotions there are in the human experience
14:14
has come back to us with numbers
14:17
as large as 138,000, which may or may not be true, but
14:21
clearly is not very practical when you're trying to
14:23
understand the behaviour drivers
14:25
of humans. So our platform has 19
14:27
to 22 now actually, I think. And
14:29
so that gives you the full
14:32
gamut that's relevant to understanding what
14:34
might be driving human behaviour. And
14:37
this is particularly relevant, for example, within
14:39
the NHS, which is the UK's National
14:42
Health Service, where we deliver through our
14:44
partners, Civico, into lots of trusts to
14:46
help them understand the patient experience and
14:48
the practitioner experience. If you can understand
14:50
the topics they're talking about, but also
14:52
the emotions around those topics, and for
14:54
example, frustration or delight, then you can
14:57
very quickly extract very reliable insights, which
14:59
can inform your plan. And at any
15:01
point, if anyone questions things, you can
15:03
open the box and go, well, here
15:05
you go. This is why the systems
15:07
highlighted this. And what do you think
15:09
about that? Do you not like that? Well, then we can
15:11
modify it. But we've been doing it for so long that
15:14
it's actually a very attuned model. But again,
15:16
it's opening that transparent box. So you know
15:18
the workings of how you came to that
15:20
analysis. So Tomoko, how important is
15:22
it to a company's reputation that they
15:24
follow ethical AI practices? I mean, would
15:26
that give them a competitive advantage? We
15:29
don't have specific data to say
15:32
that it gives competitive advantage. But
15:34
there are some surveys, especially related
15:36
to digital trust, where consumers are
15:38
willing to evaluate companies who
15:40
commit to these types of
15:43
principles as being higher. Stakeholders
15:45
or shareholders are willing to invest in more
15:47
of these responsible and ethical companies and
15:50
consumers are willing to buy their products.
15:52
So there is some disparate data that
15:54
shows that there is value to doing
15:57
this. And on the other side is the risk, as
15:59
we talked about. We talk about greenwashing when we
16:01
talk about environmental issues. Are there aspects
16:03
of this that could spill over
16:05
into tech and AI? Of course
16:07
there's already a term that's called
16:09
machine washing. So it's already here.
16:11
I think what companies really need
16:13
to be diligent about is
16:16
that when they make these public
16:18
commitments that they go ahead and
16:20
execute and operationalize. Organizational change, embedding
16:22
new practices is a very, very
16:25
difficult endeavor. So one of the
16:27
things I would just recommend is
16:29
that if you have a
16:31
grand goal in terms
16:33
of committing to AI ethics or
16:36
digital responsibility, divide it into sub-goals
16:38
that are tangible and can
16:40
easily be executed by the teams or
16:42
by the functions or by divisions. And
16:45
then the accumulation of these sub-goals will
16:47
turn into results and results that people
16:50
can see. And I think that's the
16:52
most important thing that organizations should be
16:54
doing right now. And Andy,
16:56
you supply these different kinds of AI services
16:58
to clients. Are you seeing this sort of
17:01
consideration around safety and governance in the clients
17:03
you work with? I think certainly,
17:05
but specifically our experience is in health
17:07
care and mental health, where hallucination,
17:09
as errors in large language models are
17:12
jocularly referred to, is very critical. That's
17:14
life-threatening. So you can't have those. So
17:16
at any time when you're dealing with
17:18
anyone's PII in any space, it should
17:21
obviously be sacrosanct and hugely respected, but
17:23
nowhere more so than in health care.
17:25
It's nothing new to us, but I
17:27
think as people start leaning more on
17:30
the generative AI front, then not being
17:32
100% confident what the response might
17:36
be in a given situation is
17:38
leaving the door open for
17:40
accidents and challenges and
17:43
ultimately a drop in share price.
17:45
And do you think the management boards
17:47
and the C-suite should appoint someone or
17:49
is it about educating everyone about how
17:51
to use these ethical frameworks? It's
17:53
a really good question. I think that's the nub
17:56
of it all. I think all of the above, as
17:58
you mentioned, Susie, because let's not forget, AI is
18:01
not just at work, it's flopped over
18:03
into our entire existence. So it's almost
18:05
you should make sure that you know
18:07
what's going on as an individual and then everyone's going
18:09
to care to a certain extent within
18:11
the corporate structure but it should
18:13
be a multi-tier approach and understanding.
18:15
A whole tier from the individual to
18:17
the business. Absolutely. And Tomoko, you're part of the
18:19
IMD Business School and it's turning out those top
18:22
executives of the future they're going to be in
18:24
the C-suite in years to come. So what
18:26
are you telling the people who pass through the
18:28
school about these types of issues and what's the
18:30
next things that they should be thinking about in
18:32
their careers? We are trying
18:35
to tell people that everybody has
18:37
a responsibility. They have responsibility to
18:39
these issues that go beyond the
18:41
company themselves and that
18:43
this type of responsibility is something that one
18:45
needs to be aware of but also the
18:48
fact that they have a responsibility to be
18:50
able to coach and make the
18:52
other people within their team aware of
18:55
it as well. So I think that's the first
18:57
thing. The second thing is
18:59
what type of organizations do we want to
19:01
work with and do these executives want to
19:03
build? Is it an organization that
19:05
can handle these multiple goals that we have
19:07
to handle which is not only profit
19:10
but also some of these social,
19:12
environmental and ethical issues? And
19:14
I think the third one very much is as we
19:16
try and coach people to be
19:18
able to handle multiple goals within
19:21
their workplace. How can you do
19:23
that in a way that is effective for the individual
19:25
and that adheres to their values and at
19:27
the same time is also good for the planet?
19:29
So I think those are the three areas
19:32
that we'd like executives to walk away with.
19:34
Thinking about the future also for you Andy,
19:36
what do you think is coming down the
19:38
line to suppliers like yourself and what kind
19:40
of legislation or requirements are going to be
19:42
placed on you? I think it's fair to say
19:44
to a certain extent in the world of AI as
19:47
of today if anyone tells you that they know what's
19:49
going to happen in the next six months or beyond
19:51
I'd highly question that. I think there's a lot of
19:53
fear, there's a lot of lobbying going on in different
19:55
areas and so I'd go
19:57
slow, because if your AI-driven capability
20:00
falls down, whether for a failure in delivering
20:02
capability or for being called out as
20:04
non-compliant ethically or otherwise, it's going to
20:06
be very damaging to you. There is
20:08
absolutely a need for regulation. I find
20:10
it personally quite interesting to see
20:12
how you regulate something that is not particularly
20:14
definable and morphs as Tomoko has said very
20:17
quickly. We have to try and do our
20:19
best to protect those who need protecting so
20:21
I think you're going to have to be
20:23
very agile and you're going to have to
20:25
understand the risks almost as they
20:27
occur. Absolutely and we like to
20:29
end our interviews by offering a final
20:31
nugget for that business executive audience who
20:34
perhaps haven't done anything yet around AI
20:36
ethics they've maybe messed around with generative
20:38
AI or they bought some tools but
20:40
they don't have that board of ethics
20:42
or that charter yet. What
20:44
would you say is the first thing that
20:46
they should be thinking about doing today? Let's
20:48
turn to you first Tomoko. Well
20:50
as a mindset please keep
20:53
in mind that the analog world and the digital
20:55
world are the same, so whatever
20:57
values you have in the analog world
20:59
should be extended to the digital world
21:01
it's not separate. Absolutely and
21:04
Andy what would you say is your
21:06
golden nugget of insight? Well I'm
21:08
going to contradict myself as always having
21:10
just said that you should get your
21:12
entire team conversant with AI. Definitely do
21:14
that but also have a well-informed friend
21:16
to call, someone who lives and breathes
21:18
this stuff so when there are changes
21:20
they understand it first and they understand
21:22
its implications and have them on speed
21:24
dial. A
21:30
big thank you to Tomoko Yokoi and
21:32
Andy Crouch for sharing their insights on
21:34
AI and its ethical challenges. If
21:38
you're enjoying these kinds of insights
21:40
you really need to be reading
21:42
Secure Futures, Kaspersky's digital magazine about
21:44
all kinds of tech trends for the
21:46
discerning business executives. We've
21:48
got an opinion piece about regulating
21:50
ChatGPT, one about emotionally intelligent
21:53
AI for recruitment and articles from
21:55
this season's insight story including making
21:57
the most of generative AI, and
22:00
an intro to digital sovereignty. You
22:02
can find the link to Secure Futures in the
22:05
Insight Stories show notes. Everyone
22:10
agrees that AI and machine learning
22:12
offer incredibly exciting opportunities for businesses.
22:15
But as we've been saying, they have to
22:17
be used ethically and safely. To
22:19
give us some insight into how to make the
22:21
most of this tech while remaining secure, I'm
22:24
joined by Dr. Amin Hazbini, Head
22:26
of Research Centre, Middle East, Turkey
22:29
and Africa, at Kaspersky's Global Research
22:31
and Analysis Team, known as GReAT.
22:34
So, Amin, what are the key things that businesses should
22:36
be thinking about if they're going into this area? When
22:38
we talk about AI, a lot of us
22:40
do not realise, I think, that AI
22:43
is not going to be ethical by
22:45
itself. It's not
22:47
going to self-define its ethical
22:49
standards. We also need
22:51
to agree that the AIs that
22:54
are being used everywhere around
22:56
the world today, they are
22:58
programmed with ethical standards. And these
23:00
ethical standards, they
23:02
do not represent everyone. They usually
23:04
represent a biased definition of
23:07
ethics that is
23:09
specific to a number of people,
23:11
maybe programmers, developers, maybe an organisation,
23:13
maybe the country within which
23:16
the organisation operates. I
23:18
think what is also important to say
23:20
here is that there is
23:22
almost no way the public
23:25
is able to check, evaluate,
23:28
critique or enhance
23:31
these ethical capabilities within
23:33
this AI engine, within the
23:36
AI algorithms that
23:38
we are all using nowadays, which
23:40
is a major challenge, of course. So,
23:43
do you think there's a role there for
23:45
regulation of these
23:47
technologies, because the business itself can't
23:50
really fully evaluate the language models
23:52
and the inputs? Yes, indeed.
23:54
We need security and safety by design
23:57
and then continuous verification of
23:59
this security and safety by design.
24:02
Such would require transparency measures,
24:05
especially by big tech vendors
24:07
and allowing the public to
24:10
influence such technologies and their
24:12
development. In reality, we're
24:14
asking the public to adopt technologies that can do
24:16
a lot of damage without
24:18
giving that same public the
24:21
capabilities to make sure that
24:23
damage will not happen. We
24:26
did the same before with social
24:28
media and social media technologies and
24:30
we've seen, sadly,
24:32
a lot of incidents such as
24:34
data leaks, abuse of data, scandals
24:37
of fake news, etc. Do
24:39
you think we're in danger of
24:41
repeating those same patterns unless regulation
24:43
happens, maybe more of a regional
24:46
level, for example, in the European
24:48
Union? On the level of the European Union, I
24:50
think regulations are moving
24:52
very fast into the areas
24:55
of artificial intelligence. Still, I
24:58
think AI could
25:00
cause a lot of damage. It's
25:02
much more dangerous than social media
25:05
and we definitely need these rules and
25:07
laws in place as quickly as possible.
25:10
What about the data sets themselves? If
25:12
a company is buying a generative AI
25:14
tool, what sort of questions should they
25:17
be asking to make sure that the
25:19
data they put in and also the
25:21
training data that the model is based
25:23
on is actually secure? I
25:25
think an organization will figure out
25:27
by itself what kind of data
25:29
it is allowed
25:31
to put on AI if
25:33
it has the right asset
25:36
management controls in place. Because
25:38
the asset management controls, if they are
25:40
well deployed, would allow them to classify
25:43
the data and they
25:45
would allow them to identify which data can
25:47
be available to AI, which data can
25:49
be available to the public, which data
25:51
needs to stay inside
25:54
the organization, confidential, private
25:57
or secret.
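A minimal sketch of the kind of gate Amin describes, assuming the classification labels he lists; the policy itself (only public data may go to an external AI tool) is an invented example, not Kaspersky guidance:

```python
# Minimal sketch (illustration only): asset-management labels decide
# which data may be sent to an external AI tool.
from enum import Enum

class Classification(Enum):
    PUBLIC = 1        # may be shared anywhere, including external AI tools
    INTERNAL = 2      # stays inside the organization
    CONFIDENTIAL = 3  # restricted circulation
    SECRET = 4        # highest restriction

# Hypothetical policy: only PUBLIC data may leave for an external AI tool.
ALLOWED_FOR_EXTERNAL_AI = {Classification.PUBLIC}

def may_send_to_ai(label: Classification) -> bool:
    return label in ALLOWED_FOR_EXTERNAL_AI

for label in Classification:
    print(f"{label.name:12} -> external AI allowed: {may_send_to_ai(label)}")
```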
26:00
Thank you very much to Amin. Now
26:04
one person who has a lot of thoughts
26:06
about how AI can be better regulated is
26:08
Eugene Kaspersky, a world leading cyber
26:10
security expert who founded Kaspersky in 1997.
26:14
Today he's the CEO at the helm, protecting
26:16
240,000 businesses and 400 million people from cyber
26:18
threats. He's
26:22
a busy man. He recently
26:24
wrote for Kaspersky Daily magazine about how
26:27
he thinks regulators and the industry should
26:29
be thinking about AI to improve trust. Do
26:32
you think he's in favour of heavy regulations,
26:34
light touch or no regulation? Only
26:36
one way to find out. Check the link in the
26:38
show notes to read it. That's
26:44
it for this edition and in fact
26:46
this series of Insight Story Tech Trends
26:48
Unpacked brought to you by Kaspersky. If
26:51
you've missed any episodes, search for us wherever you
26:54
get your podcasts and you'll find all our other
26:56
shows, jam-packed with all kinds
26:58
of insights for business leaders. We've
27:00
found global experts on topics from
27:02
blockchain to quantum computing and smart
27:04
energy to industrial IoT. Go
27:07
and check it out and if you like it, please
27:09
leave us a rating and give us an excellent
27:11
review and tell us what technologies
27:13
you'd like to hear about in the next season.
27:16
If you want to get ahead, you really can't afford
27:18
to miss it. Till the next
27:20
time, goodbye. Goodbye. Ah, so Dolly, thank
27:22
you for all the info you shared with us
27:24
this season. It was a pleasure. I'm sure I don't
27:27
even have to ask you this. It was
27:29
absolutely sourced, right? Do you know where it came from?
27:32
Have you ever been sick of speaking? Just
27:42
before I go, I wanted to tell you
27:44
about two other great series from Kaspersky that you
27:46
might like. Fast Forward
27:48
by Tomorrow Unlocked explores the past, present
27:50
and future of the technologies around us.
27:53
Season 2 features six fresh new
27:55
episodes including augmented body technology, how
27:58
tech is changing family life, and
28:00
women in gaming. Plus, if you want
28:02
to hear the latest news and views
28:04
from the world of cybersecurity, join Jeff
28:06
Esposito in the US and David Buxton
28:09
in the UK for Kaspersky Transatlantic. They
28:11
chat security around current tech news and have
28:13
the lowdown on the latest data breaches. You'll
28:16
find links to both these series in our show notes,
28:18
but you'll also find them wherever you get your podcasts.
28:21
So track them down and click follow so you don't
28:23
miss an episode.