Episode Transcript
0:00
Welcome everyone to another episode of
0:02
Dynamics Corner. Is AI
0:05
a necessity for the survival of humanity
0:07
? That's my question . I'm your co-host
0:09
, Chris, and this is Brad.
0:11
This episode was recorded on December 18th, 2024
0:15
. Chris, Chris, Chris. Is
0:17
AI required
0:20
for the survival of humanity ? Is
0:23
humanity creating the requirement for
0:25
AI for survival ? That's
0:27
a good question . When it comes to AI , I have
0:30
so many different questions and there's so many points
0:32
that I want to discuss about it With
0:34
us. Today we had the opportunity to speak with Soren Friis Alexandersen
0:36
and Christian Lenz about some of those
0:39
topics . Good
0:58
morning, good afternoon. How are
0:59
you doing there? There we
1:02
go. Good day,
1:04
good afternoon over the pond
1:06
.
1:06
How are you doing ? Good
1:08
morning. Well, good, good, good.
1:10
I'll tell you, Soren, I love
1:13
the video . What did you do
1:15
? You have the nice , the nice blurred
1:17
background , the soft lighting yeah
1:21
, it's uh .
1:23
You can see great things with a great camera
1:25
.
1:27
It looks nice , it looks really nice , christian
1:30
. How are you doing ?
1:31
Fine , thank you very much .
1:35
Your background's good too , I like it
1:37
, it's real .
1:38
Back to the future .
1:41
It is good , it is good , but
1:43
thank you both for joining us this afternoon , this
1:45
morning, this evening, whatever it may be.
1:47
Been looking forward to this conversation. I was talking
1:49
with Chris prior to this. This is probably
1:52
the most prepared I've ever been for a discussion
1:54
. How well prepared I am, we'll see
1:56
. Uh , because I have a lot
1:58
of things that I
2:00
would like to bring up based on some individual
2:03
conversations we had via either
2:05
voice or via text . And before
2:09
we jump into that and
2:11
have that famous topic , can
2:14
we tell everybody a little bit about yourself, Soren?
2:18
Yes , so my name is Soren
2:20
Alexandersen . I'm a product
2:23
manager in the Business Central engineering
2:25
team working on finance
2:28
features basically rethinking finance
2:30
with Copilot and AI.
2:33
Excellent , excellent Christian .
2:37
Yeah , I'm Christian . I'm a
2:39
development facilitator at CDM
2:41
. We're a Microsoft Business Central partner,
2:49
I'm responsible for the education of my colleagues in all the new topics , all
2:51
the new stuff . I've been a
2:53
developer in the past and a project manager
2:55
and now I'm taking care of taking
2:57
in all the information so that
2:59
it leads to good solutions
3:02
for our customers .
3:04
Excellent excellent and thank you both for joining
3:06
us again . You're both veterans and I appreciate
3:08
you both taking the time to speak with
3:10
us , as well as your support for the podcast
3:13
over the years as well . And
3:15
just to get into this
3:17
, I know , soren , you work with
3:19
AI and work with
3:21
the agent portion I'm simplifying
3:24
some of the terms within
3:26
Business Central for the product group and
3:29
you know , in our conversations you've
3:31
turned me on to many things . One thing you've turned me on
3:33
to was a podcast called
3:35
the Only Constant , which I was pleased
3:37
I think it was maybe at this point
3:40
a week or so ago , maybe a little bit longer
3:42
to see that there was an episode where
3:44
you were a guest on that podcast talking
3:47
about AI , and
3:49
you know, Business Central, ERP
3:52
in particular . I mean , I think you referenced
3:54
Business Central , but I think the conversation that you had was
3:56
more around ERP software and
3:59
that got me thinking
4:01
a lot about AI , and
4:04
I know , christian , you have
4:06
a lot of comments on AI as well too , but
4:09
the way you ended that
4:12
with, you know, "nobody wants to do the dishes" is
4:14
wonderful , which got my mind thinking about
4:17
AI in
4:19
detail and what AI
4:21
is doing and how AI is shaping . You
4:24
know business , how AI is shaping how
4:26
we interact socially , how
4:28
AI is shaping the world , so
4:30
I was hoping we could talk a little bit about
4:33
AI with everyone today . So
4:37
with that , what are your thoughts
4:39
on AI? And also, maybe, Christian
4:42
, what do you think of when you hear of AI or
4:44
artificial intelligence ?
4:46
I would say it's mostly
4:49
a tool for me. Getting
4:53
a little bit deeper into what
4:55
it is . I'm not an AI expert
4:57
, but I'm talking to
5:00
people who try
5:03
to elaborate on how to use AI
5:05
for the good of people . For
5:08
example , I had a conversation with
5:11
one of those experts from Germany just
5:13
a few weeks before Directions, and
5:16
he told me how to make
5:19
use of custom GPTs and
5:22
I got the concept and tried
5:24
it a little bit.
5:28
And when I got to Directions
5:30
EMEA in Vienna in the beginning of November , the agents topic
5:32
was everywhere , so it was
5:34
Copilot and agents, and it
5:36
prepared me a lot for how this
5:38
concept is evolving and how
5:40
fast this is evolving . So I'm
5:43
not able to catch up on everything, but
5:46
I have good connections to people
5:48
who are experts in
5:50
this and focus on this , and the conversations
5:52
with those people , not only
5:54
on the technical side but also on how
5:58
to make use of it and what to keep in mind when using
6:00
AI , are very
6:02
crucial for me to make
6:05
my own assumptions and decide
6:08
on the direction where
6:10
we should go as users
6:12
, as partners for our customers
6:14
, and to consult our
6:17
customers and
6:19
on the other side . With the evolving
6:22
possibilities and capabilities of
6:24
AI , generating whole
6:28
new interactions with
6:30
people , it gets
6:33
much harder to
6:35
keep this barrier in mind: this is a machine
6:37
doing something that I receive
6:40
and this is not a human
6:42
being or a living being
6:45
that is interacting with me . It's
6:48
really hard to have
6:51
a bird's eye view of what is really happening
6:54
here, because the interaction
6:56
we have with AI is so
6:59
human-like
7:01
that it is hard to
7:03
not react as a human
7:05
to this human interaction and
7:08
then have an outside
7:11
view of it . How can I use it and where is
7:13
it good or bad , or something like that , that moral
7:15
conversation we're trying
7:17
to have . But
7:20
having conversations about
7:22
it and thinking about it helps
7:25
a lot , I think .
7:27
Yeah, it does. Soren, you
7:30
have quite a bit of insight
7:32
into the agents and working with AI
7:34
. What are your comments
7:36
on AI ?
7:38
I think I'll start from the same perspective as Christian:
7:42
that
7:45
for me, AI is also
7:47
a tool in the sense that when
7:51
looking at this from a business perspective , you have
7:53
your business desires , your business goal , your business
7:55
strategy and
7:58
whatever lever you can
8:00
pull to get you closer to that business
8:02
goal you have,
8:02
AI might be a tool you can utilize for
8:07
that . It's
8:09
not a hammer to hit
8:12
all of the nails . I mean it's not the tool
8:14
to fix them all . In
8:16
some cases it's not at all the right tool . In
8:19
many cases it can be a fantastic tool
8:21
. So that depends a lot on the scenario.
8:27
It depends a lot on the goal
8:29
. I will say that I'm fortunate in the
8:31
way that I don't need to know the intricate
8:33
details of every new
8:35
GPT model that comes out and
8:37
stuff like that . So that's
8:39
too far for me to go
8:41
and I could do nothing
8:44
else. And to your point, Christian, you
8:46
said you're not an AI expert.
8:48
But I mean, by
8:50
modern standards and the
8:53
AI that we typically talk about these days, well, LLMs,
8:55
it's only been out
8:57
there for such a short while. Who can actually
9:00
be an AI expert yet? Right,
9:02
, I mean , it's been out there for
9:04
a couple of years . In this modern incarnation
9:06
, no one is an
9:08
expert at this point . I mean , you have people
9:11
who know more than me and
9:13
us, maybe even in this audience
9:15
here , but we
9:18
all try to just learn every day . I
9:20
think that's how I would describe it . There's
9:27
some interesting things . I mean from
9:29
my perspective as a product manager
9:31
. What
9:34
I'm placed in this world to do is to
9:36
basically
9:38
rank customer opportunities and
9:40
problems . That's my
9:42
primary job . Whether
9:45
or not AI can help solve
9:47
some of those opportunities or problems,
9:49
great . So
9:52
that's what I'm about to
9:54
do , like reassess all
9:57
those things that I know about our customers , our
9:59
joint customers and partners , and how
10:01
can AI help those ?
10:05
Yeah , just when
10:08
you started speaking about the
10:10
dishwasher , it made me chuckle
10:12
and say how can you relate that to
10:14
why AI
10:16
was invented ? And I
10:19
had to look it up . I looked up , you
10:21
know why was the dishwasher invented
10:23
? So I thought it was pretty interesting
10:26
to share with the listeners. One
10:29
was Josephine
10:32
Cochran , who invented the dishwasher
10:34
, and her
10:36
reasoning was to protect her china dishes
10:40
and she didn't want to
10:43
hand wash and then free
10:45
up time . And how
10:47
relatable is that with AI
10:49
? Is that we want
10:52
to free up
10:54
our time to do other things and
10:57
use AI to . In
10:59
this case , she
11:01
wanted to avoid
11:04
hand washing and create a machine that could wash
11:06
dishes faster and more carefully
11:08
than she could . So , in a sense
11:11
, when
11:13
AI was invented, you
11:15
kind of want to have a
11:17
tool in this case an AI tool
11:19
to do other things for you , maybe
11:22
better than you can and
11:25
maybe more carefully in
11:27
feeding you information . I don't know , but
11:29
I thought that was pretty interesting .
11:31
The relatable component
11:34
there and that
11:36
makes total sense to me . That makes
11:38
sense in the sense that AI
11:41
is very good at paying attention to detail
11:43
that a human might overlook if
11:45
we're tired or
11:47
it's end of the day or early morning
11:49
. Even so
11:52
, there are so many relatable
11:54
things in what you just said that apply to AI
11:56
, or even just technology , I
11:58
mean , and automation . It's not just AI , because
12:01
IT is about
12:03
automating stuff. AI just
12:05
brings another level of automation .
12:08
You could say it
12:12
is a beneficial tool. But, Chris,
12:14
, to go back to your point with the
12:16
invention of the dishwasher and maybe even the invention
12:18
of AI , I think I
12:20
don't know the history of AI, and I'm not certain.
12:23
If you know, I'm sure you could use AI to
12:25
find the history of AI . But is AI
12:27
one of those tools ? I have so many thoughts
12:29
around AI and it's tough to find
12:31
a way to get in and unpack all of the
12:34
comments that I have on it . But
12:37
a lot of tools get
12:39
created or invented
12:42
without intending the use they
12:44
end up having. You
12:51
know, sometimes you create a tool or you create a process and something comes
12:53
of it and you're trying to solve one problem . Then you realize that you
12:56
can solve many other problems by either
12:58
implementing it slightly differently, you
13:01
know , working on it with another
13:03
invention or a tool that was created
13:05
. So where does it end
13:07
? And with
13:11
AI, I think we're
13:13
just, I don't know if we'll ever
13:15
be able to understand where it will go or where it will end
13:17
. We see how individuals are using it now , such
13:19
as creating pictures . Right , I'm looking at
13:21
some of the common uses of it outside of the analytical
13:24
points of it: people creating pictures. You
13:26
know, a lot of your search engines now will primarily give you
13:28
the AI results of the search engines, which is
13:30
a summary of sources that they cite
13:32
. Uh, AI gets used
13:35
, you know , from that way , from
13:37
like the language model point of view, but then AI
13:39
also gets used from a technical point of view . Um
13:42
, I'm also reading . I started
13:44
reading a few weeks ago a book
13:46
uh, Moral AI and How We Get There,
13:48
which is by Pelican Books, and I think
13:50
it's Borg, Sinnott-Armstrong and
13:53
Conitzer, I'm so bad with names, which
13:55
also opened
13:57
up my eyes to AI
14:00
and how AI impacts
14:02
everybody in
14:05
the world .
14:07
I think it creates different iterations , right with
14:10
AI . You know , clearly
14:12
, you see AI
14:14
almost practically anywhere.
14:17
You had mentioned, you know, creating
14:19
images for you, and it
14:23
started with that and then followed with creating
14:25
videos for you now and
14:28
and so much more , and then you
14:30
know, uh, Soren, you know, I was trying
14:32
to. I mean, I was listening to your episode, um,
14:35
you know, where does AI come into play in ERP
14:37
and where does it go from there
14:39
? Right , I'm sure a lot of people
14:41
are going to create different iterations
14:43
of AI and Copilot
14:46
and Business Central , and
14:48
that is what I'm excited about
14:50
. We're kind of scratching the surface
14:52
in the ERP and
14:55
what else can it do for you in the
14:57
business sense ? Of course , there's different
14:59
AIs with M365
15:01
and all the other Microsoft ecosystem
15:03
product lines . What's
15:07
next for businesses
15:09
, especially in the SMB space ? I
15:11
think it's going to create a level
15:14
playing field for SMBs
15:16
to be able to compete better
15:19
and where they can focus more on strategy
15:21
and
15:24
be more tactical in the way they do business . So
15:26
that's what I'm excited about, and I think a
15:28
lot of us here in this call we're
15:32
the, I guess, curators,
15:35
and that's where we become
15:37
more of business consultants in a
15:39
sense of how you would run your business
15:41
utilizing all these Microsoft tools
15:43
and AI .
15:46
I think yeah .
15:46
I think, go ahead,
15:48
Christian .
15:49
Okay , I think that we
15:52
see some processes
15:54
done by AI or agents
15:56
which we never
15:59
thought would be possible without
16:01
the human doing them. What
16:03
was presented is really mind-blowing:
16:06
what level of steps
16:08
and pre-decisions
16:11
AI can make to offer a
16:13
better
16:18
result in the process until
16:20
a human needs to interact with
16:22
that . And I think that
16:24
will go further and further and further . What
16:28
I'm thinking is where is
16:31
the point where the human says okay
16:33
, there is a new point where
16:35
I have the
16:38
feeling that now I have
16:40
to step into this
16:42
process because the AI is not
16:44
good enough, and that point,
16:47
or this
16:49
frontier, is
16:51
pushed on and on
16:53
and on , something like that . But
16:57
to have this feeling , to have
16:59
in mind this
17:01
is the thing AI
17:04
cannot do . I
17:06
have to be conscious and
17:08
cautious and
17:11
I think, on the one hand,
17:13
, with AI we can
17:15
make more processes
17:17
, we can make more
17:20
decisions easily , and on
17:22
the other side , the temptation
17:25
is high that we just accept
17:27
what the AI is
17:29
prompting to us or offering us
17:31
. I like the concept of
17:33
the human in
17:35
the loop . So at
17:38
least the human at some
17:40
point in this process has to say , yes
17:43
, I accept what the AI is suggesting
17:45
, but
17:47
having more time to
17:50
process more communication
17:52
is also critical
17:54
. Just to click yes, okay, okay
17:57
, okay . I
17:59
think we should implement
18:03
processes where we just
18:05
say , okay , let's look
18:08
at how we use AI here
18:10
and take a step back
18:12
and say, wow, what a number
18:16
of steps AI can make for us. But
18:19
just think about where it
18:21
goes too far.
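A minimal sketch, in Python, of the "human in the loop" gate Christian describes: an automated flow that pauses for explicit human acceptance only at designated points, instead of auto-clicking "yes, okay, okay". The step names and flow are hypothetical illustrations, not any actual Copilot or Business Central behavior.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Step:
    name: str
    run: Callable[[dict], dict]
    needs_human: bool = False  # True only where a person must say "yes, I accept"

def execute(process: list[Step], state: dict) -> dict:
    for step in process:
        if step.needs_human:
            # The human in the loop: the process stops until someone
            # consciously accepts what the AI is suggesting.
            answer = input(f"Approve step '{step.name}'? [y/n] ")
            if answer.strip().lower() != "y":
                print(f"Stopped at '{step.name}' by the user.")
                return state
        state = step.run(state)
    return state

# Hypothetical inquiry-to-order flow for illustration.
process = [
    Step("parse inquiry",      lambda s: {**s, "items": ["1896-S"]}),
    Step("check availability", lambda s: {**s, "available": True}),
    Step("draft sales order",  lambda s: {**s, "draft": True}),
    Step("post sales order",   lambda s: {**s, "posted": True}, needs_human=True),
]
result = execute(process, {"customer": "Adatum"})
```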
18:25
I think that's an interesting line of thinking
18:27
, Christian, and I think so. Before
18:30
we go deeper , let me maybe just say
18:32
that some of
18:34
the stuff that we talk about in this episode like
18:37
, if nothing else is mentioned
18:39
, these are my personal opinions
18:41
and may not reflect the opinions
18:43
of Microsoft, unless we sort of get into product-specific
18:45
stuff. But I would
18:47
like to take sort of a product's eye view on what
18:49
you just said , which
18:52
is when we look at agents these days and
18:54
what an agent can do and what should
18:56
be the scope of a given agent and
18:58
what should be its name , and so now
19:01
we've released some information about
19:03
the sales order agent
19:05
and described how
19:07
it works, and actually
19:09
been fairly transparent about what it intends
19:11
to do and how it works , which I think is great
19:14
. We
19:17
actually start by drawing up
19:21
the process today , before
19:23
the agent . How would this process look
19:26
? Where are the human interactions
19:28
between which parties ? Now
19:31
bring in the agent. Now,
19:34
, how does that human in the loop let's
19:37
say flow look like ? Are
19:39
there places where the human actually
19:41
doesn't need to be in the loop ? That's
19:44
the idea: don't bring in the human
19:46
unless it's really
19:48
necessary or adds value . So that's the line , that's
19:51
the way that we think about it , to try
19:53
to really apply it. You know, if
19:55
that A-to-Z process
19:58
can remove the human, like,
20:03
can automate a piece. We've always been trying to automate
20:06
stuff, right, for many years. If
20:08
AI can do that better now , well
20:10
, let's do that . But of course
20:12
, whenever there's a risk situation
20:14
or wherever there's a situation where
20:16
the human can add value to a decision
20:18
, by all means let's bring in the human into
20:20
the loop . So that's the
20:23
way that we think about the agents and
20:26
the tasks that they should perform in whatever business
20:28
process . And to
20:30
your point, Chris, I think
20:32
that the
20:35
cool thing about AI in
20:37
ERP , as in Business Central
20:40
these days , is that it
20:42
becomes super concrete . Like
20:45
we take AI from something that is very sort
20:47
of fluffy and marketing
20:50
and buzzwords that we all see online
20:52
and we make it into something that's very
20:54
concrete . So
20:57
the philosophy is that in BC unless
20:59
, of course , you're an ISV that needs to build something
21:01
on top of it, or a partner or customer wants
21:04
to add more features, AI
21:06
should be ready to use out of the box . You
21:09
don't have to create a new AI
21:11
project for your business , for your enterprise
21:14
to start leveraging AI. No, you
21:16
just use AI features that are already there , immersed
21:19
into the UI and among
21:21
all
21:23
other feature functions in Business Central
21:25
. So , because
21:27
small and medium businesses, many of them don't even have the budget to
21:29
do their new AI project and hire
21:32
data scientists and what have you and
21:34
all these things create their own models . No
21:37
, they should have AI ready to use . So
21:39
that's another piece of our philosophy .
21:44
I look at that more as AI as a function
21:46
, because if you have AI
21:49
as a function , you can get the
21:51
efficiencies . I think , to some
21:53
of the comments from the conversations that
21:55
we've had and the conversations that I've heard , you look
21:57
for efficiencies so that you can do
21:59
something else . People want
22:01
to use the word something else or something
22:04
that they feel is more productive and let
22:06
automation or AI or
22:09
robots, I use
22:11
the word, quote, do the tasks
22:13
that are mundane or some
22:15
would consider boring or repetitive
22:17
. And
22:20
we do use AI on
22:22
a daily basis and a lot of the tools that we
22:25
have . To your point , Soren , that it's just embedded
22:27
within the application. If you
22:29
buy a vehicle , a newer vehicle now , they
22:31
have lane avoidance , collision avoidance , all
22:33
of these AI tools that you
22:35
just get in your vehicle . You either turn it on
22:38
or turn it off , depending upon how
22:40
you'd like to drive , and
22:42
it works and it helps the , the function
22:44
, uh , be there for you . But
22:47
to kind of take a step back from
22:50
um, AI,
22:52
in that respect. But
22:54
a couple things that come up with AI.
22:56
We talk about the vehicle. Um,
22:58
I'll admit I have a Tesla. I
23:01
love the FSD and I used
23:03
it a lot and it just seems to
23:05
improve and improve and improve to
23:07
the point where I think sometimes
23:10
it can see things I
23:12
use the word see or detect things faster
23:15
than I can as a human, right?
23:17
Now, AI may
23:19
not be perfect and AI makes mistakes
23:22
. Humans make mistakes . Humans get into car crashes
23:24
and have accidents right
23:26
for some reason , and we
23:28
have accepted that . But if AI
23:30
has an accident
23:33
, we find fault or find
23:35
blame in that process , instead
23:37
of understanding that . You
23:39
know , in essence , nothing is perfect , because
23:41
humans make mistakes too and we accept
23:44
it . Why don't we accept it when AI
23:46
may be a little off
23:48
?
23:51
That's such a great question and
23:54
the fact is, I think, right now,
23:56
to a point, that we
23:59
don't accept it , like we don't give machines that
24:01
same benefit
24:03
of the doubt , or like
24:05
if they don't work it's
24:07
crap and we throw them out , like I mean that's like
24:10
, but humans like we , we're
24:12
much more forgiving , like we give them a second chance
24:15
. And oh , maybe I didn't teach you uh
24:18
well enough how to do it , or so
24:20
, but that's a good point and I , I , I
24:23
love your example with the Tesla . So I also
24:25
drive a Tesla , but I'm
24:27
not in the US , so I can't use the full self-driving
24:29
capability , so I use
24:32
the what do you call it ? The semi-autonomous , so it can keep
24:34
me within the lane . It reacts
24:36
in an instant if something drives
24:38
out in front of me much faster
24:41
than I can do . So I love that mix
24:43
of me being in control
24:45
but just being assisted by these
24:47
great features . That uh makes
24:50
me drive in a much safer way . Basically
24:52
, uh , I'm not sure I'm a proponent
24:55
of sort of full self-driving . I don't know
24:57
, I'm still torn about that , but
24:59
uh , that could lead us into a good discussion
25:01
as well , um I
25:03
think you have that trust because
25:05
I'm the same way, Brad. You know, I love, I
25:08
love it , um , as
25:10
as I , you know , continue to use
25:13
it . But in the very beginning I could not trust
25:15
that thing. I had my hands on the
25:17
steering wheel . Um , you know , a
25:19
white knuckles on
25:21
the steering wheel. But, uh, eventually
25:24
I came to accept
25:26
it and I was like, oh, it does a pretty good job
25:28
, uh , getting me around . Uh
25:30
, am I still cautious ? Absolutely , I
25:33
still want to make sure that I can quickly control
25:35
something if I don't believe it's doing the
25:37
right thing .
25:38
So I , I think , um , actually
25:41
my reason for not being
25:43
a sort of full believer in sort of full
25:45
self-driving , like complete autonomy with
25:48
cars is not
25:50
so much because I don't I mean , I actually
25:52
do trust the technology to a large extent
25:54
. It's more because of many of the reasons
25:57
that are now in that book that I pitched
25:59
to all of you, that Moral AI book. Like,
26:02
who has , like if something
26:04
goes wrong . And there's this example in the book
26:06
where an Uber car,
26:09
like you would think it was a Volvo,
26:11
they test an Uber car, some self-driving
26:13
capabilities in some state, and
26:15
it accidentally runs over a
26:18
woman who's crossing the street in
26:20
an unexpected place, and it was dark
26:22
and things of that nature , and the driver
26:25
wasn't paying attention , and there was all these
26:27
things about who has the responsibility
26:30
for that at the end of the day. Was it the
26:32
software ? Was it the driver who wasn't
26:34
paying attention ? Was it the , the
26:37
government who allowed that car
26:39
to be on that road in the first place ? But
26:41
while testing it out all of these
26:43
things and if
26:46
we can't figure that out or
26:48
all those things need to be figured out first
26:50
before you allow a technology
26:53
loose like that, right? And so,
26:56
I wonder if we can do that. If
26:58
we can, we,
27:00
like
27:03
we don't have a good track
27:05
record of doing that, uh
27:08
. So I wonder. I'm,
27:10
I'm fairly sure the technology will
27:12
get us there , if we can live with the
27:14
uh , uh
27:17
when it doesn't work well . So what
27:20
happens if a self-driving car kills
27:22
20 people per year , or
27:25
cars, multiple? Um, can
27:28
we live with that? What if 20 people is a lot better
27:30
than 3,000 people from human
27:32
drivers? Like, yeah, that is
27:35
.
27:35
I think in the United States there's 1.3
27:37
. Don't quote me on the statistics. I think
27:39
I heard it again with all these
27:41
conversations about self-driving
27:44
and, you know, the Moral AI
27:46
book and listening to some other tools. I
27:48
think in the United States it's 1.3 million fatalities
27:50
due to automobiles a year . You
27:52
know I forget if it's a specific type , but it's
27:55
a lot . So , to get to
27:57
your point , you know not to focus
27:59
on the you
28:01
know , the driving portion , because a lot of topics
28:04
we want to talk about . Is
28:07
it safer ? In a sense , because you may
28:09
lose 20 individuals
28:11
tragically in an accident per
28:13
year , right , whereas before it was
28:15
a million, because of AI.
28:18
You know, I joke,
28:20
and I've had conversations with Chris talking about the Tesla
28:22
. I trust the FSD a lot driving around
28:24
here in particular , I trust the FSD a
28:26
lot more than I trust other people . And
28:28
to your point of someone losing
28:31
their life tragically , crossing in the
28:33
evening at
28:36
an unusual place and
28:39
having a collision with a vehicle , that
28:41
could happen with a person doing it as well
28:43
, and
28:46
I've driven around and the
28:49
Tesla detected something
28:51
before I saw it . So the reaction time is
28:54
a little bit quicker, because if you're driving, right,
28:57
and this goes to a couple of
28:59
points I want to talk about, which I'll bring up, which is
29:01
, you know , too much trust and de-skilling
29:03
. I want to make sure we get to those points . And
29:06
then also, if we're looking at analytics,
29:08
some, you know, harm bias as well.
30:02
. And then to Christian's point and
30:04
even your point where the humans are involved
30:06
. Are the humans even capable
30:08
with the de-skilling? Because you don't have
30:10
to do those tasks anymore
30:12
to monitor the AI? You know
30:14
, if you look back , I'm going to go on a little tear in
30:16
a moment. In education, when
30:18
I was growing up , we learned a lot of math
30:21
and we did not , you
30:23
know , use calculators . I don't even know when the
30:25
calculator was invented , but we weren't allowed
30:27
to. You know, they taught us how to use a slide rule.
30:29
They taught us how to use, believe it or not, when I was
30:31
really young, an abacus. And
30:34
back then I could do math really, really well. Now
30:37
, with the , you know , ease
30:39
of using calculators , ease of using your phone
30:42
or ease of even using AI
30:44
to do math equations, can
30:47
you even do math as quickly as you used to
30:49
? So how can you monitor a tool that's supposed
30:51
to be calculating math , for example ?
30:54
I think, I
30:56
mean, you have very good points about that, like
30:58
. Just coming back to the car for a second , because
31:01
, uh , I mean , technology
31:03
will speak for itself and what it , what it's capable
31:05
of , I think . I think
31:07
where we have to take some decisions
31:09
that we haven't had to before
31:11
is when we dial up the autonomy to
31:14
100% and the car drives completely
31:16
on its own , because then
31:19
you need to be able to question how
31:22
does it make decisions ? And get
31:24
insights into how does it make decisions based
31:26
on what ? Who determines how
31:29
large an object has to be before the car
31:31
will stop, if it runs over it? So I
31:33
think back in the old days in Denmark , insurance
31:41
companies wouldn't cover if the object you ran over was smaller than a small
31:43
dog , something like that . So
31:45
who set those rules
31:48
? And the same thing for the technology
31:50
too. Should I just run that pheasant
31:52
over, or should I stop for the pheasant?
31:55
Those
31:57
kind of decisions . But if it's a human
31:59
driving in control , we can always just
32:02
point to the human and say , yeah , you need to follow
32:04
the rules , and here they are . But if it's a machine
32:06
, all kinds of things , and
32:09
eventually if the machine fails or we end up in
32:11
some situation where there's a dilemma who's
32:15
responsible , who's accountable and that
32:17
just becomes very hard questions . I
32:19
don't have the answer , but I think
32:22
when we dial up the autonomy to that level
32:24
, we need to be able to have you
32:26
know and we need to talk about what level of transparency
32:28
can I demand
32:30
as a user or as a bystander
32:33
or whatever? So there are just so many
32:35
questions that open up, I think.
32:39
And if you are allowed
32:41
to turn off AI assistance
32:43
, will , at some point in
32:45
time , when a failure is occurring
32:47
, you be held responsible
32:50
for turning that assistance off .
32:53
That's a very good point .
32:55
Someone could say . So
32:57
you
32:59
have to keep in mind that with assistance
33:02
you're better . Like in
33:04
the podcast episode you mentioned
33:06
, a human together
33:08
with a machine is better than the machine. Or,
33:12
put another way, a human with
33:14
a machine is better than another human or
33:17
just a human . And I
33:20
think at some point in time , companies
33:22
who are looking for accountability
33:25
and responsibility will
33:27
increase the level of you
33:30
have to turn
33:32
on AI assistance . You
33:35
could imagine when you get into a car
33:37
that is recognizing you as
33:40
a driver your facial
33:42
expression or something like that that
33:44
it can recognize if you're
33:46
able to drive or not , and
33:48
then the question is will it allow
33:50
you to drive or
33:52
will it decide no , don't
33:55
touch the wheel , I will drive
33:57
, or something like that . Or if something
34:01
pops up you're not able to
34:03
drive , I decide that for you and
34:05
I won't start the engine . Will
34:08
you override it or not? Those
34:10
are the scenarios that pop
34:12
up in my mind. And how will
34:14
you decide as a human when
34:17
you have something , uh , emergent
34:21
happening ? You have to drive someone
34:23
to the , to the hospital or something like that ? You
34:26
will override , but will the
34:28
system ask is it really
34:30
an emergency ? Or something like that ? You say
34:32
I just want to do this
34:35
. How are you
34:37
reacting in this moment ?
34:40
I think that's super interesting
34:42
. And coming back to the transparency
34:44
thing , one of my favorite examples
34:46
is if
34:51
I go to the bank and I need
34:53
to borrow some money , for
34:55
many years , and even before AI , there's
34:57
been some algorithm that
34:59
the bank person doesn't
35:02
even know how it works, probably
35:04
, but can just see a red or green light
35:06
after I ask so
35:09
, okay , how much money do you want to borrow ? Oh , I want to borrow
35:11
100K . No , you can't
35:13
do that , sorry . Uh , machine says
35:15
no, right? And,
35:17
uh, even before AI,
35:19
if something is complex enough, uh,
35:22
it doesn't really matter if it's AI or not
35:24
. But in these sort of life-impacting
35:27
situations , do I
35:29
have a right for
35:32
transparency ? Do I have a right to know
35:34
why they say no to lend
35:36
me money , for example ? The
35:39
same if I get rejected for a job interview
35:41
based on some decision made by an
35:43
algorithm or AI . These
35:45
are very serious situations
35:48
where that will impact my life and
35:52
of course, you can't claim transparency
35:54
everywhere , but I think
35:56
there are some of these situations where , as
35:59
humans , we do have a right for transparency
36:01
and to know how these things work. And
36:03
there is a problem if the person who's conveying
36:07
the information to us, the bank person, doesn't
36:09
even have that insight , doesn't even know how it works
36:11
. They
36:14
just push the button and then the
36:16
light turns red or green . So
36:21
that's yeah , but
36:23
again , so many questions , and
36:25
that's why I'm actually happy that today I
36:27
don't know if you saw it we released
36:29
a documentation article for BC
36:32
about the sales order agent that,
36:34
in a very detailed way, describes what
36:36
this agent does , what it tries to
36:38
do , what kind of data it
36:41
has access to , what kind of permissions
36:43
it has , all these things . I think that's
36:45
a very , very transparent way of describing
36:48
a piece of AI and I'm actually very , very
36:50
proud that we're doing that. Yeah,
36:52
just wanted to make that segue
36:55
.
36:56
Yeah , it's
36:58
filling the need of humans
37:00
to know how
37:02
the system
37:04
works or how the system makes decisions, to
37:06
proceed to the next step. Because
37:11
I think there's a need to have
37:13
a view on whether what
37:16
has happened before and has
37:19
an influence on me as a human is
37:21
judged in a way that is doing
37:24
good for me or not ? Like
37:26
your example , what is evaluated
37:28
when you ask for a bank credit
37:30
or something like that . And
37:34
having this transparency brings us
37:37
back to yes , I have an influence
37:39
on how it is needed , Because
37:42
I can override the AI , because
37:44
I can see where it makes
37:46
a wrong decision or wrong step
37:48
or something like that,
37:54
like I would do when I talk to my bank
37:56
account manager and say , hey
37:59
, does it have the
38:01
old address ? I moved already . Oh
38:04
no , it's not in the system . Let's
38:10
change that and then make another evaluation or something like
38:12
that . And I think this autonomy
38:14
for us as users to keep
38:16
this in play , that
38:18
we can override it or we can add
38:20
information , new
38:23
information , in some kind of way . We
38:26
can just do it when we know where
38:29
this information is taken from,
38:33
how old it is and how it is processed
38:35
. So I like
38:37
that approach very much . I don't
38:39
think every user is looking
38:41
at it , but
38:45
as an ERP system owner, like I am
38:47
in our company as well, needs
38:54
to have answers to those questions from our users when we use
38:56
these features.
38:58
That's true.
38:58
Yeah, just coming back to the banking
39:00
example again. So the bank person
39:03
probably doesn't know if
39:05
their AI or algorithm takes into
39:07
account how many pictures they can find
39:09
with me on it on Facebook
39:11
where I hold a beer , like
39:13
would that be an
39:15
influencing factor on if
39:18
they want to lend me money ? So
39:20
all these things . But we just don't have that
39:22
insight and I think that's a problem
39:25
in many cases . You
39:27
could argue I don't know how the
39:30
Tesla autopilot
39:33
does its thing, you know, whatever
39:35
influences it to take decisions
39:38
, but that's why I like
39:40
the semi-autonomous piece
39:43
of work right now .
39:45
No , it is , I think . But
39:48
listening to what you're saying , I do like
39:50
the transparency , or at least the understanding
39:53
. I like the agent
39:55
approach because you have specific functions
39:57
. I do like the transparency so that you understand
40:00
what it does , so you know
40:02
what it's making a decision on . So if you're going to trust
40:05
it in a sense or you want to use the information
40:07
, you have to know where it came from . Ai
40:11
or computers in general can process data much
40:13
faster than humans . So
40:15
, being able to go back to
40:17
your bank credit check example , it
40:20
can process much more information than
40:22
a person can
40:24
. I mean a person could come up to the same
40:27
results , but it may not be as
40:29
quick as
40:32
a computer can , as
40:34
long as that information is available to it . But
40:36
I do think for certain functions the transparency
40:39
needs to be there because in the case of bank credit
40:41
, how can you improve your credit
40:43
if you don't know what's being evaluated to maybe
40:46
work on or correct that ? Or , to
40:48
Christian's point , there may be some misinformation in
40:50
there that, you
40:52
know, for whatever reason, was in there, that's impacting, so
40:54
that you need to fix it
40:56
. Some other things , to
40:59
the point that Christian also made . You
41:01
know humans with a machine is
41:03
better than a human . You know
41:05
, potentially in some cases , because
41:07
the machine can
41:09
be the tool to help you do
41:11
something , whatever it may be . You referenced
41:13
the hammer before and I use that example a
41:15
lot . You have hammers , you have screwdrivers , you have air guns . Which
41:18
tools do you use to do the job ? Well , it depends on what you're
41:21
trying to put together . Are you doing some rough work on a
41:23
house where you need to put up the frame
41:25
, so maybe a hammer or an air gun will work , and
41:27
if you're doing some finish work , maybe you need a screwdriver
41:29
. You know , with a small screw to do something . So there
41:32
does have to be a decision made . And
41:37
at what point can AI make that decision versus a human make that decision ? And , to
41:39
your point , where do you have that
41:41
human interaction ? But
41:43
I want to go with the human
41:45
interaction of de-skilling , because
41:48
if you have all these tools that
41:50
we rely on . To go back to the calculator , and
41:52
you know we've all been
41:54
reading , you know I think we all read the same book
41:56
and I think we all listened to some of the same episodes
41:58
. But you look at pilots and
42:00
planes with autopilots, right? Same
42:03
thing with someone driving a vehicle. Like, do you
42:05
lose the skill? You know, AI
42:07
does so much of the flying of a plane.
42:09
I didn't even really think about that. You
42:12
know, the most difficult or the
42:14
most dangerous is what? The taking off and landing
42:16
of a plane , and that's where AI gets used
42:18
the most . And then a human
42:21
is in there to take over in the event that AI fails
42:23
. But if the human isn't doing
42:25
it often right
42:28
, even with the reaction time , okay well , how
42:30
quickly can a human react , you
42:32
know , to a defense system ? Same thing , you
42:34
know , if you look at the Patriot missile examples , where you
42:37
know the Patriot missile detects a threat
42:39
in a moment and then will
42:41
go up and try to , you
42:44
know , disarm the threat . So
42:47
at what point do
42:49
we as humans lose
42:52
a skill ? Because we
42:54
become dependent upon these tools and we
42:56
may not know what to do in
42:58
a situation because we
43:00
lost that skill .
43:04
That's a good point . Sorry
43:06
, go ahead.
43:08
No, it's a really good point. I like that example from
43:10
I think it was from the Moral AI book as well
43:12
, where there's this example
43:14
of some military people
43:16
that you know they sit in their
43:18
bunker somewhere and
43:21
handle these drones like
43:23
day in and day out and
43:25
, because they're so autonomous
43:28
, everything happens without them
43:30
. You know, they don't need to be involved, but
43:37
then suddenly a situation occurs . They need to react in sort of a split second and take
43:39
a decision , and I think one of the outcomes was you know
43:41
, their manager says that
43:43
. Well , who can blame
43:45
them if they take a wrong decision at that point
43:47
? Because
43:52
it's three hours of boredom and then it's three
43:54
seconds of action . So they , they're just not feeling it . Where
43:57
, to your point, right,
43:59
they're being de-skilled for two hours
44:01
and 57 minutes and now there's three minutes
44:03
of action where everything happens . Right
44:06
, who can
44:08
expect that they keep up the level of
44:10
you know , skills and what have you
44:12
if , if they're just not involved . So it's
44:14
super interesting point . Um
44:16
, yeah
44:19
, so many , so
44:21
many questions that it raises .
44:23
Uh, this, it goes
44:25
on, it goes on, it goes on. And
44:27
it is in that Moral AI book, and it was the
44:29
Patriot missile example. Because the
44:32
Patriot missile had two failures
44:34
, one with a British jet and one with an American jet
44:36
shortly thereafter . And that's what they were talking
44:38
about is how do you put human
44:41
intervention in there , you know , to reconfirm
44:43
a launch ? Because in the event , if it's a
44:45
threat, I'll use the word threat. How
44:48
much time do you have to immobilize
44:51
that threat? Right, you may only have a second
44:54
or two. I mean, things move quickly. In the
44:56
case of the Patriot missile, again, it was
44:58
intended to disarm , uh , you
45:01
know , and again , missiles that are coming at you
45:03
, that are being launched , you know , over the pond
45:05
, as they say , so they can take them
45:07
down , and that's the point with
45:10
that .
45:11
And if
45:13
I could step back for a second . You
45:17
know, when we're having a conversation: the usefulness
45:19
of AI is based upon the source
45:21
that it has access to, and
45:25
you know understanding where
45:27
it's getting its source from and
45:30
what access it has . If
45:33
you're limiting the source
45:35
that it can consume to
45:37
be a better tool , are we
45:40
potentially limiting
45:42
its capabilities as
45:45
well , because we wanna control
45:47
it so much , in
45:49
a sense , to where it's more focused , but
45:51
are we also limiting its potential
45:54
, right ? Yes , so
45:57
yeah , go
45:59
ahead , sorry .
46:01
Yeah , no , I think that's very well put
46:03
and I think that's a consequence
46:05
and I think that's
46:07
fine. I mean, just take the sales order
46:10
agent again as an example. We
46:13
have put rails on it very
46:15
hard. We put many constraints
46:18
up for it, so it can only do a
46:21
certain number of tasks. It
46:23
can only do tasks A, B
46:25
and C, D, E, F. It cannot do more.
46:27
We had to set some guardrails
46:29
for what it can do . It's not
46:32
just about and I think this is a misconception
46:34
sometimes people think about agents and say here's
46:37
an agent , here's my keys
46:39
to my kingdom . Now
46:41
, agent , you can just do anything in
46:43
this business, in this system, and the user
46:46
will tell you what to do or we've given you a task
46:48
. That's not our
46:50
approach to agents . In BC . We
46:52
basically said here's an end-to-end process
46:55
or a process that has sort of a natural beginning
46:57
and a natural ending . In
46:59
between that process you can trigger
47:01
the agent in various places , but the agent
47:03
has a set instruction
47:06
. You
47:08
receive inquiries for products
47:10
and eventually you'll create a sales order
47:13
. Like everything in between there could be
47:15
all kinds of you know human in the loop and
47:17
discussions back and forth , but
47:19
that's the limit of what that agent can do and
47:22
that's totally fine . It's not fully
47:24
autonomous . You can't just now go and say , oh
47:27
, by the way , buy more inventory
47:29
for our stock , that's
47:32
out of scope for it , and at that
47:34
point I think that's
47:36
totally fine . And it's
47:38
about finding those good use cases where there
47:41
is a process to be automated , where the
47:43
agent can play a part , and
47:46
not about just
47:48
creating a let's call it a super agent that
47:50
can do anything,
47:53
like. So I
47:55
think it's a very natural development
47:57
.
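A rough sketch of the guardrails Soren describes: a fixed instruction plus an explicit allow-list of actions covering the process from its natural beginning to its natural ending, with anything else refused as out of scope. The action names are invented for illustration and are not the sales order agent's real internals.

```python
# A narrowly scoped agent: one set instruction, an allow-list of actions,
# and refusal for everything outside that scope. Names are illustrative only.
INSTRUCTION = (
    "You receive inquiries for products and eventually create a sales order. "
    "Do nothing outside this process."
)

ALLOWED_ACTIONS = {
    "read_inquiry",
    "look_up_items",
    "check_availability",
    "draft_quote",
    "create_sales_order",  # the natural ending of the process
}

def request_action(action: str, payload: dict) -> str:
    if action not in ALLOWED_ACTIONS:
        # e.g. "buy more inventory for our stock" is simply out of scope
        return f"Refused: '{action}' is outside this agent's scope."
    return f"Executing '{action}' with {payload}"

print(request_action("draft_quote", {"item": "1896-S", "qty": 10}))
print(request_action("purchase_inventory", {"item": "1896-S"}))
```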
47:58
So you don't aim
48:00
for a T-shaped profile
48:03
agent like it is in many
48:05
job descriptions now, where you want a T-shaped
48:07
profile employee with
48:10
a broad and deep knowledge . We
48:13
as humans can develop this, but
48:15
the agent
48:18
approach is different . I
48:20
would more say it's not
48:22
limiting the agent or
48:24
the AI of the
48:27
input or the capabilities . It is more
48:29
like going more deep , having
48:32
deep knowledge . In this specific
48:34
functionality , the AI agent
48:37
is assisting . That
48:39
can be more information
48:41
and it can go deeper than
48:43
a human can be
48:45
. For example , I was
48:47
very impressed by one
48:49
AI function I
48:52
had in my future
48:55
leadership education . We had an alumni
48:57
meeting in September
49:00
and the company set up an AI
49:02
agent that is behaving like
49:04
a conventional business
49:06
manager . Because we
49:08
learn how to set
49:11
up businesses differently
49:14
and when you have something new
49:16
you want to introduce to an
49:18
organization , often you
49:20
are hit by the cultural barriers
49:23
and just to
49:25
train that more
49:27
without humans , they invented
49:30
an AI model where
49:32
you can put your ideas in and you
49:34
have a conversation
49:36
with someone who has traditional
49:39
Tayloristic business
49:41
thinking and something like that
49:43
. So you can train how
49:46
you, um, put your ideas
49:48
to such a person and
49:50
what the reactions will
49:52
be just to train your ability
49:54
to be better
49:57
when you place these new ideas
49:59
to a real person in a traditional
50:01
organization or something like
50:03
that and that
50:05
had such a deep knowledge about
50:08
all these methodologies and thinking
50:10
and something like that . I
50:12
don't know who
50:14
I could find to
50:16
be so deep in this knowledge
50:18
and have exactly this profile
50:21
, this
50:28
deep profile that I needed to train
50:30
myself on .
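What Christian describes maps naturally onto a persona-style system prompt. A sketch assuming an OpenAI-style chat completions API; the client call, model name, and prompt wording are illustrative, not the tool his leadership course actually used.

```python
from openai import OpenAI  # assumes the openai Python package (v1 API)

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# The persona carries the deep, narrow profile: a traditional,
# Tayloristic business manager reacting to new organizational ideas.
SYSTEM_PROMPT = (
    "You are a conventional business manager with traditional, Tayloristic "
    "thinking: command-and-control, efficiency above all, skeptical of new "
    "organizational ideas. React to proposals exactly as such a manager would."
)

def pitch(idea: str) -> str:
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative model choice
        messages=[
            {"role": "system", "content": SYSTEM_PROMPT},
            {"role": "user", "content": idea},
        ],
    )
    return response.choices[0].message.content

print(pitch("I want our teams to self-organize without fixed managers."))
```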
50:31
That is a really interesting use case. I think then it comes
50:33
to continuing a conversation about
50:35
maybe there's a misconception
50:37
or misunderstanding in the business
50:39
space , because right now , you
50:42
know , I've had several conversations
50:45
where AI is going to solve their problems
50:47
. AI is going to solve their business
50:50
challenges
50:52
, but they , you
50:54
know , from a lot of people's perspective
50:57
, it's just this one entity of
50:59
, like it's going to solve all my business
51:01
problems , whereas for
51:03
us engineers , we understand that you
51:05
can have a specific AI
51:07
tool that would solve a specific
51:09
problem or a specific process
51:11
in your business . But right now a
51:13
lot of people believe , like I'm just going to install
51:16
it , it's going to solve everything for me , and
51:18
so not realizing that there are different
51:20
categories for that , you know different areas
51:22
and I
51:25
think having these kinds of conversations helps
51:27
people know it's not just a one-size-fits-all,
51:30
um, kind of solution
51:32
out there. Yeah, and indeed, when
51:34
you see , like the um
51:37
industrial work developed
51:39
in the first phases , it's like going
51:41
back to um having
51:44
one person just
51:46
fitting is a
51:48
bold or a school or something like that
51:50
.
51:51
That is the agent at the moment , just
51:53
one single task it can do . But
51:56
it can do
51:58
many, many things within this
52:00
task at the moment, and
52:04
what I think it will
52:07
take some time to develop is
52:10
developing this T-shape
52:13
from the ground of the T to have this
52:15
broad knowledge and broad capabilities
52:18
out of one agent
52:20
, or the
52:22
development of the network of agents . So
52:24
in some sessions in Vienna
52:27
the team of agents
52:29
was presented. So
52:32
you have a coordinator that coordinates the agents and then
52:34
brings back the proposal from the agent
52:36
to the user or something like that . That
52:38
will look like
52:41
one agent can do all of these capabilities
52:44
for the user, that is what is presented
52:46
. But in the deep functionality
52:50
there is a team of agents
52:52
and a variety of agents doing
52:54
very specific things .
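A compact sketch of that coordinator pattern: one entry point routes each task to a narrowly scoped specialist and brings its proposal back to the user. All names and tasks here are hypothetical; the sketch only illustrates the shape Christian describes.

```python
from dataclasses import dataclass
from typing import Callable, Dict

@dataclass
class SpecialistAgent:
    name: str
    run: Callable[[str], str]  # produces a proposal, never a final action

class Coordinator:
    """Routes a request to the right specialist and brings the
    proposal back to the user, who stays in the loop for approval."""
    def __init__(self, agents: Dict[str, SpecialistAgent]):
        self.agents = agents

    def handle(self, task: str, request: str) -> str:
        agent = self.agents.get(task)
        if agent is None:
            return f"No agent is scoped for task '{task}'."
        return f"[{agent.name}] proposes: {agent.run(request)} (awaiting approval)"

# Hypothetical team of very specific agents.
coordinator = Coordinator({
    "sales_order": SpecialistAgent(
        "SalesOrderAgent", lambda r: f"draft sales order for {r}"),
    "purchase_order": SpecialistAgent(
        "PurchaseOrderAgent", lambda r: f"draft purchase order for {r}"),
})
print(coordinator.handle("sales_order", "10 units of item 1896-S"))
print(coordinator.handle("forecasting", "next quarter demand"))  # out of scope
```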
52:57
I like that case . It goes to
52:59
, Chris, to your point of sometimes
53:03
it's just a misunderstanding of what AI is
53:05
, because I think there's so many different levels of
53:07
AI and we talked about that before
53:09
. You know what is machine learning , what is large language
53:11
models . I mean , that's all in AI . A
53:13
lot of things you know can
53:15
fall into AI . But to the point of the
53:17
agents to go into ERP software
53:19
, even Christian , to your point , maybe even
53:21
in an assembly line or manufacturing
53:24
, I'd like the agents in
53:26
the business aspect to
53:28
have a team of agents
53:30
together so they all do specific
53:32
functions . To Soren's point of where
53:35
do you have some repetitive
53:38
tasks or some precision tasks , or
53:41
even , in some cases , some skilled tasks that
53:43
need to be done , and then you can chain
53:45
them together . Because even if you look at an automobile
53:48
we talked about an automobile there isn't
53:50
an automobile , that just appears . You
53:53
have tires , you have engines
53:55
, you have batteries , you have right
53:57
. The battery provides the power , the wheel provides
54:00
, you know the , the ability
54:02
to easily move right . The engine
54:04
will give you the force to push . So putting that all
54:06
together see , this is how I start to look at putting
54:08
that all together now gives you a vehicle
54:11
. So the same thing if you're looking at erp software
54:13
. That's why when I first heard about the agent
54:15
approach when we talked some months ago, Soren, that
54:18
having an agent for sales orders
54:20
or having an agent for finance
54:23
or having an agent for purchase orders or
54:25
something , a specific task , you
54:27
can put them all together and then use the
54:29
ones you need and
54:31
then have somebody administer those agents
54:33
, so you have like an agent administrator .
54:35
That is where
54:40
the human comes back into the loop
54:42
, because at some point you have to
54:44
put these pieces together . I think
54:46
at the moment , this is the
54:48
user that needs to do this , but
54:51
this will develop further
54:53
in the future . So
54:56
you have another point where you step
54:59
in or where you need
55:01
ideas or something like that , because
55:03
that is also what I learned and found
55:05
very interesting . When
55:07
you see an AI
55:10
suggesting something to you
55:12
, this feeling
55:14
this is a fit for
55:17
my problem is inside your
55:19
body and at the moment
55:21
, you cannot put this into a machine . So
55:24
the idea , if the suggestion is right
55:26
and you decide to take it and
55:28
to use it , you
55:31
need a human to make this
55:33
decision , because you need the human
55:35
body , the brain and everything together
55:37
seeing and perceiving this , to
55:40
make this decision if it is wrong
55:43
or good for
55:45
this use case .
55:48
I think that depends
55:51
a bit Christian , if I may . So
55:54
there are places where , let's say
55:56
, one AI could you
55:58
could give it a problem to tackle and it will come
56:00
with some outcomes . And there could
56:03
then be another AI and
56:05
now I use the term loosely but another process
56:07
that is only tasked
56:09
with assessing the output
56:11
of the first one within
56:14
some criteria , within some
56:16
aspects . So that has been , say
56:19
loosely , now trained , but its only
56:21
purpose is to say , okay , give
56:23
me the outcome here and
56:26
then assess that with complete fresh
56:29
eyes like it was a different person
56:31
. Of course it's not a person and we should
56:33
never make it look like it's a person but
56:36
one machine can assess
56:38
the other .
56:38
Basically , that's what I'd say to a certain
56:41
degree , right , if
56:45
we can frame the problem , right
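A small sketch of the generator/assessor split Soren describes, with both "AIs" stubbed out as plain functions so only the pattern shows; in practice each would be a separate model call with its own instructions and assessment criteria.

```python
# One process produces an outcome; a second, independently instructed
# process assesses that outcome against fixed criteria "with fresh eyes".
def generator(problem: str) -> str:
    # stand-in for the first AI tackling the problem
    return f"proposed solution for: {problem}"

def assessor(outcome: str, criteria: list[str]) -> dict:
    # stand-in for the second AI, tasked only with assessment
    return {c: c.lower() in outcome.lower() for c in criteria}

outcome = generator("forecast demand for item 1896-S")
report = assessor(outcome, ["forecast", "demand", "confidence interval"])
print(outcome)
print(report)  # {'forecast': True, 'demand': True, 'confidence interval': False}
```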
56:47
Yeah. And you had mentioned, from the human aspect
56:49
, to take over and said you know that's wrong
56:51
. Right , like , oh , it's wrong
56:53
, I know it's wrong , I'm going to take over . It
56:57
reminds me of a story when I did a
57:00
NAV implementation a while back
57:03
where we had demand forecasting
57:05
and
57:11
when we introduced that to the organization it does like tons of calculation and it's going
57:13
to give you a really good output of what you need based
57:16
upon information and data that you have
57:19
. And I had this individual
57:21
person that I was working with , or that
57:23
person was working for this organization , where that's
57:26
not right , that's wrong
57:28
, and I would
57:30
ask can you tell me why it's
57:32
wrong ? I'd love to know
57:34
, like , how are you feeling ? Like , what
57:36
made you feel like it was wrong ? Do you have
57:38
any calculations ? No , I just
57:40
know it's wrong because typically we do
57:42
it , you know we , typically it's this number right
57:45
, but they couldn't prove it . So
57:47
that's also a dangerous component
57:50
where a person could take over
57:53
and then whatever decision , whatever they
57:55
feel like it's wrong, it
57:57
could be. Where they think it's wrong, they
57:59
can also be wrong. Right,
58:01
, it's just like the human aspect of
58:03
it . But , but they can . But
58:06
they can .
58:07
Yes , but they can , yeah , yeah
58:09
and I think I mean and that . So
58:11
the first time when I learned
58:13
more about sort of AI, like in these recent years
58:15
, was some eight , nine
58:17
years ago when we did some of the classic
58:19
sort of machine learning stuff for some
58:22
customers and what
58:24
was an eye-opener for me was that
58:26
it didn't have to be a black box . So back then
58:28
, let's say , you had a data
58:30
set . I think the specific customer wanted
58:32
to predict which
58:35
of their subscribers would churn
58:37
right , and
58:39
there was a machine learning model for that on
58:42
Azure that they
58:44
could use for that . I don't know the specific
58:46
name of it and the
58:48
data guy that helped us one
58:51
of my colleagues from Microsoft back then showed
58:55
them data because they had their
58:57
ideas on what were the influencing factors
59:00
that made consumers churn . These
59:02
were , these were magazines that
59:05
they were subscribing to , and
59:07
when he told them , show them
59:09
the data , and then ,
59:11
because he could do that
59:13
with the
59:15
machine learning tools , he showed them :
59:17
these are the influencing factors
59:20
, actually determined
59:22
based on the data that you just see . And
59:26
he had validated against their historic data
59:29
. They were just mind-blown
59:31
. So it turned out , I'm just paraphrasing
59:33
now , that people in the western
59:35
part of the country were the ones
59:37
who churned the most . So the geography was
59:39
the predominant influencing factor
59:41
to predict churn . They
59:45
were just mind-blown because they had never
59:47
seen that data . They had other
59:49
ideas of what it means to churn . Like to your
59:51
point , chris , like . But that was just so
59:53
cool that we could bring that kind of transparency
59:56
and say this is how the model calculates
59:58
, these are the influencing factors that it has
1:00:00
found by looking at the data
1:00:02
. So I just thought that was a great
1:00:04
example of bringing that transparency when humans
1:00:07
, like you say , are just
1:00:09
being stubborn and saying no , it doesn't
1:00:11
work , it's not right .
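The transparency Soren describes, surfacing the influencing factors behind a churn prediction, can be sketched with ordinary tooling. The episode doesn't name the Azure ML service that was used, so this is only a rough scikit-learn approximation with invented column names:

```python
import pandas as pd
from sklearn.ensemble import RandomForestClassifier

# Hypothetical historical data: one row per subscriber, with a 'churned'
# column marking who cancelled. Column names are invented for illustration.
subscribers = pd.read_csv("subscribers.csv")
features = ["region", "age", "subscription_months", "issues_per_year"]
X = pd.get_dummies(subscribers[features])   # one-hot encode categoricals
y = subscribers["churned"]

model = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)

# The transparency step: rank the influencing factors the model found,
# e.g. discovering that geography dominates, as in the magazine story.
importances = pd.Series(model.feature_importances_, index=X.columns)
print(importances.sort_values(ascending=False).head(10))
```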
1:00:15
That's definitely another factor , because we've
1:00:19
all run into those situations where it just doesn't feel
1:00:21
right and in some cases it
1:00:23
could be correct .
1:00:25
But it depends on the skills . That's
1:00:27
what I want to go back to is the skills
1:00:29
. It's the skills .
1:00:31
How , if
1:00:33
we're going to keep creating AI
1:00:35
tools to help
1:00:37
us do tasks
1:00:40
okay , one
1:00:42
, I'm going to go off
1:00:44
on a tangent a little bit . One how do
1:00:46
we ensure we have the skills to
1:00:48
monitor the AI ? How
1:00:50
do we ensure that we have the skills to
1:00:53
perform a task ? Now , I understand : the
1:00:56
dishwasher , Chris , that you talked about was invented . Now
1:00:58
we don't have to wash dishes manually
1:01:01
all the time to save us time to do other
1:01:03
things . We're always building these tools
1:01:05
to make things easier for us and
1:01:07
, in essence , raise the required skill
1:01:09
to do a function , saying we need to work on more valuable
1:01:12
things . Right , we shouldn't have to
1:01:14
be clicking post all day long . Let's
1:01:17
have the system do a few checks
1:01:19
on a sales order . If it meets those checks
1:01:21
, let the system post it . But
1:01:24
when is there a point
1:01:26
where we lose the ability to have
1:01:28
the skill to progress forward ? And
1:01:31
then with this , with all of these tools that help
1:01:33
us do so much , because now
1:01:35
that we have efficiency with tools
1:01:37
, oftentimes it takes
1:01:40
a reduction of personnel . I'm not trying to say
1:01:42
people are losing their jobs . It's going to take a reduction
1:01:44
of personnel to do a task . Therefore
1:01:48
, relieving the dependency on
1:01:50
others . Humans are communal . Are
1:01:52
we getting to the point
1:01:54
where we're going to lose skill and
1:01:56
not be able to do some complex tasks
1:01:58
because we rely on other tools ? And
1:02:01
if the tools
1:02:03
are to get more complex and we
1:02:05
need to have the skill to determine that complexity
1:02:07
, if we miss that little middle
1:02:09
layer of all that mundane building
1:02:12
block stuff , how do we have the skill
1:02:14
to do something ? And two :
1:02:16
now I
1:02:18
see AI images , I see AI
1:02:21
videos being
1:02:23
created all the time . It does a great job . Before
1:02:26
we used to rely on artists
1:02:28
, publishers , other individuals
1:02:30
to create that content for the
1:02:35
videos , for
1:02:37
brochures , pictures , images , the
1:02:40
B-roll type stuff we'll call it . If
1:02:42
we don't need any of that stuff and we're doing it all
1:02:44
ourselves , what are we doing to ourselves ,
1:02:46
to be able to work together as a species
1:02:48
if now I can do all the stuff myself
1:02:51
with less people ? So I have many points
1:02:53
there . One , it's the complexity of
1:02:55
the skill . And how do we get that skill if
1:02:57
we immediately cut out the
1:03:00
need , for we no longer need someone
1:03:02
to put the screw on that bolt
1:03:04
. As you pointed out , Christian , we need someone to come in
1:03:06
and be able to analyze these complex results
1:03:09
of ai . But if nobody
1:03:11
can learn that by
1:03:13
doing all those tasks , what does that give
1:03:15
us ? So those are my little
1:03:17
two points .
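Brad's earlier example of posting a sales order only when it passes a few checks is, mechanically, a guard in front of the posting routine. In Business Central this would be AL code against the real posting logic; the Python below is only a hypothetical sketch of the shape, with invented field and check names:

```python
from dataclasses import dataclass

@dataclass
class SalesOrder:
    number: str
    customer_blocked: bool
    within_credit_limit: bool
    lines_complete: bool

def failed_checks(order: SalesOrder) -> list[str]:
    # Each check mirrors something a person would eyeball before clicking Post.
    failures = []
    if order.customer_blocked:
        failures.append("customer is blocked")
    if not order.within_credit_limit:
        failures.append("credit limit exceeded")
    if not order.lines_complete:
        failures.append("incomplete order lines")
    return failures

def try_auto_post(order: SalesOrder) -> bool:
    failures = failed_checks(order)
    if failures:
        print(f"{order.number} held for human review: " + "; ".join(failures))
        return False
    print(f"{order.number} posted automatically")  # stand-in for real posting
    return True

# Example: this order fails one check and is routed to a person instead.
try_auto_post(SalesOrder("SO-1001", False, False, True))
```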
1:03:19
Yeah , no , those are great , great questions
1:03:21
. So what you're saying is how do we
1:03:23
determine if this car is built right
1:03:25
if there are no drivers left
1:03:28
to test it , like no
1:03:30
one has the skill to drive anymore . So
1:03:32
how ? How can they determine if this car is built
1:03:34
up to a certain quality standard and what have
1:03:36
you ? Well , the other answer would be
1:03:39
you don't have to because it
1:03:41
will drive itself . But until we get that point
1:03:43
, like in that time
1:03:46
in between , you need someone
1:03:48
to still be able to validate and probably
1:03:50
for some realms of our
1:03:52
work and jobs and society , you will
1:03:54
always need some people to validate . So what do you do ? I
1:03:56
think those are great questions and
1:03:59
I certainly don't have the answer to it .
1:04:01
I would say I've had
1:04:03
this conversation with Brad for a couple
1:04:05
of years , I think him and I , you
1:04:07
know , we just love where
1:04:11
AI is going , and I pose
1:04:13
the question about , you know , does AI
1:04:15
become a necessity for the
1:04:17
survival of humanity
1:04:20
. Because , as
1:04:22
you all pointed out , that
1:04:24
eventually you'll lose some of those skills
1:04:26
because you're so dependent . Eventually
1:04:29
you'll lose it . And I've had
1:04:31
tons of conversations . Right
1:04:33
now we don't need AI . We
1:04:37
don't need AI for the survival of humanity , but
1:04:40
as we become more dependent
1:04:43
, as we lose some of those
1:04:45
skills , because we're giving it to AI
1:04:47
to do some tedious tasks sometimes it
1:04:49
could be in the medical field
1:04:52
or whatnot it
1:04:54
becomes a necessity in the
1:04:56
future . It will eventually
1:04:58
become a necessity in the future for humanity's survival
1:05:01
, but we're forcing it
1:05:03
. Right now we don't need it .
1:05:03
We are forcing the dependency by
1:05:06
losing this . I'm not saying it's right or wrong
1:05:08
, but I'm listening to what you're saying : you're saying
1:05:10
that we are going to be dependent
1:05:12
on machine for the
1:05:15
survival of the human race . I
1:05:18
mean , humans have been around for how long ?
1:05:23
But we're already dependent on machines
1:05:25
, right ? We've been for a long time . We're forcing ourselves to be dependent
1:05:28
upon it .
1:05:29
That's why I use the word machine , because
1:05:31
we force ourselves to
1:05:33
be dependent upon that right
1:05:35
. We force ourselves to lose
1:05:38
the skill or use something so
1:05:40
much that it's something that we must have to
1:05:43
continue moving forward
1:05:45
.
1:05:47
Yeah , my point was that that's not
1:05:49
new . I mean , we've done that for 50
1:05:52
years , like forced dependency
1:05:55
on some machines , right ? So without them we wouldn't
1:05:58
even know where to begin where to do
1:06:00
that task . So AI is just probably
1:06:03
accelerating that in
1:06:05
some realms now , I think
1:06:07
.
1:06:07
Yeah , it is . Because , you know , as
1:06:09
humans , our desire is to improve
1:06:12
quality of life , expand
1:06:14
our knowledge and mitigate
1:06:16
risk . It's not improving quality of life
1:06:18
.
1:06:18
It's to be lazy , I hate to tell
1:06:20
you . Humans take the path of least resistance
1:06:23
and I'm not trying to be there's a little levity
1:06:25
in that comment . But
1:06:27
why do we create the tools to do the things
1:06:29
that we do ? Right ? We create
1:06:31
tools to harvest fruits
1:06:34
and vegetables from the farm , right
1:06:36
, so we can do them quicker and easier and
1:06:39
require less people , right
1:06:41
? So it's not necessarily
1:06:43
, you know , that we do it
1:06:45
to make things better . We do it because
1:06:47
, well , we don't want someone to have to go
1:06:50
to the field and , you know , pick the
1:06:52
cucumbers from the cucumber vine , right
1:06:54
, we want , you know , they shouldn't have to do that , they
1:06:57
should do something else . We're kind of , in my opinion , forcing
1:06:59
ourselves to go that way . It is necessary
1:07:01
to harvest the fruits and the vegetables
1:07:03
and the nuts to eat , but
1:07:06
, you know , is it necessary
1:07:08
to have a machine do it ? Well , no , we just said it would
1:07:10
be easier , because I don't want to go out
1:07:12
in the hot sun all day long and you
1:07:14
know harvest .
1:07:16
You can do the dishes by hand if you like , right
1:07:18
yeah ?
1:07:20
If you like , yeah , if you choose
1:07:22
to . No one wants to . No one wants to do the dishes
1:07:24
.
1:07:24
Trust me , I will never live in a place without a dishwasher
1:07:26
. I mean , it's the worst
1:07:29
that can happen .
1:07:31
It is , and the pots and the pans forget
1:07:33
it right .
1:07:35
If you take this further : at
1:07:38
some point in time , if you have a new colleague
1:07:40
and you have to educate him or her
1:07:43
, do
1:07:46
you educate them to make these steps
1:07:48
the sales order agent is doing by
1:07:51
him or herself , just
1:07:54
to have the skill to know
1:07:56
what you're doing ? Or are
1:07:59
you saying , just
1:08:01
push the button ?
1:08:06
Yeah , but I think , eventually , as
1:08:09
you continue to build upon these
1:08:11
co-pilots in AI , eventually you just
1:08:13
have two ERP pieces that talk to each other . And
1:08:16
then what then ? Where
1:08:19
are we then ?
1:08:23
Yeah , super
1:08:25
interesting . I mean , who knows ? I
1:08:28
think it's so hard to predict where we'll
1:08:30
be even just in 10 years .
1:08:34
I don't think we'll be able to predict where we'll be in two years .
1:08:37
Will
1:08:40
we ever be able to press a button ? Like
1:08:42
right now , I can create video images
1:08:45
and still images . I'm using that
1:08:47
because a lot of people relate to that , but I can
1:08:49
create content , create things . I've
1:08:52
also worked with AI
1:08:54
from programming in a sense
1:08:56
, to create things . I was listening to a podcast the other
1:08:58
day . In the podcast they said
1:09:00
within 10 years , the most common
1:09:02
programming language is going to be the human language . Because
1:09:06
it's getting to the point where you can say create
1:09:08
me this . It needs to do
1:09:10
this , this and this , and an
1:09:12
application will create it , it will do the test
1:09:14
and produce it . You wake up in the morning and now you have an app
1:09:17
. So it's going to get to the point where what
1:09:20
happens now ? Let's move fast forward a little bit , because
1:09:22
you even look at GitHub Copilot
1:09:24
for coding , right ? You look at the sales agents .
1:09:26
To Chris's point , ERP systems can just talk
1:09:28
to each other . What do you need to do ? Is
1:09:31
there going to be a point where that's what I was getting
1:09:33
at where we don't need other people because
1:09:35
we can do everything for ourselves ? And
1:09:38
then how do we survive if we don't
1:09:40
know how to work together
1:09:42
because we're not going to need to ?
1:09:45
That is so
1:09:47
yeah , I'm sorry .
1:09:49
Sorry . So , to go to your point , how
1:09:51
is AI going to help progress
1:09:53
, the human civilization
1:09:55
, right , or the species , if
1:09:57
we're going to get to the point where we're
1:09:59
not going to need to do anything , we're all just going
1:10:02
to sit in my house , because
1:10:04
I can say make me a computer and
1:10:07
click a button , and it will be , you know ,
1:10:09
there . And
1:10:12
that's , you know , where I come
1:10:14
from , in that
1:10:16
other podcast show that you mentioned , where I
1:10:18
quote James Burke when he says that we
1:10:20
will have these nanofabricators , so that in
1:10:22
60 years , everyone will have everything they
1:10:24
need , and just produce it from air , water and
1:10:27
dirt .
1:10:27
Basically , right ? So , uh
1:10:29
, that's the end of scarcity . So all the stuff
1:10:31
that we're thinking about right now are just temporary
1:10:34
issues that we don't need to worry about
1:10:36
in 100 years . So
1:10:38
that's just impossible to even imagine .
1:10:40
Because , as one of you said just
1:10:42
before , we'll probably always just move the needle and
1:10:45
figure out something else to desire , something
1:10:48
else to do . But I
1:10:50
think it is a good question to ask but
1:10:52
what will we do with this productivity that
1:10:54
we gain from AI ? Where
1:10:57
will we spend it ? So now you're a company , and you
1:10:59
have saved 20%
1:11:01
cost because you're more efficient in some
1:11:03
processes due to AI or
1:11:05
IT in general . What will
1:11:07
you do with that 20% ? Do
1:11:11
you want to give your employees more time off ? Do you
1:11:13
want to buy a new
1:11:15
private jet ? I don't know . You
1:11:18
have choices , right ? But
1:11:21
, as a humanity
1:11:23
, my personal opinion is , I
1:11:26
mean , I would welcome a future
1:11:28
where we could work less
1:11:30
, where we could have machines to do
1:11:32
things for us . But it requires that we have a conversation
1:11:35
, start thinking about how will we interact
1:11:37
in such a world where we don't
1:11:39
have to work the same way we do today . What ? What will
1:11:41
our social lives look like ? Why do we need
1:11:43
each other ? Do we need each other ? We
1:11:46
are social creatures , we are communal creatures . So
1:11:49
, yes , I think we do . But how , what
1:11:52
will that world look like ? I think this keeps
1:11:55
me up at night sometimes .
1:12:04
I can't imagine , nor did I imagine , there'd be full self-driving vehicles within a short
1:12:06
period of time , as it has . I mean , I think , as you made a great
1:12:08
point , soren , I don't think anyone can know what
1:12:11
tomorrow will be or what tomorrow will bring
1:12:13
with this , because it's advancing
1:12:16
so rapidly . And go back to the points I said I
1:12:18
had mentioned you talked about the podcast
1:12:21
with James Burke , which was a great podcast as well
1:12:23
too . That was the You Are
1:12:25
Not So Smart episode , I think
1:12:27
it was 118 , on Connections , which
1:12:29
talked a lot about that . And
1:12:32
yes , it was a great episode . That's another
1:12:34
great podcast , and a
1:12:37
lot of this stuff is going to be building blocks that
1:12:39
we don't even envision what it's going to build
1:12:41
. You know , look at the history of the engine . You look at the history
1:12:43
of a number of inventions . They
1:12:45
were all made of small little pieces
1:12:47
. So we're building those pieces now . But
1:12:50
also our mind is going to need to be , I'll
1:12:52
use the word , stimulated . If we're going to get to the
1:12:54
point where we don't have to do anything , how
1:12:57
are we going to entertain
1:12:59
ourselves ?
1:13:04
We're always going to find something else , right , to have to do . But
1:13:06
is there going to be a point where there is nothing else because
1:13:08
it's all done
1:13:10
for us ?
1:13:12
Yeah , I just want to comment on that one
1:13:14
thing you said there , like , you referenced that
1:13:16
no one just
1:13:18
imagined the car
1:13:20
. You know , people did stuff
1:13:22
, invented stuff , but suddenly some other people
1:13:25
could build on that and invent other stuff and
1:13:27
then eventually you had
1:13:29
a car , right ? Or anything else
1:13:31
that we know in our life . And
1:13:33
I think James Burke also says that innovation
1:13:36
is what happens between the disciplines
1:13:38
, and I really love that
1:13:40
. I mean , just look at agents today
1:13:42
. Like four years ago
1:13:44
, before LLMs were such a big
1:13:47
thing . I know they were in a very niche
1:13:49
community , but with sort
1:13:51
of the level of LLMs
1:13:53
today , no one
1:13:55
said let's invent LLMs
1:13:57
so we could do agents . No
1:13:59
, I mean , LLMs were invented
1:14:02
. Now , because we
1:14:04
have LLMs , now we think , oh , now we can
1:14:06
do this thing called agents and
1:14:08
what else comes to mind in six months , right ? So
1:14:11
it just proves that no
1:14:13
one has this sort of five-year plan of
1:14:15
, oh , let's , in five years , do this and this . No
1:14:18
, because in six months someone will have invented
1:14:21
something that , oh , we can use
1:14:23
that and oh , now we can build this
1:14:25
entirely new thing . So that's what's just
1:14:28
super . It's both super exciting , but it's also
1:14:30
a bit scary . I mean I can , I can speak
1:14:32
for as as a product
1:14:34
developer . It's definitely
1:14:36
challenged me to rethink
1:14:39
my whole existence as a product
1:14:41
person , because now I
1:14:44
don't actually know my toolbox
1:14:46
anymore . Two years
1:14:48
ago I knew what AL could do . Great
1:14:51
. I knew the confines of what we could build
1:14:53
. I knew the page types in BC and stuff
1:14:55
. So if I had a use case , I
1:14:57
could visualize it and see how we can
1:14:59
probably build something . If we need a new
1:15:02
piece from the client , then we could talk
1:15:04
to them about it and we can figure that out . But
1:15:06
now I don't
1:15:09
even know if we can build it until we're
1:15:11
very close to having built it . I mean ,
1:15:13
there's so much experimentation that
1:15:16
, yeah , we're building
1:15:19
the airplane while we're flying it , in that sense , right
1:15:21
and so that also challenges our whole testing
1:15:24
approach and testability and frameworks ,
1:15:26
which is super exciting in itself
1:15:28
, so it's just a mindset change
1:15:30
, right , um , but it definitely
1:15:32
challenges you product people .
1:15:35
Oh , it definitely does .
1:15:36
I think AI is
1:15:39
, um , it's definitely
1:15:41
changing things and it's here to stay
1:15:43
. I guess you could say . I'm just wondering
1:15:45
, you
1:15:48
know , I think back to a movie ,
1:15:50
was it from the 80s ? Called Idiocracy . You
1:15:54
know , if you haven't watched it , it's a mindless
1:15:56
movie , but it is the same type of thing ,
1:15:58
where a man from the past
1:16:01
goes into the future ,
1:16:03
and you know what happens
1:16:05
to the human species in the future
1:16:07
and how they are . It's pretty comical
1:16:09
. It's funny how some of these movies
1:16:12
keep circling back . Yeah , they
1:16:14
circle back , you know , with
1:16:16
Star Trek , Star Wars . I'm
1:16:18
wondering when we will be there .
1:16:25
That already happened . I just hope we won't get to the
1:16:27
state where , I think you said , that cartoon or
1:16:30
that animated movie WALL-E , where
1:16:34
the people are just lying back
1:16:36
all day and eating and their bones
1:16:38
are deteriorating because they don't use
1:16:40
their bones and muscles anymore . So the skeleton sort
1:16:43
of turns
1:16:45
into something like they just become like wobbly
1:16:48
creatures that just lie there ,
1:16:50
like , I don't know , seals
1:16:52
, consuming .
1:16:56
What was really interesting with Back to
1:16:58
the Future is this thing here , because
1:17:02
Doc Brown made
1:17:04
this time
1:17:06
machine using a
1:17:09
banana to
1:17:11
have the energy of
1:17:15
1.21 gigawatts
1:17:17
or something like that , so you
1:17:19
don't have to wait for a thunderstorm to
1:17:21
travel in time . This
1:17:23
idea was mind-blowing back then ,
1:17:26
and I'm
1:17:28
dreaming of
1:17:30
using free time as a human
1:17:32
to make these leaps . Because
1:17:34
we
1:17:37
have this scarcity in resources , and
1:17:40
even if this goes further and further
1:17:42
and further , I assume that we
1:17:44
don't have enough resources to make the
1:17:46
machine computing power to
1:17:49
fulfill all that . I
1:17:51
think there will be limitations
1:17:53
at some point in time , and
1:17:56
most of what
1:17:58
AI is freeing us up for is to
1:18:00
have ideas on how we are using
1:18:02
our resources in a way that is sustainable
1:18:05
.
1:18:07
I like that . I
1:18:09
have no way to say whether what
1:18:11
you fear will become
1:18:13
true or not , but I like the idea
1:18:16
of using whatever productivity
1:18:18
we gain for more sort of
1:18:20
humanity-wide purposes , and I also
1:18:22
hope that whatever we do with technology and
1:18:25
AI will
1:18:27
reach a far audience and also help the people who
1:18:29
today don't even have access to clean
1:18:31
drinking water and things like that . So
1:18:34
I hope AI will benefit most
1:18:36
people and
1:18:38
, yeah , let's see how that goes .
1:18:41
Yeah , I think it's going to redefine human identity
1:18:44
. Yeah .
1:18:44
I'd like to take it further and I'd say the planet . I
1:18:47
think you know , with the
1:18:50
AI , I hope we gain some efficiencies
1:18:53
, to go to your point , Christian , so that
1:18:55
we can have it
1:18:57
all sustainable , so we're not so destructive
1:18:59
, because you know
1:19:01
the whole circle of life , as they say . You
1:19:04
know it's important to have all
1:19:06
of the species of animals
1:19:09
. You know , plants , water , you
1:19:11
know , anything else that is on the planet . It's
1:19:14
an entire ecosystem that needs to work together . So
1:19:16
I'm hoping , with this AI
1:19:19
, that's something that we get out of . It is
1:19:21
how to become less destructive
1:19:23
and more efficient and more sustainable
1:19:25
, so that everything benefits
1:19:27
, not just humans because
1:19:29
we are heavily dependent upon everyone else
1:19:31
.
1:19:32
That's the moral aspect of it . So
1:19:34
if we use it to
1:19:37
use all of the resources , then
1:19:42
it is , in the moral aspect ,
1:19:44
bad , because it is not
1:19:47
sustainable for us as
1:19:49
a society and as human beings
1:19:51
on this planet . So
1:19:54
, as I see it , moral is
1:19:56
a function
1:19:58
of keeping
1:20:00
the system alive , because
1:20:03
we use the distinction between
1:20:05
good and bad in that way that it is
1:20:07
not morally good
1:20:09
to use all the resources . So
1:20:16
if anything we can do with AI extends to using all of the resources , that
1:20:18
is not really good . And what
1:20:21
we can use our brains for
1:20:23
is to think ahead to when
1:20:25
this point in time will
1:20:28
be , and label
1:20:30
it as bad behavior . So
1:20:33
the discussion we are having now and
1:20:36
I'm very glad that
1:20:38
you brought this point up , Soren , is
1:20:41
that we have this discussion now to think
1:20:43
ahead . Where will the
1:20:45
use of AI be
1:20:47
bad for us as a society
1:20:49
and as human beings
1:20:51
and for the planet ? Because now
1:20:54
is the time we can think ahead what we
1:20:56
have to watch out for in the next
1:20:58
months or years or something
1:21:00
like that , and
1:21:02
that is the moral aspect I think
1:21:04
we should keep in mind when we
1:21:06
are going further with AI .
1:21:09
I think there are so many aspects there to your
1:21:11
point , christian . So one is of course the whole
1:21:13
, like we all know , the
1:21:16
energy consumption of AI in itself
1:21:18
. But there's also the other side , I
1:21:23
mean the flip side
1:21:25
, where AI could maybe help us spotlight or shine a
1:21:27
bright light on where we can save on
1:21:30
energy in companies and
1:21:32
where can AI help us , let's
1:21:35
say , calibrate our moral compasses
1:21:38
by shining a light on
1:21:40
where we don't behave as well today
1:21:42
as a species
1:21:44
. So I think there's a flip side
1:21:46
. I'm
1:21:50
hoping we will make some good decisions along the way
1:21:52
to have AI help us
1:21:54
in that .
1:21:58
There's so many things I could talk about with AI
1:22:00
and I
1:22:03
think we'll have to schedule another discussion to
1:22:05
have you on , because I had a whole list of notes
1:22:07
of things that I wanted to talk about when
1:22:10
it comes with AI , not just from the ERP point
1:22:12
of view , but from the AI point of view , because , you
1:22:15
know , after getting into the Moral AI book
1:22:17
and listening to several
1:22:19
podcasts about AI and humanity
1:22:22
, there's a lot of things that I
1:22:25
wanted to jump into . You
1:22:27
know we talked about the de-skilling . We talked about
1:22:29
too much trust . I'd like to get into harm
1:22:32
, bias and also , you know , how
1:22:34
AI can analyze data , you
1:22:37
know , that everyone thinks is anonymous . Because , reading
1:22:41
that Moral AI book , some statistics they put in
1:22:43
there , I was kind of fascinated . Just
1:22:45
to throw it out , there is that
1:22:47
87% of the
1:22:49
United States population can be identified by
1:22:52
their birth date , gender and their zip
1:22:54
code . That was mind
1:22:56
blowing . And then 99.98%
1:23:00
of people can be identified with 15
1:23:02
data points . So all of this anonymous
1:23:05
data . You know , with the data sharing
1:23:07
that's going on , it's very easy to make
1:23:09
many pieces of anonymous
1:23:11
data no longer anonymous
1:23:13
. That is what I got from that . So
1:23:16
all that data sharing with those points ,
1:23:18
the birth date , gender
1:23:20
and five-digit US zip code , here again
1:23:22
that's in the United States , was one that
1:23:25
shocked me , and now I understand why those
1:23:27
questions get asked the most , because it's going to give
1:23:29
, with a high probability , 87 percent ,
1:23:31
who you are .
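The 87% statistic traces back to Latanya Sweeney's re-identification research, which the Moral AI book draws on. A back-of-envelope calculation suggests why that triple is so identifying; the counts below are rough assumptions, not figures from the episode:

```python
# Rough counts; all three values are assumptions for the estimate.
zip_codes = 33_000          # inhabited five-digit US ZIP codes, roughly
genders = 2
birth_dates = 365 * 90      # about 90 plausible birth years

combinations = zip_codes * genders * birth_dates
population = 330_000_000    # approximate US population

print(f"{combinations:,} combinations for {population:,} people")
print(f"average people per combination: {population / combinations:.2f}")
# With ~0.15 people per combination on average, most occupied combinations
# hold exactly one person, so birth date + gender + ZIP often pinpoints you.
```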
1:23:32
Maybe
1:23:36
just for the audience , uh , watching
1:23:38
this or listening to this . So
1:23:40
the book that we're talking about is this
1:23:42
one , Moral AI . I
1:23:45
don't know if you can see it . Does it get into focus ? I
1:23:47
don't know if it does .
1:23:48
Yeah now it does .
1:23:50
So it's this one , Moral AI and
1:23:52
How We Get There . It's really
1:23:55
a great book that
1:23:57
goes across fairness , privacy
1:23:59
, responsibility , accountability , bias
1:24:03
, safety , all kinds
1:24:05
of and it tries to take sort of a pro-con
1:24:07
approach . You know , because I
1:24:10
think maybe this is a good way to end the discussion
1:24:12
, because I have to go . I
1:24:15
think one cannot just say AI
1:24:17
is all good or AI is
1:24:19
all bad , like it depends on
1:24:21
what you use it for and how
1:24:24
we use it and how we let
1:24:27
it be biased or not , or how
1:24:29
we implement fairness into algorithms , and
1:24:31
so there's just so many things that we could talk about for an hour
1:24:33
. But that's what this book is all about and
1:24:36
that's what triggered me to share
1:24:38
it a month back . So just thank
1:24:40
you for the , for the chance to talk about
1:24:42
some of these things , and I'd be happy to jump on
1:24:44
another one .
1:24:46
Absolutely , We'll have to schedule one up , but
1:24:48
thank you for the book recommendation . I did
1:24:51
start reading the Moral AI book that you just mentioned
1:24:53
. Again , it's Pelican Books . Anyone's looking
1:24:55
for it . It's a great book
1:24:57
. Thank you , both Soren
1:24:59
and Christian , for taking the time to speak with us this
1:25:02
afternoon , this morning , this evening , whatever it may
1:25:04
be . I never know where I am with the time zones
1:25:06
and we'll definitely have to schedule to
1:25:08
talk a little bit more about AI and some of the other aspects
1:25:10
of AI . But if you would
1:25:12
, before we depart , how can
1:25:14
anyone get in contact with you to
1:25:17
learn a little bit more
1:25:19
about AI , learn a little bit more about what you do and
1:25:21
learn a little bit more about all the great things that
1:25:23
you're doing ?
1:25:26
Soren , so the best place to find me is probably
1:25:28
on LinkedIn . That is my only
1:25:30
media that I participate in
1:25:32
these days . I deleted all the other accounts and
1:25:35
that's a topic for another discussion .
1:25:37
It's so cleansing to do that too .
1:25:38
Yeah , and
1:25:42
for me it's also on LinkedIn and
1:25:44
on Blue Sky . It's Curate Ideas
1:25:46
Excellent
1:25:48
, great .
1:25:48
Thank you both . Look forward to talking with both of you again soon
1:25:50
.
1:25:51
Ciao , ciao . Thanks for having us . Thank you so
1:25:53
much . Bye , thank you guys .
1:25:57
Thank you , Chris , for your time for
1:25:59
another episode of In the Dynamics Corner Chair
1:26:01
, and thank you to our guests
1:26:03
for participating .
1:26:04
Thank you , Brad , for your time . It was
1:26:06
a wonderful episode of Dynamics Corner
1:26:09
Chair . I would also like to thank
1:26:11
our guests for joining us . Thank
1:26:13
you to all of our listeners tuning in as well
1:26:16
. You can find Brad at
1:26:18
dvlprlife.com , that
1:26:20
is D-V-L-P-R-L-I-F-E dot com
1:26:24
, and you can interact with them via
1:26:27
Twitter D-V-L-P-R-L-I-F-E
1:26:30
. You can also find
1:26:32
me at matalino.io
1:26:35
, that is M-A-T-A-L
1:26:38
-I-N-O
1:26:46
dot I-O , and my Twitter handle is Matalino16 . You can see those links down
1:26:48
below in the show notes . Again , thank you everyone . Thank you and take care
1:26:50
.