Achieving Hyper Performance

Released Tuesday, 18th February 2025

Episode Transcript

Transcripts are displayed as originally observed. Some content, including advertisements, may have changed.

0:00

Have you ever wished you had more influence at work, that people would naturally be more likely to buy in on whatever idea you're selling them, whether they report to you or not? Well, you're in luck. I teach a virtual 10-week class on internal communication and change management through Texas A&M University, and it's enrolling now. Get details and enroll at HBL.TAMU.EDU and click on certificate program. You get to learn directly from me, including live virtual office hours over Zoom, with a cohort of interested, brainy folks like you from around the world. Again, learn more and enroll in the Internal Communication and Change Management course at HBL.TAMU.EDU. That's HBL, like Human Behavior Lab, T-A-M-U, like...
1:07

Welcome to episode 472 of The Brainy Business: Understanding the Psychology of Why People Buy. Today's episode is all about achieving hyper performance with Dr. Agnis Stibe.

1:25

You are listening to The Brainy Business podcast, where we dig into the psychology of why people buy and help you incorporate behavioral economics into your business, making it more brain-friendly. Now here's your host, Melina Palmer.

1:41

Hello, hello everyone. My name is Melina Palmer, and I want to welcome you to The Brainy Business podcast. One of my favorite case studies to share is of a company called The Littery. Their CEO joined me on the podcast way back in episode 75 and talked about the way they used behavioral economics to reframe the problem of getting people to properly throw away and sort their garbage, by looking at motivation and incentives differently and inventing smart garbage cans that have allowed them to turn litter into lottery tickets. It's an amazing story, and one that my guest today, Dr. Agnis Stibe, was a big part of in the behavioral aspects that went into that company. He will be talking about that work and many other amazing things he does in our conversation today, which originally aired back in June of 2022. He's a fascinating person with some really great case studies. I can't wait to share them with you. Don't forget, links for my top related past episodes and books are waiting for you in the show notes for this episode, which are found within the app you're listening to and at thebrainybusiness.com/472. All right, let's jump right in. Dr. Agnis Stibe, welcome to The Brainy Business podcast.

3:00

Happy to be here. Good to see you again.

3:02

Absolutely, always delighted to see and chat with you. And for everyone who is not yet familiar with you and your amazing work, can you share a little bit of your background and the work that you do?
3:17

I'm really passionate about helping people, teams, organizations, and societies to get where they want to get, and usually they want to get to a higher level of satisfaction, whether that's through the well-being of society, or through performance in an organization, or just individuals boosting their self-esteem. So how do we get there? I think it's good to combine the resources of the technology that we are currently developing, especially the most promising one, artificial intelligence, and craft it with the understanding of how humans are and how humans can change, or more specifically, how humans are not willing to change, and instruct the artificial intelligence to help us get where we want to get. I label it as hyper performance. And what I mean by that is not adding more skills, practice, or knowledge, but removing the obstacles from human thinking. It's our counterproductive psychology that is oftentimes the roadblock to our own success and happiness, at individual, societal, and also organizational levels. I'm doing research in that, I'm teaching subjects related to that, and I also give keynotes and master classes for organizations to help embrace that perspective, because unfortunately the history and the habits, especially in organizations, are: oh, our people are not performing very well, our KPIs are quite low, maybe we should send them to a training, and after the training they will be different people. Now, a good friend from when I worked at Hewlett-Packard said, you know, a fool with a tool is still a fool. So I took that perspective and tried to tailor it: if there is a bias in employees or presidents or individuals somewhere, the bias is still there no matter how well you train the person. So that's kind of the comparison between increased performance and hyper performance that I'm really passionate about. So that's my journey.
5:15

And with that, where you say, you know, the fool with the tool is still a fool, right? I believe what I'm hearing in what you're saying there is, like, even when we know about the biases, we can't fully eliminate them. It's not like, oh, I'm not going to stereotype this way anymore; I'm not going to have confirmation bias be a thing for me, right? We don't get to just turn that off when we're aware of them, unfortunately. So what sort of tips and things do you incorporate into the work that you're doing to help people be a little less of a fool with their tools?
5:55

Okay, it's kind of interesting to have a very simplified version of it, but I expressed what a friend of mine told me many years ago. Of course, we also need to move away from it gradually in answering your question. Obviously, if there is no awareness of a bias, the bias doesn't exist; people just have a feeling like everything is fine, nothing's wrong with it. Then the next step is how the awareness can arrive to a person: either the person arrives at that awareness himself or herself, or somebody says it, and that somebody can be a human being or it can be a technology. And when that awareness arrives from external sources, then it's also a question of credibility, and there's a question of whether somebody wants to rule my life and suggest what's the better way of living, and all of these other ways how people experience, number one, resistance, number two, denial. Those are the typical trajectories where people can find themselves. So therefore, number one, awareness. It's just like with everything; we can look at addictions, for example: people are addicted and they are in denial, so nothing will really help. Awareness number one, and then once the awareness is there, there has to be a degree of willingness, willingness to be aware. So what I usually say, and that starts at a young age, speaking with the kids, and then goes up to the students and up to the adults, is: do you want to? That's kind of a very trivial but at the same time the key essential question. Do you want to be aware? And then, if you want to be aware, do you want to do anything about it? And from my work there is a tool which I call circles: there are people who know what they want and they do it, and then quite many people fall into the middle circle, which I call the January 1st circle, so people kind of want to change their lives but are not always following up with the choices and behaviors and decisions.
8:17

Yes, yeah, definitely. I did episode 198 of the podcast on the Dunning-Kruger effect. I've always loved that concept myself, and it ended up being so popular, beyond what I was even expecting, as people got Dunning-Kruger on the Dunning-Kruger effect itself. For anyone who hasn't listened to that yet, that is essentially that competence and confidence have kind of this strange relationship: when you don't know things yet, when you don't know all of what you don't know, you're at the peak of Mount Stupid, as is said in kind of the fun industry term there, where you're super confident even though you don't know anything about what you're talking about. Then as you get a little bit more competent, your confidence tanks, and then you have to slowly work your way back up out of the valley of despair. And like you said, that awareness point, the knowing what you don't know, is so important. You have to then decide, as I talked about in the episode, that not everything is worth coming up the slope of enlightenment, as they talk about in that kind of chart. There are some things where you can just accept: wow, there's a lot more to this than I thought, and it's not worth it for me. So while the red is bad in a lot of ways and you should maybe advance, in some cases it's okay, I think, to just sort of accept that this is part of what I've got going on and it's not worth my time, so I can focus on these other things that really are important and that I want to change.
10:06

Right. And maybe we should give the listeners more of a simplified vocabulary, while still touching on the deep science that explains how humans change or can't change. For sure. I think the everyday word for that is curiosity. Look at kids: they're curious. Okay, let's try this and that. We have access to the same curiosity throughout our lives, and this is the question: do we allow ourselves to be using it? Therefore, simplifying, the awareness is curiosity. Are you curious? Are you sure that everything is in alignment in your life? And if a question arrives to you that says, well, are you happy with what you do at work? Are you happy with how you spend your morning time? Are you happy with your relationship with technology? This is how I shifted from successful industry jobs, working at Hewlett-Packard and Oracle, doing personal relationships, having nice business cards and trips and stuff like that, into my scientific endeavor. It was really about, okay, I was curious. So if the question arrives to me, having all these kinds of benefits from the industry work, and the question was very straightforward, then I might assume many listeners have the same question: am I happy?
11:27

Yes, and I am all about questions, as the audience knows. I recently had Warren Berger on the show, who wrote my favorite book, A More Beautiful Question. So for anyone who wants to think more about great questioning, I highly recommend that one; it's his episode, episode 200. And so, you've talked a bit about AI and the role of technology, and I know that is something that people are asking about a lot right now: how AI and machine learning tie in with the behavioral sciences. What are your thoughts? Well, what are your, I guess, simplified-for-the-podcast thoughts, because I know, as you said, you have many, many that would take much longer than the time you're willing to give us today.
12:13

I have a lot to say about this topic. Let me give you the key essential takeaways, or key essential perspectives. So if the question is about the power of technology, assuming AI, artificial intelligence, is the most promising: number one, I would really encourage people to take away unnecessary bias towards artificial intelligence before we start the conversation about the use of AI for our behavioral changes on the different scales. So number one, I will give you a perspective. Humans, a long, long time ago, our ancestors, many thousands of years ago, were trying to figure out how we can be more intelligent as a species. First of all, we survived because of collaboration. The next question was how we can collaborate more efficiently. Okay, let's invent spoken language. After that, okay, it's not so efficient that we need to speak to each other, so maybe let's write it down so everybody can read it. Okay, that makes life easier. The next is, oh, I need to rewrite it all the time, so maybe even printing. Wow, nice, scaling up. Next, oh, it's accumulating in one place, libraries, that's great. The next thing is, oh, digital. Digital is great, so let's have more access from different locations and we can share it. Wonderful. And the next is the emergence of technology not only as the accumulator, as the place to share and do many other things, but as the intelligence itself, looking at our human intelligence, revising it, finding new patterns in our knowledge, for example, or bringing pieces together in innovative ways. And I say that's just another contributor, machine-driven intelligence, contributing to co-evolve into united intelligence. That's the paradigm I was developing in my mind, and I'm sharing this with people: okay, that makes sense. Number one, we are together with it on our journey, and the only obstacles are human biases, or human dark sides leveraging the tools against ourselves or against other human beings. Now, moving from that understanding that artificial intelligence is technology that helps humans to achieve their goals, then we can look at behavioral science, specifically for this podcast. Here is a wonderful use for the AI: looking at data and giving deeper insights into behavioral patterns. We can start with organizational processes, how people make decisions, when and what they do with their technology. Everything is in the log files; you can look at that, everything is registered. From there, you can make better analysis, visual representations, and also suggestions for decision making. So those are the most common uses of AI for addressing human behavior change at different scales.
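[Editor's note: the log-file point above stays abstract in the audio, so here is a minimal, hypothetical Python sketch of what "looking at log files for behavioral patterns" can mean in practice. The log format, names, and cutoff below are invented for illustration and are not from the episode or any specific product.]

```python
from collections import Counter
from datetime import datetime

# Hypothetical log lines in the form "timestamp,user,action" (format invented here).
RAW_LOG = """2022-06-01T09:02,ana,approve_request
2022-06-01T17:55,ben,approve_request
2022-06-02T09:10,ana,reject_request
2022-06-02T18:20,ben,approve_request
2022-06-03T18:05,ben,approve_request"""

def late_day_decisions(raw: str, cutoff_hour: int = 17) -> Counter:
    """Count how often each user records a decision after a cutoff hour,
    one simple behavioral pattern that is invisible without the logs."""
    counts = Counter()
    for line in raw.strip().splitlines():
        stamp, user, _action = line.split(",")
        if datetime.fromisoformat(stamp).hour >= cutoff_hour:
            counts[user] += 1
    return counts

if __name__ == "__main__":
    # A pattern like "ben decides mostly after 17:00" is the kind of insight
    # that could then feed a visualization or a decision-support suggestion.
    print(late_day_decisions(RAW_LOG))
```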
15:08

Yeah. Do you have any projects or anything you're able to share to kind of show practically how you've used this? I know that many things cannot be shared, so if you don't have anything, totally understand.
15:22

There are success stories, and one of the famous examples addresses a real-life problem in organizations and in many other occasions where people have agreed to come and meet and discuss and agree on certain things, like project meetings and organizational meetings. And we know, I think everyone listening to this podcast has been in a situation when somebody arrives late to the meeting. This was a real case, a real implementation, simplified to test that the proof of concept works. There was a computer in the meeting room, and there was a bigger TV or screen, which conference rooms usually have, where you can present the PowerPoint and stuff. It was all connected in the sense that when every person arrived at the meeting, whether you were on time or late, it was giving you the color of the meeting as the representation of your performance on that visual on the screen. So if you were on time, you got the color of that meeting; if you were late, you didn't get the color of that meeting. Everyone was able to see their own performance across the last meetings, and then you could also see how others were performing. So again, staying away from the conventional carrots and sticks, punishments and rewards, it was purely based on social influence, how we inherently react to other people. In this case, technology did only one thing: it made the whole process transparent and accumulated it over time, and you were able to see who is getting more colors in their bars and who is not and lagging behind. After five meetings, everyone was on time, every single person was on time. It took five meetings. And of course we can imagine, if that design strategy was to grow those bars depending on the next meetings, and you were either getting more colors in your bar or you were not, after three, four, or five meetings you see, well, there are people having like five colors and you have none. You don't want that and you want to escape that. So that was one of the most vivid ways, and at the same time it addresses one of the most common human biases, where people would just be saying, oh, it's not that important to be on time, or it's pointless to be on time, I have other more important meetings, I have to take this call, I have to answer this email, and it goes on and on.
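[Editor's note: the meeting-screen example is described only in words above. As a rough illustration of the bookkeeping such a screen could rely on, here is a small Python sketch; the meeting names, colors, and arrival data are invented, and this is not the actual system from the episode.]

```python
from collections import defaultdict

# Each meeting has its own color; a person "earns" that color only by arriving on time.
MEETINGS = [
    ("Monday kickoff",   "green",  {"ana": True,  "ben": False, "chi": True}),
    ("Wednesday review", "blue",   {"ana": True,  "ben": False, "chi": True}),
    ("Friday retro",     "orange", {"ana": True,  "ben": True,  "chi": True}),
]

def build_bars(meetings):
    """Accumulate, per person, the colors of the meetings they were on time for."""
    bars = defaultdict(list)
    for _name, color, arrivals in meetings:
        for person, on_time in arrivals.items():
            bars[person].append(color if on_time else None)  # None = an empty slot
    return bars

if __name__ == "__main__":
    # The shared screen simply shows everyone's accumulated bar; no reward or
    # punishment is attached, the social comparison does the persuasive work.
    for person, bar in build_bars(MEETINGS).items():
        print(f"{person:>4}: " + " ".join(c if c else "." for c in bar))
```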
18:01

I love this example, as you know. When we talked about it, and it's been a few months now since we first talked about it, I said, oh, that one's going in the book, right? My second book is on what your employees need and can't tell you; it's all about change management. And I love this example of how social proof makes it to where you feel that pressure to be on time. What's really fascinating about this example, I think, is that everyone can see whether you're on time or not. Everybody knows anyway. So why does the chart have any impact at all, right? Everybody should know that I was late for the last four meetings, so why do I care about the chart? Why does seeing it make a difference? And for everyone, for when the book comes out, Agnis was kind enough to provide some imagery that should be included in there, so you can actually see how that looks. What sort of insight do you have into that aspect? It is such a funny quirk, but we can all feel it too, right, where you would look at that chart and say, oh man, I hope my company never does something like that. But why, right?
19:20

It is so strange to face our own human nature at its core, or at its depth. And that's the most interesting part, at least for me. Many people would like to deny it and not go that way at all, but I'm different; that is most desirable for me. Okay, how can I understand how things work deep down in our neurology, physiology, sociology, and how it can be defined with technology design, so that these technologies actually make the transformation work in the long term? So here it is, there are two ways to look at this. Let me start with the one that would possibly be popping into the minds of the listeners: oh, I don't want this. I don't want to see myself. I don't want to see myself in comparison with others on the same screen. I don't want that. You see, that's the denial and the resistance speaking. What they usually put up as an argument is, that's public shaming. That's the most common way people react in my talks or in the master classes; there is always somebody bringing up this idea. And I say, well, let's take a closer look at this. What's the role of technology in this experience? Similarly, as you said, if you sit in a meeting room without this solution, you would be able to see who is coming late, and if you want, you can write it down on your paper. It's kind of the basic idea of the same thing, like a paper-based solution, and now it's just more digitized because we have technologies that can do it for us. By the way, the next thing is AI with a camera seeing who is coming in and doing it all for us. So that's how it's accelerating or amplifying the effect. So number one, we should take away the argument that public shaming is such a response, where technology is just doing the same thing that we already can do, just doing it more efficiently, so we don't need to pay that much attention. And why it feels so different is because our brains are very efficient at filtering what's useful and what's not. Therefore, if you have seen a person not being on time or being late for the last meeting, but that was maybe a one-time occasion, it's not such valuable information, so it's erased. If that repeats for quite some time and there is a person always being late, of course, that's valuable information, and we start to remember who is the person, or who are the people, coming late. So technology is, number one, taking away the bias, because everyone can remember, okay, that person was late, that person wasn't, but it's inaccurate; our brains are good, but not perfect. The technology, number one, has a more reliable perspective on the behavioral performances of the people, and secondly, it makes existing patterns just more transparent and more visible, so that we as a collective look at the representation and are not stuck debating our subjective perspectives. So those are the major reasons: on the one hand, what people are resisting and expressing their unwillingness to have, and on the other hand, what the real benefits and real improvements are that companies can get by implementing these solutions and taking away these human-driven biases.
22:50

Yeah, definitely. Thank you so much for digging in on that. Like I said, it's one of my favorite examples. Another of my favorite examples that I talk about all the time, kind of a key one when I talk about change, is the Littery, which I love, and you were pretty heavily involved in that. I don't think project is the right thing to say, company, that movement. Yeah. What can you share about it? And I'll link, of course; there's an episode where I talk to Michael, the CEO, about the Littery in episode 75, which I know because I talk about it all the time, like I said, and I'm always sharing about it. What role did you play, what can you share about the Littery project?
23:39

one, the literary is the concept

23:41

and startup and idea and the

23:43

business model. How our societies can

23:46

get more... cleaner and our governance

23:48

for the local municipality more efficient

23:50

in taking care of our surroundings

23:52

to be nuts crowded with garbage

23:55

or waste around the basic promise

23:57

that also Michael says is before

23:59

there are humans on earth and

24:01

there are no litter. So it's

24:04

a man made behavioral problem and

24:06

this is all starts there. What

24:08

attracted me to this idea was

24:10

Regardless of how brilliant technology can

24:13

be developed, so they smart litter

24:15

bins, recognizing what kind of litter

24:17

it is, plastic, metal, glass, etc.

24:19

And giving a lottery ticket, if

24:22

you have sorted and put your

24:24

property in a property litter bin,

24:26

that's all fine. The angle that

24:28

I was really passionate to add

24:31

to this project was, well, yes,

24:33

you are using and leveraging the

24:35

lottery idea, which is gaining something

24:37

and anticipating that you might be

24:40

the winner. That's kind of our

24:42

optimism bias a little bit. It's

24:44

all fine. And we know lotteries

24:46

have been around for 4,000 years.

24:49

And they work in a specific

24:51

way. And I said, well, it's

24:53

great. Let's do it. to the

24:55

next level where not only the

24:58

people are engaged and attracted by

25:00

the lottery concept alone, let's add

25:02

the social influence to that. Let's

25:04

add the perspective of other people.

25:07

For example, if you have a

25:09

neighborhood and a few neighborhoods in

25:11

the city, and then depends how

25:13

well the people are sorting their

25:16

litter and putting their right bins,

25:18

we could quantify that. And this

25:20

is what... the work that I

25:22

did at the MIT Media and

25:25

the quantifying communities, then we could

25:27

be able to compare these communities

25:29

on the screens. next to the

25:31

bins. What the bins themselves would

25:34

be telling you whether you performed

25:36

well or not, depending what you

25:38

put in, but also what was

25:40

the behavior of previous people that

25:43

were throwing something into this bin.

25:45

And a bin would be having

25:47

this endless, not really analyzed, but

25:49

like continuous. continuous feedback loop about

25:52

the behavior of your choice about

25:54

the other people around this bin

25:56

and then around the neighborhood and

25:58

then comparing to other neighborhoods and

26:01

this is the way how we

26:03

are amplifying the effects of success

26:05

for that kind of solution just

26:07

by adding this social layer to

26:10

it because otherwise people if they

26:12

are left alone okay they are

26:14

just sitting and hoping okay am

26:16

I going to win or not.

26:19

But then you see, it's not

26:21

only about winning or losing, it's

26:23

about we, we as a collective,

26:25

do we want to live in

26:28

the kind of cleaner community? Do

26:30

we have the engagement? People desire

26:32

to change many things in their

26:34

lives, not only individual but on

26:37

the collective level. What have we

26:39

done wrong? Maybe not wrong, but

26:41

kind of what is our current

26:43

architecture in the cities? It's not

26:46

transparent. But technologies comes in and

26:48

technologies come and help us to

26:50

make it again more transparent like

26:52

a 2,000 years ago. Looking out

26:55

to the fields, you can see

26:57

some people are hunting there, some

26:59

people are being ground, some people

27:01

are doing something else. Our cities

27:04

now are regaining and gaining back

27:06

that and giving the possibility for

27:08

people to benefit from seeing others,

27:10

especially the good performers. That's that's

27:13

always important. Yeah, I really like

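[Editor's note: to make the "quantifying communities" idea concrete, here is a toy Python sketch of the aggregation step, assuming each smart bin simply reports whether an item was sorted correctly. The neighborhoods and numbers are made up; the real Littery system is not described at this level of detail in the episode.]

```python
# Toy events reported by smart bins: (neighborhood, sorted_correctly).
EVENTS = [
    ("Riverside", True), ("Riverside", True), ("Riverside", False),
    ("Old Town", True), ("Old Town", False), ("Old Town", False),
]

def neighborhood_scores(events):
    """Share of correctly sorted items per neighborhood: the number a screen
    next to each bin could show so communities can compare themselves."""
    totals, correct = {}, {}
    for hood, ok in events:
        totals[hood] = totals.get(hood, 0) + 1
        correct[hood] = correct.get(hood, 0) + (1 if ok else 0)
    return {hood: correct[hood] / totals[hood] for hood in totals}

if __name__ == "__main__":
    for hood, score in sorted(neighborhood_scores(EVENTS).items(),
                              key=lambda kv: kv[1], reverse=True):
        print(f"{hood}: {score:.0%} sorted correctly")
```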
27:15

that. We haven't talked about that

27:17

before so thank you for sharing

27:19

that and it's very reminiscent I

27:22

guess or I like how it

27:24

goes kind of beyond just social

27:26

proof to so Chaldini you know

27:28

he's got his now seven principles

27:31

of persuasion and was on the

27:33

podcast talking about unity that unity

27:35

principle and I think that that's

27:37

really present there in. the being

27:40

part of something bigger and we're

27:42

all on one team working together

27:44

and being able to help reinforce

27:46

the community and something so simple,

27:49

but that can have really lasting

27:51

and compounding effects for the better

27:53

by just a tiny little ad

27:55

that can really change everything, which

27:58

is what it's all about, right?

28:00

Unfortunately, we humans, over our evolution, have gained a lot and we have lost some essential awareness of who we are. Therefore we also have this debate about AI being different from us, which is wrong in many aspects, and also this "them against us" way of seeing the differences and all the characteristics of people. That is driven by one of the social influence principles, social comparison. We compare everything, every time; depending on the culture there could be a stronger effect of social comparison, in some cultures maybe less. Generally, what it means is, if you look back to how you thought about the grades in your high school, you looked at how well you performed compared to your classmates, and then you were specifically looking: oh, I never knew I would perform worse than this other person, and I never expected him to perform like that. That's social comparison. It has multiple angles. And this is how our societies have evolved, of course hugely impacted by technological advancements, radio, television, mass communication media, social media and so forth, and again it depends how it was designed and how it was used, again by other human beings, to shift our society and our way of thinking. But there is good news. We are coming back to the basis of our human nature. More and more we see that people want to choose healthier lifestyles. They want to live in safer communities. They want to get together so that they improve their quality of life. And not only as individuals in society, but also as entrepreneurs, as a company, as products and services, and all this payback-to-the-earth idea on the balance sheet: okay, what is the impact? So therefore, I see good trends. As for the dark side of our human nature, I'm not saying we should get rid of it; I think we cannot. Our DNA is just remembering all the dark ages that we went through as human beings. But now, thanks to technology, we get a deeper, clearer, more detailed perspective of who we are. And if we use our intellectual capacity to say to ourselves, well, the dark sides come from the dark ages and we would like to leave them there, we will use the positive and beneficial ones for today, and we will leverage technologies for our own benefit collectively, and not for making any unnecessary tensions between groups of people or countries or perspectives and so forth. And also thanks to the scholars like Cialdini and others, who have made a great contribution to helping people realize how things work on this intercommunication level, and then also giving that as insightful tools for technology designers. So Cialdini is one, BJ Fogg, and there are many other scholars and scientists who have been a great help. I especially remember BJ Fogg from Stanford University; he runs his behavior design lab right now, and he wrote the book Persuasive Technology almost 20 years ago, and I think that was a really, really good book for the whole community that currently says: yes, technology can help us; not only do we facilitate each other, technology can be designed for positive behavior changes.
31:31

Yeah, and when you were talking, I was realizing, remembering, thinking about... so I did a series really early on on the podcast that I called All the Biases, and I broke them into different categories, like the way we think about ourselves, the way we think about others, things to do with memory, just kind of putting them into some sort of categories to help make sense of the many, many biases there. Through all of more than 200 episodes now, and even though we've talked about AI and things like that, I haven't really had anyone talk much about the research and what people have found about our biases toward technology. Knowing, like, we know that a coworker is a machine versus a person, or if we're playing games against a computer versus a person, how we feel differently about that. And we have this really interesting connection to technology, I guess, where it's this very polarized relationship: we really love it until we don't, and then we really, really hate and fear it. Can you give any insight into some of that for people that haven't read any of that research or heard much about it yet? I think it's fascinating.
32:48

It is. It definitely has. Technologies have huge impacts, and it is such a multifaceted and multi-dimensional experience, ever since the first initial digital technology. Before that we had regular mechanical technology, and we have a mix of mechanical, electrical, and all the software; all these technologies are now impacting the way we experience them. The bias towards technology: there is such a thing, that's for sure, that's how we experience it. And I agree with the kind of trajectory you described: we start with, okay, we look for hope and help in making our life comfortable, and we enjoy that until we find something that we don't like about it. Maybe it's a question of reliability, or maybe it's the high expectations we have that this technology should deliver and work 100% of the time. Nevertheless, it's important to realize that most likely, and I will make a hypothesis here, because that's not the core of my work, but this is a very neighboring area, so I can give an educated guess for what we scientists call a hypothesis: technology is perceived by an average human being the same way an average human being perceives another human being. We look for trust, which we call reliability. We look for consistency, which means the level of accessibility of the technology. Then we look for a long-term relationship: okay, how many times did we have a good experience with it, and how many times was there a breakup with the technology? All of this, and much more, is coming through, I would even dare to say, in the way we are trying to impose our human characteristics and values on any non-human objects and our experiences with non-human events. The same way we see a face on the moon, and the same way we see cats in the clouds. And why? Because it was essential for our survival. If we walked through the forest many thousands of years ago, we'd better read the other human beings and their faces: are they friendly or not? Are they telling us the truth or not? The same we expect from technology. And that's why it's not always trivial, especially when we think about humanized robots. Yes, on the one hand we would see similarity with other human beings, and then would expect, for example, emotional intelligence. But they are still working on that, and when the robot makes its first mistake, we disqualify it. And the takeaway here is: technology is a tool, we are developing it, our intentions are behind it, whether good or bad, ethical or unethical. Everything we experience with technology is giving us an opportunity to look at ourselves. And the more advanced technology we develop, the closer and deeper we see ourselves in the whole spectrum of our bad and good, bright and dark sides of human nature. And that's going to continue. And if you have this awareness, then we are also aware of the biases towards technology, because they pretty much should be the same biases we have against other people, against other nations, against different other things, that we would naturally have without technology. So it should be the same game, just applied to another actor, which is technology driven.
36:19

Awesome. Thank you for taking that little sort of side path with me, I appreciate it. And I would be remiss if we did not talk about the Stibe method for hyper performance before you go, because, you know, it has your name. So you should share a little bit about what it is and how you use that to help people and companies.
36:44

For a long time I was labeling it as a transformation design methodology, and different other words were attached to it as it was emerging from a deep scientific understanding of human change, and then refining how changes are different, and then getting to a way of investigating where the problem is, and then applying technology so that technology can help us deal with that problem in a more consistent way. So that was the journey. It all started with my question to myself, which I mentioned earlier in the podcast: am I happy? And I said, no, I'm not really. So I went to investigate: what is the possibility to merge, to intertwine, to blend technological intelligence with human intelligence for our own benefit? And this is how, over the years, I arrived at the Stibe method. The Stibe method implies three major stages. There are 10 tools, and they are split across the three stages. The first two tools fall into the guidance phase. So we kind of look for what is the best way for us to help ourselves, and the guidance tools have to not only specify where we want to get, which I call the green vector, but at the same time help to map out the ways that we don't want to go. By the way, in management, for example, oftentimes people or companies have a vision and mission. But do they have the map around that area, where they hit the traps of the previous habits and people, leading to the previous poor decisions that they've made before? Until they have the map of these traps, they will just fall into them without even knowing it. So therefore, the first step is guidance, and there are two tools. One is the pathology of change, understanding the difference between one-time and long-term changes, which are transaction, transition, transformation. Then there is a vector: okay, there's a green vector, there should be yellow vectors and red vectors, and then mapping out what the likelihood is that you are actually constantly sticking with your desired direction, which means the green vector. And after that starts the framework, the framework of eight tools, and there are actually three phases. The first is the investigation phase, with four tools. Most of the time the problem with failed solutions is that there is not enough awareness of where the problem really is. So therefore four tools go to investigate and find out that most of the problems reside in human thinking. Most of the biggest challenges for performance, and especially for hyper performance, are the human biases, poor decisions, counterproductive thinking, and all of those sorts of things. Once that's realized, you can start designing and using technology. Then we have the design phase. The design phase fundamentally is data driven, with intelligence to analyze this data. And then, which is essential, which is the top layer of the architecture, which comes after the big buzz around smart cities and so forth, is the transforming layer: how we communicate this information, relevant information, back to our primary target audience, the end users. And this is where I really emphasize, and I cannot emphasize enough, instant feedback loops. Instant feedback loops means: if you receive your electricity bill next month, it doesn't really have any instant reflection, or it doesn't give you the time to do anything about it. But if, every single time you switch on the lamp at home or switch it off, you see something like the color of the light changing, it gives you instant feedback. That's essential for designing. And the other component is social influence. It means we integrate into that instant feedback loop information about other positive behaviors, about people in your building and their switching during the previous month, and that encourages you to follow or get inspiration.
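[Editor's note: here is a minimal Python sketch of the two design ingredients named above, an instant feedback signal plus a social comparison, applied to household electricity use. The thresholds and the building average are invented for illustration; this is not a product described in the episode.]

```python
BUILDING_AVERAGE_KWH = 8.0  # assumed average daily use among neighbors last month

def feedback_color(todays_kwh: float, building_average: float = BUILDING_AVERAGE_KWH) -> str:
    """Return an immediate, glanceable signal instead of a bill a month later."""
    if todays_kwh <= 0.8 * building_average:
        return "green"   # clearly below the neighbors' average
    if todays_kwh <= building_average:
        return "yellow"  # close to the average
    return "red"         # above the average

if __name__ == "__main__":
    for usage in (5.5, 7.9, 10.2):
        print(f"{usage:>4.1f} kWh today -> the light turns {feedback_color(usage)}")
```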
40:53

And the final phase, the crucial phase, is to make a kind of assurance for long-term success. There are two tools. One is how you can avoid the times when everybody is just switching all the lights on and burning up the energy, because naturally the social influence is working with the same power at both ends, and you don't want your system to promote the opposite behavior just because there is a celebration going on or something like this. So that's one, and the final one is ethics. We need to be very aware that it's not the technology that is willing to do bad; it's how we are. On the one hand, some people might have bad intentions using the same tool. And the other one is something from our psychology: we can predict many behavioral aspects, but there is some unpredictability. So we cannot predict that there will be some emergence of a subgroup of people who would say, well, oh, this is how you do it? You light up the buildings during the night where the residents of the building are compared to the previous week? This is how you do it? So I will rebel and I will switch on all my lights just to not allow you to look at my building, and I will destroy our building's performance just because I don't like the solution that has been implemented. So that's, shortly, the overview of the method.
42:30

Awesome. And I know, where you're talking about some of the vectors and aspects and color things there, we will be linking to some articles and to your website that has more information. For people who are now intrigued and want to go take a look and really see what he's talking about, we will definitely have the link there, because as with anything, those visuals do help us get a feel for what's there. And clearly, like you said, there are 10 aspects there; there's a lot going on in this model, and it's very, very useful and helpful to have that sort of cheat sheet you can go to, for sure. So as I said, definitely linking to that. And for everyone that is now so excited to go learn more about what you do and connect with you, watch your TED Talk, or TED Talks, I'm not sure, there are multiple, I believe. Yes. Where are the best places to go to connect and learn more about you?
43:35

So if you are the person who has more of a preference for entertainment, you are welcome to my YouTube channel. There are short videos explaining some of the things; some are more personal development related, some organizational, some explain the method. So that's kind of one place, and I think it's very appropriate for these times, when many people are learning through watching videos. And again, what's good about it? Videos usually contain another person telling you something, which is social influence at its core. So that's one, but if you are more serious, if you really want to dig deeper, then you're welcome to my website, which is my name and surname dot com. There you have the scientific literature behind it, you have the cases, the success stories, and some interesting collections of the videos that I have. If you don't want to just browse YouTube, you can have a more specifically tailored way into the content related to my work.
44:41

Wonderful. And, you know, right there, for anyone who is excited to have Agnis come do a talk for your company, or you want more information, there are links there for more information; you can schedule a call to talk with him about coming in to speak to your group, whatever it is that you're looking for. I say definitely check it out, and there will be links in those show notes waiting for you. Thank you again, Dr. Stibe, for joining me on the show. I always learn something chatting with you, and I'm sure everyone listening learned lots of things today as well.

45:21

My pleasure. Always welcome. See you next time.

45:26

Sounds good.
45:28

So, what got your brain buzzing as you learned from Agnis today? For me, while the Littery is a big piece of what I always associate with him, I also love the example of social proof to help get people to show up on time to meetings. This little, somewhat counterintuitive nudge is so powerful and a great example of reframing a problem to change behavior. It is so often our tendency to be too myopic when looking to solve a problem. You look right at what you think the problem is and jump into trying to solve it, instead of stepping back and asking thoughtful questions to see what might be hiding on the periphery. What's something we could do just before or adjacent to the problem we're seeing now that might have more influence on the behavior we're trying to change? Those questions can lead to such amazing insights and true behavior change, which is fantastic. And of course, Agnis's work in achieving hyper performance is why I refreshed this episode today. It felt like the perfect primer for the conversation that's airing in just a couple of days with Barry Conchie and Sarah Dalton, discussing the five talents that really matter and how great leaders drive extraordinary performance. Now that your brain is working on this idea of hyper performance at work, you will be set up to understand these five key talents that matter, which Barry and Sarah evaluated over 58,000 executives to unlock and share. It's super fascinating stuff, and I know you don't want to miss that episode. If you aren't already subscribed to The Brainy Business podcast, now is a great time to do so to ensure you don't miss that or any other episode. As we close out the show, don't forget about those show notes with links to my top related past episodes and books and more. It's all waiting for you in the app you're listening to and at thebrainybusiness.com/472. And thank you again to Dr. Agnis Stibe for joining me on the show today. It was a delight to chat with and learn from you. Join me Friday for a brand new episode with Barry Conchie and Sarah Dalton, co-authors of The Five Talents That Really Matter. It's going to be a lot of fun. You don't want to miss it. Until then, thanks again for listening and learning with me. And remember to be thoughtful.

48:00

Thank you for listening to The Brainy Business podcast. Melina offers virtual strategy sessions, workshops, and other services to help businesses be more brain-friendly. For more free resources, visit thebrainybusiness.com.