Transforming Your Acquisition

Released Tuesday, 4th June 2024

Episode Transcript

Transcripts are displayed as originally observed. Some content, including advertisements, may have changed.

0:02

Imagine you're the chief information officer

0:04

of a large company and everything

0:06

is going swimmingly. You've recently modernized

0:08

your systems. Your teams are making

0:10

the most of the tools at

0:13

their disposal. They're evaluating

0:15

opportunities for future infrastructure

0:17

updates. They've got seven months

0:19

to plan for and execute one

0:21

of the biggest bank mergers in

0:23

history. Everything is gravy. Wait,

0:26

what? Only seven months to pull

0:28

off a massive merger? Digital

0:32

Transformation is great when it rolls

0:34

out smoothly and everyone is familiar

0:36

with the systems they work with.

0:38

Hopefully the pace of change doesn't

0:40

affect that familiarity. But when you

0:42

have a merger or an acquisition

0:44

all of a sudden, your teams

0:46

are going to find themselves with

0:48

another set of systems they'll need to

0:50

integrate with their own. And that's

0:52

not easy. Add a stretched timeline,

0:55

and that difficult integration turns into

0:57

a real doozy of a challenge.

0:59

Atul Verma, Chief Information Officer

1:01

at Bank of Montreal, shares

1:03

how merging two gigantic systems

1:05

on a deadline requires both

1:07

meticulous planning and quick reaction

1:09

times. Developing the monumental plan

1:11

relied on IT, as did

1:13

the numerous customers who trust

1:15

these banks with their money. In

1:26

February of Twenty Twenty Three, the Bank

1:29

of Montreal announced it had completed its

1:31

acquisition of the Bank of the West.

1:33

Legally, the two entities could now become

1:35

one. Practically, they

1:38

had until September to get 'er

1:40

done. As digital transformation projects go,

1:42

this one was tough. Just

1:45

to give a sense of the scale, this

1:47

was the largest bank acquisition in Canadian history. Right? So this was a really big, large acquisition, so from that perspective it was really unique and had its own challenges. But the teams had done similar large programs, and that does prepare you a little bit for this.

2:04

The teams at Bank of Montreal

2:06

completed other digital transformation projects before

2:08

this. That experience would

2:10

serve them well. But that doesn't

2:12

mean it would be easy. As

2:15

complex as some of these projects are,

2:17

getting them done on a tight timeline increases

2:19

the potential for something to go wrong.

2:22

And when you're dealing with millions of

2:24

financial accounts, you really need to get

2:26

it right. And for a

2:28

big acquisition of that size, Atul says

2:30

there are two main things that need to

2:33

be achieved. The

2:35

first one is to make sure you migrate all

2:37

the customer data so they have access to the same

2:39

information when they come over on the other side. And

2:41

the second thing is to make sure there are no

2:44

errors in that process. Nobody's money goes missing, right?

2:46

Merge the systems, move all the

2:49

data to one system, and make

2:51

absolutely sure that all the money is

2:53

accounted for. Pretty straightforward, right?
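
That "accounted for" step is essentially a reconciliation pass. As a minimal illustration in Python (the account records and field names are made up, not anything from BMO's actual tooling), the check boils down to comparing every migrated balance, and the grand totals, between the source and target systems:

from decimal import Decimal

def reconcile(source: dict[str, Decimal], target: dict[str, Decimal]) -> list[str]:
    """Return human-readable discrepancies between source and target systems."""
    issues = []
    # Every source account must exist on the target side with the same balance.
    for account_id, balance in source.items():
        if account_id not in target:
            issues.append(f"{account_id}: missing on target")
        elif target[account_id] != balance:
            issues.append(f"{account_id}: {balance} != {target[account_id]}")
    # No accounts should appear out of nowhere.
    for account_id in target.keys() - source.keys():
        issues.append(f"{account_id}: unexpected account on target")
    # Grand totals must match to the cent -- nobody's money goes missing.
    if sum(source.values()) != sum(target.values()):
        issues.append("grand totals do not match")
    return issues

src = {"ACCT-1": Decimal("100.00"), "ACCT-2": Decimal("250.50")}
dst = {"ACCT-1": Decimal("100.00"), "ACCT-2": Decimal("250.50")}
print(reconcile(src, dst) or "all accounts reconciled")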

2:55

How much

2:57

data are we talking about, anyway? So the amount

2:59

of data that was involved across

3:01

all different products and services was petabytes

3:04

of data. It was a very large data

3:06

set that we had to map

3:08

and move. One

3:10

petabyte is one thousand

3:12

terabytes, and there were a few

3:14

of those to move without mistakes.

3:16

That's a lot of ones and

3:18

zeroes. But with enough planning and

3:20

practice, the teams at Bank of

3:22

Montreal and Bank of the West could

3:24

get it done. They'd completed difficult

3:26

technological transitions before. The acquisition had

3:29

first been announced in December of

3:31

Twenty Twenty One, but the two

3:33

banks could not get started until

3:36

final regulatory approvals came through.

3:39

What made it a little bit more complex, other than just the data complexity, was the fact that it took us about thirteen to fourteen months to get the regulatory approval for this acquisition. And that meant that we did not have full access to the data before that approval came. We were still operating as two separate companies, so we could not really go in and look at the data. So we had to plan and complete our product and account mapping, and we had about seven months after the approval came to actually execute on the plans that we put together. It was a pretty compressed

4:18

timeline. They had a little over

4:21

a year to plan but without looking at the actual data

4:24

they could only do so much. Finally in

4:26

February of 2023 they got that

4:28

approval and could start the work in

4:30

earnest. The next step was to identify the

4:32

block of time to carry out the migration. Given

4:35

the size and the complexity of this we

4:37

were looking for a three-day weekend to complete

4:40

the conversion. When put into

4:42

action this kind of project needs the

4:44

bank systems to go offline. No

4:46

changes to accounts while it's in progress. The

4:49

banks had to stay open for business before the

4:52

migration and open for business

4:54

as usual after the migration. People

4:56

need to be able to access their funds. So

4:59

regular weekdays were out. It's

5:01

a massive migration so going offline for a

5:03

two-day weekend wouldn't be enough time to get

5:06

it all done. So they

5:08

looked at all the three-day weekends available and

5:10

picked one. You only

5:12

have so many three-day weekends in the year

5:14

right? So we landed on Labor Day

5:17

of last year as the

5:19

window that we would complete this conversion into.

5:23

That itself is pretty complex right because if

5:26

we are moving the data across multiple bank

5:28

systems it's not just one set

5:30

of data. So if you think about the way the

5:32

data is structured on the acquired bank to the acquiring

5:35

bank that mapping is pretty complex.

5:37

So that itself was a lot of

5:39

challenge not just the size of the data how you

5:41

move that data in time but also

5:43

the sequencing of that data move to

5:45

complete it within that three-day weekend. They

5:49

weren't just lifting data from server A and

5:51

dropping it into server B. They

5:53

also had to account for how the data are recorded,

5:56

how to reformat the data and

5:58

figure out the movements, the pivots, and

6:00

the order in which all of that needed to be accomplished.

6:03

Atul shared a few examples of the kinds of

6:05

data and systems they had to integrate. A

6:07

simple example I'll give is your

6:09

online banking mobile ID and

6:11

password, right? You had a set of

6:14

passwords on the other side. When that came over, we

6:16

wanted to make sure that they can use the same

6:18

ID and password on the BMO

6:20

side as well. And that itself was a lot of

6:22

challenge on how you encrypt that and how you make

6:24

it available.
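
Atul doesn't go into the mechanics here, but since password hashes generally can't be decrypted and re-encrypted, a common industry pattern for carrying credentials across systems is to import the legacy hash and quietly re-hash the password under the new scheme the first time the customer logs in successfully. A small sketch of that pattern in Python; the hash parameters and record layout are invented, not BMO's actual scheme:

import hashlib, hmac, os

def legacy_hash(password: str, salt: bytes) -> bytes:
    # Stand-in for the acquired bank's older hashing scheme.
    return hashlib.pbkdf2_hmac("sha1", password.encode(), salt, 10_000)

def new_hash(password: str, salt: bytes) -> bytes:
    # Stand-in for the acquiring bank's current scheme.
    return hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 600_000)

def login(record: dict, password: str) -> bool:
    """Verify a login, upgrading legacy records in place on success."""
    if record["scheme"] == "legacy":
        if not hmac.compare_digest(record["hash"], legacy_hash(password, record["salt"])):
            return False
        # Password proven correct: silently migrate to the new scheme.
        record["salt"] = os.urandom(16)
        record["hash"] = new_hash(password, record["salt"])
        record["scheme"] = "new"
        return True
    return hmac.compare_digest(record["hash"], new_hash(password, record["salt"]))

salt = os.urandom(16)
user = {"scheme": "legacy", "salt": salt, "hash": legacy_hash("hunter2", salt)}
assert login(user, "hunter2") and user["scheme"] == "new"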

6:27

We're not just talking about financial statements

6:29

and account histories, which are complex enough

6:31

on their own. This was

6:33

a top-down integration of two complex

6:36

and different IT systems that millions

6:38

of customers used and relied on.

6:41

Atul and the bank's teams had their work

6:43

laid out before them. All

6:45

that was left to do was to get it done. They

6:52

started by doing as much planning as they

6:54

could before having access to the data itself.

6:57

They had to answer lots of questions. How

6:59

are they going to prepare? How many

7:02

dry runs do they have? How much

7:04

time between runs? And how would

7:06

they distribute all the work they were waiting to get

7:08

started on? So before our legal

7:10

approval, we were able to do some of what

7:12

I would consider as a high-level mapping and planning

7:14

exercise, right? So we were able to do that.

7:16

We were not able to look at the data

7:18

itself, but a lot of

7:20

planning went in that phase. Once we had

7:22

access to the data, that's when it became

7:24

real. And data of this magnitude had lots

7:26

of interfaces into it, right? That you have

7:28

to manage as well. We

7:31

went through four mock conversions

7:33

in the process in those seven months.

7:36

And these are conversions which we do to

7:38

replicate what will happen on the actual conversion

7:41

day. They

7:44

decided on four mock conversions in the

7:46

span of seven months. That

7:48

only gave them a few weeks between each test

7:50

run. Now that they had access to

7:52

the data, they had to put the pedal

7:55

to the metal to make their deadlines. First

7:57

up was to determine exactly what they

7:59

were working on. Although

8:01

both companies are banks, there were significant

8:03

differences in how they set up their systems.

8:06

They had different priorities, different features

8:08

and products, and even defined customers

8:10

in different ways. All that

8:12

translates into the differences in the data sets. Even

8:14

if some of our systems were the same, the

8:17

underlying data structure and the schemas were

8:19

very different. So we went

8:22

through a very extensive exercise to map that,

8:24

to say, okay, a database A, or even

8:26

at a field level, a field A on

8:28

this side means a field B on this

8:30

side. So that mapping

8:32

took a long time, and the application of

8:35

that mapping in the mocks is where we

8:37

initially found errors and we had

8:39

to go back and fix them, right? They

8:42

had to comb through all the databases

8:44

and all the applications and figure

8:47

out what was in common and what was different. And

8:49

then they had to make notes about it all

8:52

so they'd know how to translate the information from

8:54

one format to the other. That's the mapping Atul is talking about.
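
As a concrete, entirely hypothetical illustration of that field-level mapping, a conversion routine often reduces to a translation table plus per-field transforms; every field name and value below is invented:

from datetime import datetime

FIELD_MAP = {            # source field -> target field
    "cust_no": "customer_id",
    "acct_typ": "product_code",
    "open_dt": "opened_on",
}

TRANSFORMS = {           # target field -> value conversion
    "product_code": lambda v: {"CHK": "CHECKING", "SAV": "SAVINGS"}.get(v, "OTHER"),
    "opened_on": lambda v: datetime.strptime(v, "%m/%d/%Y").date().isoformat(),
}

def convert(record: dict) -> dict:
    """Translate one source-schema record into the target schema."""
    out = {}
    for src_field, value in record.items():
        dst_field = FIELD_MAP.get(src_field)
        if dst_field is None:
            # Surfacing unmapped fields early is exactly what the mock runs are for.
            raise KeyError(f"unmapped source field: {src_field}")
        transform = TRANSFORMS.get(dst_field, lambda v: v)
        out[dst_field] = transform(value)
    return out

print(convert({"cust_no": "00123", "acct_typ": "CHK", "open_dt": "09/01/2023"}))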

8:56

And given

8:58

the scale of the merger, getting such

9:00

a complicated process 100% correct

9:03

on the first try would have been

9:05

miraculous. Getting

9:07

all those petabytes of data mapped and

9:09

correctly translated was their top priority.

9:12

Our first mock was a technical

9:14

mock only, so we did not do what

9:16

we call end-to-end. So we just

9:19

wanted to make sure that the data can come over in

9:21

time and that it can be converted so all the conversion

9:23

routines can run on time. They

9:26

got through their first test run and

9:28

immediately tuned their processes to address the issues they ran

9:30

into. But that

9:32

first test run was limited, and they only had three left.

9:36

They had to increase the scope of the next run to

9:38

stay on schedule. And from

9:40

there on, we added all the other

9:42

intricacies about the business processes and end-to-end aspects

9:45

of the data, right? So if you

9:47

start from, let's say, your checking account, you

9:49

want to make sure that data flows all the way to your financials

9:51

and GLs and your regulatory reporting, right? So

9:54

there's a whole spectrum of the

9:56

same data flowing through the process. That

9:59

was very incremental. We started with some

10:01

standalone conversions in Mock 1 and

10:03

then moved on to Mock 4, where it

10:06

was truly a replica of what the conversion

10:08

weekend was supposed to be.
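
The end-to-end aspect Atul describes, following a checking account all the way into the general ledger and regulatory reporting, can be pictured as a downstream consistency check. A toy sketch with invented system names and numbers:

from decimal import Decimal

account    = {"customer_id": "C-42", "account_id": "ACCT-7", "balance": Decimal("1200.00")}
gl_extract = {"ACCT-7": Decimal("1200.00")}     # GL control balances keyed by account
reg_report = [{"customer_id": "C-42", "reported_balance": Decimal("1200.00")}]

def check_downstream(account, gl_extract, reg_report) -> list[str]:
    """Confirm one converted account agrees with its downstream systems."""
    problems = []
    if gl_extract.get(account["account_id"]) != account["balance"]:
        problems.append("GL balance does not match converted account")
    if not any(row["customer_id"] == account["customer_id"]
               and row["reported_balance"] == account["balance"] for row in reg_report):
        problems.append("regulatory report missing or inconsistent for customer")
    return problems

print(check_downstream(account, gl_extract, reg_report) or "end-to-end consistent")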

10:11

Each test run revealed the errors they needed to

10:13

fix before the next one, but

10:15

also increased the number of things they needed

10:17

to test until they conducted

10:19

a full rehearsal of the entire process.

10:23

And there were issues to deal with in

10:25

a short period of time. They

10:27

couldn't do it all quickly, all

10:29

by themselves. We

10:31

had about 40 different vendor partners that

10:33

were also working with us to facilitate

10:36

this data migration. A lot

10:39

of this was in-house, but a lot of this was also

10:41

with the vendor partners. All that

10:43

orchestration was not perfect from day

10:45

one or Mock 1. It took some time for

10:47

us to really work our conversion

10:49

day playbook and make sure it's absolutely

10:51

correct because you only have 83 hours.

10:56

It took 83 hours to

10:58

carry out the migration, and they

11:00

had very little time to practice the process.

11:04

We know modern IT systems are complex. There

11:07

are a lot of components that interact with each other

11:09

to make up an application and the

11:11

infrastructure that it runs on. Hiring

11:13

the talent to build and manage all

11:16

those components is difficult, as

11:18

is coordinating all those teams to work

11:20

in concert. Add in outside

11:22

teams to the mix, and that

11:25

coordination can get really tricky, especially

11:27

when there are 40 vendors to

11:29

work with. But they made

11:31

all the difference because they had the expertise

11:33

to keep the project moving.

11:37

Atul and his teams couldn't afford to get stuck.

11:40

They only had a few weeks between each

11:42

trial run to fix all the issues they

11:44

encountered. That is not long at

11:46

all. And especially if you talk about those mock events, each

11:48

one of them was a conversion. Each one of them was

11:51

six to eight weeks apart. Initially, we had a little

11:53

bit longer, I think eight weeks at the beginning, and then

11:55

they got more compressed as we got closer. So

11:58

that was not an easy decision.

12:00

But I think the team was a very

12:02

well-oiled machine here, and the collaboration

12:04

that happened between the business and the

12:06

technology teams and all different aspects of

12:09

the technology team. With

12:12

their previous experiences modernizing systems

12:14

and with the help of outside vendors, the

12:17

bank's teams were able to plan for and carry

12:19

out a rigorous testing schedule to

12:21

map and transfer over massive amounts of

12:23

data. But that data transfer was only

12:26

part of the work they needed to do. When

12:29

we come back, we'll hear about how they

12:31

prepared for the massive influx of new

12:33

users. Hi,

12:38

I'm Jeff Ligon. I'm the director of engineering

12:40

for Edge and Automotive at Red Hat. The

12:42

number one hesitation that I think folks have

12:44

about Edge is that they assume they're not

12:46

ready for it. They think that

12:48

it'll mean starting over from scratch or

12:50

that every time their business needs to

12:52

change, they'll be re-engineering solutions and re-evaluating

12:55

vendors. And look, Edge is complex and

12:57

it does mean a different approach, particularly

12:59

for devices that are far edge. But

13:02

with Red Hat, it doesn't mean starting

13:04

over with new platforms. We've taken the

13:06

same enterprise solutions that simplify and accelerate

13:08

work across the hybrid cloud and optimize them

13:10

for use at the Edge. So

13:13

you can manage your Edge environments just like you

13:15

do the others. Come find

13:17

out how at redhat.com/Edge. It's

13:24

one thing to transfer petabytes of data in the

13:27

span of 83 hours. It's

13:30

another to make sure that everyone can

13:32

actually access their accounts and check to make

13:34

sure their information is right. Bank

13:37

of Montreal had some work to do to

13:39

make sure their infrastructure could meet the

13:41

increased demands. We need to

13:43

make sure that the infrastructure is scaled up

13:45

to handle the new volumes. We actually scaled

13:48

our infrastructure to six times the volume. It

13:50

was not necessary, but we kept some buffer

13:52

in there to make sure what we call

13:55

the day one experience for our new

13:57

customers and new clients coming in

13:59

through the acquisition was

14:01

absolutely seamless. They

14:04

increased their capacity to accommodate six

14:07

times the number of people who

14:09

use their systems. That's a lot.

14:12

Bank of Montreal was already a large

14:14

financial institution used to serving a great

14:16

many customers every day. They

14:18

were acquiring an admittedly large

14:20

bank. But was the

14:22

new combined customer base six times larger

14:25

than what they were used to? And

14:28

at a high level, we can say we doubled our

14:30

volumes, we doubled our number of customers, we

14:32

doubled our number of accounts, number of transactions.

14:35

So that was the baseline. So we had

14:37

to make sure that two times was covered. But

14:40

if you look at some of these customer facing

14:42

applications, online banking for example, this was a big

14:44

conversion. Customers are anxious. So

14:46

normally you will have a let's

14:48

say X amount of logins in

14:50

a day after conversion. We

14:52

thought everybody will log in. Just

14:54

to make sure their money is safe on the other side.

14:57

So that caused us to create not just

14:59

2X, which would have been normal, but a 6X

15:02

buffer to handle those peak volumes

15:04

actually on day one or week one.

15:08

The banks don't typically have all of their customers

15:10

logging in to check their accounts at the same

15:12

time. And that's pretty typical for

15:14

any company with an application. But

15:16

this situation was different. Akin

15:20

to many big launches, people wanted

15:22

to try out the new system to see

15:24

if their funds had transferred correctly and

15:26

figure out how to work the new system. That

15:29

spike of concurrent users would eventually taper

15:31

off much like with any hip new

15:33

app. They looked at

15:35

the historical user data from both banks

15:37

and peak user counts to estimate how

15:39

many people might log on at once

15:41

after the new roll out. And

15:44

then they added extra buffers just to be

15:46

safe.
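
The sizing arithmetic behind that decision is simple, even if the engineering isn't. A back-of-the-envelope sketch with invented numbers showing how a doubled customer base plus a nervous day-one spike and some headroom lands at roughly 6x:

baseline_peak_logins = 100_000    # acquiring bank's normal peak concurrent logins (invented)
merger_multiplier    = 2.0        # customer base roughly doubles after conversion
day_one_spike        = 2.5        # assume far more customers than usual log in to check
safety_buffer        = 1.2        # headroom on top of the worst-case estimate

required = baseline_peak_logins * merger_multiplier * day_one_spike * safety_buffer
print(f"provision for ~{required / baseline_peak_logins:.0f}x baseline "
      f"({required:,.0f} concurrent logins)")
# -> provision for ~6x baseline (600,000 concurrent logins)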

15:48

Increasing the capacity by six times wasn't an easy

15:50

task either. They added extra

15:53

hardware. But that wasn't the bottleneck. The

15:56

hardware side of that was fairly straightforward.

16:00

The next thing is to test the performance

16:02

of the scaled up infrastructure to that level.

16:04

And what I mean by that is actually

16:06

to be in a testing mode and still be able

16:08

to verify that it could actually run

16:10

at 6x. Throwing

16:12

more servers at the problem is necessary, but

16:15

it's not enough to bring the system's capacity to

16:17

six times the scale. With that

16:19

kind of increase, there are other performance issues

16:22

to iron out, and they had to make

16:24

sure it could sustain that level of load

16:26

for an extended period of time. That

16:29

performance testing and then being able to

16:31

achieve that peak in our test environments

16:33

and then also to sustain that peak

16:35

for a period of time. Because

16:38

we had no idea how long that peak

16:40

will last actually, right? It could last an

16:42

hour. It could last a week actually, right?

16:44

So there's huge variability in how that could

16:46

go.
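
That kind of sustained-peak test is usually automated. Here's a toy version in Python; the URL, worker count, and duration are placeholders, not anything from the bank's actual test rig: a fixed pool of simulated users keeps hitting a stand-in login endpoint for the whole window while latency is recorded.

import statistics, time, urllib.request
from concurrent.futures import ThreadPoolExecutor

URL, WORKERS, DURATION_S = "https://example.com/login", 50, 60   # placeholders

def worker(deadline: float) -> list[float]:
    latencies = []
    while time.monotonic() < deadline:
        start = time.monotonic()
        try:
            urllib.request.urlopen(URL, timeout=10).read()
        except OSError:
            pass                      # a real test would count errors separately
        latencies.append(time.monotonic() - start)
    return latencies

deadline = time.monotonic() + DURATION_S
with ThreadPoolExecutor(max_workers=WORKERS) as pool:
    results = list(pool.map(worker, [deadline] * WORKERS))

all_latencies = [t for r in results for t in r]
print(f"{len(all_latencies)} requests, "
      f"p95 latency {statistics.quantiles(all_latencies, n=20)[18]:.3f}s")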

16:48

Maybe people will try logging into the system for a day to

16:50

try things out and then move on. Maybe

16:53

they'll log in multiple times during the week

16:55

because they want to triple or quadruple check

16:57

that everything was fine.

16:59

Atul and his teams couldn't know ahead of

17:02

time how customers would behave. So

17:04

they had to plan for the biggest challenge. Luckily,

17:06

the infrastructure side of this project wasn't

17:09

restricted by regulatory approval. So a lot

17:11

of the infrastructure scaling happened before. You really

17:14

can't do that in that 83-hour weekend, right?

17:16

So a lot of that infrastructure work happened over a

17:18

period of time actually. We actually started that

17:20

even before our legal approval because that was

17:22

on us. We thought we could just do

17:24

it independently from the hardware perspective. Breaking

17:27

up the different requirements of the project into

17:29

what they could do and couldn't do yet,

17:32

helped them get as much done as they could

17:34

before facing the time crunch. Checking

17:37

off those parts of the project allowed them

17:40

to focus on fewer problems at once. Because

17:42

of the complexity and the nature of the ecosystem

17:44

with all our vendor partners, we had hiccups. We

17:48

had hiccups all the way leading up to the conversion

17:50

week actually. One good thing we

17:52

did in the process was we staggered some

17:54

of our conversions. So we had certain parts

17:56

that we converted a couple

17:58

of weeks before the actual big conversion.

18:00

And that was done because of some dependencies

18:02

for the vendor partners, but also to

18:04

decouple or de-risk the conversion as much as

18:06

we can within the long weekend. But there

18:08

were issues. There were a couple of issues around the

18:10

accuracy of the data. The

18:13

mock runs helped the team find those errors

18:16

and correct them before doing the actual transfer.

18:19

They also helped them get a handle on

18:21

the timing of the whole process and make

18:23

sure it could actually be completed within the

18:25

83-hour window. That was a

18:27

big deal, and we had to perfect that process over time. It was not perfect from day one. So we had issues like that, and the mock runs surfaced them during the event. Sometimes something takes longer than you think it will. The conversion process you think takes about four hours, and you've staggered it like that in your 83-hour plan, takes ten hours, right? So then everything else gets pushed back, and that becomes really hard, because the bank has to open.
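
That overrun problem is, at its core, a schedule-slack calculation. A sketch with made-up step names and durations: add up actuals against the 83-hour window and see how much buffer is left once one step runs long.

WINDOW_HOURS = 83

steps = [                            # (step, planned hours, actual hours) -- all invented
    ("extract source data",      10, 10),
    ("run conversion routines",   4, 10),   # the four-hour step that took ten
    ("load target systems",      20, 20),
    ("reconcile balances",       12, 12),
    ("end-to-end verification",  18, 18),
]

planned = sum(p for _, p, _ in steps)
actual  = sum(a for _, _, a in steps)
print(f"planned {planned}h, actual {actual}h, "
      f"buffer remaining: {WINDOW_HOURS - actual}h of {WINDOW_HOURS - planned}h")
# Every hour of overrun pushes every later step back and eats the buffer --
# and the bank still has to open on Tuesday morning.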

18:56

After the first few trials, it looked like they'd

18:58

be able to get it done in time,

19:00

if everything went according to plan. But

19:02

we all know things don't always go according

19:05

to plan.

19:07

Atul and his teams chose to include some chaos engineering in their test runs.

19:11

So we simulated chaos scenarios in our mocks and in our infrastructure scaling to see how robust things were, right? Because we have to repeat that again, and it had to be robust both from a process perspective and also from a technology infrastructure perspective.
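
Chaos engineering here just means deliberately breaking things during the rehearsals to prove the process and the infrastructure recover. A minimal, hypothetical sketch: inject random failures into a mock-conversion step and make sure the retry path around it actually holds up.

import random, time

CHAOS_FAILURE_RATE = 0.3      # only enabled during mock runs, never on conversion weekend

def copy_batch(batch_id: int) -> str:
    if random.random() < CHAOS_FAILURE_RATE:
        raise ConnectionError(f"injected failure while copying batch {batch_id}")
    return f"batch {batch_id} copied"

def with_retries(fn, *args, attempts: int = 3, backoff_s: float = 0.1):
    for attempt in range(1, attempts + 1):
        try:
            return fn(*args)
        except ConnectionError as err:
            print(f"attempt {attempt} failed: {err}")
            if attempt == attempts:
                raise             # a step whose recovery path is too weak shows up here
            time.sleep(backoff_s * attempt)

for batch in range(1, 6):
    try:
        print(with_retries(copy_batch, batch))
    except ConnectionError:
        print(f"batch {batch} exhausted retries -- the recovery plan kicks in here")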

19:28

They developed their process to execute the transfer in a very short window, then threw wrenches into the works to battle-test both that process and the infrastructure behind it. Once they got started, they had to see it through, regardless of what the world would throw at them. They

19:44

felt prepared, but they also felt

19:47

some nervousness too. If there

19:49

was something like this, we were prepared for that. We had buffers; we had buffers in those 83 hours, so if something took longer, we could still catch up. We had everybody on site, so we were

20:00

very quick in triaging issues and very,

20:02

very quick in actually fixing issues as

20:04

they happened during that 83-hour weekend. We

20:06

had all our vendor partners on site. I

20:08

think that was really helpful.

20:12

Issues did happen, but the team's ability to gather

20:14

around the issue and then triage it and then

20:16

fix it was pretty amazing actually. So that made

20:18

sure that we were able to actually finish

20:21

it before 83 hours. It

20:27

took just under two years from announcing

20:29

the acquisition in December of 2021 to launching after

20:33

Labor Day weekend in September of

20:35

2023. Most of the time

20:38

was spent on infrastructure upgrades and planning with

20:40

only about seven months of hands-on practice with

20:43

the data. They practiced all

20:45

they could, but it was still

20:47

a massive project on a tight turnaround. And

20:50

in the end, we had almost

20:52

no post-conversion customer issues

20:54

or errors of that nature.

20:56

And I think that happened because of all

20:59

the planning that went into this

21:01

and before our regulatory approval and

21:04

into our mocks and dress rehearsals, that

21:06

was absolutely essential and

21:08

critical and why the conversion

21:10

was so good. The

21:14

Bank of Montreal had undergone several

21:16

digital transformation projects. And

21:18

when the acquisition of the Bank of the

21:20

West was announced, they had to come up

21:22

with a plan to execute the largest transformation

21:24

project they'd ever attempted. They

21:27

designed that plan for over a year

21:29

and expanded their infrastructure. And

21:31

when they finally got the thumbs up from

21:33

regulatory agencies, they put their plan

21:35

into action over the course of seven months. That

21:38

included testing, debugging, and testing

21:41

again, with four mock trials,

21:44

all to be ready to do the

21:46

real thing over 2023's Labor Day weekend.

21:48

Thanks to their previous experience, meticulous

21:50

planning, and help from their vendors,

21:53

they were able to pull off the

21:55

largest data migration in Canadian banking history

21:58

with time to spare for business as usual

22:00

to resume that Tuesday morning. That's

22:09

it for Season 3 of Code Comments. We

22:12

hope you've enjoyed our journey into the

22:14

weeds of digital transformation. Season

22:16

4 is coming soon. Stay

22:19

tuned for more of our guests' riveting

22:21

stories delivered with effortless eloquence.

22:24

You can learn more at

22:26

redhat.com/Code Comments podcast, or

22:29

visit redhat.com to find our guides to

22:31

digital transformation. Many thanks

22:33

to Atul Verma for being our guest. Thank

22:35

you for joining us. This

22:38

episode was produced by Johan Philippine,

22:41

Kim Hoang, Caroline Craighand, and Brent

22:43

Simino. Our

22:45

audio engineer is Christian Prahm. Mike

22:52

Esser, Nick Durence, Eric

22:54

King, Cara King, Jared

22:56

Oates, Rachel Ortell, Perry

22:59

De Silva, Mary Zierl, Ocean

23:01

Matthews, Paige Stroud,

23:04

Alex Trebulsi, Booboo House,

23:06

and Victoria Lorne. I'm

23:10

Jamie Parker, and this has been Code Comments,

23:12

an original podcast from Red Hat. www.redhat.com
