Misuse of Data is Solvable

Released Wednesday, 17th July 2019

Episode Transcript

Transcripts are displayed as originally observed. Some content, including advertisements, may have changed.


0:15

Pushkin. I'm

0:20

Maeve Higgins, and this is Solvable.

0:23

Interviews with the world's most innovative

0:26

thinkers who are working to solve the

0:28

world's biggest problems. My

0:30

solvable is that every frontline

0:33

social organization has the ability

0:35

to use data and AI the same way,

0:37

same capacity that the big tech companies

0:40

do today. I want to see a world where the

0:42

same algorithms that are routing your packages

0:44

to your house so efficiently

0:46

because an AI figured out the best way to avoid traffic

0:48

and weather are just as equally being applied

0:51

to delivering a vaccine through an area

0:53

before it spoils. That's Jake

0:55

Porway, the founder and CEO

0:58

of the nonprofit DataKind. He's

1:00

talking to Jacob Weisberg about how

1:02

he's working to make that world

1:05

a reality. The Rockefeller Foundation

1:07

has thought about this too. More than

1:09

two point five quintillion bytes

1:12

of data are produced every day.

1:14

That's two and a half million trillion bytes.

1:17

This abundance of data, combined

1:19

with rapidly advancing analytics

1:21

capabilities, could really improve

1:23

the lives of billions of people

1:25

around the world, but it's

1:28

only living up to a fraction of that potential.

1:31

While private sector businesses have been building

1:34

and deploying data science capabilities

1:36

for many years now. Most organizations

1:39

in the nonprofit and civic and public

1:41

sectors are way behind. Of

1:44

course, they want to use the applied

1:46

data to make their work go farther and

1:48

faster and to help more people, but

1:51

they don't often have the resources.

1:54

I mean, put yourself in the shoes of a newly

1:57

minted graduate. They're probably

1:59

wearing Tevas. Last year, the

2:01

San Francisco Chronicle analyzed

2:03

Glassdoor data of the starting salaries

2:06

of some of the biggest tech companies in

2:08

the Bay Area. They found

2:10

out that tech pays even for the

2:12

young and inexperienced. The

2:14

average starting salary for a software

2:16

engineer was almost ninety two thousand

2:19

dollars. So there's

2:21

the workers and then there's the technology

2:23

itself. We know the power data

2:25

science can have for social good because

2:28

we've seen it in action. When mission

2:30

driven organizations have the right talent

2:32

and tools and knowledge, data

2:34

science can generate real human impact,

2:37

helping vulnerable families access public

2:40

benefits, saving water and

2:42

money during droughts, and saving time

2:44

in resettling refugees so

2:46

that they can find homes and jobs

2:49

faster. Jake Borway

2:51

works on this stuff every day.

2:53

He's a machine learning and technology

2:56

enthusiast who loves nothing more than seeing

2:58

good values in data. In

3:00

twenty eleven, he founded DataKind,

3:02

bringing together leading data scientists

3:04

with high impact social organizations

3:07

to better collect, analyze, and

3:09

visualize data in the service of humanity.

3:12

Jake works to ensure organizations

3:15

like the Red Cross have access to AI

3:18

and data science that's as good as

3:20

the access enjoyed by huge companies

3:22

like Facebook. DataKind has

3:25

twenty thousand volunteers around the world,

3:27

who he likens to Médecins Sans Frontières,

3:30

the Doctors Without Borders, except they're

3:32

data scientists working pro bono

3:34

with leading social change organizations

3:37

on all kinds of projects,

3:39

including one that has data scientists

3:41

from Netflix predicting water usage

3:44

in a California neighborhood. It's

3:46

fascinating, so enjoy this conversation

3:49

and I'll talk to you after. What's

3:56

the problem. In a nutshell, the

3:58

problem is that digital technology

4:00

and artificial intelligence

4:02

have exploded over the last ten or fifteen

4:05

years, which have created huge opportunities

4:08

in the corporate space or

4:10

in building new apps for society, but

4:12

there's very little application

4:14

of that to social sector

4:16

causes. So we have this huge

4:18

opportunity to use a revolutionary technology

4:21

to predict the future

4:23

of things, to understand our society better,

4:25

to automate things that we either don't want to or couldn't

4:27

do. And yet there's a huge

4:30

potential loss in that it's very difficult

4:32

to get that applied to pro social causes

4:34

that we need. Jake, as a data

4:36

scientist, when did you start

4:39

to see some of the downsides

4:41

around big data? Really? The article

4:44

that I used to point to as like the beginnings

4:46

of the tide turning to the negative was

4:49

the article that was titled

4:51

very salaciously, Target Knows

4:53

You're Pregnant. And if you remember, this one was

4:55

from twenty thirteen, but the basic idea

4:57

was that someone had their

4:59

daughter, that maybe sixteen seventeen year old daughter

5:02

was receiving mailers from Target that said,

5:04

Hey, we think you need to buy coupons

5:06

for baby diapers or formula,

5:09

and the dad called up, you know, Target, all mad,

5:11

So why are you sending my daughter all these

5:13

deals for having babies. She's not pregnant,

5:15

Like, why are you trying to get her to become pregnant?

5:18

And the person on the other end of the line, of course

5:20

didn't know what was happening, because you know, the

5:22

algorithms just send you what they think

5:24

you're going to buy based on other stuff you've bought, and

5:28

He called back later, kind of shamefacedly, and

5:30

said, you know, I talked to my daughter and actually she is

5:32

pregnant, and you know, the data had

5:34

picked up on that simply because you know, it watched what she

5:36

bought and she was probably buying you know, prenatal care,

5:39

vitamins and stuff. But that article

5:41

got shared around as the

5:44

sign that big data was going to

5:46

be negative. Target knows you're pregnant.

5:48

What a horrible invasion of privacy. That title alone

5:50

should, you know, make everyone's skin crawl.

5:53

But that's the problem is that that shouldn't

5:55

be the case. We think there are so many

5:57

opportunities to be using data and

5:59

algorithms to see where

6:02

disease outbreaks are going to occur or predict

6:05

in the same way, what kind of conditions

6:07

you might have so you can live a healthier life.

6:09

And so I think it was then that we really thought,

6:12

Okay, we need to come out

6:14

and show the positive sides of this. Otherwise

6:16

everyone's going to just run to the fear around

6:19

what data science can do. We're

6:21

interested on this podcast in

6:23

people who've taken this leap to become

6:26

problem solvers and to take on the

6:28

biggest problems in the world. What

6:31

made you take a leap

6:33

to leave the private sector

6:36

to start an organization with

6:38

an ambitious goal. Well, I

6:40

have to say it was a bit of an accident. Actually it

6:43

was maybe twenty ten or eleven, and

6:45

I had just coincidentally come out

6:47

of school with a computer science and a statistics

6:49

degree, which little did I know was going to become

6:52

what would lead to the title data scientist. And

6:55

I was working at the New York Times R and D Lab,

6:57

and really what seemed obvious

7:00

was the fact that we had all of this new digital

7:02

technology, from cell phones that people

7:04

were carrying around with them, to satellites

7:07

launching in the air, to sensors

7:09

being put around the world, that we were digitizing

7:11

our very existence. We were becoming

7:14

a digital species. There was almost like a central

7:16

nervous system to the world, and that

7:19

meant that there were these huge opportunities to

7:21

learn from that to you know, have algorithms

7:24

drive maybe our greatest human values.

7:26

But the folks who really knew how to convert data

7:29

into those actions. The data scientists

7:31

were largely locked up in tech companies, and

7:35

you know, I would actually go to hackathons,

7:37

which are you know, like weekend events where technologists

7:40

would get together and just work on whatever

7:42

they thought was cool. And I would sit there

7:44

and think, this is so interesting because you know, we're not

7:46

at a company, we're not at our jobs.

7:48

We're here on the weekend. You know, I'm sitting next

7:50

to some machine learning engineer from Google and

7:52

NASA scientist, and I'm like, this is great. We

7:55

can make whatever we want. Like the world

7:57

has just become so ripe for what's possible.

8:00

And at the end of the day, the stuff that people made was

8:02

just so unfulfilling.

8:04

You know that someone had made like Twitter for pets,

8:07

or had improved how you'd find local

8:09

deals in your neighborhood, and

8:11

so I just said, man, there's got to be something

8:14

more we can do for society, or something

8:16

more fulfilling really than this, as opposed to

8:18

solving the problems of very well paid

8:20

twenty somethings in the Bay Area, right,

8:22

which is the parody, but a lot of

8:24

the new companies you hear about are

8:27

solving problems like how do you get your food

8:29

delivered or god knows how to get cannabis

8:31

delivered? You know, when you could already

8:34

buy it by walking around the corner. You're exactly

8:36

right. We solve the problems that we ourselves

8:38

have. And as you've pointed out, the tech community

8:40

for better or for worse, skews young, male,

8:43

US. So, yeah, I just thought, you

8:45

know, what would it take for this to be applied to the social sector?

8:48

Where are the people who are on the front lines of getting

8:50

people food or clean water? And how could you apply

8:52

it there? And so I just wanted that

8:54

job myself, which didn't exist.

8:57

So I just wrote to a couple of folks in the

8:59

community here in New York and said, hey,

9:01

you know, instead of going and building you know,

9:04

a DoorDash competitor, could

9:06

we, I don't know, work with the Red Cross

9:09

US or Kiva, who does cash

9:11

transfers to folks, and say what could we do with

9:13

their data? What could we learn? What are the positive

9:15

ways we could work together with them? And

9:18

I thought people would just say, yeah, good idea, Jake,

9:20

but no thanks. I kind of just

9:22

buried the little sign up

9:24

link for folks, and I

9:26

was surprised to find that people started sharing around before

9:29

I knew it. I came back to work the next day,

9:31

hundreds of emails in my inbox from people

9:33

not just in the city but around the world saying,

9:36

this is great, I want to get involved with data

9:38

kind, I want to do DataKind France. At

9:41

one point, a few months into this, the White House

9:43

called and said, hey, we're interested in big data initiatives.

9:45

What's this thing? And you know, I joked because,

9:48

I don't know, it's not really

9:50

a thing. But to me it really tapped

9:52

into an energy from both

9:54

the technology side and the nonprofits

9:56

and governments who were writing in, who said, we're

9:59

energized to take on this new

10:01

wave of this technology and figure out how it could

10:04

be applied. And so our job ever since

10:06

has really just been trying to support that

10:08

community, harness its energy, and be

10:10

helpful in any way we can. Since

10:12

you've been doing this, it's amazing how quickly

10:14

attitudes have shifted around

10:17

big data and algorithms. I mean,

10:19

just think about Facebook, which even

10:21

a few years ago was thought of as

10:24

a socially positive

10:26

company. That was part of why people went

10:28

to work there, and in just

10:30

a couple of years it's become

10:33

something that people think is an overwhelmingly

10:36

negative force. Are we swinging too

10:38

far in the other direction in our skepticism

10:41

about what data is going

10:43

to be used for? Well, I think there's

10:46

a healthy reckoning on how

10:48

we've been using data and technology in the

10:50

past. You're right that in the last

10:52

couple of years there was sort of unfettered techno

10:54

optimism amongst a lot of the big

10:56

companies and that this would just change everything and

10:59

nothing could ever go wrong with social media

11:01

and data. So I think there is obviously a

11:04

very healthy reckoning of this, and we're starting to realize

11:06

what the downsides could be. To

11:09

your point, what I think is missing, and we really need

11:11

to get acclimated to, is where

11:13

do we go from there? You know, is the

11:16

idea that we're just going to put the genie

11:18

back in the bottle, not use digital

11:20

information in these ways, regulate

11:22

all companies out of existence? I'm

11:25

in favor of, by the way, stronger regulation,

11:27

for sure. But I think what we need

11:29

now is more examples and

11:31

more of a community of practice around what

11:34

it looks like to use these technologies ethically.

11:36

That's a big conversation obviously, that's in the space

11:38

right now. You hear a lot about the ethics of data

11:41

use, ethics of AI, but even then

11:43

I find those conversations fairly academic. I

11:45

think what we need are some more positive examples

11:47

of how it can be applied and positive principles

11:49

that we all agree to adhere to. And

11:51

so at DataKind, that's something we're

11:54

really working to try to demonstrate,

11:56

is to say, yes, we need

11:58

to protect ourselves, uphold

12:01

our civil liberties through data. Make

12:03

sure that we're not degrading human life

12:05

with what's going on with data in the business

12:07

world. And what does

12:09

it look like when you want to use

12:12

data and algorithms to predict,

12:14

say, inclement weather that could wipe out a

12:16

crop and that's critical to someone's

12:19

sustenance in another part of the world. What's

12:21

the good version of this? You know? How do you make

12:23

sure that it's accountable to those folks? How do we

12:26

make sure that everyone involved has

12:28

some sense of what the algorithm is doing and how their

12:30

data is being used. And I don't think

12:32

we can move past that point just by talking

12:34

about it. I think we need real concrete

12:37

examples of data scientists,

12:40

nonprofits, social organizations, constituents

12:42

getting together to say, what does the good version

12:44

of this look like, or a better version, I should say.

12:46

There was a positive example in the news

12:48

recently with the prediction of

12:51

the cyclone in South

12:53

Asia that killed

12:55

very few people, and in the world before

12:58

big data, that same storm

13:00

might have killed a lot of people through panic,

13:03

through all sorts of consequences

13:05

because people wouldn't have known it was coming. I mean,

13:08

is that the kind of example we're talking about

13:10

here? Something positive? I think that's exactly

13:12

right. So at DataKind

13:14

we team technologists like data scientists

13:16

who want to volunteer their time alongside

13:18

social change organizations, be they government

13:21

agencies or nonprofits who have

13:23

a pro-social mission and might be able to use data

13:25

and algorithms to do even more, and

13:27

together they collaborate and kind of co-design

13:30

the solutions that might foster a

13:33

better world. So some examples that we've seen

13:35

are exactly what you're talking about. There was

13:37

a project that a group did with a water district

13:39

in California, and the problem they faced

13:41

was when drought season comes, you know,

13:44

it's really hard to get water to folks. People don't have

13:46

water. That's obviously problematic.

13:48

You need drinking water and water to bathe,

13:51

etc. But more than that, the

13:53

cost of not getting them water is

13:55

really high because the only way

13:58

that they can get water to the places they don't have it is

14:00

to actually take a dump truck, drive

14:03

it up to some other reservoir, maybe over to

14:05

Nevada, literally fill it by hand and drive

14:07

it back. So you're also facing

14:09

like huge environmental costs, huge energy

14:12

costs. So they ask the question,

14:14

you know, could we figure out a way to predict

14:16

how much water demand there's going to be at

14:18

a more granular level so we can really understand

14:21

and ration more effectively. And

14:23

so we team them up with some data scientists

14:25

that come from everywhere from Netflix to

14:27

environmental science organizations, and

14:30

together they collected the data at

14:32

almost a block by block level, and they built

14:34

an algorithm that sort of takes that data in

14:36

and continually gives updates to the water district

14:39

to say, hey, this is how much we think people

14:41

are going to use. Here's how much they've already used. Tomorrow,

14:43

you're probably going to see this. And they said, in the

14:45

first year of using this, they saved over twenty five

14:47

million dollars in addition to getting water

14:49

to people much more effectively. So

14:51

I think when you hear about cases like that those

14:53

are the kinds of examples that we want to kind

14:56

of platform and see more of, even in the world where,

14:59

within the confines of a social

15:01

organization, these data and algorithms

15:03

can really drive real effectiveness.

15:06

Now your people are all doing this for good.

15:09

We've all heard about the kinds

15:11

of bias issues that have started

15:13

to turn up with predictive algorithms

15:16

of different kinds, and they seem

15:18

to get embedded just because

15:20

of the inherited unconscious

15:23

biases of the people who write

15:25

the algorithm. Absolutely, how do you avoid

15:28

recapitulating that problem again

15:30

with the projects you're working on? Such

15:33

an awesome question, and I think just

15:35

to comment on the challenge generally,

15:37

I think you really nailed it there. That

15:39

the challenge that we face is that

15:42

humans have been collecting data from

15:44

our activities that incorporate

15:47

unconscious bias, and so if you then have a machine

15:49

learn from it or you analyze it, you

15:51

risk replicating that. So, while

15:54

I won't claim that we have a perfect solution, because

15:56

I mean we're sort of talking about the challenge of

15:58

bias and humanity, some

16:01

of the things that we really focus on is that the

16:04

technology we're building, to us,

16:06

is secondary to the outcome for

16:08

people. So, for example, it's not

16:10

exciting to us to build an

16:12

algorithm that helps a

16:15

homeless shelter triage people

16:17

to the right homeless shelters correctly just

16:19

because it's a cool algorithm. We only

16:21

care if at the end of the day, the ultimate success

16:23

metric that you know, a wide

16:26

range of inclusive folks are getting

16:28

housing is achieved. So

16:30

I want to say that first because I think one of the reasons

16:33

we see some of these biased challenges

16:35

rise up is that folks say, hey,

16:37

the algorithm is doing something. It's doing a thing I

16:39

want, like giving out sentences

16:42

in courts or you know, policing

16:44

folks, but without a question of and

16:46

how is it biased? Towards the end, you know, what's

16:49

it achieving. But the other thing we do is we work

16:51

extremely closely with our NGO

16:53

partners who are on the ground and

16:56

who understand a lot of those challenges. And

16:58

so we'll actually do what we call a pre-mortem, as

17:01

some other companies do, which is before we even start

17:03

a project, we'll say, okay, let's pretend we jump to

17:05

the end. We'll ask, you know, basic

17:07

questions like how will this be maintained, who's

17:10

actually going to use this tool at the end of the day. But

17:12

then we'll also ask two questions, which is one,

17:15

what's the worst that happens if we fail? So

17:17

if you're relying on us to build, this

17:19

is not something we would necessarily build. But let's say someone

17:22

said, hey, we want a tool that predicts

17:24

whether you have cancer or not. Okay, well

17:26

that's pretty serious. And if we don't succeed,

17:28

are you stuck because you really

17:30

needed that and now your organization can't proceed.

17:32

That's important to know. But then we also ask

17:35

what's the worst that happens if we succeed? So

17:38

who is this going to affect? How would you know that it's

17:40

wrong? Right? Like, how would you know just because it's

17:42

chugging away making predictions? Is it doing

17:44

the right thing? Is it disenfranchising certain

17:46

groups? Could somebody use it to intentionally

17:49

target people who have cancer? We

17:51

ask a lot of those questions, and what's really

17:53

important to us in that questioning is who

17:56

has the power and agency to both

17:58

understand the algorithm and change

18:00

the algorithm. Because in the current landscape,

18:03

when tech companies build algorithms, there's not much

18:05

you can do. But you know, I don't have enough agency to know

18:07

how Facebook's news feed algorithm

18:09

works, nor can I really affect it much. But

18:12

that's not acceptable to me when you're bringing algorithms

18:15

into the public good space and this is actually

18:17

affecting folks' lives. So those are some

18:19

of the questions we ask up front and really try to be rigorous

18:21

with our partners around oversight of it, and oftentimes

18:24

that's enough for us to not take on a project. It's

18:26

great that you're thinking steps ahead about

18:28

these projects, and your own

18:31

solvable is, ironically,

18:33

to put yourself out of business, to create

18:35

a world in which you don't need a data

18:38

kind to point people towards positive uses

18:40

of data. That's right. What would it take to make

18:42

that happen? And I guess

18:44

playing your chess game out, what

18:47

happens when that happens? The day we close our

18:49

doors is the day every frontline social

18:51

change organization has the capabilities

18:53

to use data and AI the same way the big tech

18:55

companies do ethically and capably.

18:58

And so you know, our little slice of

19:00

that today is to bridge the gap

19:02

in getting the human capital, the talent, the

19:05

data scientists AI engineers to social

19:07

organizations. That's sort of step

19:09

one is to show people the art of the possible

19:12

and really get some of those challenges solved. But

19:14

what it would take to do that in the long run is to think about what

19:16

are the problems and hurdles we're trying to overcome

19:18

with that model today, and they are that

19:21

in the social sector there isn't enough

19:23

awareness about what the technology could do or

19:25

where it would be applied. So we have to start with

19:27

that, and I think now increasingly you're seeing

19:29

more and more folks understanding

19:31

that, more companies talking

19:34

about doing data and AI for good. So

19:36

I feel like there's some progress there, But

19:38

if you go further, you have to think, well, how

19:41

would a government or nonprofit get

19:43

access to these resources in the long term,

19:45

And there I think there's going to be a

19:47

long term shift in getting funding

19:50

to move towards nonprofits

19:52

for things like data science and AI.

19:55

You're going to need maybe consultancies

19:57

that actually provide this service in

19:59

the social sector. There's lots of different

20:02

models for where that capacity could come from, but

20:04

I think the biggest things that we need right now

20:06

are that awareness of how it could be used and then

20:08

the, I'd say, funding for NGOs

20:11

to be able to hire data scientists and

20:13

incorporate them into the work they do. Now.

20:16

When that happens, what happens? Oh,

20:18

I mean, I'd love to say that all

20:21

challenges that are stymied by

20:23

not having data science and AI are solved and we live

20:26

happily ever after. But actually, what I

20:28

think is my most ambitious hope

20:31

for the world is that we could actually

20:33

tip the balance a little bit to where

20:36

the social sector is paving the

20:38

path for how machine learning

20:41

and AI could be used. I think we're so

20:43

built into this default model that business

20:45

and wealthy countries set the agenda

20:48

and everyone else kind of struggles to catch up and

20:50

imitate. We're talking

20:52

about a technology that is so fundamental

20:54

to humanity because it relies on data

20:56

about us. When we talk about AI,

20:59

it is like automating human processes, and

21:01

I don't think that's something that should be just a business

21:03

application that is ported to the world.

21:06

There should be a place for us to say,

21:08

what does it look like when we apply the technology

21:11

to the better angels of our nature? What is

21:13

human based AI? What are the things

21:15

we care about? And I can't think of any other

21:18

place besides the social sector whose sole

21:20

mandate is to look out for humanity.

21:22

So my dream is when you bridge that

21:24

gap, when that's there, you could actually

21:27

have this voice from the social sector

21:29

itself saying what it looks like to have human based

21:31

AI. Jake, do you think about the

21:34

training of data scientists.

21:36

I sometimes think we're just missing

21:38

the intersection between moral

21:41

philosophy and computer

21:43

science. You know, the people who are majoring

21:45

in college in electronic engineering

21:48

aren't reading much Kant, and the people who

21:50

are reading Kant don't understand much

21:52

about computer programming, you know, And

21:55

in a way, the problem is that the people at

21:57

these tech companies don't have

21:59

a different kind of background in literature

22:02

and philosophy and history to

22:04

think through the implications of what

22:06

they're building the way you clearly are

22:09

thinking through those implications. I think

22:11

it's a really great point that when

22:14

wielding the technology, it's really

22:16

important to have a

22:18

very varied set of skills

22:21

somewhere in the conversation. And increasingly

22:23

you're seeing data science and tech curricula

22:25

incorporate ethics training into

22:27

their courses, which I think is great. In the same

22:30

way that I'm not a historian myself,

22:33

I feel like physics went through this reckoning with

22:36

the ethics of what was being built when they went from

22:38

the joy of atomic energy and nuclear

22:40

power to the realizations of the downsides of

22:42

the nuclear bomb nuclear weapons. So

22:45

I think you're going to see that similar shift, which

22:47

is great. But you know, I think what

22:50

your question raises actually a bigger point to

22:52

me, which is who holds

22:54

the responsibility for the ethical

22:56

applications of this technology? And

22:59

I'll just say, while I would love to

23:01

see, you know, an ethical code around

23:03

data science, it's a lot

23:05

of responsibility to say that engineer

23:08

X, who has been out of engineering

23:11

college for two years and is working at a big tech

23:13

company and gets asked by

23:15

their boss to build something fairly

23:17

benign, like an upgrade

23:20

to their GPS system that

23:22

recommends routes you can walk that

23:24

avoid crime-ridden areas. They say,

23:26

here's an algorithm, build that. Well,

23:29

number one, that's not necessarily a bad thing to

23:31

build. It's not, you know, as black and white

23:34

as some people may feel about building a weapon or

23:36

something. But of course, if you sort

23:38

of play the game through, if

23:40

everyone were using an app that avoided crime ridden

23:42

areas, you'd probably end up with some sort of digital

23:45

segregation. So number one, there are already

23:47

long range effects that you'd have to anticipate.

23:49

But more than that, it also relies on that,

23:51

you know, second year engineer to say, hey

23:54

boss, yeah, I'm not doing that. You know

23:56

this is I'm quitting, which, given

23:59

you know people's career paths and

24:01

the money associated with these jobs, is a

24:03

big ask. So I would say

24:06

it's not just about the technologists.

24:08

I think the question is, you know, how do we

24:10

share that responsibility? Is it the technologist's

24:12

to make this call? Is it the manager's, who said we want

24:14

to build this feature? Is it the constituents' who

24:17

would be affected by that? Is it the government's

24:19

to come regulate? I don't think there's any one answer,

24:22

but I do think the frame that

24:24

I'm hearing more in the public

24:26

right now, that technologists need to know

24:28

the ethics, is missing the bigger

24:30

picture that that alone isn't the right responsibility

24:33

model. In my mind. You have two very

24:35

different ideas of capitalism, right. I

24:37

mean, there's an older idea

24:40

that government sets the rules,

24:42

tells you what you can and can't do, and that businesses

24:44

should obey the law and regulation

24:47

but otherwise be very free to do what they want

24:49

within that. The newer

24:52

model suggests that the

24:54

businesses themselves have a higher degree

24:56

of social responsibility, and

24:58

it's not enough to follow the rules

25:01

that they have to be thinking about outcomes.

25:03

Look, I would love to live in a world where

25:06

business and social

25:08

outcome were somehow linked,

25:11

where the fact that businesses were accountable

25:14

somehow to at least not doing harm,

25:16

if not improving human life. That would be a really

25:18

great intersection. Call

25:20

me a cynic, but we're not really currently

25:22

set up for that. The incentives aren't there.

25:25

In my mind, businesses are still

25:27

held mostly to the bottom line, even though we are

25:29

seeing some increased interest

25:32

in social entrepreneurship, where businesses

25:34

may have a double bottom line, one that's

25:36

monetary and one that's social, or

25:38

new structures like b corps that actually

25:41

say, hey, we are committed to some social cause. But

25:43

I think it's a lot to ask of

25:46

a company. And as much as it's a nice

25:48

idea of a future of capitalism, it's certainly

25:50

not the rule or the law. And

25:53

so I don't think that's

25:55

going to be the sole model that brings us to a world

25:57

of pro social technology and AI.

25:59

If for no other reason than that certain human

26:02

needs are inherently

26:05

cost-ineffective, I would say, to solve,

26:07

at least currently. If every

26:09

social problem were able to align perfectly

26:11

with a business need, we'd be in great shape.

26:13

But when it comes to housing

26:16

the homeless or making sure that people have

26:19

food to eat, that is a

26:21

difficult challenge that I don't see an immediate

26:23

market solution to, and so I don't

26:25

think even the best-intentioned companies could survive

26:28

in a market based world trying to solve that

26:30

problem. I mean, Google, which is still

26:32

the first and best known data

26:35

company essentially has held

26:37

out this promise that we're

26:39

going to make a lot of money using data

26:42

commercially targeting advertising,

26:44

but we're going to use a lot of what

26:46

we make, or at least some of it

26:49

in a kind of philanthropy. We're going to try

26:51

to create some of the kinds of solutions

26:54

you're talking about that aren't driven

26:56

by the profit motive. Does that work? Like

26:59

I said, one of the big challenges we

27:01

face, I think in the social sector right

27:03

now is the lack of funding for innovation

27:06

for new technology. And so

27:08

if companies are going to offer

27:10

that, great, net win. But

27:13

do I believe that the world's biggest

27:15

challenges will be solved

27:17

on the you know, philanthropic efforts

27:20

of large companies? There

27:22

I'm not so hopeful.

27:25

I still wonder where are the folks for whom

27:27

the mandate is solely pro social,

27:29

you know, for governments or again nonprofits

27:32

or civic organizations whose very

27:34

guiding mission is to make sure that human prosperity

27:37

is enhanced. There's

27:39

a little bit more of a direct line there. And so that's

27:41

why I think it has to be a combination

27:43

of the two, and why we focus so much on

27:45

saying instead of trying to bend

27:48

the Googles of the world to you

27:50

know, being in charge of clean water, which frankly I

27:52

think is really not the way you want to go, where

27:55

the, you know, clean water organizations of the world

27:57

who just need that same technology to be ten, a hundred

27:59

times more effective. What are some things

28:02

listeners to this podcast might be able

28:04

to do to work towards the kinds

28:06

of solutions you're thinking about. Well,

28:08

the great thing about this cross cutting

28:10

technology is that everyone has a role to play

28:13

in creating this future vision of more

28:15

social and positive AI. Well,

28:18

first, I would say, if you're a technologist who works with

28:20

data and you want to give your time

28:22

and energy back, come aboard. There's

28:24

a whole movement of folks doing this work. Whether

28:26

you want to come work with us at DataKind and work on projects

28:29

pro bono, or with many of the other organizations

28:32

like Driven Data, Data Science

28:34

for Social Good, CODE for America

28:36

who take technologists and apply them to social

28:38

problems, come aboard. There's no reason to

28:40

wait. And increasingly, ask the

28:42

company you work for if there's

28:44

opportunities to give back, because we see more tech companies

28:47

do that. But if you're not a data scientist,

28:49

I would

28:52

say, yeah, I have to first give a shout out to

28:54

anyone in the funder or donor world. One

28:56

of the big gaps here is that there

28:58

isn't enough funding for technology and innovation

29:00

in the social sector. So I've been very

29:02

impressed with the efforts of Rockefeller Foundation

29:05

and MasterCard Impact Fund and

29:07

others who are giving big amounts

29:09

of funding to data and AI and social

29:11

good, so bring it on. We need more of that for

29:14

this to happen. But very lastly,

29:16

if you're not a data scientist and you're not

29:18

a funder, I would say there's a

29:20

huge opportunity to get involved in

29:22

just understanding what this new technology

29:24

can do. Cicero had a quote that

29:27

you should take an interest in politics, because politics

29:29

is definitely going to take an interest in you. And

29:32

I feel exactly the same about data and algorithms.

29:34

They're going to take an interest in all of us. In fact,

29:37

they're shaping our lives already today. Maybe

29:39

the reason you're listening to this podcast is because an algorithm

29:42

recommended it to you based on your previous listening

29:44

habits. And so if these tools are

29:46

going to be shaping and invisibly

29:48

shaping our decisions, then

29:51

it's all the more incumbent on us as a

29:53

society to understand

29:56

what the ramifications are, where

29:58

it's showing up in society, and

30:00

how we might have some agency over the role

30:02

we want it to play. I think so much

30:04

of the reason you hear so much negativity today

30:07

is because we don't understand it well

30:09

enough and we don't have any agency to change it. So

30:11

our only options are to shrug and say, well,

30:13

I guess that's going to be the way it is, or

30:15

to rail against it and say this is bad. But

30:17

if we could get to a place where we had

30:20

call it algorithmic literacy. Not

30:22

everyone needs to code, but if you just understand a

30:24

little more about it, then I think we'd progress

30:26

towards a society where we felt like

30:29

we had more control and agency over

30:31

how we work with the machines instead of against

30:33

them. That's a great point. And I have

30:35

to ask you for a reading recommendation. If

30:38

people need to get educated, what should they read?

30:40

What's a thing or two they should read to

30:42

get more sophisticated about data?

30:45

So the best thing I think you can read

30:47

are some of the blogs that actually talk

30:49

about the state of the space today, because

30:52

it's changing so much that you know there's no

30:54

one book that's going to capture it. Yeah. So some of the

30:56

ones I love are the company

30:58

O'Reilly O'Reilly dot com.

31:01

They have a feature on data

31:03

and AI that's a weekly newsletter that comes

31:05

out talking about everything from the interesting

31:07

innovations and AI to what

31:10

kind of privacy concerns are

31:12

in the space today, and it's very readable for

31:14

a common audience. I think that's one of the most interesting

31:16

ones. I would also read Data

31:18

and Society's newsletter. They are a group

31:20

here in New York who are really tackling

31:22

the question of what does it mean to have data

31:25

and algorithms in society. They have some

31:27

really great, accessible writing there. The

31:29

other thing I would say is if you have

31:31

the privilege of living near a

31:34

medium-sized or big city that has a

31:36

meetup community. There are

31:38

tons of data science AI meetups

31:40

where people go and just talk about what's going on

31:42

in the space. And I always recommend

31:44

that people drop by at least one because

31:46

if you see it and feel it and hear people

31:49

are talking about it, you don't have to understand,

31:51

you know, if there's any math on the board, but just,

31:53

almost immediately, it creates

31:56

a state where people walk and go, oh,

31:58

I actually see what this is all about. So

32:00

I would say, if you happen to, check out meetup

32:03

dot com or any of those communities. The

32:05

data science and AI folks are very friendly

32:07

and I know you'll have a great time, if not an

32:09

educational one. Terrific. Well, Jake

32:12

Porway, thanks for joining us on Solvable. My

32:14

pleasure. Thanks so much for having me. Reasons

32:18

for hope: all of this potential being

32:20

harnessed to improve people's lives,

32:22

the really big stuff. Although

32:25

my ears certainly did prick up when Jake mentioned

32:27

Twitter for pets, as did my dog's

32:30

ears. She has been dying to get online

32:32

and really drag other dogs anonymously,

32:35

of course, but both myself

32:38

and my dog are pleased to see what

32:40

DataKind has actually managed to do so

32:42

far, creating algorithms that

32:44

have helped transport clean water more effectively,

32:48

informed government policy that protects

32:50

communities from corruption, and detected

32:52

crop disease using satellite imagery.

32:55

Jake and his team and all those volunteers

32:58

are leveling the playing fields and you can

33:00

help too. Read more about data

33:02

Kind and how to get involved at

33:04

Rockefeller Foundation dot org slash

33:07

solvable. Solvable

33:10

is a collaboration between Pushkin Industries

33:12

and the Rockefeller Foundation, with production

33:14

by Chalk and Blade. Pushkin's

33:17

executive producer is Mia Lobel.

33:19

Engineering by Jason Gambrell and

33:21

the fine folks at GSI Studios.

33:24

Original music composed by Pascal

33:26

Wyse. Special thanks to Maggie

33:28

Taylor, Heather Fain, Julia Barton,

33:31

Carlie Migliori, Sheriff Vincent,

33:33

Jacob Weisberg, and Malcolm Gladwell.

33:36

You can learn more about solving today's biggest

33:38

problems at Rockefeller Foundation

33:41

dot org slash Solvable.

33:43

I'm Maeve Higgins. Now go solve

33:45

it.
