Higher Ed & AI

Released Wednesday, 3rd April 2024

Episode Transcript

Transcripts are displayed as originally observed. Some content, including advertisements, may have changed.

0:00

Thank you , everybody , for joining us on the Boring

0:02

AI Show . Today we're talking

0:04

about higher education with

0:06

Dr Emily Bailey , Associate

0:08

Professor of Philosophy and Religious Studies

0:11

at Towson University and a member

0:13

of the AI Task Force . So thank you

0:15

for joining us today . Emily , thank you

0:17

for having me . Awesome . Well

0:20

, Tally , you want to kick us off with the news

0:22

and then we'll get into our topic

0:24

.

0:24

Let's do it . So the news article today

0:26

is about why

0:29

AI is so bad at spelling

0:31

, which is really interesting and kind of delves into , just explains

0:34

you know how AI really works . So

0:38

in this article they talk

0:40

about how essentially

0:43

AI models you know they use patterns , it's

0:45

a lot of pattern recognition and they're getting really

0:47

good at it locally . So

0:50

, for example , if you have a hand with , you

0:53

know , six or seven fingers on it

0:55

, it's like wow , that looks like a finger . Those are extra

0:57

digits that they're kind of adding , but it's , you know

0:59

, a hand has fingers , and so that's the pattern that it's

1:01

creating . And a lot

1:03

of times , similarly with generated text

1:05

, you could say something looks like an H

1:07

, that maybe looks like a P

1:09

, and so they're just structuring together these

1:12

different patterns with the

1:14

most likely output . In image

1:16

generation , they have an example image there and I'll share

1:18

this article in the comments , following the show

1:21

of a burrito stand with the poorly spelled

1:33

word burrito , which

1:35

is kind of funny to look at . Yeah

1:38

, so it's just interesting that you know text

1:41

is actually a lot harder

1:43

to generate , because the

1:45

process that

1:48

ChatGPT or your chatbot

1:50

is using isn't really spelling as

1:53

we would think of it . And

1:55

, Tim , I'm not sure if you have anything else to add there , but

1:57

I thought that was just an interesting article

1:59

, interesting call out .
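A side note on the mechanism the hosts are describing: the spelling problem comes down to tokenization. Models operate on subword tokens rather than individual letters, so letter-level tasks are hard. A minimal sketch of greedy subword segmentation, using an invented toy vocabulary rather than any real model's tokenizer:

```python
# Toy illustration of subword tokenization, the reason generative models
# struggle with spelling. The vocabulary below is invented for this
# example; real tokenizers (e.g. BPE) learn theirs from data.
TOY_VOCAB = ["bur", "rito", "bu", "rr", "ito", "b", "u", "r", "i", "t", "o"]

def tokenize(word: str) -> list[str]:
    """Greedy longest-match segmentation into toy subword tokens."""
    tokens, i = [], 0
    pieces = sorted(TOY_VOCAB, key=len, reverse=True)  # try longest pieces first
    while i < len(word):
        for piece in pieces:
            if word.startswith(piece, i):
                tokens.append(piece)
                i += len(piece)
                break
        else:
            raise ValueError(f"cannot tokenize {word!r}")
    return tokens

print(tokenize("burrito"))  # ['bur', 'rito']
# To the model, "burrito" is 2 opaque symbols, not 7 letters, so
# letter-level tasks like spelling a word inside an image go wrong.
```

The same effect is why asking a model to count the letters in a word often fails: the letters are not units the model ever sees.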

2:00

Yeah , at the end of the article I

2:02

loved the picture of the music store

2:05

and because , again , if you're

2:07

just glancing at it like , you're like , yeah

2:09

, it's a music store , looks like a music store to me

2:11

, but then they call out like

2:13

but somebody who knows music

2:16

and knows these instruments would know this

2:19

is not the right arrangement for the white and

2:21

black keys of a piano and this is

2:23

not the right string

2:25

structure for a guitar . And

2:28

so , again , getting into that , you have to know

2:30

what you're looking at , to know that it's

2:33

not what you're looking at . And

2:36

I feel like that ties into a lot of what we've

2:38

talked about , Emily , with knowing

2:41

your topic and knowing your domain and

2:43

being able to not just like , oh

2:45

, AI is the magic answer machine

2:48

. You need to know what's right and

2:50

what's not . And

2:52

so what were your thoughts on the article , Emily

2:54

?

2:56

It's definitely a good reminder why prompt engineering

2:58

matters . So

3:00

the rules that we take for granted when

3:02

we think about large language models are

3:04

really important , and thinking about

3:06

how , if you use something

3:09

like generative AI to create

3:11

new text

3:13

or new images , that the conversation with the

3:15

machine , any sort of context

3:17

or nuance or intention

3:20

, is really in the hands of the prompter .

3:21

Yeah .

3:25

Subtleties make a big difference .

3:27

Yes , yes , well

3:30

. I did not know this . I've been very closely

3:32

following Adobe's Firefly

3:34

implementation , but in this article

3:36

they said that Adobe has explicitly

3:38

excluded text from

3:42

their image model , which

3:44

I felt was really interesting to say , yeah

3:46

, we know it's not good at this , so we're just not going to do it

3:48

. Um , I thought that

3:50

was very , very fascinating . Um

3:52

. Meanwhile , I know Midjourney

3:54

, this has been something that they've been

3:56

really promoting . Or no , Stable Diffusion

3:59

3 , I think it is . They've been really promoting that

4:01

you can have text in your images

4:03

now and it's , you know , all useful and good

4:05

. So , you know , I think it's

4:07

interesting and

4:10

context is the key . You

4:12

know making sure that your prompts , like

4:15

you were saying Emily , make sure your prompts are on target

4:17

. You know good stuff , but

4:19

also make sure you know question

4:21

everything and know what

4:23

you're looking at . So

4:25

if there's too many fingers

4:28

, it's probably AI-generated , right

4:30

?

4:30

Totally and why

4:32

we're trying to generate something in the first

4:34

place . Now

4:37

I know that's our primary article I did want to touch on just because there's

4:39

been a lot going on in the

4:41

news AI-wise this week ,

4:43

the GTC keynote .

4:46

Specifically

4:49

, NVIDIA has been releasing quite a bit of

4:51

new products . So

4:53

I just wanted to call attention to that for those listening that that's

4:55

something to definitely keep an eye on . An example of

4:57

that was NVIDIA has

4:59

something called Project Groot , which

5:02

is a project to develop , you

5:04

know , with a robotics company , the

5:07

humanoid robots . They're

5:10

also really advancing their

5:13

chip technology , coming out

5:15

with new solutions for healthcare . So there's

5:17

just so much going on and I just wanted

5:19

to call attention to that for folks listening

5:22

that you know this is something that's continuing

5:25

to grow and the , the organizations

5:28

in this space , are

5:30

coming out with new ideas and projects . It feels

5:32

like daily , if not weekly

5:34

, and so you know

5:36

just something , something to think about there . And , tim

5:39

, I don't know if you have any additional thoughts on the

5:42

GTC keynote , but I just think that's something

5:44

big highlight here for the listeners .

5:46

Yeah . So if you're

5:48

not watching NVIDIA and what they're

5:50

doing in the world of AI , you

5:53

have to be . This is the company leading

5:57

on a whole different level . OpenAI

5:59

is leading with a lot

6:02

of applications . Anthropic

6:04

with Claude 3 , and

6:06

you know , Mistral is there , creating

6:09

these models . They're

6:11

leading in software . But a

6:13

lot of these companies are running on NVIDIA

6:15

systems on the back end and that isn't

6:17

just the chips , it's also tools

6:20

like the Omniverse , which is NVIDIA's

6:22

development platform , basically . You know ,

6:24

super fascinating

6:27

to see . You can create all these plug-in modules

6:29

to help tie your tools together . And

6:32

as well , you know NVIDIA has

6:34

things like having your large language model

6:36

running locally on your machine through your RTX

6:39

chips , you know .

6:41

So there is

6:43

a lot going on in the world of NVIDIA

6:45

. Two things I thought

6:47

were really interesting from

6:49

the GTC session . One

6:51

was Project Groot , which

6:53

I did wonder if they got

6:55

the copyright permission from Disney

6:58

on that , but

7:00

it's basically a code

7:03

library for humanoid

7:05

robots , to accelerate

7:07

the development of humanoid robots and

7:10

it's using simulation systems and all

7:12

kinds of very fascinating technology to

7:14

do this . But the

7:17

second thing I thought was really interesting was

7:19

then they paraded out how

7:21

Disney is implementing these and

7:23

they're implementing them as droids in

7:25

their Star Wars park and so

7:28

super fascinating use case

7:30

. So we talked about , you know , kind of

7:32

how AI is

7:34

a representative of your brand in past episodes

7:36

. But this is like a great example

7:39

of like imagine walking through the Disney parks

7:41

and you're walking beside

7:43

one of the droids from a movie

7:45

, you know , and it is one of the droids

7:47

that they use in the movie , you know , and so lots

7:50

of fascinating capability there . Um

7:53

, I will say the droids were also super cute

7:55

, so that adds

7:57

to the value . The humanoid robots

7:59

they were demoing were like warehouse

8:01

workers and the

8:03

ability to work in assembly line , the ability

8:06

to move

8:08

dishes around , which is funny because

8:10

in the International Monetary

8:12

Fund report that was specifically

8:14

one of the jobs called out as safe from AI : dishwashers ,

8:18

because of the complex nature of the interactions

8:21

. But it was the demo showing otherwise

8:23

, so very fascinating

8:25

. If you haven't seen the GTC keynote

8:27

, we'll post it in the comments

8:29

. Highly recommend you watch it . Emily

8:33

how would you feel about running into an

8:36

autonomous

8:38

robot in Disney on your

8:40

next trip ?

8:43

In Disney ?

8:45

That would be , um , less concerning maybe

8:47

than in other spaces . Yeah

8:50

, like I don't know if you guys saw the movie I

8:52

Robot , uh , which people are like : oh

8:54

, like the book . It's nothing like the book , but

8:57

uh , where they

8:59

had the , you

9:01

know humanoid robots walking around the

9:03

streets and helping out with tasks

9:05

and walking dogs and stuff like that . I'm

9:08

curious to see how

9:11

far we are from starting

9:13

to see humanoid robots on

9:16

the street , just walking around in the

9:18

mix with people . I think we've

9:20

got a while , but it's going

9:22

to be different , all

9:25

right .

9:29

Just an FYI . In terms of the naming

9:32

convention , it looks like it is an acronym for

9:34

Generalist Robot

9:36

Double Zero Technology

9:38

, and so I'm not sure if there

9:41

were any

9:44

issues with the naming with Groot

9:46

and Disney there , just because it is

9:48

an acronym , which is interesting . It

9:50

just happens to be the acronym that

9:52

says Groot , right ? A very

9:54

very known

9:56

word for a certain thing . So exactly ,

9:59

exactly . Anything else

10:01

on the news there , Tally ?

10:02

That's it . That should

10:04

do it for today . Awesome , all

10:07

right . Well , Dr

10:09

Bailey , thank you for joining us today . We

10:12

are very excited to have this discussion about

10:14

higher ed and AI , and

10:17

why don't we just start out with some context

10:19

setting ? We'll

10:22

just start with kind of the softball question of you

10:25

know , how is AI being received

10:27

in the world of academia ?

10:32

Thank you again . I think it's been

10:34

received sort of on a

10:36

spectrum of acceptance

10:38

. I mean , there are certainly

10:41

a lot of faculty

10:43

members in different institutions sort

10:45

of embracing the

10:48

possibilities of AI , especially generative

10:50

AI , for students in their classrooms

10:53

across a variety of disciplines

10:55

. And

11:01

you know , we know that AI has been part of higher ed for a long time outside of that

11:03

sphere . So most learning management systems have basic

11:06

support for automated grading . There's

11:08

been chatbots and virtual assistants and

11:10

those things for students . But

11:13

we've sort of stepped out of

11:15

it just being a sphere

11:18

where it's used to streamline administrative

11:20

tasks and is starting

11:22

to be something that can be integrated into

11:24

curricula , and so that's a very interesting

11:27

thing . So I think there are definitely folks that embrace

11:29

that . I think there's a lot of dabblers

11:31

. I would consider myself to sort of fall

11:33

in that middle category of it's

11:36

here . It's a tool , let's figure out how we

11:38

can use it , but I have a lot to learn

11:40

still . And then , of course , you

11:42

always have a contingent anytime there's

11:44

something brand new like that that have

11:46

that kind of need to stop it

11:48

. It's apocalyptic , it's

11:50

the end of education as we know

11:53

it . So it's been interesting to follow those

11:55

conversations because there's

11:57

a real range in the way that it's being received

11:59

and used in classrooms .

12:01

Very cool .

12:03

What

12:06

do you see ? So we talked about , like , the

12:09

teachers . What about the students ? How

12:11

are the students receiving it ? What's

12:13

their perspective on it ?

12:15

It's been interesting to kind of follow . So

12:17

in my own experience I

12:21

think , even though our students today are perceived

12:23

as sort of being digital natives and

12:25

having a really

12:28

strong grasp on technology because it's

12:30

been integral in

12:32

lots of different ways over the course

12:34

of their lives , they have very

12:36

limited comfort zones sometimes with things

12:38

. So I've

12:41

seen in my own classrooms with generative

12:44

AI tools that they are less

12:48

wary , I guess , sometimes of using something

12:50

that feels informal , but as soon

12:52

as it becomes a formal activity or exercise

12:55

they feel uncomfortable

12:57

sometimes with what they're able to do with the tool

12:59

, and so I think sometimes they don't always

13:02

feel equipped to

13:04

use it in the way that we maybe would want

13:06

them to use it in the classroom , as opposed

13:08

to just typing something quick

13:10

in while they're working on an assignment and hoping it

13:12

generates content for them and

13:18

so kind of closing that

13:20

gap ,

13:21

I think has become very important

13:23

.

13:23

It's a great call , yeah , and it's interesting to hear

13:25

. You know , I think it sounds like there's some

13:27

differences from what I'm hearing between

13:30

maybe the student and teacher perspective

13:32

and AI , and I'd love to hear

13:34

I guess , Emily , are there certain use cases

13:36

where you've leveraged AI in certain

13:38

assignments or things that you've seen that have

13:40

worked and haven't worked ,

13:42

just out of curiosity ?

13:46

Definitely

13:50

. So I have a lot of colleagues in my own institution at Towson University

13:53

and outside of this context who have spoken about

13:55

using especially generative

13:57

AI tools in their classrooms in different ways

13:59

, having that sense

14:01

again that students are probably using

14:03

it anyway , so helping them

14:05

to understand what the possibilities of

14:08

those tools are and what the limitations

14:10

of those tools are is really important

14:13

for students as they're trying to figure

14:15

out what exactly to do with

14:17

this new technology that they have access

14:19

to . So in my own classroom

14:21

, one thing that I found to be

14:23

sort of helpful is to really emphasize

14:26

for them that many

14:28

of these generative tools like

14:30

ChatGPT or DALL-E

14:33

are really great for brainstorming

14:36

or creating outlines

14:39

, kind of collecting and organizing

14:41

information , but they don't

14:43

often get into some of the

14:46

deeper nuanced stuff that we want students

14:48

to be thinking about , and

14:50

so there's pieces of problem solving

14:52

and critical thinking that are so

14:55

important for students

14:57

in higher ed , something

14:59

that you know the generative AI

15:01

often can't do for them . So

15:04

one activity that I found

15:06

to be really useful is to have students

15:08

answer a set of questions

15:10

about a reading that they've done intensively

15:13

. We do that together in class . They

15:15

have time , either individually

15:18

or with a partner or in groups , to

15:21

go back to the article and answer a very

15:23

specific set of questions and then share

15:25

their thoughts and have a discussion . And then

15:27

we go to the AI

15:30

tools . So we've

15:32

used ChatGPT especially , where

15:34

they enter the same information

15:36

into the AI

15:38

tool to see what it generates

15:41

for them as a response and then they

15:43

have that as a comparison for their own

15:45

work and they'll see

15:47

often very quickly that

15:49

there are parallel

15:52

points , right , across

15:54

the two different sets of

15:56

responses , but that what the

15:58

AI is generating is often very simplified

16:01

, it's very generalized

16:03

. I teach in religious studies

16:05

and so you know the affective pieces

16:07

of things that we talk about are

16:09

often very much missing , since those tools

16:12

can't do that yet . So they start to see : this is what I can use

16:24

this for , and this is where this is helpful for

16:27

me , but this is why I still need to do this work myself , and these are the pieces , um , of thinking

16:29

and being a college student that the

16:31

AI can't do for me . Yeah

16:33

, and going back to the

16:36

music store example from the article this

16:38

morning , like , if you don't know what

16:40

to look for , it looks great .

16:43

Like you know , everything's

16:45

great , everything's fine , you know the answer is perfect

16:47

. So , um ,

16:50

I think that's , you know , not just

16:52

in student work , not just

16:54

in teaching , but in

16:56

business as well . Like , you know

16:58

, the ability to know

17:01

, um , to question

17:03

the AI is so critical

17:05

and that's where , you know , building these

17:07

critical thinking skills

17:10

is just such an important piece of

17:12

the future . Like , uh , and that's you know

17:14

, Emily , uh , and I both have

17:16

religious studies backgrounds , which is , you

17:19

know , we laugh and say , yes , religious

17:21

studies , we're going to dominate the AI world

17:23

, you know . But it's

17:26

a good skill set to

17:29

come at AI and say , well , why , why

17:31

is this ? You know , I think that's a

17:34

question that I know , in my own

17:36

studies in religious studies

17:38

. That was the number one question we asked

17:40

about everything . Well

17:42

, why is that ? Why is that ? Yes , yeah

17:45

. So ,

17:47

I love

17:50

the example you gave . You know , with the exercise

17:52

of , you know , the students doing the work and then

17:54

students working with the AI system to

17:57

see the difference . Does

17:59

that lead to aha moments for the students ? Have

18:01

you seen students that are like , okay

18:03

, I didn't , you know , I

18:05

was just using this for everything before

18:08

and now I'm questioning it

18:10

, or you know , kind of , what's the transformation

18:12

you're seeing ?

18:14

I hope so . I mean

18:16

, from what I've seen , at least in

18:18

real time after those activities is

18:20

that they're often quite surprised

18:23

how limited the response

18:25

is when they know the content

18:27

really well , and so

18:29

that's a great exercise and

18:32

I've heard others do similar sorts of things

18:34

for students , I think , to start to realize

18:36

that when

18:38

they leave school and they enter

18:41

the workplace , that the

18:44

tools are still going to be there for them

18:46

and they're probably going to be interacting with AI

18:48

in ways that generations before

18:50

them never even thought of , but that

18:53

the content , knowledge , has to be there

18:55

in order for them to be able to leverage the tool

18:57

, and the tool can't do all of that work

18:59

for them .

19:01

Yeah , go ahead , Tally

19:04

.

19:04

Oh , I was just going

19:06

to say I think that's a great call out or parallel between

19:08

the student experience and the employee experience . I think

19:10

there's a lot of employees that we've

19:12

seen that have . You

19:15

know , they're just starting to dabble in leveraging

19:18

AI in their day to day and so recognizing

19:20

it's not just something you can use

19:22

and copy and paste the output ; it's knowing

19:24

when to use it and when is really

19:26

, you know , important to have a human expert

19:29

, um , reviewing that content .

19:31

So I just like that call out because I think it parallels really

19:33

nicely to the business world yeah , totally

19:36

, and and reminds me , um , this morning

19:38

, as I was on LinkedIn , I saw

19:41

a video from the Stephen Colbert show

19:43

and he was interviewing somebody

19:45

who I will figure out who

19:48

this was , we'll share the video in the comments . But

19:50

the person was talking about , like

19:53

, where we are in a massive

19:55

paradigm shift of how the

19:57

world is going to be , and the

20:00

example they were using was like you know , back in

20:02

, you know , the 80s

20:04

, it was fairly static

20:06

, you know . You knew : get

20:09

a good job , work in an office

20:11

. And "good job" ,

20:13

you know , not to say that

20:15

all jobs outside of an office are not good

20:17

jobs . Not saying that , just saying

20:19

you know . Again , going into the 80s mindset

20:22

go to college , get that white

20:24

collar job , succeed , make money

20:26

, all that good stuff . That

20:29

was kind of the operating model . And

20:31

what did that look like in the 90s ? Well

20:34

, it was very similar . What did it look like in the

20:36

2000s ? Very similar . But now

20:38

, as we look at the world of AI and

20:40

it really is massively

20:42

different , like we really don't know what

20:45

is that world of work going to look like in

20:47

five years , in 10 years in 20

20:49

years . You

20:52

know , as we were talking about GTC , with everything

20:55

changing daily , you

20:57

know what is possible next

21:00

year . You know , when

21:02

ChatGPT came out , I think it was a real

21:04

eye-opener of AI is

21:06

real , it is

21:08

going to transform our work and

21:10

then you just look at the steady improvements over

21:12

time . You know , I think that's one of

21:14

our big challenges . And so with that

21:16

I would ask you know , emily , what do you think are some

21:18

of the biggest challenges to

21:21

education , higher

21:23

ed , that AI is

21:25

posing ? You know , both for the students

21:28

and for the teachers and

21:30

for the administration .

21:32

Sure , oh , there's so much to

21:34

unpack or something like

21:36

that . I mean , you know , one thing

21:38

that I think has been very important in

21:41

these conversations has been , you

21:44

know , the use of things like predictive analytics

21:46

to help identify at-risk students

21:48

and those sorts of things , but those

21:51

algorithms have

21:53

, you know , cultural

21:55

and broader biases

21:57

sort of infused in them and they miss the whole

22:00

picture . So there's a lot of

22:02

ethical considerations there . When we think about

22:04

things like , you know , the digital

22:06

divide and digital access

22:08

and digital literacy and what that looks like

22:10

in an educational context

22:13

. There's also a lot of

22:15

ethical concerns with what

22:18

students are doing and academic integrity

22:20

. There are no

22:22

AI-proof assignments , but

22:25

there's a lot of effort being put into designing

22:29

and creating sort of AI-resistant

22:31

assignments or AI assignments

22:34

that help students again to employ the

22:36

tools of AI Kind

22:39

of what you mentioned , tim not as a magic solution

22:41

for homework , but how

22:43

do we teach students to use this well

22:46

and effectively ? And

22:49

one of the big things I think has just been questions

22:51

, like you mentioned , with the workforce . So there's

22:54

a shift when we think about what

22:56

an entry level position is going to look

22:58

like for a student when they graduate and

23:01

recent graduates , and certainly future

23:03

graduates entering a very different

23:05

workforce , have some

23:07

great opportunities , I think , ahead of them for

23:09

upskilling , fostering

23:11

creativity , learning how to problem solve

23:14

in new ways . But this leaves

23:16

a lot of questions on the table about what

23:18

it means to be a student right now and

23:21

what it means to be an educator right now and

23:23

how we navigate that . And there's , I

23:25

think , again , some really exciting possibilities

23:28

for that , to sort of rethink the

23:30

way that we're designing

23:33

education and and the skills we

23:35

want students to walk away with . Um

23:37

, but that's another instance of kind of how do we close

23:39

that gap to get everybody to

23:41

that point ?

23:44

Very interesting . So do

23:47

you think that from a um

23:50

, do you think that the students

23:52

are ? And

23:54

where I go with this is like as

23:56

we were talking in our pre-show discussion

23:59

, we

24:01

really want , as an employer , I

24:03

want people to know how to use these tools

24:05

. I don't know ; smartly , I guess

24:08

, is the term I'll use . Don't

24:10

just take whatever comes out of them as truth . You

24:13

have to question it , you have to fact check , but

24:15

you have to know how to use it . You

24:17

have to know how to talk to the machine . I

24:20

always use the galaxy quest , like

24:23

the woman whose job is to talk to the machine

24:25

. You've got to know how to talk to the machine . Do

24:28

you feel that the students are kind

24:30

of growing those skills ? Do you feel like

24:32

this is just too new and we're

24:34

not there yet ? What are your thoughts

24:37

?

24:37

Yeah , I think it's

24:40

very similar to sort

24:43

of the classroom example , where

24:46

they are willing and

24:49

they're very interested and they are already

24:51

often using these things outside of

24:53

the classroom anyway . But they don't

24:55

always realize that what

24:59

you put in is what you get out of

25:02

the tools . So if your prompt

25:05

is not tailored

25:08

to generate the kind of response that

25:10

you need for a question that you

25:12

want to answer , it can

25:14

not just be distracting

25:16

, but it could be very problematic or it could

25:19

take you down a path that's not going to help

25:21

you to effectively problem solve in

25:23

the way that you need to . So

25:25

the training and prompting , I think is really

25:27

interesting , and I know you can both speak to this

25:29

a lot better than

25:32

I can . You're coming from a place

25:34

of of a lot more knowledge about

25:36

that than I have . But I'm

25:39

very curious about what

25:42

we're going to need to do moving forward to

25:44

help students learn

25:46

these tools in the context of the

25:48

disciplines that they're studying in , which

25:51

involves a lot of professional development

25:53

and kind of time and energy on faculty

25:56

to get them there too , in order for

25:58

them to be able to step out of our classrooms

26:00

into the workforce and

26:02

know how to use that technology and

26:04

not sort of be starting from scratch .

26:08

Yeah , what do you think , Tally ?

26:14

Yeah , I think prompt engineering is definitely a

26:16

place to start . What I've really liked , I guess ? Two things come

26:18

to mind . One I have a few friends

26:21

currently in grad

26:23

school who have some really interesting assignments

26:25

that they're using AI for

26:27

similar to what you called out earlier in the show ,

26:30

Emily , kind of using it as

26:32

a co-creator , so helping you

26:34

know them build out an essay

26:36

and then using that AI generated essay to

26:38

write their own , and so that's a good practice of , okay

26:41

, this is what they generate . Let me analyze that and figure

26:43

out how I can make this better and

26:46

really better understand the gaps

26:48

in , um , AI

26:50

outputs that they , you know , could

26:52

then apply , hopefully , to their you know , day-to-day

26:55

jobs moving

26:57

forward , you know , and understanding how to best leverage AI

27:00

. I think on the other side of it , it

27:02

is interesting , especially in the context of

27:04

philosophy and religious

27:06

studies , but on a larger note , in

27:09

a lot of ways it's brought up discussions about humanism

27:12

and what it really means to be human , and

27:14

I think it . Tim and

27:16

I , on previous episodes , have talked about a

27:19

skills-based economy and I think that

27:22

, as we move forward , it's not so much what you

27:24

know . While

27:26

it matters , you know , what type of degree

27:31

you have , it's more thinking about what are the skill sets

27:33

as an employee or as a student

27:35

or as a professor that I really

27:37

bring to the table . What's my actual

27:40

value , separate

27:43

from what these machines can accomplish , and what

27:45

are the things that are truly human that AI

27:47

will not be able to replace ? And I think that that's a

27:49

really great conversation piece to allow people

27:51

to understand when to use AI

27:54

and , you know , why you use it . And I

27:56

think you

28:00

know there's certain use cases that are great , and then there's certain use cases that

28:02

really need to be either reviewed by or done entirely by

28:04

the human . You know things that involve empathy

28:06

, creativity , problem solving , and so

28:08

I think it's really interesting to think about AI

28:11

from that lens in terms of you know again

28:14

, what's what's making us human is , I think , where

28:16

where my mind goes with that .

28:18

Yeah , and I , you

28:21

know , this is the

28:23

fact ,

28:25

I feel like . And I use the word fact ; that

28:28

is a subjective fact , I guess

28:30

the days

28:32

of like you've got your job and you're in your job

28:35

and that's just your job , that's over , like

28:37

that was over quite a while ago , but

28:39

it's going to really be highlighted that

28:42

it's over in this age of

28:44

AI , because we

28:46

all need to be constantly upskilling

28:48

and constantly learning

28:51

and constantly driving

28:53

towards what is next . How

28:56

is this world changing ? So

29:00

, like you had mentioned staff development , I've

29:03

been in organizations where there

29:05

was the expectation that , well

29:07

, if I need this for my job , my

29:10

employer is going to have to provide it and I'm not

29:12

doing anything until they do . Like

29:14

, those people are going to be

29:16

replaced by people who are

29:19

actively growing and learning . And

29:21

so , from my perspective

29:23

, for students , for

29:25

professors , for everybody having

29:27

the strong desire

29:29

to learn — that

29:31

is irreplaceable . And

29:34

it's something at Mind Over

29:36

Machines we look for in our team members : somebody

29:38

who's an eager learner , somebody

29:41

who really wants to grow

29:43

like man , that's

29:45

a gold star . We want to get that person

29:48

. For

29:50

the students ,

29:52

really instilling that you're

29:54

not done once you leave college

29:57

. You know you gotta keep

29:59

learning and keep growing , um

30:01

, and one of the quotes I really

30:03

like — you know it's been reused

30:06

and recycled so many times —

30:08

, AI won't replace your job

30:10

, but somebody who knows how to use AI will . That

30:14

is so on point , because

30:17

you know , again ,

30:19

software development is a great example

30:21

for us . Tools like

30:23

GitHub Copilot . Microsoft's

30:27

research shows that GitHub

30:29

Copilot can accelerate your development by 55%

30:32

. So you're doing your work in half

30:34

the time . So

30:36

who would you hire ? A developer who has never used

30:38

GitHub Copilot or a developer who is very skilled

30:40

in GitHub Copilot ? And so

30:42

let's translate

30:44

that to students . How many students

30:47

are being told it is okay to

30:49

use AI tools for software development ? That's

30:53

a skill we'd be looking for in developers

30:55

. Now we want you to understand what

30:58

it is to do development . We

31:01

want you to understand what's Tali's role

31:03

, what's the psychology

31:05

of the human-computer interaction . You

31:08

need to understand that . But you also need to know

31:10

where do these AI tools fit in ? And

31:12

that's where you have to experiment , explore

31:15

, try things out , and

31:17

I think that's where a lot of people are looking for

31:19

employers to do that for

31:21

them , not realizing that

31:23

it's really a shared responsibility between

31:26

the employer , the employee and government

31:28

and educational institutions . We

31:30

all need to work together to help

31:32

upskill . A little bit of a soapbox for me

31:34

, but

31:36

that's my take on how people

31:39

can get ready for this workforce is learn

31:42

, learn , learn and don't

31:44

stop , and demonstrate that behavior to their

31:46

students , demonstrate that behavior to

31:48

their faculty and staff . Show

31:52

that . Any

31:55

thoughts on that ? That's

31:57

my soapbox , so feel free to kick the soapbox

31:59

out from under me . What do you

32:02

guys think ? Are we in an age where you have to just keep

32:04

up at all times , or

32:07

is that just like ? I don't want to be in that age

32:09

?

32:13

I don't ever think that encouraging

32:16

students to be lifelong learners is a bad

32:18

thing . I think there is sometimes the mentality

32:21

like I got my piece of paper , I walk out the

32:23

door and , like you said , I step into the workplace

32:25

and

32:39

if I need to learn anything else , then they'll help me get there . So having some sort of impetus

32:41

for being a self-motivated lifelong learner

32:43

is not a bad thing in my mind .

32:44

Go ahead .

32:44

Tali , I was going to say I think with that , one of the biggest skills

32:46

, too , is just understanding how we learn . You know , how

32:52

are we taking in knowledge — which

32:56

I think really relates back to philosophy

32:58

and religious studies . How do humans learn

33:02

, take in knowledge and how can we use AI

33:04

tools for that learning process ? I think is another

33:06

aspect here which I think will be really , really interesting

33:09

from both a storytelling

33:11

perspective , you know , leveraging AI to

33:13

maybe better understand something , or going to AI

33:16

as a tool to actually learn . That , I think , will also be

33:18

a really interesting world

33:20

to explore .

33:21

Yeah , and Emily

33:24

, something you said really sparked in my

33:26

mind . You know , there was a leadership

33:29

conference I was at many years ago and

33:31

the speaker said you can't motivate

33:33

anybody . Motivation is

33:35

an internal thing . You

33:39

can only motivate yourself , but you can

33:41

inspire them and

33:43

inspiration can be external , and

33:46

so really demonstrating

33:48

these behaviors , you know , can

33:50

be inspiring . So

33:53

, you know the lifelong learner , and I think this

33:55

is where , you know , one

33:57

of the things we talked about is how can

33:59

businesses , you know , help

34:01

students in this age of AI ? And

34:04

to me , you know , it's

34:06

a call out to professionals like

34:08

you got to grow , like

34:10

look at how you're growing in your work

34:12

and make sure you share that with

34:14

other people . It's good for you , it's

34:17

good for them . So I just

34:19

really think that call out is

34:22

for us — for myself

34:24

, for Tali , for business leaders

34:26

, make sure you tell people

34:28

like , yeah , you've got to grow , like

34:31

you've got to care about your

34:33

personal growth and your professional

34:35

growth and you got to invest in it in your own

34:37

time . Um , now your employer

34:39

should be investing as well , but most

34:42

of it's going to be you doing your own

34:44

thing . So , all right

34:47

, I do have one last question

34:49

here , and I think this is an

34:51

interesting one , because time

34:53

and time again I have heard

34:55

how very differently Mind

34:57

Over Machines approaches the world

34:59

of artificial intelligence . And

35:16

you know , one of the

35:18

things we've joked about but you know , I kind of like

35:21

to explore it a little bit is what is the intersection

35:23

between AI and the humanities ? And you know

35:25

, and I say that because the approach to AI at Mind

35:27

Over Machines has been formed by humanities people

35:29

, you

35:34

know , and so we have approached it as a humanities problem , not an engineering problem . And so

35:36

you know , Emily , I'd kind of like to hear your thoughts on where you see an intersection

35:38

between humanities or liberal arts

35:40

and AI ? And

35:42

what

35:44

does that mean for liberal arts students

35:47

and our humanities people going

35:49

forward ?

35:57

That's a great question and I always enjoy our conversations from the perspective

35:59

of the humanities thinking about technology . As you know , that

36:01

infusion of creative

36:04

and critical thinking and being aware

36:07

of the affective and those

36:10

sorts of approaches to problem solving

36:12

that students learn in humanities

36:14

fields , I think , are so useful for

36:17

approaching technology . And sometimes there

36:19

is this disconnect . I

36:21

mean often when I work with humanities students

36:23

and I say , ok , we're going to integrate this piece of technology

36:26

, there's sort of that wariness

36:28

again of , well , I'm not really quite sure how to do

36:30

that . I'm not a computer science major , but

36:33

kind of finding a point

36:35

of intersection there I think is often

36:37

very helpful . So I know

36:39

from my experience

36:45

of this , you know , as a

36:47

religious studies instructor , one

36:49

of the things that I've seen kind

36:52

of reflected in that humanities

36:54

content is sort of that question

36:56

of what does it mean to be a student ? What does it mean to

36:58

be a teacher in the era of AI ? What

37:01

does it mean to study something like religion

37:03

when people have access

37:06

to all of the world's

37:08

religious knowledge and these sorts of things

37:10

very quickly and through

37:12

technology ? Ai

37:14

is so often associated with secularism

37:17

. I think it's interesting when we start

37:19

to see some of those intersections

37:21

. There's the work of a British

37:23

anthropologist that I think is really fascinating

37:26

. Her name is Beth Singler you

37:28

may have come across her before and she

37:30

thinks about kind of theistic approaches

37:32

to AI . So she has a study

37:34

that's fascinating called Blessed Be the

37:36

Algorithm . That thinks

37:39

about how people have started to apply

37:41

their divine-like characteristics

37:44

onto things like

37:46

benevolence or blessings like

37:49

you get a better seat on a

37:51

flight or something like that where

37:53

an algorithm has helped you . But

37:55

it's given this sort of benevolent

37:58

, godlike kind of quality

38:00

in the way that people are approaching it

38:04

, kind of using

38:07

the technology to approach concepts

38:09

and topics in the humanities that historically

38:12

have been very distanced from technology

38:14

, I think , is fascinating . But

38:17

there's always these questions , like

38:19

you mentioned , Tim , that we want our students to sort

38:21

of continuously return to , and

38:24

so something like you

38:26

know , there's prayer chat bots now where

38:28

you can have something like Buddha Bot do

38:31

mantra recitations for you

38:34

, but usually in a ritual context

38:36

in religion . When we look at

38:38

global religious traditions , ritual

38:41

usually has efficacy

38:43

when the actions are done

38:45

in a certain way , but

38:56

there's also a specific intention behind them and in the case of the technology doing

38:58

that work , the actions might be happening perfectly , because it can be designed

39:00

to do that , but the intention behind it if a machine is doing it and not a

39:02

human is very much in question

39:05

. And so getting to use technology

39:08

to help students sort of probe those

39:10

kinds of questions and think about , well , what

39:13

is a ritual and why does that matter , and how

39:15

is it efficacious for somebody , if we're looking at this

39:17

in a religious context ? It raises

39:20

lots of questions about things like responsible use

39:22

. So it's helpful to think about technology

39:24

, but it's also helpful for them to revisit

39:26

some of these age-old

39:29

questions about the discipline they're studying

39:31

too .

39:34

So I have not heard of that researcher

39:36

, but , believe me , that is my weekend plans

39:38

. It

39:41

sounds fascinating . But

39:44

yeah , that

39:48

is interesting , because

39:50

I see and hear this

39:53

. And

39:55

again , going back to artists

39:58

, you know , if you go scroll artists

40:00

on Threads , on

40:02

Instagram , on TikTok

40:04

, there is always this

40:07

like commentary to

40:09

the algorithm — it is

40:11

asking for

40:13

blessing , asking

40:15

for grace , in that kind

40:18

of language . Very

40:20

fascinating . You

40:23

know , from my perspective ,

40:25

, I can't emphasize enough

40:27

for humanities people to go into

40:29

technology professions , because

40:32

it has , like , totally been my

40:34

secret weapon for my career , like

40:36

, because you go in and you're talking

40:39

and people expect you to be an engineer and

40:41

think like an engineer and act like an engineer

40:43

and be disconnected and impersonal

40:45

like an engineer . No offense , engineers , I'm

40:48

not trying to call you out , just it's just

40:50

how things tend to be . But when

40:53

you start displaying your humanities aspects

40:55

like asking why , why

40:58

are we doing this ? Who is this

41:00

for ? What's the impact we're trying to

41:02

create here ? And it lights

41:05

up , it just transforms people

41:07

, and so , in AI

41:10

, I think we're going to see even more of that

41:12

, like where you

41:14

have somebody who isn't this

41:16

engineer mind that's just

41:18

trying to solve the technical problem

41:20

, but the human who's trying to solve the human

41:22

problem . We need those people

41:24

, and so I think this

41:27

is a great moment for humanities people

41:29

to really rise up and

41:31

get involved and executives

41:34

like Brad Smith , the head of

41:36

legal counsel for Microsoft . He's

41:39

been quoted numerous times saying we need more

41:41

humanities people in AI

41:43

. I can't emphasize

41:46

that enough . I'll

41:48

ask another humanities person . Tali , what do

41:50

you think ?

41:50

Yeah , I

41:53

mean , for me , I'm biased , absolutely

41:55

. I'm coming from a psychology perspective

41:58

. So , you know , I know we've done an

42:00

episode on human-centered design thinking

42:03

and you know , ultimately the folks

42:05

hopefully benefiting and

42:07

leveraging or using these AIs are

42:09

humans at the end of the day . AI should

42:12

support and be a tool for people

42:14

, and if

42:16

we can use them ethically and responsibly

42:19

, um , the outcomes are just really

42:21

exciting and amazing and I think we can solve a lot

42:23

of really incredible issues . I think we just need

42:25

to come together and make sure that we're doing so in a way

42:27

that , yeah , aligns with the values

42:29

of ourselves , of our

42:31

culture and society , um , so

42:34

, yes , I'm all on board with that

42:36

.

42:39

Awesome , you guys want to move into wings . Is

42:41

there anything else on this topic we want to hit ?

42:44

Let's do it . Talking about the positivity

42:47

, the potential benefits of AI , is

42:49

a perfect segue .

42:52

Yeah , Tali , you want to kick us off ?

42:55

Let's do it . So there's a lot

42:57

of stuff out there around AI-powered companions , or chatbots — I

43:10

think we've actually touched on it in a previous episode in the news . There

43:12

are some downsides , but there was a recent article that

43:14

came out about how an AI-powered robot can

43:16

provide companionship to lonely seniors

43:18

. In this example there's

43:21

this woman , Dottie , who lost her husband

43:23

in 2022 , and she really

43:25

found hope

43:28

and it kind of took her out

43:30

of her depression through this chatbot

43:35

. It's from a company called Intuition Robotics

43:37

, and so I just think it's a

43:39

really interesting use case to leverage

43:41

AI companions

43:43

as a way to combat loneliness for

43:46

certain demographics of the population

43:48

, how that could really prolong life

43:50

and reduce , you know

43:52

, some negative mental health impacts like depression

43:54

and things like that . So I just thought that was a

43:56

really incredible call out and I wanted to bring

43:59

that to the table . As you know , chatbots

44:01

become ever more present .

44:04

Yeah , and now put on the layer

44:06

of Project Glit on that

44:08

. You know what happens when we

44:11

can actually make , you know

44:13

, humanoid companions

44:15

for people cost-effectively

44:18

, you know . I think that's the biggest challenge , you

44:20

know , and what are the ethical implications

44:22

? I saw this . Um , there was a company I cannot

44:24

remember their name , but this was similar

44:27

. Their paradigm was to make AI

44:29

companions for

44:31

the elderly — for medication

44:34

adherence and loneliness

44:36

and things like that . But their big push was

44:38

we absolutely do not want it to

44:40

be thought of as a person , we

44:42

don't want it to be thought of as human

44:45

, like it

44:47

constantly reminds the

44:49

elderly person that it

44:51

is an AI system and

44:54

they can do these things and not these things . So

44:58

very fascinating , yeah

45:00

, interesting topic . Emily

45:07

. What's your win ? I

45:10

think , for me , pedagogically —

45:13

it might seem like a little bit of a paradox , the

45:15

way that some folks think about this in education

45:18

, but I think it's exciting that

45:20

AI has the potential to make us less complacent

45:22

, as teachers and students

45:25

, when it's used as an effective tool and

45:27

it can help us think in such

45:29

a variety of new ways . So

45:31

I think that the shift from a mindset

45:33

of transmitting knowledge

45:35

to a focus on learning what to do with that

45:37

information is a win for me . Awesome

45:41

.

45:42

Yeah , I

45:45

really like that reframing of

45:48

like this was a major win

45:50

and here's how Like that's cool , my

45:54

win , I think . This week our

45:57

team member , nick , brought this to me which

45:59

I thought was really cool . It's

46:01

called WePaint and

46:03

it works like this :

46:06

you go use

46:08

Midjourney or DALL·E

46:10

or whatever , to create your image

46:12

, like your AI image , and

46:14

then you submit it to this company and they

46:16

have a network of artists that will paint

46:18

your image and

46:21

paint it on canvas with paint , just

46:23

to be clear , like physical paint , and

46:26

then they send you your painting , and

46:29

so I think this is a very fascinating

46:32

kind of flipping

46:34

the whole AI art discussion on

46:36

its head of like no

46:38

, you generate the image you want us to

46:40

do and we'll do it , and

46:43

I personally have done that myself

46:45

, where I'll use tools

46:47

like Adobe Firefly or Midjourney to

46:49

conceptualize content

46:51

and then I will take it into physical

46:54

, like charcoal drawings or paintings

46:57

or , you know , colored pencils

46:59

, whatever I'm working in . So I thought

47:01

this was a super , super fascinating

47:04

concept . So

47:06

, yeah , would

47:09

you guys want painted

47:11

AI art in your houses

47:14

?

47:14

I love that . I actually have a friend who

47:17

is in the digital art space

47:19

and she's been a little bit frustrated by

47:22

the takeover of AI and I think that's a great example

47:24

of how , again , people

47:27

will always want that human element . There's something about that

47:29

that I think is just really exciting . I think that really highlights

47:31

that and , yeah , I love that example and I'm going to

47:33

have to share that with her right after this . I think

47:35

she'll be really excited , cool .

47:39

All right , anything else for us today .

47:42

I think that's everything . I think we've covered it

47:44

.

47:45

Well , thank you , Dr Emily Bailey

47:47

, for being with us today and having this

47:49

discussion . This was really fascinating to

47:51

me and I can't wait

47:53

to go check out some of the research you were talking

47:55

about . And you know , this perspective

47:58

on students and teachers and higher ed is

48:01

super valuable to us as

48:03

employers . You know this is the

48:05

next crop of workforce that we have coming , so

48:08

what are we getting ? So very

48:10

interesting , thank you .

48:13

Thank you so much for a great conversation Awesome

48:15

.

48:17

Well , thank you . Tali , what are we talking about next

48:19

time ?

48:21

Next time we're talking about AI and HR

48:23

, so I can have a conversation

48:25

.

48:26

Right , awesome — with Jay from HR

48:28

Geckos , yes . So awesome

48:30

. So this is going

48:32

to be about HR automation

48:35

and , similar to our friends

48:37

at MeBeBot , HR Geckos has

48:40

a pretty cool AI capability

48:42

there . So , looking forward to that

48:44

discussion and , as always

48:47

, we will post all the links we

48:49

talked about in the comments . Here

48:51

we are working on

48:53

some significant improvements

48:56

to the Boring AI Show post-show

48:58

delivery , so stay

49:00

tuned for that and , as

49:03

we talked about last time , we're looking for

49:05

topics for our One

49:07

Boring Day concept , which is a day-long

49:09

Boring AI Show . So if anybody has thoughts

49:11

of what they'd like to see there , we'd love

49:13

to hear from you , post them in the comments below or

49:16

just send us a message with your thoughts . With

49:19

all that said , thank you everybody for joining us

49:21

today and we will see you in two

49:23

weeks .

49:24

Thanks guys .

49:26

Thank you all , bye .
