Episode Transcript
Transcripts are displayed as originally observed. Some content, including advertisements, may have changed.
0:06
Welcome to Demystifying Instructional Design, a podcast where I interview
0:10
various instructional designers to figure out what instructional designers do.
0:14
I'm Rebecca Hogue, your podcast host. If you enjoyed this podcast, please consider subscribing or leaving
0:20
a comment on the Show Notes blog post and consider
0:23
helping to support the podcast with a donation to my
0:26
Patreon account. Welcome Autumm and Lance to Demystifying Instructional Design.
0:32
This is a bit of a different episode of a
0:34
podcast. We're going to talk today a little bit about ChatGPT
0:37
the AI phenomenon that is lighting up higher ed, anyway,
0:43
if not other spaces as well.
0:45
And so the first thing I'm going to ask is if you could do a quick introduction and give us
0:49
a little bit of context. And I'll start with my context is I teach instructional
0:55
designers. So I'm teaching at the master's level and my students
0:58
are instructional designers. And so for me, I'm looking at it largely as
1:04
is this tool useful for my students?
1:08
That is the context that I have for it.
1:11
And I'll pass it over to Autumm and then Lance.
1:14
Hi. I'm Autumm Caines.
1:16
I am an instructional designer in a faculty development office
1:21
at the University of Michigan, Dearborn.
1:24
I always think it's important to add what office you're
1:27
in when you're talking about instructional design, because as Rebecca's
1:30
audience will know, the context in which you work as
1:34
an instructional designer can have a huge impact on the
1:38
type of work that you do. So coming from the perspective of a faculty development office,
1:44
it's more than just instructional technology and it's more than
1:47
just instructional design. It's also faculty development.
1:50
We can get into that a little bit more as we go forward.
1:53
But just to set up the context, I'm also instructional
1:57
faculty at College Unbound where I know Lance from, and
2:00
I actually know Lance from this podcast as well.
2:03
That's one of the reasons that I reached out to
2:05
Lance was because I heard him on this podcast and
2:08
then we ended up becoming colleagues at College Unbound.
2:11
But I teach two classes at College Unbound.
2:14
One is Digital Citizenship and the other one is Web
2:17
and Digital Portfolio. So that's just a little bit about my context in
2:22
terms of where I'm coming at things from, in terms
2:24
of ChatGPT. I have been looking into large language models since
2:31
probably 2021, with the upset over the firing of Timnit
2:37
Gebru at Google. At the time she was
2:41
the head of Ethical AI and was working on the LaMDA
2:43
large language model. That's when I started paying attention to some of
2:47
the advances that were happening in that technology.
2:51
So I do have a tendency to come at it from a little bit more of a critical and ethical
2:56
perspective. But I don't want to go for too long.
2:58
I want to turn things over to Lance and let him introduce himself.
3:01
Sure. Hi, I'm Lance Eaton. I am Director of Digital Pedagogy at College Unbound.
3:06
I would say what department I'm from, but we don't
3:09
really have departments because we're still a new enough college
3:12
and have that new college smell where there's lots of
3:14
different hats and ways that we try to work through
3:18
action teams as opposed to traditional departments.
3:20
But if I did, it would probably be Academic Affairs,
3:22
which is I think where I mostly sit. And my role is that mixture of working with faculty,
3:28
doing faculty development and kind of helping to support them
3:32
in the development of teaching and learning courses and using
3:37
different tools. And sometimes that tool is a pen and pad and
3:40
sometimes it's an LMS and sometimes it's artificial intelligence.
3:45
For me, I've had a lot of interesting thinking around
3:50
or just looking at AI for a couple of years
3:54
now. I just started to develop a more critical view of
3:57
technology over the 2010s. It started to pop up on my radar and then
4:03
really from the start of the
4:06
pandemic until about a year and a half ago, I
4:08
was... I guess that was halfway through the pandemic or
4:11
whatever phase we're in now. I was working at the Berkman Klein Center for Internet
4:15
and Society and was both helping to run programs and
4:20
programming around Internet and society, and a lot of that
4:24
focused on AI. And so I got to see a lot of different
4:28
people in the industry coming with those critical lenses.
4:31
And so that stuck in my head a lot, especially
4:33
as ChatGPT became like the pet rock of 2022-
4:38
2023.
4:42
And people really started to pay attention to AI
4:46
generative tools in a way that they certainly hadn't previously.
4:50
Thank you both very much for your introductions and a
4:53
little bit of context there. I'd love to ask you a little bit about what
4:59
guidance you're giving students regarding
5:02
the use of ChatGPT. And if you could tell me what that stands
5:07
for, that would be really helpful because I think the
5:10
audience would find that useful.
5:13
It stands for Generative Pre-trained Transformer. That's the GPT in ChatGPT.
5:18
And I do think that Lance is the perfect person
5:21
to answer this. I'm going to answer really briefly and just say that
5:26
I was a little taken aback when this was first
5:29
opened up, and I really wasn't sure what to do
5:31
as a teacher. As an instructional designer.
5:33
I had some ideas, but as a teacher I felt a little lost.
5:36
And working at College Unbound, Lance is the person that
5:39
I go to when I have questions, right? And I knew people were
5:43
going to be coming to me. So I went to him and the school just responded
5:47
in an amazing way that Lance is going to tell
5:50
you more about now.
5:51
Thank you. Yeah. So again, being a fairly new school,
5:56
there are things we don't always have to fall back on.
6:00
And also a lot of our practice and thinking is
6:03
student centered. And so I was playing with it, thinking about it
6:07
as well in terms of instructional design and students and
6:09
whatnot. And I get an email from Autumm saying, I think
6:14
I have a student that has used it. And so that generated a discussion between me and her,
6:19
and it was at the end of the semester and like there's so much else going on, it gave us
6:24
an opportunity to really think about, in this moment, what
6:26
is most useful to do.
6:30
And this is where I think both of our collaborative
6:33
nature and the way that CU structures itself mattered: going
6:36
after the student wasn't the right approach.
6:39
There was a part of our reaction that was just like a hot dang! Like, go student for being that quick
6:45
and figuring it out. We had that moment and we celebrated that for a
6:48
moment. I want to give this person like some kudos for
6:50
creativity. We had the moment of just having that frustration and
6:55
a mixture of just not... Sometimes it's a little ego driven, but like not being
7:00
happy that the student did it. And then we just also were like, What's behind this?
7:05
And I think that's where we really got our momentum
7:08
and what has gotten our momentum for our school as a whole is really understanding the students and their uses of
7:12
this. I go back to I want to say it's folks
7:16
like James Lang: when people are doing this type
7:21
of thing, which is framed in all sorts of deficit
7:24
language of they're cheating, they're stealing, they're what have you,
7:28
they're often indicating things aren't working for them.
7:31
They're often indicating this is more of a sign for
7:34
help, a sign for a lack of trust, lots of
7:36
different things. And so very quickly, me and Autumm realized like, why
7:40
don't we try to find out? And so our first goal was to craft an email.
7:44
And I sent it out to students saying, Hey, this
7:46
tool exists. And I think some students may have used it and
7:49
we are interested in learning more about it in this
7:52
non-punitive way. We just want to understand what led you to it.
7:55
What might we be able to learn about why you
7:58
found your way to this tool? Nobody responded to that.
8:01
We hoped... nobody did. That was okay.
8:04
My partner recommended, well what if you did an anonymous survey?
8:07
And so we put that out to our students at the very end of the semester.
8:10
We probably got, I think, four or five students, and
8:14
this was at the end of the semester and during
8:16
break that before everything really exploded, before you were seeing
8:20
references to it in podcasts and in mainstream stuff, we
8:24
had three or four students who were saying like, yeah,
8:27
like we started to use this and we're using it in these ways.
8:30
And so we thought that was interesting. And that kind of jolted me to think: the
8:35
semester for us started January 9th.
8:38
This is before most schools start their spring semester.
8:42
And so we needed something in place. And so we developed a policy that we felt was
8:48
like... recognizing we don't really understand the fullest implications of
8:53
these tools. We don't want to just do a blanket ban and
8:57
be like, Nope, you can't use it under any condition.
8:59
And we wanted to create safe conditions that if students
9:02
use it, then they can identify that they
9:06
use it. So that can also invite questions and understanding and things
9:09
like that. I think the potential for it is as
9:13
vast as some of the challenges and the concerns around
9:16
it. Right. So there's a lot of knee jerk reactions.
9:18
There's a lot of valid reactions about the ways this
9:21
interferes with how students demonstrate their learning.
9:25
But I think there's lots of possibilities for us to
9:28
leverage it. If we can find versions of it that aren't steeped
9:33
in all sorts of exploitative practices. But I'll pass to Autumm for her take.
9:37
Yeah. So when I saw some responses that I thought potentially
9:42
could be synthetic text, my first thought wasn't like cheating.
9:48
My first thought, like Lance said, was curiosity.
9:51
It was like, wow, they figured this out. At
9:53
the same time, I did worry.
9:56
It's so new. It's such a new technology.
9:58
This is December of 2022.
10:01
It dropped on November 30th of 2022. The technology has
10:06
been around for a while now. We can go all the way back to the sixties
10:09
if we're just talking about chat bots.
10:12
ELIZA was in the sixties. Weizenbaum's ELIZA.
10:15
But in terms of the transformer technology, the idea of
10:19
using neural networks around large language models to
10:23
be able to create text so smooth and so clean.
10:27
And so it sounds so convincing, right?
10:30
That's been around for probably about two years now, but
10:33
you always had to pay for it. It's always been behind some kind of paywall or
10:37
part of some type of product. I mean just to open it up to the entire
10:41
world right at the beginning of finals for higher education.
10:45
That's not insignificant.
10:47
Whenever I talk about this, I always think it's really
10:50
important to put the context around it.
10:52
Yes, it's a huge jump in terms of technology, right?
10:56
In terms of the tech that is going on, the
10:59
interface, the way that you use it, the smoothness, all
11:02
of that. But a big part of the hype, a big part
11:05
of everything that's going on around this has to do
11:08
with the fact that it's free and the timing in
11:10
which it was released. Those are just two really big parts of it.
11:14
And especially in that moment, I was hearing all of
11:19
these news articles and things coming out about people being
11:23
really punitive with their students.
11:25
I read an article about a student who failed an
11:29
entire class when it was discovered that they had used
11:32
this tool. And it's my understanding also, there's really not a way
11:37
to prove without any doubt whatsoever that a student used
11:41
this tool. You can run the text through some of the detectors,
11:44
but those are flawed. They're flawed in my testing of this.
11:49
And I don't think anybody even tries to pretend that
11:52
they can return a 100% positive or 100% negative.
11:57
There's tons of false negatives and false positives.
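Autumm's point about false positives and false negatives can be made concrete with a little arithmetic. The sketch below is a toy illustration with invented numbers, not results from any real detector:

```python
# Toy illustration (hypothetical numbers): why "the detector flagged it"
# proves nothing on its own. Even a detector that looks accurate overall
# still produces false positives and false negatives.

def rates(tp, fp, fn, tn):
    """Return (false_positive_rate, false_negative_rate) from a confusion matrix."""
    fpr = fp / (fp + tn)   # human-written essays wrongly flagged as AI
    fnr = fn / (fn + tp)   # AI-written essays the detector misses
    return fpr, fnr

# Suppose a detector screens 1,000 human essays and 1,000 AI essays:
fpr, fnr = rates(tp=900, fp=50, fn=100, tn=950)
print(f"{fpr:.0%} of human essays falsely accused")   # prints "5% ..."
print(f"{fnr:.0%} of AI essays slip through")         # prints "10% ..."
```

Even at these (invented, optimistic) rates, a class of twenty honest students would on average see one falsely accused, which is the core of the argument against punitive use of detectors.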
12:00
So I really... I guess I say all of this to say I'm not surprised that the students, when we
12:04
sent them that email and just said, hey, we're just curious, did you use this?
12:08
Have you heard of this tool that nobody responded?
12:11
Because like I said, there were tons of articles and
12:15
news releases out there with people saying that they were
12:18
punishing students for using this tech.
12:21
So if I were a student and I used it out of curiosity
12:25
and I also think it's a little bit crazy to say that students wouldn't use it if I were a
12:30
student, I'd be curious. I'd want to try it out at least, and I
12:34
don't know if I would actually submit that work, but
12:36
I'd be tempted to, especially if it was the end
12:38
of the semester. And I was really busy and I had a lot
12:41
of pressure. I don't know of any academic integrity statement from any
12:47
university that mentions AI generated anything.
12:52
I don't know of any classroom policy that mentions any
12:55
of this kind of stuff. So I think it really does challenge us to think
13:01
about what we mean by cheating and to critically evaluate
13:06
and retake stock of what we mean, what's valuable
13:10
in an education. I felt really lucky that I was working for
13:14
College Unbound during this time so that I could think
13:17
about these kind of things with an amazing partner like
13:20
Lance and with a school that is student centered and
13:23
student focused. That's where I'm at with it.
13:25
With students right now, I guess I'll throw in at the end. My class policy right now is actually a little bit
13:29
broader than the college's
13:32
policy. I actually say it's fine if you want to use
13:35
it. Just tell me that you used it and describe how
13:38
you used it. I just think it's way too early right now to
13:41
be punishing students for it.
13:44
You mentioned something about a one credit course, and I
13:47
would love to hear more about that.
13:49
Yeah, absolutely. To just build off of Autumm's point about her policy,
13:54
like we created a policy that we put out as
13:56
this is our temporary policy; individual folks are welcome
14:00
to adjust as makes sense for their classrooms.
14:03
And I think that was, again, like we want to be both student centered.
14:05
We want to empower faculty to make the right decisions
14:09
on behalf of their students. That was another piece of this as I went into
14:14
the winter break in conversations me and Autumm had, I just
14:18
had this brain blast. That's Jimmy Neutron reference for folks that are interested.
14:22
I just had this brain blast of like quintessential way I
14:26
could help figure out this challenge at College Unbound
14:30
would be to do it in a way that was
14:32
student centered. And so, like, literally, I had this
14:37
idea, I got on my phone, I texted the provost
14:39
and I was just like, What about this for an idea?
14:42
What if we do a one credit class that is
14:45
filled with students who are going to play with, learn
14:49
about and really think about ChatGPT and other AI
14:53
generative tools? And through that class we can create a recommended set
14:58
of policies for institutional usage.
15:01
Instantly got back a thumbs up: let's do this. Which also meant that, oh, I have to figure
15:06
out this course. So that was my winter break.
15:08
And then I realized there was another iteration or rather
15:11
again, my partner in conversation came up with this really great insight of what
15:15
if you could also connect it to writing courses. And so we're doing this one credit course in Session
15:21
One. We have 17 week semesters, and we
15:24
do eight week courses in session one, and eight week
15:27
courses in session two, and then sometimes sixteen week courses.
15:30
So in Session one, I'm doing a one credit course
15:32
where we're going to develop a rough draft of policies
15:35
around usage for faculty and students. In session two, I'm
15:39
going to try to connect with students that are taking
15:42
our writing course and have them sign up for this
15:45
class, and it'll really be an opportunity to kick the
15:47
tires on the policy. So they'll be taking a writing class.
15:50
They'll be using this policy to inform how they're going
15:53
to use the AI generative tools.
15:57
And that will be a bit of like really trying
15:59
to figure out like, what are the holes in it, what
16:01
are the ways that it really works. And in conversation with the faculty teaching those courses as
16:05
well, so that by the end of the semester we'll
16:07
have had students play a central role in developing it
16:10
and testing it and putting forward the recommendations for it.
16:14
That's where we are with it, we're about three and
16:16
a half weeks into the one credit course. There's
16:20
about eight students in it and
16:22
it's been this rich conversation around like them getting to use
16:26
it and them getting to like really start to see
16:29
the answers it comes back with. And then also them delving into other content, other things
16:33
that are helping inform their opinions. The week by week things change because in the second
16:38
week we got the Time magazine report about how in
16:41
order to do content moderation of all of the horrible
16:45
stuff on the Internet that they scraped in order to
16:47
make this, OpenAI was paying Kenyan workers $2
16:51
an hour for content moderation, which is just another way
16:54
of saying they paid Kenyan workers $2 an hour to
16:56
be traumatized by like the worst of the Internet.
16:59
So every week there's these new things that help to
17:02
flesh out our thinking about it and conversations that we
17:05
have. They hear things like this and they're like, this needs
17:08
to be on like the UN's agenda. This is not correct.
17:10
Like the way they get really invested and start
17:14
to challenge their considerations. One of the earliest points that was just great
17:18
was I had students read some of Autumm's work and
17:23
raising some of those questions around what does it mean
17:26
to sign up and get an account with Open AI
17:29
where it asks for your name, it asks for your email and your cell number?
17:33
And we got into a discussion around like digital redlining.
17:36
And our students are predominantly students of color.
17:39
And so this generated... part of when I created the
17:43
course, or when I started to create the
17:46
assignments, the goal was for them to use or
17:49
engage with these tools, and I recognized they would have
17:53
to create accounts or they would have to have access.
17:55
And so I've offered my credentials for them to log
17:58
in and to use. And as a result of that conversation,
18:02
at this point I've probably had half the students ask
18:04
to use the credentials to use it so that they
18:08
don't have to give up their own personal information to
18:11
an entity that has 10 billion dollars invested by Microsoft and
18:14
is like gathering up all sorts of data on the
18:17
users. So yeah, that is where we are now
18:21
is we're moving into the point in the course where
18:24
like, besides playing with it, we're really thinking about what
18:26
would we recommend for usage. That's the next discussion we're starting to have.
18:31
Getting back to your question, Rebecca, in terms of how
18:33
we're using it with students, I have been pretty vocal
18:37
and I've written a couple of blog posts that you
18:40
can link in your show notes, really being critical of
18:44
the idea of using it with students. I'm really hoping that those faculty who do teach in
18:52
a discipline where it makes sense to use it, take
18:56
a pause, take a beat, and think critically about how
19:00
they're going to ask students to use it.
19:02
And I suggested some techniques that they could employ to
19:08
make it so that they weren't forcing their students to
19:11
use it. They weren't forcing their students to sign up for accounts
19:14
at the bare minimum. But I guess I just want to say that I
19:18
do recognize that. I think that's discipline specific. And of course, I
19:23
just love the idea of a course that is specifically
19:26
designed to gather student voice and get student input about
19:32
university policy, about college policy.
19:35
I can totally see it. That's a situation where, yes,
19:38
the students should be informed to the point where
19:41
they actually have experience with the tool so that they
19:44
can give informed input.
19:46
But I love the fact, Lance, that you created like
19:50
a shared account. So that way nobody's putting their personal information at the
19:56
account level, but also it muddles up and it creates
19:59
noise in terms of the questions that they're asking, right?
20:03
Because it's not just the creation of that account, but also the inputs that you're putting into it.
20:07
And so by having everybody share one account.
20:10
I think that does a greater good in terms of protecting students.
20:15
That's your influence at hand.
20:17
Makes my day. I thank you for that.
20:20
My next question is a little bit about what do
20:22
we do for instructors? Right.
20:24
What advice are we giving instructors?
20:27
How can they? How should they? How should they not?
20:31
What do we tell instructors about this new tool?
20:34
I personally don't think there's anything wrong with waiting before
20:39
you use it. So I guess, what do we tell instructors?
20:42
It depends on what the instructor is coming to us
20:44
with, right? So if they're coming to me and they're saying I'm
20:48
worried about cheating, that's a different conversation than I'm intrigued
20:52
and I want to use it. Right? So if it's I'm intrigued and I want to use
20:57
it, my first response might be, Do you really need
21:00
to use it? Do you really need to use it right now?
21:02
What are you doing with it? What are you teaching?
21:04
Is it directly related to what you're teaching?
21:07
But is there a way that you could use it and demo it for the students rather than making the
21:11
students have accounts? If you do want the students to have an account, could
21:15
you have a shared account or could you talk to
21:17
the students about their understanding of privacy and digital literacy?
21:23
Like I would say, if you're teaching like a digital
21:25
literacy, digital citizenship course, where are your students at?
21:30
Is this like a level two, three, four kind
21:33
of course? If this is intro, they might not have
21:35
a good understanding. Most people don't have a good understanding of digital privacy.
21:41
I just think that before you dive into using these
21:43
tools, you should have a good foundation of data sharing
21:47
and data collection and you should have examined some cases
21:51
of where things have gone wrong, data breaches and things
21:54
like that. You should have an idea of what kind of things
21:57
could go wrong. If they're coming to me and they're asking me about
22:00
cheating because they're worried about cheating, I usually try to
22:04
do some damage control around helping them to move away
22:09
from punitive approaches because I don't think they really do
22:12
any good. At the end of the day, I think they just
22:15
degrade our students' trust in us and our students' trust
22:19
in higher education. And I guess I try to talk to the instructor
22:23
and remind them how much is really built on that
22:26
trust, how much of education comes from that place, and
22:31
help them to realize that if they take a punitive
22:36
approach, they're sacrificing so much
22:40
more than they would be if they took a more
22:43
open approach, trying to understand where the students are coming
22:46
from and trying to figure out how this really aligns
22:50
with their outcomes and the things that they're trying to
22:53
do in their course. So I usually bring it back to some type of
22:58
helping them to articulate some type of policy for their
23:01
syllabus, because it all comes back to expectations, right?
23:04
It all comes back to them really thinking about in
23:08
their heart and what's going on with them, their expectations
23:11
for the course, the affordances that tools like ChatGPT and
23:17
DALL-E and all of these other generative tools can afford
23:21
students and helping them to articulate to students why it's
23:25
important and what they learn in the process.
23:27
In terms of ChatGPT, it's not a matter of we
23:32
want to create essays. That's not the point.
23:35
If that's the point, we're doing it wrong.
23:38
Right? The point is thinking through and being nuanced in your
23:42
thinking. So nuanced in your thinking that you're putting it down
23:45
on paper or on a screen and you're scrutinizing and
23:50
evaluating every word, every paragraph, every sentence to make sure
23:53
that things fit together and flow together.
23:56
It's that process of writing, not the product of the essay. And helping faculty
24:01
to have that conversation with their students and helping faculty
24:04
to articulate that meaning to their students in a way
24:07
that helps the student to understand, I think is so
24:09
much more powerful than wagging your finger.
24:11
And if you do this, we're going to punish you.
24:14
I'm going to turn it over to Lance. Let him talk about it a little bit.
24:17
100%. Everything Autumm just said.
24:20
I think the only area I would add is I
24:24
think there's some great value in them using it to
24:27
enhance some of their own work. And I think that's great with the caveats of the
24:31
potential concerns around privacy that we've already mentioned.
24:33
I think if they are going to do that, I
24:36
think it is important for them to also be citing
24:41
and identifying where their course, where their thinking is influenced.
24:44
And I come from this having worked with faculty
24:47
who have done the things that
24:49
they don't like their students to do. And so just really demonstrating transparency again in how they're
24:55
using it with their students. The other thing that comes to mind in using it
25:00
is and this is something, again, from conversations with
25:03
Autumm, is really emphasizing, no matter what they think of
25:08
it right now, I'm going to
25:10
take the quote from you, Autumm. This is the, quote unquote, dumbest that AI is
25:15
going to be. So I see a lot of dismissal and I see
25:18
a lot of it's fine, like it's fine. I'll catch them anyways.
25:21
And there's a whole other discourse around that approach or
25:24
around those concerns. But I think undermining it or not recognizing like now
25:31
with this research preview, it is getting better because we're
25:34
all training it to be better. And I think within that is really highlighting what I'm
25:39
starting to see. And I want to say it's Anna Mills
25:43
and a couple other folks have, Maha Bali I think,
25:46
has done this as well, where they're sharing
25:48
their dialogs and you're seeing through the questions in the
25:52
further development in iterations of their questions like the actual
25:56
dialog, you get some really interesting cool things and I
26:00
think that's a thing that is powerful and interesting and
26:03
valuable for faculty and for their students to be
26:07
thinking about. I'm influenced by Warren Berger, who's written a couple of
26:11
different books about questions, and I think this is one
26:13
of those opportunities for us to really think about the
26:15
power of questioning and what you need to ask good
26:18
questions. And so there's something within this that I think is a possibility for
26:24
no matter the discipline to really think about how like,
26:27
how do we ask good questions, how do we ask
26:29
meaningful questions and how do we refine questions as a
26:32
means of seeking knowledge. But in order to do that, we also have to
26:35
demonstrate some understanding in order to ask those deeper questions.
26:39
And I think there's a really rich opportunity there to
26:42
explore within all of this as well, from both faculty
26:45
and student side.
26:46
Yeah, that was part of what I was thinking with
26:50
instructional design and with my students: even
26:54
in order to use this effectively as a tool, you
26:57
need to know what questions to ask. And it's the same thing when you're doing analysis in
27:02
instructional design. When you start, you need to know what the important
27:05
questions are, right? Is this a training problem?
27:08
Who are the students? What are the characteristics?
27:11
Like all of these different analysis pieces are the questions
27:14
you need to ask, and if you don't know what those questions are, the tools are not useful.
27:19
And so I think there is an inherent skill in
27:22
learning how to ask the tool the right questions.
27:26
I think there is. And it's also different even though it's very, very
27:32
smooth. Right. And it sounds very human.
27:35
There is a big difference between asking a question of
27:38
a human and asking a question of a large language
27:41
model like ChatGPT.
27:44
If anybody is more interested in this, I was blown
27:47
away with the prompt engineering stuff that's out there, which
27:51
is all about how to ask it questions and understanding
27:54
the different ways that you can ask it questions.
27:57
It was really interesting to think about how you interface
28:01
with it and how it's different than maybe interfacing in
28:04
a human conversation.
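The iterative questioning the speakers describe can be pictured as building up a chat transcript, where each refined follow-up travels together with the earlier turns. This is a purely hypothetical sketch (the refine helper and the prompts are invented for illustration, not part of any real API):

```python
# Illustrative sketch of iterative prompt refinement: each follow-up
# question is appended to the prior exchange, so a model receiving the
# transcript answers in context. Names and prompts here are hypothetical.

def refine(history, follow_up):
    """Return a new chat transcript with the follow-up question appended."""
    return history + [{"role": "user", "content": follow_up}]

# Start with a broad question, then narrow it in later turns.
chat = [{"role": "user", "content": "Explain needs analysis."}]
chat = refine(chat, "Now focus on identifying learner characteristics.")
chat = refine(chat, "Rewrite that as three questions a designer could ask.")

# Unlike a human conversation, nothing is remembered between calls:
# the accumulated transcript *is* the model's only memory of the exchange.
print(len(chat))  # 3 user turns so far
```

The design point this illustrates is the difference Autumm names: with a person you can gesture back at shared memory, while with a large language model every refinement has to be carried explicitly in what you send.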
28:06
This is probably of no value, but this whole conversation,
28:09
just like it just had me flashback to Hitchhiker's Guide
28:13
to the Galaxy when they're waiting for the answer to
28:17
life, the universe and everything. Like, I feel like there's like there's something there for
28:22
this conversation as well. It's the same dynamic.
28:27
Looking for all the right answers but not necessarily the
28:29
right questions or realizing what the machine is really built
28:32
for.
28:33
Lance, you mentioned something interesting that I think has caused
28:36
an interesting conversation on Twitter as well, and that's around
28:39
citing. And I actually brought it up a little bit in
28:44
a blog post that Maha Bali had put
28:47
together, challenging us not to anthropomorphize a computer system. But if it's
28:53
worth citing, are we not just doing that?
28:56
How does that make sense?
28:58
I guess I would say I don't know that
29:01
the citing itself is what anthropomorphizes the tool.
29:06
I think there's lots of other ways we're doing that.
29:09
And so citing might feel like one part of that
29:12
conglomeration. I would say citing it in the traditional sense
29:16
of citing feels like the fix for now as
29:20
we're still trying to figure things out.
29:23
I think all of it hearkens to trying to make
29:25
explicit where information comes from.
29:29
I think citing is a good start.
29:32
I think if we were to lessen that or try to
29:36
de-anthropomorphize it, it makes me think
29:41
about... what is it... the Phipps article again, it is something
29:45
Autumm has shared with me in our conversations.
29:47
The Phipps and Lanclos citation approach, which is: you
29:52
identify that you were using this, and that you understand
29:56
the repercussions of using a tool that is
29:59
in part built off of the illegal copying or
30:04
use of copyrighted works and also the various exploited labor.
30:09
I feel like that's a way of threading.
30:11
It's like you're citing it. You're citing where this information came from.
30:15
And because you cannot tie it to individuals, you also
30:18
recognize that tool is an exploitative tool of sorts.
30:23
I think, in my head, that's one
30:25
way I've been thinking about it in
30:28
this context. They offer that citation approach, I think, slightly
30:33
to be provocative, but it's something like: nope, I think
30:36
that's what I will be using and will be encouraging
30:38
others to use. We can't just hide behind a citation, because I think
30:43
maybe that's what it is. It doesn't anthropomorphize necessarily, but it hides.
30:51
It hides what really goes into that answer, both the technical
30:57
and the human cost.
31:00
Yeah. So I do have a blog post pulled up and
31:02
I can read the citation example that Phipps and Lanclos
31:07
propose. So this is what they're proposing as a
31:12
potential citation. And they say: we offer the following text not because
31:16
we think that the relevant people will actually use
31:19
it, but because we think that they should.
31:21
And so it's: this presentation/paper/work was prepared using
31:26
ChatGPT, an AI chatbot.
31:29
We acknowledge that ChatGPT does not respect the individual
31:33
rights of authors and artists and ignores concerns over copyright
31:37
and intellectual property in the training of the system.
31:41
Additionally, we acknowledge that the system was trained in part
31:45
through the exploitation of precarious workers in the global South.
31:49
In this work I specifically use ChatGPT to...
31:53
then they have some ellipses where you would fill in
31:55
the way that you actually used the work.
31:58
It's powerful. It's really powerful.
32:01
It really makes you stop and think, Should I actually
32:04
use it? If I'm going to acknowledge all of these horrible things
32:07
about it? I'm not sure that people would.
32:10
Lance is saying, Lance is embracing it.
32:12
And I think that's amazing, right?
32:14
But I think.
32:15
It also means I'm probably an irrelevant person rather than
32:18
a relevant person.
32:21
Oh! I don't know if you are, though.
32:24
I don't know if you are, though. I think what they mean by that is people who would
32:28
use it. And so if
32:31
you're willing to use something like that, I think that
32:34
definitely says something. In talking with them about the article, I proposed something, so
32:38
I probably need to put this disclaimer out
32:40
there. If you go to the article, if you go to the blog post, you'll see at the top they
32:44
acknowledge that I had some input on this. I
32:47
didn't write a word of the blog post, none of it.
32:51
But we had some conversations. We had some conversations about citation and about
32:56
different approaches to citation. And one of the things I put forward is you
33:01
could use this as activism if you really felt very
33:05
strongly the ethics of these systems and you wanted to
33:09
make a point, you could add a citation like this
33:11
to your presentation or your paper or whatever, and then
33:16
just have a use of ChatGPT that is so minimal,
33:22
right? Like I used it like a thesaurus and I changed
33:25
out this one word. So I used it just so that you could use
33:30
the citation to point out the abuses that can happen
33:33
through it. It's really powerful. It could be used in lots of different kind of
33:38
ways. It's also so powerful because it's using the very thing
33:42
that ChatGPT obscures. So ChatGPT obscures citation, it fabricates citation.
33:47
We don't know what is inside of these language models.
33:50
More than likely, there is copyrighted work in there.
33:54
We don't know that for sure. It's just we really don't know what's inside of the
33:57
language that it's trained on. And so this in a way, weaponizes the idea of
34:02
citation to really speak back to and bring some light
34:07
and bring some acknowledgment to some of the darker things
34:10
around this, around this particular tool.
34:15
I think that's fascinating. I like the power in that citation, that opening citation.
34:22
One of the ways that I've been using it, which
34:25
is totally not using it as a data lookup, I
34:30
have text that's in past tense that I need to
34:33
move to present tense and it's making it a whole
34:36
lot faster for me to write it because I can
34:39
take the stuff that I wrote in past tense, plug
34:42
it in and say, give me this in present tense,
34:44
and then I can work with the present tense version
34:47
that it gave me. And because I wrote the past tense, I own the
34:51
present tense. It's just saving me time.
34:54
So again, I'm using it strictly as a tool.
34:57
Now I'm questioning how it is now taking that data.
35:02
And I found it did do that to me because
35:04
I had asked it a question about a town that
35:07
we visited called Harrington Harbor.
35:10
And originally it said that this place didn't exist and
35:14
there was no knowledge of it. And then later, when I asked it about Harrington Harbor,
35:18
it came back and quoted what I had written about
35:21
it. Right. So it's like that.
35:24
Oh, now your database thinks Harrington Harbor is the information
35:29
that I just fed you about Harrington Harbor, which actually
35:33
shows you where it's getting the information from and it
35:38
isn't even fact checking it.
35:41
You can't argue with it. Wikipedia has systems in place where
35:45
things can be at least somewhat quality controlled in some
35:49
way, or at least say whether it's been quality controlled
35:52
or not. Whereas, because ChatGPT is a black box,
35:57
there's no looking inside to see where the information came
36:01
from. I think that's fascinating.
36:03
Just going back to the point that it's changing
36:08
all of the time, right?
36:11
And like it seems like every single day there's a
36:13
new thing coming out, speaking to what you're talking about,
36:16
Rebecca: Amazon is now warning its employees to stop using
36:23
it because they're seeing some of their trade secrets,
36:26
in terms of coding, showing up in answers from ChatGPT,
36:32
because they're using ChatGPT to help them code.
36:36
And so ChatGPT is not keeping those secrets and
36:41
is then absorbing that and then sharing it with other
36:45
users who are out there. I just put a link in the chat that you
36:50
can maybe link in the show notes, but yeah, yeah,
36:54
because it's a learning machine.
36:56
We've been talking about learning technologies and it's coming back
36:59
to ID stuff and instructional design and technologies.
37:02
For years we've been talking about learning technologies.
37:05
We haven't been talking so much about technologies that learn,
37:07
which is a little bit different.
37:10
This makes me so happy in some ways. I'm not going to lie, there's a lot there;
37:15
there is some Schadenfreude going on right now.
37:18
I'm not going to lie, to the point about the
37:21
responses. This is it.
37:24
If it gets edited out, I completely understand.
37:26
But I dropped this line this week when we were having
37:30
a discussion about it. This just came to my head, like, fluid,
37:34
which is that ChatGPT responds with all of the
37:40
self-confidence of a mediocre white dude.
37:43
It just does and I just ever since that came
37:47
to mind, that's all. Okay. I can only hear ChatGPT responses and the voice
37:52
of somebody named Brad.
37:57
Yeah.
37:58
Sorry to all of the Bradleys out there, right?
38:00
That's right. That's right. Disclaimer...
38:02
I think it's Chad, actually.
38:03
I'm sorry.
38:03
ChadGPT.
38:05
Because we were talking about it in the context
38:08
of its errors and things like that.
38:10
And that came to me.
38:13
And it just sticks with me now. That's all I can think about now.
38:17
I love it.
38:21
I'll ask one more sort of question in this area
38:24
and then I think we'll close off. And that is we've talked about guidance for students and
38:30
guidance for instructors. What about instructional designers?
38:33
What do you think instructional designers should be doing with
38:36
this technology?
38:37
Well, the first blog post that I wrote on this
38:40
topic was actually me wondering if it could be an
38:43
instructional designer. There are going to be labor implications of this technology. I
38:48
think everybody is clear on that, right?
38:52
OpenAI is doing research right now to try to better
38:57
understand the labor implications because their mission, they say, is
39:01
to make sure that AI is basically a net positive
39:05
in the world rather than a negative in the world.
39:08
And so they recognize and realize this is going to
39:10
have labor implications and trying to foresee and understand that
39:14
is really important if we're going to be responsible or
39:16
something like this. So I was just curious.
39:18
I was just curious if it could, like, do my
39:21
job. So I went in and pretended that I was a
39:25
physics professor, though I know nothing about physics.
39:28
I couldn't ask any discipline specific kind of questions, but
39:31
I just asked really general questions, like, my department chair
39:35
wants me to take my learning online and I'm skeptical
39:38
about it. I don't know if this is really for me.
39:41
And just tried to see what it was, what kind
39:45
of answers that it would give me. So going back to the idea of prompt engineering, one
39:49
of the fun things you can do is role play with it. You can ask it to role play with you.
39:53
And so I started off by saying, You are an instructional designer and I'm an instructor and you...
40:00
If anything, I think that's an interesting approach because I do think
40:03
that there are a lot of very standard answers in
40:07
instructional design. There's a lot of best practice that it's fine, I
40:13
guess, but I've always pushed back against it because I
40:16
think I'm not convinced that it is best practice.
40:19
I don't know who really makes that decision.
40:21
There's just a lot of it out there. And so I think it's kind of interesting just to
40:26
see what kind of vanilla answers it gives you to
40:29
instructional design problems and then say, okay, well can we
40:34
be more creative than that? Can we go beyond this knowing that these are maybe
40:40
some of the most generic answers that are out there?
40:44
So for ID folks, I mean, I think it's one
40:48
of those like really having to spend some time playing
40:53
with it, whether it's ChatGPT or really any of
40:56
these tools, both to understand their limitations and possibilities and
41:01
evolution and to be prepared for the questions that they
41:05
will get from faculty where I see it impacting potentially
41:09
ID the most is probably in a lot of the
41:14
OPMs. You're thinking about OPMs.
41:16
OPMs I can see the large scale online institutions.
41:22
Your SNHUs, your ASUs, your Western Governors, looking to leverage
41:27
this in a way that reduces the amount of instructional
41:32
design folks that they rely upon or that they contract
41:36
out and using this as more systematic means of updating
41:40
their courses. I can see the pathways to that because in some
41:44
ways it feels very much like textbooks, textbook publishers in
41:50
their methods. We've got to sell more books, and so we've got
41:52
to come up with a new edition every two years
41:55
and we'll just switch the chapters around. There's some ways I can see some of the larger
41:59
scale ones being like, We've got to refresh our courses.
42:01
So, hey, ChatGPT, update it with this kind of flavor.
42:07
Like I could... I can see a series of APIs and plug-ins
42:11
being used to do a lot of that.
42:14
And so I would say anybody that
42:17
is in that space, just understand it and see where
42:21
that starts to pop out, because I just have trouble
42:25
believing that those larger entities, which work
42:29
on this kind of assembly line
42:31
where efficiency is always the better answer than anything else,
42:36
aren't going to start thinking about it and
42:39
using it.
42:39
I think another big role for instructional designers right now
42:42
too, and I say this on January 30th of 2023
42:48
because it's changing all the time, like tomorrow, like it's
42:53
going to be different. I think your administrators need guidance on top of your
42:57
instructors. Your instructors are going to be coming to you and
43:00
asking you for help with this. Just all the different roles of the instructional designer.
43:05
There's the role of you as a content creator and
43:08
as a designer who's creating stuff for people.
43:10
There's the role of you as somebody who gives advice
43:12
to faculty, but then most of us have either interactions
43:18
with some type of administration or we have a director
43:21
who has direct access to administrators.
43:24
And I think all of us are very busy.
43:27
We have a lot going on, right.
43:29
And we don't have necessarily all the time in the
43:32
world just to concentrate on generative AI.
43:36
So I think they're going to be coming and asking
43:38
for advice and asking for guidance in terms of what
43:43
to do with all of the social change that's going
43:46
to be happening around what the impact of these tools
43:49
is going to mean, so just educating yourself and keeping on top
43:53
of that stuff that's changing, even if you can't stay
43:55
on top of it for every single day because it
43:57
is changing every single day. If you can stay on top of it once a
44:01
week, once every two weeks, whatever, to kind of keep
44:04
your eye on what is happening and how it's changing,
44:07
I think that it's going to be really important for
44:10
those who will be coming to you and asking you
44:12
for advice, which will be faculty, but more than likely
44:15
it will be administrators, too.
44:16
And I think there's an interesting point about whether or
44:19
not it could replace instructional design.
44:23
I don't think it can, but I do think it
44:26
does some things very interestingly, and that's I've been asking
44:29
it all of the questions, like I've generated some chapters
44:32
from my textbooks, but I'm using it saying, okay, here
44:37
are the questions I want this chapter to answer.
44:40
What do you say? And then it gives me an answer. And I'm like, actually pretty good.
44:43
I'll take that.
44:45
Hey, I will say... should I say this? Yes,
44:49
let's say this, just say it.
44:51
I will say I wrote that first blog post trying
44:55
to figure out if it could replace
44:58
me with a little bit of
45:01
tongue in cheek, a little bit of a haha thing.
45:03
But then I realized the larger discourse of people who
45:07
really are looking at the potential impacts are not thinking
45:12
about that. They are thinking about whether it could replace teachers.
45:17
So instructional designers, without teachers, that's a really scary prospect
45:23
if you ask me. And that's the bigger thing to be concerned with.
45:28
Yeah, 100%.
45:31
Do you think that's where it's going? I think the thing I see, and this is
45:36
the thing that, whether it's AI or
45:40
robots, seems to be sometimes missed or
45:44
under-considered, is that I hear folks saying there are
45:48
some jobs that robots or AI won't ever replace. It's not that
45:52
they'll replace them en masse.
45:54
What it will mean is that you as an
45:57
individual instructional designer or plumber or whatever your profession, you
46:03
will be able to do a lot more, more quickly.
46:08
And so if previously you were only able to work
46:11
with ten faculty, you can now work with 20 or
46:14
you can work with 100. That's the thing that I'm seeing.
46:18
So it's not that you replace all instructional designers, but
46:21
it means the expectation of one instructional designer
46:26
will be much more expansive, or much more will be expected,
46:29
than what it was previously.
46:32
And we see that within our technologies.
46:35
Over the last 50 years, almost all
46:38
workers have become increasingly more productive.
46:42
But we keep asking more and more of them and
46:44
of course, paying them less and less. And so I think that's the thing, what this
46:49
shows, or what this creates the opportunity for. I've
46:54
seen and was playing around with this in December as
46:56
well when I was doing some advising, I was like,
46:59
What if I use the machine to help me get started? So rather than spending 5 hours of like hemming and
47:04
hawing, I used it as a like, let's see what
47:06
I can get here and start running more quickly.
47:08
And I think that is where the challenge is.
47:12
It's not going to be replacing all. It's just going to make the need for these roles fewer and
47:16
fewer. And I think when we talk about replacing teachers, like
47:19
to me in many ways, the big scale instructional
47:23
places that have 130,000 students, they pretty much replace teachers
47:28
there. Right. If you teach at those schools, you're not doing curriculum.
47:32
You are grading and you are doing discussions.
47:35
And I think they're doing that because they need to
47:37
at least justify that you actually have an interaction with
47:40
a human. Well, yeah, they will replace those, but meanwhile,
47:43
much of their staff is not faculty, but is
47:46
actual staff, like instructional designers.
47:48
And now it's like, Oh, we can maybe even do
47:51
less. And so I think I don't want to be like,
47:53
Oh, we're all doomed because of robots and AI.
47:55
But I think that is the threat that they pose.
48:00
And I think in industries we can start to see
48:02
that happening.
48:04
That's right. And yeah, I also don't
48:10
want to jump on the "robots are coming for our
48:13
jobs" pedestal, like we know that doesn't end well.
48:17
And it's never 100% true. But what you just talked about, Lance, isn't just job loss,
48:22
right? It's job shifting. If we have 50 instructional designers right now and each
48:28
one of them are working with ten instructors, if each
48:32
one of them can work with 50 instructors.
48:34
Yeah, that's a different situation.
48:38
It will be interesting to see how it plays out.
48:41
I guess the big takeaway is nobody really knows.
48:44
But one thing that we do know is that there will be some kind of labor implications.
48:49
There could also be job creation around this as well.
48:52
Right. Knowing prompt engineering or knowing how
48:55
to train a language model, those kinds of things could
48:58
end up being marketable skills that we
49:01
might need going forward. So I don't want to paint it as all gloom
49:05
and doom, but there will probably be upset and there
49:08
will probably be some reskilling that is needed and there
49:11
will probably be big changes.
49:13
I want to say thank you to both of you
49:16
for coming in and joining this little thought experiment around
49:20
ChatGPT and what it might be doing this semester.
49:24
I am really curious to see how all of this
49:26
is going to play out and where we're going to
49:30
be at come September, right when we go into another
49:35
semester after having had the summer as well and a
49:39
little bit more time for things to play out.
49:42
But thank you very much for your willingness to come
49:44
on and chat with me about Chat.
49:47
Thanks so much for having us, Rebecca.
49:49
Thank you.
49:51
You've been listening to Demystifying Instructional Design, a podcast where
49:54
I interview instructional designers about what they do.
49:57
I'm Rebecca Hogue, your podcast host. Show notes are posted
50:01
as a blog post on Demystifying Instructional Design dot com.
50:05
If you enjoyed this podcast, please subscribe or leave a
50:07
comment on the show notes blog post.