I always appreciate hearing about ChatGPT experiments and the thoughtful analysis of process and outcomes. I have to say I was hoping for a part 3 that would discuss in more depth the question posed about authorship, and perhaps some other issues with ChatGPT as novelist. Firstly, as you noted, how do we sort out the authorship/editor roles? This concerns me because we have already seen students submitting ChatGPT's work as their own, but also because the roles may flip completely, with ChatGPT as the creator and humans serving only as editors. How does this play out in the future of master's theses and PhD dissertations? Which brings me to a second, related concern - a standardization of writing. The comment below, about a different person providing a different prompt and yet ending up with similar stories and titles, raises a question about the limits of ChatGPT's creativity and style. A third concern is trust. Perhaps a student is tasked with a writing assignment and asks ChatGPT to write a 5,000-word creative story in the style of John Wyndham. The student didn't actually read the assigned Wyndham novel, so they cannot adequately determine whether the style is correct, and they submit ChatGPT's creation without evaluation. We are putting a lot of emphasis on experts being able to drive and evaluate ChatGPT's responses, but we may not always have that capacity. Thank you.
Thanks, Marcella
Maybe there will be a part 3 at some point :)
These are all things that I know a lot of educators (myself included) are grappling with. One of the more intriguing and, I think, important things to emerge is the set of questions around how we teach, what the nature of learning is, what role evaluation plays in learning, and how we approach these in a world where AI is capable of producing extremely convincing, human-like responses -- and one where we can't turn the clock back.
Clearly, if we are setting assignments that are premised on no access to generative AI and some students are tempted to find shortcuts, we have a challenge -- especially if there's no way to tell whether the shortcut has been taken. I still haven't seen a good way to approach these kinds of challenges, despite being part of conversations addressing them.
Similarly, if we are setting assignments that are designed to stretch and develop a student's understanding and abilities and they are able to use AI as a proxy in undetectable ways, we are failing as educators.
But then the question is, where does responsibility lie for ensuring effective learning -- with the student or the educator? My own view is that much of it is on the shoulders of the educator, meaning that we have to find ways to continue to support students and add value to society through learning and education where the landscape around what works and what does not has irreversibly changed.
And sadly -- or perhaps excitingly -- there are no easy answers here!
This is so fun! We did something similar when I challenged ChatGPT to a story battle and had a friend wrangle something out of it using the same plot (you can read her experience with it here: https://shonistar.substack.com/p/behind-the-scenes -- scroll down to Heidi's section). I feel like you got better results, maybe because it was an earlier version, or maybe thanks to your brutally honest feedback, but the style of the stories is actually quite similar, and the titles!
Why wouldn't you give feedback like that to a student, out of interest? Too personal? Do you think it would have the opposite effect on a human to the one it had on the machine? Maybe put them off writing and iterating? That's interesting in itself.
Ha -- funny you picked up on that! There's an art to providing feedback that both affirms and encourages learning and progress while not creating more barriers to learning (such as offending or crushing the student) -- this is especially important in an environment where there's a very wide range of abilities, learning styles, and interpretations of modes of communication! Plus, as soon as you move away from the "master to be obeyed" professor and the "acolyte who obeys" model of education, there's an important onus on being able to explain and justify feedback.
This is all good and important -- but it also takes a lot of energy, and a lot of internal work to think through the consequences of different modalities of feedback -- hence the comment :)
We should all accept feedback like a computer! We'd probably get further, faster.