This is not your "traditional" prompt engineering!
Even though ChatGPT is just a few months old, it's already shaking up what it means to "prompt engineer."
I’m putting the finishing touches on ASU’s first-ever course on ChatGPT. And I must confess, I have some angst over it.
The course — Basic Prompt Engineering with ChatGPT: An Introduction — is designed to develop professional skills in using ChatGPT and other AI chatbots. But as I’m discovering, what “professional skills” means here is a shifting target.
Back in the halcyon days of pre-November 2022, “prompt engineering” was primarily the domain of coders and computer scientists. It was part of the toolkit for designing and developing Large Language Models (LLMs) and interfaces like ChatGPT so that they gave desired responses to natural language inputs.
In other words, prompt engineering was all about making platforms like ChatGPT better.
Then everything changed. As ChatGPT and other AI chatbots swept the world, there was a clear shift: prompt engineering stopped being about creating better chatbots and became about being better at using them.
The result was … confusion!
We’re still in a transition period where, to some, prompt engineering means getting under the hood of LLMs, while to others it means developing the skills to use them effectively.
I suspect, though, that it’s the latter definition that will ultimately win out.
Employers are increasingly going to want to hire people who can use ChatGPT and other AI chatbots for everything from basic admin to developing and implementing business plans. They’re not going to want LLM builders, but smart and capable LLM users — users who can “engineer” the prompts and the resulting interactions to get the most out of them.
This, of course, is the thinking behind the ChatGPT course we’re offering. We wanted to ensure that students from any major, discipline, area of expertise, or background could develop professional ChatGPT abilities and demonstrate their skills to future employers.
But the road to developing the course in such a shifting market of ideas has not been easy. And perhaps my first mistake was to ask ChatGPT to create the bare bones of the syllabus!
Actually, I take that back. This has been an incredible experience in working hand in hand with ChatGPT to create a unique and high-impact course. We are teaching skills that didn’t exist a few months ago while using teaching and learning techniques that are just as new. We’re actively using ChatGPT as a brainstorming partner, a co-designer, a personal tutor, a learning environment, and a teaching assistant. And I’m constantly being blown away by what is now possible.
The problem was, when we first developed the course, ChatGPT created an amalgam of traditional and new versions of prompt engineering, and I’ve been spending the past few weeks untangling this.
The good news is that we’re almost there. We have a working definition of prompt engineering as “the art and skill of crafting, optimizing, and employing everyday language to effectively harness the power and capabilities of Large Language Models and AI chatbots, leveraging their potential and making them useful and accessible across a wide range of professional and personal situations while ensuring accurate and valuable outcomes.”
This, to me, captures a growing understanding of prompt engineering being a broad set of essential professional skills rather than a domain of computer science or AI expertise. And yes, ChatGPT did have a hand in crafting the definition!
We also (finally) have a set of learning objectives that reflect this definition. As a result, students completing the course should (amongst other things) be able to:
Describe the benefits and limitations of LLMs and AI chatbots in professional and personal settings.
Apply ambiguity reduction in formulating prompts.
Apply constraint-based prompting to specific tasks.
Demonstrate comparative prompt formulation.
Develop simple prompt templates that can be applied to multiple questions or situations.
Develop multi-step prompt templates.
Train ChatGPT to respond to prompt templates in a generalizable way.
Apply the RACCCA approach to evaluating prompt responses (Relevance, Accuracy, Completeness, Clarity, Coherence, and Appropriateness).
Provide a metrics-based evaluation of prompt responses.
Iteratively improve prompts to provide better responses against a set of criteria.
Comparatively evaluate different prompts focused on similar outputs.
Describe emerging developments around the capabilities and uses of LLMs and the technologies that are making use of them.
Describe and utilize an array of innovative prompt ideas and structures.
Explain the potential benefits and risks/ethical concerns associated with LLMs and AI chatbots.
Describe the concepts of responsible innovation, responsible AI, and principled innovation, and how they apply to LLMs and AI chatbots.
Apply ethical practices to the use of LLMs and AI chatbots.
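To make objectives like “simple prompt templates” and “constraint-based prompting” a little more concrete, here is a minimal sketch in Python. It assumes no particular chatbot API; the function name and fields are illustrative, not taken from the course — it simply assembles the text a user would paste into ChatGPT.

```python
def build_prompt(task: str, audience: str, constraints: list[str]) -> str:
    """Fill a fixed template with a task, an audience, and explicit constraints.

    The same template can be reused across many questions or situations --
    the core idea behind a simple prompt template.
    """
    constraint_lines = "\n".join(f"- {c}" for c in constraints)
    return (
        f"Task: {task}\n"
        f"Audience: {audience}\n"
        f"Constraints:\n{constraint_lines}\n"
        "Respond clearly and concisely."
    )


# One template, two different situations:
prompt = build_prompt(
    task="Summarize this quarterly report",
    audience="a non-technical manager",
    constraints=["Use at most 150 words", "Avoid jargon"],
)
print(prompt)
```

The constraints list is where constraint-based prompting comes in: each bullet narrows the space of acceptable responses, which also makes the output easier to evaluate against criteria like the RACCCA set described above.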
We’re still finalizing this list of competencies — the course is still a few weeks away after all! But it does mark a big shift from thinking about prompt engineering as part of developing LLMs to a skillset associated with using LLMs. And this is a shift that I think will increasingly come to define prompt engineering as we move forward into a brave new world where LLMs are used everywhere.
In the meantime, I am still angst-ridden. Even with the shift in meaning, there remains no established set of professional prompt engineering skills, which means that this is still a moving target.
And I worry that we’ve missed a trick here, or that someone else will come up with a vastly better set of skills. Or maybe in six months’ time the term “prompt engineering” will have been relegated to the trash can of bad ideas.
Who knows. In the meantime, what is clear is that our students need to be adept in using AI chatbots like ChatGPT — no matter what we call these skills.