By: Jeff Cain, EdD, MS
ChatGPT is prone to factual errors.
ChatGPT may portray biases.
ChatGPT may become a crutch to students who will lose the ability to think.
Those are just three of the many concerns educators have expressed since ChatGPT was released to the general public. Some have gone as far as declaring the onset of a crisis for academia that will forever change higher education and how we approach teaching. While a tool unlike any before it certainly carries implications, I would argue that ChatGPT and other large language models (LLMs) don’t change the fundamental nature of teaching. They simply shine a spotlight on facets we’ve always deemed important but now must pay closer attention to.
Most educators will say that the heart of a college education is not merely an accumulation of facts and knowledge, but learning to navigate a world of competing ideas, uncertainties, and unknowns. Doing so revolves around the act of questioning.
- What questions to ask
- How to ask the questions
- How to evaluate the answers to those questions
Those three bullet points have always been fundamental to reasoning and problem-solving, and they may become even more important in the future as artificial intelligence platforms like ChatGPT and other LLMs become increasingly prevalent.
- Knowing what questions to ask
In many fields, the ability to formulate the right question is crucial for making evidence-based decisions. Price and Christenson highlight why questions are at the center of this process.1 Concerns about ChatGPT’s tendency to provide incorrect information are legitimate, but ChatGPT doesn’t have a list of facts. It merely knows which arrangements of words make plausible sentences. It is foolish to ask ChatGPT a question it is unequipped to answer and expect a satisfactory result. You may ask it to generate a summary from a transcript or make a paragraph more concise, but don’t ask it to do something another digital platform can do better.
- Knowing how to ask the questions
Pharmacy schools teach basic patient interviewing and communication skills for a reason.2 Knowing how to phrase questions in a manner that elicits the pertinent information is just as important when using LLMs as it is when interviewing patients to understand conditions and treatment options.3 “Ask better questions to get better answers” is a simple paradigm to follow.
The two figures below illustrate how the response to a broad question differs from the response to one with specific details. In this particular example, Figure 2 shows a precision that is lacking in the response in Figure 1.
Figure 1: Example of ChatGPT’s response to a broad question
Figure 2: Example of ChatGPT’s response to a detailed and more specific question
- Knowing how to evaluate answers to the questions
Ultimately, we want to equip students with the skills to make quality, evidence-based decisions. While that starts with asking the right questions in the right ways to gather the right data, the crux is being able to vet the information and evaluate its quality and usability. ChatGPT is known to make all types of errors: logical errors, reasoning errors, math errors, syntax errors, and factual errors. It can also provide biased and ethically questionable responses.4 However, it is ultimately our responsibility to develop the foundational knowledge required to recognize those errors. Moreover, it is an opportunity to teach students the critical thinking skills necessary for evaluating information. I believe ChatGPT will not reduce our students’ ability to think and reason; because its responses can be unreliable, it is more likely to force students to examine them seriously. That’s what we want them to do. Right?
There are a variety of ways of teaching students how to “frame questions for AI and evaluate responses with professional skepticism.” Although it is beyond the scope of this article to delve into all the pedagogical details and nuances, a few examples include:
- Modeling ChatGPT use during class by asking a course-related question, refining the query, and evaluating the response
- Asking students to generate multiple versions of a question to enhance accuracy of the output
- Providing students with a sample ChatGPT response and having them evaluate and revise it
While ChatGPT may be a challenge to some disciplines (eg, writing), it can also serve as a valuable tool for pharmacy educators seeking to bolster their students’ capacity to ask, answer, and evaluate questions. As an Academy we need to use and experiment with ChatGPT so that we can guide students toward effective and responsible use. Is your institution preparing you for this? Have you incorporated ChatGPT into your courses? Please share your innovative ideas and examples so we can all learn together!
- Price CP, Christenson RH. Ask the right question: A critical step for practicing evidence-based laboratory medicine. Ann Clin Biochem. 2013; 50(4): 306-314.
- Trujillo JM, McNair CD, Linnebur SA, Valdez C, Trujillo TC. The impact of a standalone, patient-centered communication course series on student achievement, preparedness, and attitudes. Am J Pharm Educ. 2016; 80(10): Article 174.
- Lipkin M Jr, Quill TE, Napodano RJ. The medical interview: A core curriculum for residencies in internal medicine. Ann Intern Med. 1984; 100(2): 277-284.
- Borji A. A categorical archive of ChatGPT failures. arXiv preprint. 2023; arXiv:2302.03494.
Jeff Cain, EdD, MS is an associate professor and vice-chair in the Department of Pharmacy Practice & Science at the University of Kentucky College of Pharmacy. Jeff’s educational scholarship interests include innovative teaching, digital media, and contemporary issues in higher education. In his free time, he is dad to a pole-vaulting daughter, an obstacle racer, an extreme trail ultramarathoner, and is president of For Those Who Would, a 501(c)(3) charity in the adventure and endurance racing communities.
Pulses is a scholarly blog supported by Currents in Pharmacy Teaching and Learning