Asking a question is often thought of as a way to solicit information or, in some situations, to assess what one already knows—as in taking a test. Research from the learning sciences, however, has shown that questions can benefit learning in other ways as well. For instance, we can use questions to practice recalling important information or to help us think more deeply about a topic.
Questions can also serve another purpose: a little-known learning technique called pre-questioning. It involves guessing the answers to questions about an unfamiliar topic and then learning the correct answers. For example, imagine you are learning about the theory of relativity. Before a lesson on the topic, you might try to guess the answers to questions such as, “What is the speed of light?” or “How does time change for an object moving very fast?” You make your guesses, then proceed with the lesson, during which you learn the correct answers.
A growing body of research has found that pre-questioning can substantially improve one’s ability to pay attention to, learn from, and remember the content of textbooks, videos, and lectures. Given this phenomenon—which is formally known as the “pre-questioning effect”—it might seem sensible for teachers to implement pre-questioning in their classrooms, or for individual learners to use pre-questioning on their own.
Yet pre-questioning requires a crucial ingredient: the questions themselves. In many situations, such questions are not readily available. Busy teachers often lack the time to devise suitable questions. Individual learners can find that creating questions is extremely difficult, if not impossible, when they know very little about a given topic. Even the practice questions found in textbooks or on websites are often limited and may not focus on the exact topics one is trying to learn.
In a new study published in the Journal of Applied Research in Memory and Cognition, my research group turned to generative AI as a potential solution for these problems. We reasoned that the latest-generation large language models (LLMs), with their impressive capability to analyze and produce meaningful text, might now be able to create practice questions that are appropriate and effective for pre-questioning. That possibility would represent a major breakthrough in automated question-generation research—after all, in years past, researchers had found computer-generated questions to be wanting. We investigated these possibilities across four experiments.
First, we gave ChatGPT an encyclopedic text passage on the function and different types of brakes. Using both painstakingly designed and simpler prompts, we asked the LLM to generate practice questions that learners could attempt before reading the passage. In response, ChatGPT generated coherent, factually accurate, and appropriate questions such as “What distinguishes hydraulic brakes from mechanical brakes in automobiles?” and “In what ways do hydraulic brakes and air brakes differ in the mechanisms they use to apply pressure to the brake shoes?”
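To give a sense of what this step can look like in practice, here is a minimal sketch of how a teacher or learner might script it. It assumes the OpenAI Python SDK and a model name such as gpt-4o; the passage excerpt and prompt wording are illustrative stand-ins, not the materials or prompts used in our study.

```python
# Minimal sketch: asking an LLM to generate pre-questions for a passage.
# Assumes the OpenAI Python SDK (pip install openai) and an API key in the
# OPENAI_API_KEY environment variable. Model name, passage, and prompt
# wording are illustrative, not the ones used in the study.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

passage = (
    "Brakes slow or stop a vehicle by converting kinetic energy into heat. "
    "Common designs include mechanical, hydraulic, and air brakes, which "
    "differ in how they transmit force to the brake shoes or pads."
)

# A simple prompt of the kind virtually any student could write.
prompt = (
    "Here is a passage I am about to read:\n\n"
    f"{passage}\n\n"
    "Write five short practice questions about this passage that I can try "
    "to answer before reading it. Do not include the answers."
)

response = client.chat.completions.create(
    model="gpt-4o",
    messages=[{"role": "user", "content": prompt}],
)

# Print the generated pre-questions so the learner can attempt them first.
print(response.choices[0].message.content)
```

The same request can, of course, simply be typed into a chat interface; scripting it is only useful when questions are needed for many passages at once.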
Next, we had research participants engage in pre-questioning with the AI-generated practice questions, read the text passage, and then take a comprehension test. The test assessed their knowledge of brakes using a variety of questions, including never-before-seen questions developed by humans. Participants who had first attempted to guess the answers to the AI-generated practice questions scored higher on this test than those who either read an outline before the passage or proceeded directly to reading it. In other words, the AI-generated questions produced a pre-questioning effect.
We also compared the effectiveness of AI-generated versus human-generated practice questions at producing pre-questioning effects and found no significant differences: both types of questions were equally effective. Further, when we asked research participants to judge whether a human or a computer wrote the AI-generated questions, most could not tell the difference. Even questions produced with simple prompts—those virtually any student could use—appeared no less effective than those produced with more detailed prompts. Together, these findings suggest that AI-generated practice questions are as useful as those created by humans and virtually indistinguishable from them.
As this study demonstrates, a lack of suitable practice questions is no longer a barrier to engaging in pre-questioning and potentially other learning techniques. Generative AI now offers a readily available and seemingly limitless supply of practice questions that, in many situations, can be produced with minimal prompting. It can quickly generate questions that target virtually any set of learning materials. Consequently, instructors and students can spend less time creating practice questions from scratch—albeit a potentially beneficial activity in itself—and focus on using such questions to improve learning.
