Do Better Study Techniques Mean Better Exam Scores?
TL;DR: they make less of a difference than you might think, but more research is needed.
There is a lot of good research on study tips. The problem is that many of these studies prove their points by changing teaching methods.
This is great for telling you “how to teach” - it’s less useful for “how to study”.
There’s some overlap between the two, but basically:
Students are not perfect (source needed). It is one thing to say "I'm going to study optimally for 5 hours today" and another to actually do it. A study showed that even when students know about effective study strategies, they may still cram due to lack of time - so the best learning technique might in practice just be the one a student will actually stick with.
Many studies do not account for how hard students work when using the "optimal" learning strategies. For example, researchers found that although mind-maps improved factual recall when adjusted for motivation levels, motivation was lower when students were forced to use mind-maps instead of their own study techniques. Since motivation was directly linked to how much a student learnt, students didn't get as much benefit from the technique as they would have otherwise.
A lot of research uses simple tests so that variables can be controlled. This means it misses out on a lot of real-world chaos: not a bad thing when you're running a study, but it means you can't capture things like "how would this work in a coding project?" or "how much more effort is making flashcards?".
Do these study techniques still have a big impact when you factor in different levels of procrastination, effort, or student ability?
What One Study Investigated
As I was researching this, I realized that there are very few papers that pull together different studying techniques in the classroom. Then again, the question itself is hard to answer, especially because you have so many factors to account for. The findings below come from one study, so we can't draw any definitive conclusions yet, but they provide some interesting results to consider.
This study focused on the techniques outlined in the top 10 study techniques paper. Participants were asked to record how often they used each technique, and then were surveyed on the following topics:
GPA (Grade Point Average): college major GPA, cumulative GPA, and previous term GPA
Perseverance: using the Grit Scale to measure how easily they give up
Procrastination: 10 questions on whether students tended to avoid work
Distraction: how often they went on social media, texted, listened to music, etc.
Metacognition: using the Motivated Strategies for Learning Questionnaire (MSLQ) to measure how “good” of a student they are with questions such as “Compared with other students in this class I expect to do well” and “I work hard to get a good grade even when I don’t like a class” (take it yourself here!)
Course quality: how well they thought the course was structured and professor ratings
(The full survey is here if you'd like to take a look.)
To see how the effects varied depending on the situation, the study was conducted in an introductory psychology course (Psych) and an advanced computer science course (CS). The Psych course had only 16% psychology majors, while the CS course had 79% CS majors.
What Worked
Students with higher GPAs and higher perseverance and metacognition scores were shown to use a wider range of study techniques, which correlated with better exam performance.
GPA was correlated with the use of the following techniques (short explanations below):
Elaboration: "Elaborative Interrogation" is a method of learning where students ask "Why?" when presented with a fact. They then try to generate their own explanations for it, such as "Why would this fact be true of [X] and not of [Y]" or "This is true because [X]".
Mental Images: After reading or listening to content, try to visualize it in your mind. For example, picture how a water molecule looks after reading about its components.
Self-Explanation: Explain your thought process while working on a problem, or explain learned content to yourself.
Practice Testing: Low-stakes testing that students can do on their own, such as flashcards, practice problems, or practice tests (a minimal sketch follows this list).
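To make "practice testing" concrete, here's a minimal sketch of a command-line flashcard quiz. This isn't from the study - the deck contents and the honour-system scoring are my own illustrative assumptions - but it shows how little tooling low-stakes self-testing actually needs:

```python
import random

# Hypothetical deck: term -> definition (swap in your own course material).
deck = {
    "elaborative interrogation": "asking 'why?' and generating your own explanation for a fact",
    "spaced practice": "spreading study of the same content over sessions with growing gaps",
    "keyword mnemonic": "linking an unfamiliar term to a familiar, imageable word",
}

def quiz(cards: dict[str, str]) -> None:
    """Run one low-stakes pass over the deck in random order."""
    items = list(cards.items())
    random.shuffle(items)
    correct = 0
    for term, definition in items:
        input(f"Define: {term!r} (press Enter to reveal) ")
        print(f"Answer: {definition}")
        if input("Did you get it right? [y/n] ").strip().lower() == "y":
            correct += 1
    print(f"Score: {correct}/{len(items)}")

if __name__ == "__main__":
    quiz(deck)
```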
For the Psych course, higher GPA also correlated with scoring highly on the "good student" (metacognition) metric and with having grit - this effect wasn't seen in CS.
Final Exam scores were correlated with the following factors:
Keywords: Linking familiar words with unfamiliar ones is an effective technique for learning foreign languages or technical jargon. For example, you can remember that la dent in French means “tooth” by picturing a dentist holding a tooth. The keyword here is “dentist” to create the link.
Spaced Practice: Studying the same content over multiple sessions, with longer gaps between them, can lead to lower performance in the individual sessions but better results on the final exam (see the scheduling sketch below).
These results suggest that the right study technique should be chosen for the right situation.
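On spaced practice specifically: you don't need a dedicated app to try it, since expanding review intervals are easy to compute yourself. The sketch below is my own illustration, not the study's protocol - the starting gap and the multiplier are assumed defaults you'd tune around your exam date:

```python
from datetime import date, timedelta

def review_schedule(start: date, sessions: int,
                    first_gap_days: int = 1, multiplier: float = 2.0) -> list[date]:
    """Return review dates whose gaps grow by `multiplier` each session.

    first_gap_days and multiplier are illustrative defaults,
    not values taken from the study.
    """
    dates, gap, current = [], float(first_gap_days), start
    for _ in range(sessions):
        current += timedelta(days=round(gap))
        dates.append(current)
        gap *= multiplier
    return dates

# Example: content first studied today, reviews on days 1, 3, 7, 15, and 31.
for d in review_schedule(date.today(), sessions=5):
    print(d.isoformat())
```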
What Didn’t Work
A notable factor missing from the exam score predictors was active recall, despite it being the best study technique across almost any topic. The authors speculated that this may be due to students not using it correctly, or mislabeling repeated study as “retrieval practice”, which could have affected results.
For the exam itself, all of the factors combined explained less than 25% of the variance in results. In Psych, GPA predicted 15% of the variance, with no other significant factors. In CS, GPA predicted 8% of the variance - but the use of spaced practice explained a further 15%. Yes, this is still significant - but it was probably less than I (and maybe you) were expecting.
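If "explained a further 15%" sounds abstract, it's the incremental R² from a hierarchical regression: fit a model with GPA alone, then add spaced practice and see how much the explained variance rises. Here's a minimal sketch with synthetic data - the coefficients and noise are made up for illustration, not taken from the study's dataset:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200

# Synthetic data loosely mimicking the CS-course setup (not the real data):
gpa = rng.normal(3.0, 0.5, n)
spaced = rng.normal(2.0, 1.0, n)          # self-reported spaced-practice use
exam = 50 + 5 * gpa + 6 * spaced + rng.normal(0, 10, n)

def r_squared(X: np.ndarray, y: np.ndarray) -> float:
    """R^2 of an ordinary least-squares fit with an intercept term."""
    X = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    return 1 - (resid @ resid) / ((y - y.mean()) @ (y - y.mean()))

r2_gpa = r_squared(gpa[:, None], exam)                     # step 1: GPA alone
r2_both = r_squared(np.column_stack([gpa, spaced]), exam)  # step 2: + spaced practice
print(f"GPA alone: {r2_gpa:.2f}; added by spaced practice: {r2_both - r2_gpa:.2f}")
```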
Limitations
Before we get carried away, remember that the usefulness of these findings is limited. The study itself identified three key issues: all measurements were collected at a single point in time, the data were self-reported, and many other factors weren't controlled for.
Additionally, this study doesn't find the same effect as this review, which showed that testing helped students learn better in psychology classrooms. This suggests that more evidence is needed - the review itself noted a "remarkable" number of no-effect cases, meaning that further research on implementation is necessary.
It could also be said that the results don’t tell us much: the authors themselves mention that “knowing that a student with a high GPA uses more study techniques and gets a higher exam score does not promise many avenues for interventions”.
However, there is increasing research into translating findings into the real world: a promising pilot study looked at how an intervention teaching students about effective learning could sustainably help them improve. The value of this study is in opening up avenues for further research on this level - if we can find ways to conclusively boost overall performance (or just prove that the relationship goes both ways), we can get that much closer to improving the education system.
Thanks for reading! If you have any further research or information on this I’d love to hear from you at abouttolearn@substack.com. Otherwise, you’ll find my sources listed below as usual.
If you want to know about “how to teach” instead, here’s a recent article I wrote about making video learning effective:
Sources
Predicting Learning: Comparing Study Techniques, Perseverance, and Metacognitive Skill (sagepub.com)
2022, Journal Impact Factor - 1
“The rating of study techniques, while helpful, leaves many questions unanswered. For example, there are inconsistencies between student knowledge about how to study, their intentions for the term, and their actual behavior (Blasiman et al., 2016). Students report using many of the high and moderate utility techniques (Morehead et al., 2015) but the use of high utility techniques do not predict large portions of variance in learning (Bartoszewski & Gurung, 2015). We focus on key factors that may influence student use of study techniques such as perseverance, metacognition, and distractions.”
“Results: Student use of specific study techniques varied between the two courses and a high utility technique, practice testing, was well used. Students reported low levels of spaced practice. Perseverance and metacognitive skills both correlated significantly with many of the study techniques. While no study techniques predicted exam scores in Introductory Psychology beyond variance predicted by GPA, the use of spaced practice predicted a significant portion of variance in students in Computer Science. Conclusions: Students’ use of study techniques varies between their courses and while related to GPA and exam scores, are not unique predictors of variance in learning. Additional moderators and mediators of learning need to be identified.”
Comparing the relationship of learning techniques and exam score. - PsycNET (apa.org)
2015, Journal Impact Factor - 0.9
This study examined the use of different learning techniques and how they relate to each other and to exam scores, with a focus on two techniques with high utility - practice testing and distributed practice. It also looked at other factors such as student ratings of classroom lectures, the professor, procrastination, effort regulation, and self-efficacy to see how they relate to learning. The results suggest that in some cases, student perceptions of the instructor and their sense of self-efficacy may be stronger predictors of exam scores than how the student studied.
“Students probably use more than one study technique and use of one technique may relate to the use of others, but we could not find any research to examine this issue. In this study, we measure how much students use different learning techniques and how use of techniques are related to each other, and we examine which techniques best predict exam scores.”
“We had three major research questions: (a) which study techniques do students utilize the most?; (b) how does the use of one study technique correlate with the use of others?; and (c) how do these techniques, along with other factors, influence learning as measured by exam performance?”
“The lack of additional study aids being significant may also suggest that in some cases, student perceptions of the instructor and their sense of self-efficacy may be stronger predictors of exam scores than how the student studied. Furthermore, it may also be an artifact of the high correlation between professor rating and exam score limiting variance available for Step 4. As seen by the numerous zero-order correlations between techniques and exam score, it is clear that study techniques relate to exam scores. In this sample (these classes and instructors), study techniques seem to be overshadowed. Perhaps this is good news for the power of a well-perceived teacher.”
2015, Journal Impact Factor - 3.4
This study examines the role of study habits in academic performance. It also explores a novel method of measuring students' study habits - using the experience sampling method (ESM) to randomly text them twice a week - and finds that students who study more regularly perform better on the cumulative final exam. The findings suggest that there is ample ground for promoting effective study habits, and that differences in academic performance may be due to differences in student motivation.
“In a recent meta-analysis of the work on study habits, Crede and Kuncel (2008) described the empirical and theoretical literature on studying behaviors as fragmented. They organized studying behaviors based on the constructs: study skills, knowing how to study, study habits, the frequency and type of actions taken toward studying and study attitudes, the motivation toward studying.”
“In the meta-analysis, the researchers identified 40 studies relating study habits to college GPA and found correlations that average approximately 0.33 with a 90% interval between 0.09 and 0.51. Relationships between study habits and individual course performance was lower, averaging 0.26, which the authors attribute to not being able to correct for reliability in individual course grades. Study habits also featured a weak relationship with established measures of general cognitive ability such as high school GPA or college admissions tests. This suggests that the relationship between study habits and academic college performance is unique from the well-established relationships between measures of cognitive ability and student performance. Further, it helps to rule out the explanation that stronger students exhibit better study habits and that this is responsible for the observed correlation; instead it suggests that students can benefit from effective study habits regardless of incoming ability.”
“The finding that students who study more regularly perform better on the cumulative final exam may not be surprising. However, the findings that approximately one-third of the sample study regularly, which matches the baseline observed in Fig. 2, is of importance as it suggests that there is ample ground for promoting effective study habits. That the students who study regularly are also not distinguishable from the other groups based on SAT scores also partially rules out the competing explanation that these students were more academically prepared prior to the semester. Another possible explanation for differences in academic performance may include differences between clusters in student motivation to succeed in the course; in particular, it is plausible that differences in motivation may manifest themselves in more frequent studying.”
Fostering Effective Learning Strategies in Higher Education – A Mixed-Methods Study - ScienceDirect
2020, Journal Impact Factor - 4.6
The intervention program had positive effects on knowledge about effective learning strategies and increased the use of practice testing. Qualitative interview results suggested that to sustainably change students’ learning strategies, we may consider tackling their uncertainty about effort and time, and increase availability of practice questions.
2021, Journal Impact Factor - 1.9
“There are a few studies that have examined teaching-to-learn as a part of a classroom curriculum; however, they tend to lack the control needed to ensure the teaching assignment and not other factors led to the increased learning. For example, in several studies, participants in the teaching group did not engage with the material in a similar method as the participants in the control group—such as comparing learning between teaching assistants and students (e.g., Fremouw, Millard, & Donahoe, 1979) or tutors and tutees (e.g., Sharpley, Irvine, & Sharpley, 1983).”
“Although these results suggest the value of teaching-to-learn, it is less clear whether these benefits may appear when comparing students with similar roles in the course and who spend more similar amounts of time with the material. Additionally, studies examining the benefits of teaching-to-learn have relied on between-subjects designs, which makes it more difficult to control for individual differences such as knowledge of the content and motivation—both important factors in the classroom.”
2017, Journal Impact Factor - 0.9
“We investigated the extent that the testing effect can also be observed and effectively used in psychology classes. Inspection of the research literature yielded 19 publications that tested the effect in the context of learning and teaching psychology. A total of 72 effect sizes were extracted from these publications and subjected to a meta-analysis. A significant overall effect size of d = 0.56 demonstrated that testing was beneficial to the learning outcomes.”
“Early research focusing on the underlying memory processes of direct testing effects was more conducive to laboratory studies. Numerous experimental studies have demonstrated the testing effect as a robust phenomenon across a wide variety of samples, learning materials, test formats, criterion tasks, and retention intervals”
“The central result of our analysis is that testing between the acquisition phase and a final test enhanced performance in the final test. Feedback on the result of the intermediate test increased this effect, although the moderator effect of feedback was not significant after controlling for dependencies among the individual effect sizes.”
“The testing effect is one of the most often cited phenomena in the context of evidence-based teaching and repeatedly recommended to be adopted in the classroom. Whether researchers can support this recommendation by proving effective applications of this idea in their own teaching of psychology, is still an open question.”
“In the current data, we were unable to control variables potentially influencing the learning outcome in addition to study design, type of control condition, and feedback. For example, future studies should investigate how practice tests are implemented in the psychology classroom. Although the current results add to earlier findings indicating the testing effect to be a robust phenomenon that is also effective in real classroom situations, the high number of effect sizes not significantly different from zero is remarkable.”
Multiple-choice testing as a desirable difficulty in the classroom - ScienceDirect
2014, Journal Impact Factor - 4.6
This study found that students who took multiple-choice quizzes throughout the course performed better on the final exam than those who did not, and also that related information was learned better as a result of the quizzes.