One of the more interesting debates of the past 30 years has been over the efficacy of test prep programs. In one corner you have the College Board (CB), citing a seemingly vast number of studies supporting the idea that test prep is at best minimally effective. Many of us have heard or read College Board reports suggesting that SAT scores are nearly immutable and that from one test to the next scores change only by negligible amounts. In the other corner you have a billion-dollar test prep industry making score claims that fly in the face of everything College Board publishes. Additionally, many of us have friends (or friends of friends) whose children attended prep classes and improved by 200 or 300 points.
Reading the papers and following the blogs, one is led to believe that the only choice families and schools have is whether to believe the “evil test creator” who seemingly exists to torture kids with 4-hour tests or the “greedy test prep companies” who are bilking families out of billions of dollars for prep that doesn’t work. Faced with this conflicting information, one could easily be confused about who’s right, what the real story is, and what to do to help your child or student. How do parents, educators, administrators, or students sort through the noise and put a child in a position to succeed on these tests (and, more importantly, get into college)? Let’s explore the factors behind the disagreement and shed some light on the issue.
Any Prep vs Specific Prep
A core issue separating CB findings from test prep industry claims is how each defines preparation. Traditionally, studies attempting to evaluate the impact of test preparation have done a poor job of understanding the test prep industry and of defining what counts as preparation. Research cited by College Board generally makes little distinction between a course offering 30 instructional hours and 4 practice tests and one offering 12 instructional hours and 1 practice test. Even the Briggs discussion paper Preparation for College Admission Exams, presented at the 2009 NACAC conference and one of the few studies that attempted to quantify and distinguish the various prep methods, failed to make some key distinctions among test preparation programs. One question in the survey that informed the paper asked respondents to define their method of preparing by choosing among options including “take a course” and “study from test preparation books”; but since virtually all courses involve test preparation books, how can anyone validly answer that question?
Test prep companies (at least the reputable ones), on the other hand, are very specific in defining what they consider preparation before calculating their impact, though usually only internally. Most companies will only include students in their calculations if those students have completed a “full program,” which generally means more than 18 hours of instruction and more than 3 full-length practice tests. These instructional requirements are coupled with homework and practice requirements that significantly increase the likelihood that participants achieve score improvements (we’ll table for now any discussion of quality and cost).
Survey vs Experiment
Another significant factor in the conflict between College Board and the test prep industry is that CB research is almost always conducted by administering and evaluating surveys on test preparation, with little direct knowledge of what students in most of these programs are actually doing in the classroom and outside it. As we all know, teens are famously inconsistent in their responses to surveys. How do researchers ensure that the response to “how many hours have you spent preparing for the SAT?” is even reasonably accurate when the question is asked weeks or months after that preparation has ended? How do researchers ensure that the question is interpreted the same way by every respondent?
Test prep companies, on the other hand, typically have live data and can conduct de facto experiments. They have student results from practice (and often official) exams, the exact parameters of the instructional content and components, and a record of attendance and homework completion. So rather than relying on what a student recalls or defines as hours of preparation, test prep companies generally know exactly how many hours were devoted to preparation (at least instructional hours, and to a lesser extent homework) and exactly which resources were used. In effect, test prep company studies draw on data from de facto controlled experiments, while the College Board relies on surveys and interviews. This difference in research methodology likely contributes significantly to the difference in findings.
Big Data vs Small Data
A final source of the conflicting reports is the difference in the data sets. CB generally reports on big data, meaning the statistical trends that emerge from analyzing the performance of very large groups. The first page of each year’s College Board Total Group Report summarizes the performance of all test takers for that year, and for the past decade that number has been more than a million students per year. The sheer weight of numbers naturally makes the national SAT average resistant to change from year to year unless there is a sudden, massive change in the inputs (were, for example, the SAT to suddenly change 90% of the test, or were there suddenly 50% more 9th grade girls taking the test).
Conversely, test prep companies report on small data: the average test preparation company likely serves between 1,000 and 3,000 students per year and conducts studies that include only some fraction of those served. This substantial difference in data sets is likely a huge contributor to the differing findings about effectiveness. Test prep companies are reporting on small, homogeneous groups of students in the hundreds, not the thousands, while College Board is reporting on massive, heterogeneous groups.
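The arithmetic behind this point is easy to sketch. The following toy simulation (all numbers are invented for illustration, not drawn from College Board data or any real prep company’s results) shows how a real 100-point gain for a 2,000-student prep cohort is invisible in a million-student national average:

```python
import random

random.seed(0)

# Hypothetical numbers chosen only for illustration:
# a national pool of 1,000,000 test takers, of whom a prep
# cohort of 2,000 students improves by 100 points each.
POOL_SIZE = 1_000_000
COHORT_SIZE = 2_000
PREP_GAIN = 100

# Invented baseline score distribution (mean 1050, sd 200).
baseline = [random.gauss(1050, 200) for _ in range(POOL_SIZE)]

def mean(xs):
    return sum(xs) / len(xs)

# Apply the gain only to the prep cohort (first 2,000 students).
after = baseline[:]
for i in range(COHORT_SIZE):
    after[i] += PREP_GAIN

pool_shift = mean(after) - mean(baseline)
cohort_shift = mean(after[:COHORT_SIZE]) - mean(baseline[:COHORT_SIZE])

print(f"national mean shift: {pool_shift:.2f} points")   # ~0.2
print(f"prep cohort shift:   {cohort_shift:.2f} points") # 100.00
```

The cohort’s average rises by the full 100 points, but the national average moves by only 2,000 × 100 ÷ 1,000,000 = 0.2 points, which is why both sides can look at “the data” and report opposite conclusions.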
So what’s all this mean?
This all means that, in all likelihood, both the College Board and the test prep companies are telling their versions of the truth. Both are right, though both are shaping the data in a manner that supports the story they want to tell (isn’t that always the story with statistics?). For an individual family or school, what matters more is understanding that anything that can be tested can be learned; the only question is how much time and/or money it will take to get the improvement you are looking for. For a school of 1,000 juniors, the expectation that the average score will change quickly from year to year is unrealistic; however, for a parent with one child who can sign that child up for 40 hours of one-on-one tutoring, there is a very high likelihood of improving scores significantly in a few months.
Good luck and good prepping!