The Use and Abuse of the SAT

This post is a collaboration between James Murphy and me. Murphy is the director of tutoring for the Princeton Review in New England and a freelance writer with almost two decades of experience getting students ready for the SAT.

Are New York City’s teachers as smart as their students? John Sexton, the former president of New York University, thinks not. During a talk on the future of American universities at the Library of Congress last week, he claimed that over the past five years New York City public schools have been hiring “teachers that have lower SAT scores than the students you are graduating. That’s a ticket for failure, because you’re hiring from the bottom half of the existing class.”

One might point out that it would be almost impossible for New York City to hire teachers with lower scores than its students unless it hired solely from within the city (not even from New York state, where average scores are significantly higher). Sexton’s desire that New York’s students get the best education possible is not to be doubted, although denigrating thousands of dedicated teachers is unlikely to help improve matters. What is to be doubted are both the factuality and the logic of his assertion. Sexton’s remarks are emblematic of a troubling misuse of the SAT to make predictions the test was never designed to support.


The facts are not in Sexton’s favor. His claim that New York teachers are not “as good as the best students” in the city is almost certainly based on a 2010 McKinsey & Company report that ranked teachers according to their SAT/ACT scores and GPA. It argued that top-performing nations recruit teachers from the top third of graduating classes, while American schools tend to recruit from the bottom two-thirds, and poor schools are often staffed by teachers from the bottom third of their cohort.


Even if the premises of the McKinsey report were correct (and, as we will explain, we do not think they are), its findings are less true today than they were six years ago. Sexton frets that things have gotten worse in New York in the past five years, but quite the opposite is true. Susanna Loeb, a professor of education at Stanford, has conducted studies of New York and the nation that show a rise in the number of teachers performing at higher levels academically. Her studies also suggest, as you would expect, that high school teachers tend to have stronger high school transcripts than primary school teachers do. Other research suggests that teachers’ SAT scores have been trending in the opposite direction from the one Sexton claims.


Facts aside, is it fair or accurate to make claims about teachers based on their performance on a single standardized test taken over a decade ago (the median age of a first-year teacher in 2011 was 26)? There are good reasons to think that the correlation between high school and college performance and teaching success is not straightforward. Teach for America, for instance, draws from precisely the pool of teachers that Sexton wants for New York, yet that organization has weathered years of criticism that its teachers provide smaller benefits than hoped for and tend to stay in the profession for only a few years.


The problem at the root of Sexton’s claim is the assumption that an SAT score predicts how talented a teacher will be. The Florida legislature apparently shares this assumption: this past year it instituted a program that pays a $10,000 bonus to teachers who scored in the top 20% of their graduating high school class on the SAT or ACT (which could cost the state up to $44 million).


The assumption that the SAT and ACT have validity for predicting anything beyond first-year college GPA (often termed “college readiness”) is highly questionable. The growing number of test-optional schools raises questions about the usefulness of these tests even for their limited stated purpose, and far larger questions for anyone attempting to use them for functions well beyond their intended use.


Even the College Board and ACT do not contest the claim that the tests have limited predictive validity, and neither of them suggests that the tests be used to predict long-term success in any career, let alone education. A few years ago, Google stopped asking applicants for their SAT scores and GPAs because it discovered that after two or three years the correlation between those numbers and performance on the job vanished.


The irony is that Sexton knows the SAT does not tell the whole story, as indicated by his decision to make NYU a test-optional school in 2010. The College Board and ACT have worked hard for many years to change the perception that their tests are just IQ exams; one can only hope that people like Sexton and states like Florida will take them at their word.








