As the new school year begins, I am anxiously awaiting (read: dreading) the forthcoming SAT and ACT annual reports and with them the inevitable exaggerations, hand-wringing, misinterpretations, and statistical paralogisms that will follow. The College Board’s Total Group Reports and ACT’s Condition of College and Career Readiness Reports (or Profile Reports) will not only spark the annual “sky-is-falling because district scores have dropped .005 points” responses but will also likely lead to an uptick in “SAT/ACT scores show students are not ready to succeed in college, career, life, liberty, or the pursuit of happiness” takes.
I’ve already bellowed into the void about the sky-is-fallingness of it all, so this time my windmill is the College and Career “Readiness” Benchmarks. More specifically, the lazy language surrounding, and incomplete interpretation of, those benchmarks, which may be doing harm to the most vulnerable students in our schools. If you think I’m being a bit hyperbolic, consider this particularly egregious example of “journalism” (which I will not link to):
This unfortunate article, and others like it, tells readers that, based on a single test (taken two years before these students start college and six to eight years before they launch careers), the fates are sealed for these unfortunate 16-year-olds, who are “not reading or writing well enough to … start a career.” Whatever that means. This is despite the fact that neither ACT nor the College Board (CB) has directly or even indirectly made those assertions, and despite the fact that this is not what the benchmarks purport to indicate.
The terrible assumptions that led to that craptastic article seem to be a consequence of the growing trend in education (and standardized testing) of wanting to tie all things to college and career, and often to the even more vapid and less justifiable “future success.” It’s a trend that seems to have been sparked by No Child Left Behind, has worked its way through state legislatures and into schools, and is now being mirrored back from the ivied halls of admissions testing.
In the early aughts, spurred on by No Child Left Behind, states began rebranding their curriculum standards as “college and career standards” (whether or not there was any research to support the assertion, and often without acknowledging that “careers” are not a monolith). In 2005, in response to the changes in state curriculum branding (and as it began to be awarded more contracts for state testing), ACT created its “College Readiness Benchmarks,” seemingly giving state Departments of Ed. additional reporting metrics to meet NCLB requirements. By 2010, ACT had shifted the branding to “College and Career Benchmarks” (though I can’t find evidence of research at the time to support the career addition), and in 2013 ACT published this white paper to show the alignment of the ACT to the Common Core State Standards. Not to be outdone, in 2011 the College Board reported its first College Readiness Benchmarks (again, branded as “college readiness,” with no attempt to connect them to life after college). Soon after the initial launch, however, College Board shifted from the label “College Readiness Benchmarks” to “College and Career Readiness Benchmarks” with nary a study or research report to validate the assertion. Fortunately, in the early days of the benchmarks College Board gave strong and reasonable guidance on their use, explicitly stating that teachers, families, and students shouldn’t care about the benchmarks. But since that 2011 document, nothing I’ve seen offers even a hint that not all stakeholders should consider the benchmarks relevant.
As the language of the “college and career ready” trope gained steam with educators, the media wasn’t long in picking it up and throwing it around willy-nilly without regard for meaning, impact, or harm. Of late, these benchmarks have started to be described as if they are determinative of who should consider college or who will be “successful” in careers. This interpretation risks conveying the benchmarks as binary indicators of “ready” or “not ready” for college or for the undefined future “career” (is blogging a career? is test prep? carpentry?).
So what do the benchmarks really indicate?
The college readiness benchmarks were developed to predict performance in first-year college courses.
By studying the performance of college students, ACT and the College Board determined what score predicted “success.” ACT defines success in college as having a 75% probability of earning a C or better in the corresponding college class. The CB initially defined success as a 65% probability of earning an overall first-year college GPA of B– or higher, but in 2015 revised that definition to match ACT’s. It’s important to note that the college readiness studies do not produce a single ready-or-not data point; they produce a correlation graph like the one shown above, and the 75% probability is one point on that continuum. Had ACT and CB chosen to, they could have published the score that predicts a 13% probability of earning an A, or a 32% probability of earning a C, in Algebra (that score is an 11, which is about what you’d get by guessing randomly on the entire ACT Math section).
A benchmark score is one point on that scale. Two students with marginally different scale scores may be placed on either side of a benchmark. The CSDE views these new SAT benchmark scores as a useful but preliminary measure. – Connecticut State Department of Education 5/9/2016
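To make the “one point on a continuum” idea concrete, here is a minimal Python sketch of how a benchmark falls out of a fitted probability curve. The logistic form is typical of this kind of study, but the midpoint and slope values below are invented for illustration; they are not ACT’s or CB’s fitted parameters.

```python
import math

def p_success(score, midpoint=480, slope=0.02):
    """Hypothetical logistic curve: probability of a C or better in the
    course as a function of scaled score. The midpoint and slope are
    made up for illustration, not fitted values from any real study."""
    return 1 / (1 + math.exp(-slope * (score - midpoint)))

def benchmark(target=0.75, midpoint=480, slope=0.02):
    """Score at which the fitted curve crosses the target probability."""
    return midpoint + math.log(target / (1 - target)) / slope

# The published benchmark is just the single score where the curve
# crosses 75%; every other score on the curve is equally "real."
print(round(benchmark()))  # → 535
```

The point of the sketch: picking 75% and a C is a policy choice, not a property of the data. Slide the `target` parameter and the “benchmark” moves with it, which is exactly why a student a few points below the line is not meaningfully different from one a few points above it.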
Sadly, many discussions of the benchmarks lack the nuance of the CSDE statement above, and CB has seemingly abandoned even attempting to define who should use the benchmarks and who shouldn’t, opening the door for them to be misused. CB has further contributed to this over the last year or so by reporting the benchmarks with traffic-light-colored iconography: the educator portal and student reports are full of reds and greens (with a smattering of yellow).
The implicit finality of a red exclamation “college readiness” score creates an unnecessary additional problem for the students who are already overburdened with roadblocks on the way to higher education. In an environment in which the financial risk of going to college is increasing and the value of a college education is under constant attack, students and families are also getting mixed signals from the admissions tests that were once touted as the tools of educational access and equity.
If requiring college entrance exams can convince a student to enroll in college, then it stands to reason that it can also do the converse. Given the real and perceived importance of the tests in the application process, it’s actually far more likely that a test score would convince someone debating whether to apply to college not to apply than the reverse. The students to whom these subtle signals matter are not those well above the benchmarks, not the children of college-educated parents, not the children of the wealthy. Students who are academically excelling, or who have families with the means and knowledge to advise, direct, and support them, will be fine regardless of the subtle signals the media or the benchmarks send. It’s the borderline students, students from poor families, and under-served groups who will be hurt by signals that they are not ready for college and will never succeed in a career.
“While it may not be our fault, it most certainly is our problem.” – David Coleman, CEO College Board
And while I appreciate that CB and ACT have put in place some measures to dissuade the use of the benchmarks as a yes-or-no judgment on fitness for college, they have concurrently created a tool that is being misused and whose misuse they are not doing enough to stop. For example, on the paper version of the SAT Student Score Report, tucked quietly in the lower right corner, is the “Am I on Track for College?” paragraph, and buried within that paragraph is this encouraging sentence: “If you score below the benchmark, you can still get back on track by focusing on areas where you didn’t perform well.” One sentence to counter all the messaging communicated by yellow and red warning lights.
As if all of the above isn’t enough to point out the flaws in the usage of the benchmarks, consider the following specific examples of real problems. All of the test takers below have been flagged as in the red for math (so clearly not prepared to take Algebra 1, work at McDonald’s, or successfully marry Rush Limbaugh), but a quick look at their tests shows that they all left multiple math questions blank. If Karma is even remotely on their side, then random guessing would get them 1 in 4 of those questions correct (assuming, of course, that the skipped questions aren’t grid-ins). The SAT First Year College, College Completion, and Career benchmark in math is a 530; in EBRW it’s a 480. An SAT Math raw score (number of questions correct out of the 58 math questions) of 29 typically converts to a scaled score of 530. Each question missed costs one raw-score point, which most often translates to a 10-point loss on the scaled score, and a blank counts the same as a wrong answer. So every test taker below had a good chance of randomly guessing their way to college, career, carpentry, and cremation ready. For example, Jon Wit’mo scored just below the 530 benchmark and thus needs only 1 or 2 more questions right to reach it. Unless Jon has offended Vishnu, the odds are remarkably good that had he guessed on his blanks he would have gone from “not college ready” to “college, career, and fancy car ready.”
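To put a rough number on Karma’s side of the argument, here is a minimal Python sketch of the binomial arithmetic. The scenario (a student 2 questions short of the benchmark who left 6 multiple-choice questions blank) is hypothetical, chosen to mirror Jon’s situation rather than taken from an actual score report.

```python
from math import comb

def p_at_least(k, n, p=0.25):
    """Probability of getting at least k of n four-choice questions
    right by random guessing (p = 1/4 chance per question)."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

# Hypothetical: 2 questions short of the benchmark, 6 blanks to guess on.
print(round(p_at_least(2, 6), 3))  # → 0.466
```

Roughly a coin flip: nearly half of such students would cross the “college ready” line just by filling in bubbles at random, which says more about the sharpness of the cut line than about the students.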
Where is the connection to career?
You may have noticed that I’ve not addressed how the benchmarks are “career” benchmarks. The reason is that there is little if any evidence of a connection between the skills tested on the ACT and SAT and “career readiness” (and I still don’t know what the heck that is). The base assertion in labeling these “college and career” seems to be that jobs require reading and math. I’m sure we can all give a hearty “well, duh!” to that sentiment, and even remember times during our own schooling when we were told that all the instruction we received was designed to get us ready for our future jobs; now it just has a fancy name.
ACT has gone further than CB in defining and researching the career part of this equation. It issued a report in 2006 and, in the past year, revised how it measures “career ready.” In that 2006 report, ACT defined a career as jobs requiring less than a bachelor’s degree.
I’m not totally convinced these measures address the vagueness of defining the skills needed for an ill-defined “career,” but at least it’s a start; ACT has even created its own certificate of career readiness. Unfortunately, CB has yet to publish anything on the issue.
Hopefully, before long the use of “career” in these conversations will more clearly answer the questions most people think of when they hear “career readiness”:
- What careers does this mean?
- What skills are required?
- Why is a college admissions test trying to predict life after college?
What does this all mean?
The upshot of all of this is, as Dr. Dre put it, “benchmarks ain’t…”; or, more formally translated, very few stakeholders should put much weight on the benchmarks. High school counselors, families, and students certainly shouldn’t concern themselves with them, as they will play no role in admissions at the vast majority of colleges, and the benchmarks give no useful information about careers. Perhaps the only parties that should give any merit to the benchmarks are K12 administrators and district-level data crunchers, for whom the benchmarks might be another way to compare cohorts against a somewhat arbitrary cut line. (Why not set the benchmark at the score that indicates an 80% probability of earning a D, since, as the old adage says, “Ds get degrees”?)
What can ACT and College Board do?
While I’m sure there are challenges to ensuring that a tool is used the way it’s intended, both CB and ACT could do a better job of making their tools less prone to misuse. Here are a few recommendations I’d love to see happen:
- Remove benchmarks from student and counselor reports.
- Instead of “college ready,” say something like “75% of students in your score range who didn’t improve their skills got a C or better in first-year college Algebra.”
- Don’t use traffic light colors.
- Show the student’s position on the full probability-versus-grade graph rather than pushing a single isolated cut point.
- Make the benchmarks more college specific (I’m sure the probability at Harvard is different from that at Hunter).
- Recreate the usage guidance reports and recommendations and distribute them widely.
- Give talking points.
- Address misuse of the benchmarks with school boards and the media.
- Stop saying “career readiness” or “college success” or “success” or any other thing that is not supported by research, or that is impacted by so many variables that it’s insane to pretend to predict it.
My colleague did further research based on what I started here and posted it at The Score (Princeton Review’s blog); it’s worth checking out.
Feel free to gif me a response or comment to let me know what I’ve missed.
Finally, the only binary Ready or Not that should ever be discussed is this one: