When the headline “Parents trust report cards more than test scores — with consequences for kids” crossed my feed, it inspired heavy eyerolls. While I love new research, the headline here didn’t bode well at all, rife with assumptions — trusting report cards is bad, test scores are good — and fear mongering: ConSeQueNCes FoR KidS.

After reading the article, my frustration grew. But stay with me while I dive into the problems with the framing, the questions about the research, and what psychometricians should do to solve their 100-year-old problem.

TL;DR: Three Things the Study and the Article Get Wrong
  • The study assumes test scores are the correct signal. It documents that parents prefer grades. It does not establish that tests are more accurate measures of a child’s actual skills or future outcomes. That assumption is load-bearing and never examined.
  • Parental “distrust” of tests is conflated with parental ignorance. 40% of parents in the study said tests are biased. Nearly 30% said scores reflect family income more than ability. These are not obviously wrong beliefs. The article treats them as cognitive failures rather than reasonable skepticism of a deeply flawed system.
  • The study measures stated advice about fictional children, not real parental behavior. Parents on a paid survey platform were advising hypothetical parents about hypothetical fifth graders. The leap from that to “parents are underinvesting in their children’s human capital” is not supported by the data.

The Study, The Article, and the Assumption Buried Inside Both
Derek Rury (Oregon State University) and Ariel Kalil (the university where fun goes to die) asked over 2,000 self-selected, paid parents on an online survey platform to advise other parents on how much time and money to invest in hypothetical fifth graders, given varying combinations of grades and test scores. They found that when grades were high but test scores were low, parents didn’t invest more. When grades were low but test scores were high, they did. Parents weighted grades over test scores consistently, across a sample that was 68% white.

The finding is real and worth discussing. What bothers me is the conclusion. Both the researchers and the reporter, Jill Barshay, conclude that parents are being misled. That grade inflation has created a cognitive bias preventing rational investment. Barshay writes that “inflated grades may feel encouraging, but they can send false signals both to students, who may study less, and to parents, who may see less reason to step in.”

The assumption baked into both the study and the article is that standardized test scores are the correct signal, and that parents who weight grades over tests are making a mistake. Neither stops to ask the obvious question: why would parents trust test scores?

Of Course Parents Trust Grades More
Study after study shows that parents generally trust their children’s teachers, despite ongoing political rhetoric trying to sow distrust of public schools. They know the teacher. They’ve seen the homework come home. They’ve watched their kid struggle or breeze through the chapter on fractions. When the test comes back with a 78, they can see which questions were wrong. When the report card shows a B+, they have months of context for what that means. Grades are transparent. They’re personal. They’re local. They speak to a known child in a known classroom with a known teacher who can be called or emailed. Parents may not always agree with a grade, but they can interrogate it.

Now let me tell you what a parent knows about standardized test scores.

When my sons were in 2nd and 5th grade, their school gave them the Iowa Assessment. We weren’t told it was coming. Months later, scores arrived in the mail with no explanation, no context, no conversation from the school. The report gave me national percentile rankings in categories like “reading: informational,” “domains: author’s craft,” and “cognitive levels: essential competencies.” The explanation said scores “summarize data by the different levels of cognition required by the items.” Whatever that means.

At that point I had spent over two decades reading score reports from approximately 30 different tests. I was still stumped.

One of my sons scored above average in almost every category in 5th grade and below average in almost every category in 6th grade. Same school. Same house. Same parents. Did that mean his teacher was ineffective? Did he blow off the test? I don’t know. The 6th grade report didn’t tell me. It didn’t even acknowledge the 5th grade scores. When I asked the school what the scores meant and what they would do with them, I got nothing useful. The only thing those test scores allowed me to do was rank my child against children in schools I’d never visited, taught by teachers I’d never met, in towns I’d never been to. The entire apparatus was designed to produce a comparison. Not information. Not guidance. A ranking.

This isn’t unique to elementary assessments. I wrote last year about the useless PSAT score report that promises personal feedback but delivers a bar graph that even I, with decades of experience in this industry, find useless. If we can’t make the score reports for one of the most widely-taken tests in the country useful or informative, why are we surprised that parents reach for the thing they can actually understand?

Test scores tell you who your child beat, not what your child knows. So yes. Of course parents trust grades more.

The Irony in the Article
I’ve written before about how the media covers admissions testing and the patterns that keep showing up. This article fits the mold. Grades are inflated and misleading. Tests are objective and meaningful. The skepticism flows in one direction only.

What’s notable is that Barshay is capable of the other kind of skepticism. In this same column she has questioned hypothetical survey experiments, flagged the absence of replication in education research, and been skeptical of popular beliefs that outrun evidence. Good journalism about test scores would apply that same rigorous questioning here. Are the test scores referenced falling nationwide or just on NAEP? What percent of kids actually have high grades and low test scores simultaneously? When parents say tests are biased, are they talking about state assessments or the SAT — because those are very different things? And most fundamentally: are test scores themselves a reliable enough signal to serve as the benchmark for parental decision-making? Those questions don’t appear.

The article puts “the burden” on parents to “read report cards with a critical eye.” It never once asks them to apply that same eye to test score reports. It never asks whether those reports have earned the trust being demanded of parents.

What Would Actually Fix This
I don’t raise these criticisms to defend grade inflation. The research on the divergence between rising grades and falling test scores is real and worth taking seriously. But the solution to one imperfect signal is not to blindly elevate another one. The solution is to figure out what is signal and what is noise. Right now, test score reports are generating a lot of noise.

We’ve been lionizing test scores and other metrics for a long time, often without asking whether the tools actually serve the people they’re supposed to serve. If we want parents to trust and act on standardized test scores, here’s what has to change:

  • Test reports must connect to the classroom. Telling a parent their child scored in the 34th percentile in “reading: informational” means nothing. It becomes information when it connects to what their child is actually doing in reading class, what specific skills are lagging, and what a parent can do about it at home. Distractor answers on multiple-choice tests are coded by skill — that data exists within every testing system, and almost none of it ever reaches parents. There is no good reason for that except that sharing it would make the test’s limitations visible.
  • Test reports must be written for parents, not statisticians. Scaled scores, standard errors of measurement, normal curve equivalents — these are tools for researchers. A parent receiving a score report is not a researcher. The report should answer three questions in plain language: Is my child on track? Where specifically are they struggling? What should I do about it?
  • Test scores must be more than defensible rankings. The entire architecture of most standardized test reporting is built around telling parents who their kids are better than (the core of norm-referencing). This is almost entirely useless for a parent trying to make decisions. What a parent needs is criterion-referenced information: can my child do this specific thing? Does my child understand this concept? Has my child mastered this skill (and why it matters)? The comparison to a national norm produces a ranking. Rankings are not parenting advice.
  • Test reports must acknowledge their own limits. Benchmarks mark one point in time and lose predictive value rapidly as you project forward. A test given in 5th grade is not a reliable indicator of “college and career readiness” — not because tests are inherently bad, but because human development is complex, nonlinear, and massively influenced by what happens between 5th grade and 12th grade. Every score report should say this clearly, the way a financial advisor is required to disclose that past performance is not indicative of future results. Instead, we get the opposite: language that implies these scores are stable, predictive, and definitive.
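To make the norm-referenced vs. criterion-referenced distinction concrete, here’s a minimal sketch — with made-up scores, a made-up norm sample, and hypothetical skill names — of the two reporting philosophies side by side:

```python
# A toy illustration (all numbers and skill names are invented).
from bisect import bisect_left

def percentile_rank(score, norm_sample):
    """Norm-referenced: where does this score fall relative to other kids?"""
    ranked = sorted(norm_sample)
    return round(100 * bisect_left(ranked, score) / len(ranked))

def mastery_report(skill_scores, cut_score=0.7):
    """Criterion-referenced: which specific skills has this child mastered?"""
    return {skill: ("mastered" if pct >= cut_score else "needs work")
            for skill, pct in skill_scores.items()}

norms = [52, 58, 61, 64, 67, 70, 73, 75, 78, 81, 84, 88, 91, 94, 97]

# The ranking tells a parent who their child beat...
print(percentile_rank(78, norms))  # a percentile, not a skill

# ...while the mastery view tells them what the child can actually do.
print(mastery_report({"fractions": 0.85, "author's craft": 0.55}))
```

Both views come from the same underlying test data; the difference is entirely in what the report chooses to surface — and only the second one gives a parent something to act on.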

The Upshot
The researchers conclude that “combating grade inflation may be more consequential for parental behavior than expanding test score dissemination.” Maybe. But there’s a third option they don’t consider: making test score dissemination worth something.

Parents aren’t irrational for trusting what they can understand, verify, and act on. They’re doing exactly what we’d want any careful person to do with imperfect information. If we want them to weight test scores more appropriately, we have to give them test score information that actually helps them parent. Right now we’re handing them an opaque number, telling them it’s more objective than the teacher who knows their child, and then expressing dismay when they put it in a drawer.

The problem Rury and Kalil have identified is real. The solution isn’t to blame parents for their distrust, undermine trust in teachers and grades, demand that families spend money they don’t have, or simply insist that test scores deserve trust. The solution is to build test reporting systems that actually earn it.


Further Reading and Cutting Room Floor

  • Cutting room floor: My mark-up of the article; you can see it here.
  • Matt Barnum at Chalkbeat on Grade Inflation (it’s a great read; it’s also a Twitter thread from the before times). There is also this article he wrote in the WaPo, which isn’t as fun, but Matt is always worth reading.



Payment due!

The cost of the information age is subscriptions and likes. Don’t be a freeloader, pay your own way, subscribe, like, and share today!
