Ideas + Data

Frameworks, evidence, and analysis.

Not a blog. A curated exhibit of original concepts, data interpretation, and myth-busting — the intellectual infrastructure behind three decades of public work on testing, admissions, merit, and educational opportunity.

Signature Concepts
Highly Rejective Colleges

The term Akil coined to reframe the admissions conversation. "Selective" implies careful, considered curation. "Highly rejective" describes what actually happens: most applicants are turned away. The distinction matters because it shifts the question from "how good is this school?" to "how in-demand is this school?" — and those are not the same question. The term has entered the Congressional Record, the New York Times, the Washington Post, TED Talks, and Urban Dictionary.

Deceptive Precision

Standardized tests present numbers with a specificity that implies accuracy they don't have. A score of 1200 is not meaningfully different from 1180. A "benchmark" score is not a threshold backed by solid predictive validity. The precision of the measurement is real. The meaning assigned to it is manufactured. As Alfred Binet, father of IQ testing, put it: "a simple, brutal number, which can have only a deceptive precision."

Tests of Accumulated Opportunity

Standardized tests don't measure raw ability. They measure a student's access to preparation — tutors, test prep courses, well-resourced schools, and the time and stability to study. A student from a wealthy family who scores 1400 and a student from a low-income family who scores 1200 may have similar underlying ability. The difference in scores reflects accumulated opportunity, not merit. We measure opportunity and call it merit.

Graduation rate vs transfer rate vs Pell data
Data · Rankings
What Makes a Good College?
Graduation rate ignores transfer success. Add Pell rate and the picture changes entirely — it's essentially arranged by the money of the students who attend.
National Merit scholarship data
Data · Merit
National "Merit" Started as a Pepsi Marketing Campaign
120 of ~130 corporate sponsor programs are limited to children of employees. It's corporate benefits laundered through a prestige competition forced on 1.3 million kids.
College Admissions Paradigms
Analysis · Admissions
Colleges Don't All Play the Same Game
Most families apply as if every college uses the same criteria. They don't. Understanding which paradigm a school operates in changes every decision.
GPA and SAT score correlation data
Data · Testing
GPA and SAT: What the Correlation Actually Shows
The relationship between GPA and SAT scores is real but weak — and it's mediated by income and access to preparation, not raw academic ability.
NACAC State of College Admissions data on test score importance
Data · Testing · Admissions
The Actual Weight of Test Scores in Admissions
SAT/ACT scores dropped from 56% to 4.9% "considerable importance" among admissions officers between 2005 and 2023. The industry never updated its marketing.
Should you take the SAT or ACT
Analysis · Testing
Should You Take the SAT or ACT?
The answer depends on where you're applying, what your score looks like relative to the school's range, and whether test-optional actually means test-optional at that school.
25 of 3,768 U.S. colleges admit fewer than 30% of applicants
Data · Rankings
The Media Covers 25 Colleges. There Are 3,768.
The entire narrative around "getting into college" is built around less than 1% of institutions. The other 99% admit the vast majority of students who apply.
GPA by SAT score bands — College Board data showing deceptive precision
Data · Testing
Deceptive Precision: The College Board's Benchmark Problem
The College Board calls a score a "benchmark." Benchmarks imply thresholds that mean something. The research on what those numbers actually predict is far more complicated than the marketing suggests.
US News · Robert Morse, Chief Data Strategist
"Each factor is assigned a weight that reflects our judgment about how much a measure matters."
Said quietly in 2004. Still true today.
Analysis · Rankings
The Reckless Rankings Game
US News measures what's measurable, not what matters. Columbia made up data to keep its ranking. US News kept them anyway. That tells you what the rankings are actually for.
What Makes a Good College?
akilbello.com · March 2025

There are 3,768 colleges in the United States, and only 1,595 of them are non-open-admission. Those roughly 1,600 colleges that don't admit everyone have vastly different missions, academic focuses, sizes, and student populations. "Good/best/top" is a terrible way to describe such a wide array of institutions.

Since the 1980s, private businesses have stepped in to fill the void. They selected the factors they believed define a "good college" and published rankings they claim are objective. Those factors measure what's measurable — not what matters.
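The weight problem is easy to demonstrate. In the sketch below, two invented colleges trade places depending on nothing but the weights a publisher chooses; the names, factor scores, and weights are all made up for illustration, not drawn from any real ranking.

```python
# Two invented colleges, scored 0-100 on two factors (illustrative numbers only)
colleges = {
    "College A": {"grad_rate": 95, "pell_share": 12},
    "College B": {"grad_rate": 70, "pell_share": 55},
}

def rank(weights):
    """Order colleges by a weighted sum of their factor scores."""
    scored = {name: sum(weights[f] * score for f, score in factors.items())
              for name, factors in colleges.items()}
    return sorted(scored, key=scored.get, reverse=True)

# A publisher that weights graduation rate heavily puts College A on top...
print(rank({"grad_rate": 0.9, "pell_share": 0.1}))  # ['College A', 'College B']
# ...while one that weights Pell share heavily flips the order.
print(rank({"grad_rate": 0.3, "pell_share": 0.7}))  # ['College B', 'College A']
```

Same data, opposite orderings: the ranking encodes the publisher's judgment about what matters, not an objective fact about the schools.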

On graduation rate: "Graduation rate is a terrible statistic because of the way it's calculated." It ignores transfer students entirely. If a student leaves for another school and graduates there, they count as a failure. Add transfer outcomes back in and a very different picture emerges.

On Pell rate and money: "When we're beating up on a place for graduation rate, are we ignoring the fact that there is an intricate connection between graduation rate and Pell rate? College costs money. Full stop. Therefore if you have less money — harder to get in, harder to get out."

"You know what makes it really hard to graduate college? Being poor. So of course this chart makes sense — it's essentially arranged by the money of the students who get in."

Better questions: What percentage of students receive Pell Grants? What's the transfer-out rate and where do those students go? What's debt at graduation? What does the institution actually do for students who need the most support?

National "Merit" Started as a Pepsi Marketing Campaign
Bluesky thread · April 2026 · with data from FairTest report, 2023

The National Merit Scholarship Program didn't start as a scholarship program. It started as a Pepsi marketing campaign. That origin tells you everything about what it still is.

In the 1950s, corporations discovered scholarship programs were great publicity. Pepsi used one to compete with Coca-Cola. Other companies used them to signal patriotism, burnish the brand, and recruit future employees. Many of those programs got consolidated into one organization: The National Merit Scholarship Corporation. College Board handed them the perfect gatekeeping tool: the PSAT. NMSC got prestige. College Board got 1.3 million new test takers a year.

What NMSC advertises: 7,590 awards worth $33M+.

What they don't advertise: Less than one-third of those awards are open to any student regardless of who their parents work for. Of ~130 corporate sponsor programs in NMSC's guide, 120 are limited to children of employees.

National "Merit" isn't scholarships. It's corporate employee benefits laundered through a prestige competition forced on 1.3 million kids a year. It's not a scholarship — it's a sock puppet.

The college awards aren't much better. Schools like Alabama (115 awards), UT Dallas (100), and Arizona State (85) use "National Merit" to recruit high-scoring students and move up US News rankings. It's a tuition discount strategy wearing a merit badge.

Kids who score well on the PSAT earned those scores. That's real and rare. But it turns out the merit is mostly in the marketing, and the benefit is mostly for the sponsors.

Colleges Don't All Play the Same Game
Word In Black · September 2024

Most families approach college admissions as if every institution evaluates applications the same way. They don't. The criteria, priorities, and tradeoffs vary dramatically — and misunderstanding that leads to wasted applications and avoidable disappointment.

Paradigm 1 — Admit Everyone Academically Qualified: Larger institutions. What can this college do for you? Aid is designed to maximize the likelihood of filling the class.

Paradigm 2 — Prioritize State Needs: Public institutions. Residency, major, income, and expanded enrollment are the priorities. Tuition costs are balanced by state funding and scholarships.

Paradigm 3 — Prioritize Institutional Needs: Most selective public and fairly selective private colleges. Strong national name recognition. They prioritize students with qualities that align with institutional goals. Financial viability relies heavily on tuition and family contributions.

Paradigm 4 — Prioritize Institutional Needs and Wants: Internationally recognized institutions. Same as Paradigm 3 but with more market power to be selective about it.

Understanding which paradigm a school operates in tells you what they're actually optimizing for — and whether your application strategy makes any sense for that school.

GPA and SAT: What the Correlation Actually Shows
akilbello.com · data analysis

The relationship between GPA and SAT scores is real — students with higher GPAs tend to score higher on the SAT. But the correlation is weaker than the testing industry's marketing implies, and it's mediated by factors that have nothing to do with academic ability.

Students from high-income families have access to tutors, test prep courses, well-resourced schools, and the time and stability to study. Students from low-income families often take the test once, cold, on a Saturday morning. The score gap between those two students reflects accumulated opportunity — not raw ability, not "college readiness," not merit.

A test that correlates with GPA and with income is not necessarily measuring academic ability. It may simply be measuring access to preparation. Those are very different claims.
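One way to see the "mediated by income" point is a partial correlation on simulated data. Everything below is synthetic, generated under an assumed model in which family income influences both SAT and GPA more than underlying ability does; the coefficients are invented for illustration, not estimates from real score data.

```python
import math
import random

random.seed(42)
n = 5000

# Simulated (not real) students: income drives both measures more than ability does
income = [random.gauss(0, 1) for _ in range(n)]
ability = [random.gauss(0, 1) for _ in range(n)]
sat = [0.7 * i + 0.3 * a + 0.3 * random.gauss(0, 1) for i, a in zip(income, ability)]
gpa = [0.7 * i + 0.3 * a + 0.3 * random.gauss(0, 1) for i, a in zip(income, ability)]

def corr(xs, ys):
    """Pearson correlation coefficient."""
    mx, my = sum(xs) / len(xs), sum(ys) / len(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    return cov / math.sqrt(sum((x - mx) ** 2 for x in xs)
                           * sum((y - my) ** 2 for y in ys))

r_sg, r_si, r_gi = corr(sat, gpa), corr(sat, income), corr(gpa, income)

# Partial correlation of SAT and GPA with income held fixed
partial = (r_sg - r_si * r_gi) / math.sqrt((1 - r_si**2) * (1 - r_gi**2))

print(f"raw SAT-GPA correlation:    {r_sg:.2f}")     # strong
print(f"partial, income held fixed: {partial:.2f}")  # much weaker
```

Under this construction the raw SAT-GPA correlation looks impressive, but most of it is the shared income pathway: control for income and the remaining association shrinks sharply. A correlation alone can't tell you which of those two worlds you're in.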

The industry knows this. The technical manuals say this. The marketing doesn't.

The Actual Weight of Test Scores in Admissions
NACAC State of College Admissions data · 2005–2023

According to NACAC's State of College Admissions survey, the percentage of institutions rating SAT/ACT scores as "considerably important" in admissions decisions dropped from 56% in 2005 to 4.9% in 2023.

Grades in college prep courses remain the top factor at 76.8%. Strength of curriculum is at 63.8%. Test scores have fallen below essays, counselor recommendations, and demonstrated interest at most institutions.

The testing industry's marketing didn't update. Parents and students are still making decisions based on the assumption that test scores are the primary factor in admissions — an assumption that was questionable in 2005 and is demonstrably wrong in 2023.

The result: families spend billions on test preparation for a factor that most admissions offices weight far less than the tutoring industry claims. The beneficiaries of that misunderstanding are test publishers and test prep companies. Not students.

Should You Take the SAT or ACT?
akilbello.com · May 2025

The answer is: it depends. And the thing it depends on is not what most people think.

It's not about whether you're a "good test taker." It's about where you're applying, what your score looks like relative to that school's published range, and whether test-optional actually means test-optional at that institution.

Some schools say test-optional but have institutional research showing that submitted scores correlate with yield — meaning they use scores even when they say they don't. Others genuinely don't weight scores. The only way to know is to look at the data, not the marketing.

The broader question — whether the SAT and ACT are valid measures of anything worth measuring — is separate from the strategic question of whether you personally should submit a score. Those are two different questions, and conflating them leads families to bad decisions in both directions.

The Media Covers 25 Colleges. There Are 3,768.
akilbello.com · April 2025

"College admissions isn't hard — it's stressful. It's stressful the same way any process that involves waiting on others to render judgment can be stressful. Like applying for jobs, applying for loans, promposals. All stressful but not hard."

There are 3,768 colleges in the United States. Only about 25 admit fewer than 30% of applicants. "For the vast majority of American high school graduates finding a good college to admit them is as easy as finding a good restaurant in NYC."

"College in America is complex. In part because there is no real system of higher education, and so every college does admission slightly differently. We've got 6,084 colleges, more per capita than anywhere else. There are trade schools, community colleges, liberal arts colleges, and research universities." But when media covers college admissions, it covers the 6% that are highly rejective — and families make decisions based on that distorted picture.

"Don't hyperfocus on one brand. College is a waystation along the way — it's not the be-all end-all of existence."

Deceptive Precision: The College Board's Benchmark Problem
akilbello.com · May 2020

Every fall, the College Board releases its annual reports. Every fall, journalists write that students are "not college ready" based on benchmark scores. Every fall, that interpretation is wrong — and every year the students who can least afford to be written off are the ones most likely to be written off by it.

The College Board's own technical documentation says something far more limited than the marketing: students who score at or above the benchmark have a 75% chance of earning a C or better in a first-year college course. That's it. A 75% chance of a C. As Akil put it: "Why not set the benchmark at the score that indicates an 80% probability of earning a D, since as the old adage says 'Ds get degrees?'"

As Alfred Binet, the father of IQ testing, warned: "a simple, brutal number, which can have only a deceptive precision." The precision of the measurement is real. The meaning assigned to it is manufactured. The result is sixteen-year-olds told their futures are sealed based on a test taken two years before they start college and six to eight years before they launch careers.
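The "1200 vs 1180" version of this argument can be made concrete with the standard error of measurement. The SEM of 30 points used below is an assumption for illustration (the exact figure varies by test and form), and the function is a sketch of the standard psychometric rule, not College Board methodology.

```python
import math

def scores_distinguishable(score_a, score_b, sem=30.0, z=1.96):
    """True only if the gap exceeds what measurement noise alone could produce.

    For two independent scores with the same SEM, the standard error of
    their difference is sqrt(2) * sem; a gap smaller than z times that
    band is statistically indistinguishable from no difference.
    """
    sed = math.sqrt(2) * sem
    return abs(score_a - score_b) > z * sed

print(scores_distinguishable(1200, 1180))  # False: a 20-point gap is noise
print(scores_distinguishable(1400, 1200))  # True: 200 points exceeds the band
```

With an SEM of 30, two scores need to differ by roughly 80 points before the gap means anything, which is exactly why treating 1200 and 1180 as different students is deceptive precision.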

What should actually change: Remove benchmarks from student and counselor reports. Instead of "college ready," say something like "75% of students in your score range who didn't improve their skills got a C or better in first-year college algebra." Don't use traffic light colors. Show the student on a graph relative to actual grades rather than an isolated point. Make benchmarks more college-specific. Stop saying "career readiness" or "success" — anything not supported by research.

Want to go deeper?

Akil speaks on these frameworks for audiences from conference stages to policy briefings.

See Speaking Topics