College Rankings Are Rigged
The biggest blocker of innovation in higher education is also its biggest driver: college rankings. This post dissects the US News & World Report ranking methodology and offers a better, more equitable, and more modern way to rank.
Legend tells of a divinely powerful magnet, so strong it could pull a paperclip out of mid-air. Nothing could escape its force.
A similar force exists in higher education: the college ranking system. Highly ranked colleges enjoy a self-fulfilling prophecy — high rank attracts top students and faculty already predisposed to be successful, growing an ever more elite network of graduates and brand value.
I’ve been thinking about how we can break out of this system of entrenched players. The more I explored how college rankings work, the more I realized that the system is rigged. It is structured to benefit schools at the top, and it makes no use of the millions of data points about colleges and their graduates that exist on the web.
This post dissects the US News and World Report College Ranking methodology and suggests alternative measures.
A change to this system can give the true innovators their rightful spot at the top.
Dissecting the US News & World Report College Rankings
1. Graduation and Retention Rates
This graduation metric is rooted in academia’s definition of success rather than the students’. A more meaningful metric would be recent graduates’ employment, or whether they achieved the desired next step in their academic path. This data can be inferred by scraping alumni LinkedIn profiles or from end-of-program exit surveys.
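To make that concrete, here is a minimal sketch of how such an outcomes metric might be computed. The record fields and the six-month window are my assumptions, not anything US News publishes; the data could come from exit surveys or scraped LinkedIn profiles.

```python
from dataclasses import dataclass
from datetime import date

# Hypothetical alumni record -- the fields are illustrative assumptions,
# sourced from exit surveys or scraped LinkedIn profiles.
@dataclass
class Alum:
    graduated: date
    employed_in_field: bool        # working in their target field
    in_desired_next_program: bool  # enrolled in their desired next academic step

def outcome_rate(alumni: list[Alum], window_days: int = 180) -> float:
    """Share of graduates who reached a desired next step (a job in their
    field or further study), checked ~6 months after finishing the program."""
    cohort = [a for a in alumni
              if (date.today() - a.graduated).days >= window_days]
    if not cohort:
        return 0.0
    succeeded = sum(a.employed_in_field or a.in_desired_next_program
                    for a in cohort)
    return succeeded / len(cohort)

# Example: two of three graduates reached a desired outcome -> ~0.67
alumni = [
    Alum(date(2023, 5, 15), True, False),
    Alum(date(2023, 5, 15), False, True),
    Alum(date(2023, 5, 15), False, False),
]
print(f"{outcome_rate(alumni):.2f}")
```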
Retention rates are clearly important, but they are problematic for a few reasons. First, for many students, leaving school early can be a successful outcome. For example, we had a student leave Make School after a rewarding first semester because he determined he wanted to be an academic; he transferred to the University of Virginia. Was that a bad outcome? Or what about our student who left early because he convinced his internship company to hire him before he graduated? Bad for retention?
Many employers are downgrading the importance of degrees, with companies like Apple, Google, and IBM dropping their degree requirements. Alternatives like bootcamps and 21st-century trade schools such as On Deck, ShiftUp, and other models tempt students away from four-year degrees. These dynamics have made it more socially acceptable to leave school early (thanks, Steve Jobs!). Degrees are important, especially if you want to work in academia — but in most cases, your skills and who you know trump where you learned them, or even whether you finished your degree.
Second, retention is not the most equitable metric. It rewards schools that enroll the student from a seven-generation lineage of college-goers, propped up by hundreds of dollars in tutoring; those students are less likely to drop out. Schools are penalized for enrolling students at higher risk of dropping out, who may need side jobs to support their families or to care for siblings and parents.
Don’t get me wrong — all schools have an obligation to ensure the success of all students and need to provide robust programs and staff. For that, retention is an effective metric. But perhaps a better measure of a college’s ability to deliver life-changing education is social mobility.
2. Social Mobility
US News determines this metric by the graduation rate of Pell Grant recipients and by how closely it matches the graduation rate of their more affluent, non-Pell classmates. (As a review, Pell Grants are federal awards given to students with great financial need.) These metrics are a good proxy for the strength of a college’s auxiliary support: resources for first-generation students, students struggling academically, and so on. What’s missing from the US News calculation is the initial cost of the school…
The other factor to consider is the cost of living in the college town. The cost of housing alone is enough to make you cry.
Costs should be included in the Social Mobility college ranking calculation.
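As a rough illustration of the arithmetic, here is a minimal sketch of a cost-aware figure that could fold into the social mobility calculation. All the numbers, and the choice to add local living costs on top of net tuition, are hypothetical assumptions:

```python
def net_cost_per_degree(sticker_tuition: float,
                        avg_financial_aid: float,
                        annual_living_cost: float,
                        years_to_graduate: float) -> float:
    """Estimated total out-of-pocket cost of a degree: sticker tuition minus
    average aid, plus local cost of living, over the typical time to finish.
    All inputs are hypothetical per-year figures."""
    annual_net = (sticker_tuition - avg_financial_aid) + annual_living_cost
    return annual_net * years_to_graduate

# Two schools with identical Pell graduation gaps can differ wildly in cost:
print(net_cost_per_degree(10_000, 4_000, 10_000, 4))   # 64000.0  (cheap town)
print(net_cost_per_degree(55_000, 20_000, 23_000, 4))  # 232000.0 (pricey city)
```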
3. Peer Assessment Score
This US News metric is derived “by surveying presidents, provosts and deans of admissions … at institutions in the school’s ranking category.” Why are these administrators the best people to ask? I run a Bachelor’s-degree-granting college, and I spend no time evaluating the work of other schools.

Further muddying this metric is the discordance between the measures of success for a student versus an administrator. Student success means earning a remarkable education and landing a job. But to an administrator, a student’s success is just one of many priorities. Administrations also earn revenue by taking a cut of research grants and through interest earned on endowment investments (another thing entrenching our system). An administrator tasked with evaluating a peer school is likely more aware of the competitor’s grant-pulling star faculty than of its in-class experience.

This prioritization of research can harm students. When I was a student at the University of Michigan, the economics department hired a “star” TA who brought in a $1M grant to research happiness, yet he frequently showed up unprepared for lecture. Another TA in his cohort (who didn’t rake in grant money) was such an effective teacher that the department moved his recitation to a lecture hall four times the size so other sections could join.

I don’t trust administrators’ opinions of other institutions because of these mixed incentives. And trust me — I am an administrator myself!
A better metric might come from Rate My Professor or another direct measure of student satisfaction.
4. Faculty Resources
“U.S. News uses five factors … to assess a school’s commitment to instruction: class size (8%), faculty salary (7%), faculty with the highest degree in their fields (3%), student-faculty ratio (1%) and proportion of faculty who are full time (1%)”
- Class size index & student-faculty ratio: remote instruction is decreasing the importance of this metric. Instructors can run large Zoom rooms with small breakouts led by TAs, making courses at once more intimate than a lecture hall and more effective pedagogically. Today’s ideal course involves more than just faculty-student interaction: it’s a mix of asynchronous and synchronous instruction, curriculum, tutorials, tailored feedback, lectures, class-time use, project and peer learning, and other online teaching tools. These are, of course, hard to measure. A better proxy could be Rate My Professor reviews.
- Faculty compensation: US News’ research shows that faculty compensation correlates with student success. But maybe that is just schools with greater resources attracting students with more resources, students who are predisposed to succeed. Or maybe faculty are simply compensated more because their salaries are tied to grant funding.
- Percent of faculty with a terminal degree in their field: would having college dropout Mark Zuckerberg on the faculty harm a ranking? This metric, again, advantages schools with an academic focus, leaving behind trade schools and coding schools whose instructors are perfectly qualified (some even more qualified because they come from industry). An equally important metric is “percentage of faculty with industry experience in their taught field within the previous five years.” Using both metrics may be the way to go; beyond that, I can’t think of a better measure of faculty quality.
- Percent of faculty that is full time: what about the tech entrepreneur who teaches one class in the evenings? Or the state senator who teaches one course a year? Part time doesn’t mean lower quality, or fewer faculty (three part-timers might equal one full-timer). The best schools in the future will create pathways for industry professionals and companies to teach students. There are also many cases where more part-time faculty could be better: instead of two full-time faculty, you might have one full-timer and three part-timers for the same price, offering more engagement and richer feedback for students.
- Financial resources per student: this is a good proxy. I’d love to see how much of that spend goes to programs that actually have a direct impact on students.
My biggest frustration: this metric overemphasizes the importance of faculty.
As schools move online, students have the power to cobble together more current and relevant resources than any static textbook. A medical student, for instance, could get a better sense of the future of her profession by compiling an ever-evolving curriculum of healthcare-technology Medium articles, newly published research, thought-leader Twitter threads, and free Coursera courses. The same goes for anthropology, sociology, and many other disciplines. Higher education should focus on creating empowered students ready to self-direct their own learning in this way.
The major value of higher education, then, should be to create a student culture. Are students supportive and collaborative? Are there structures for cross-pollinating ideas and innovating? Colleges underestimate their potential to influence these aspects of learning — aspects that become increasingly hard to foster online.
Faculty are no longer gatekeepers of knowledge. They need to give students the keys to play in the garden.
5. Student Selectivity
Most of this score is determined by students’ performance on the ACT or SAT. These tests, however, have drawn a flurry of lawsuits and scrutiny for benefiting wealthy students who can afford tutors to raise their scores. For this reason, the test component should be dropped. A better metric could be some sort of “average portfolio” of students (number of clubs, projects, publications, etc.); a rough sketch of how that could be scored follows below. Many schools are shifting to a portfolio-style portal where students highlight their work products, often made accessible to employers (as is the case at Make School). Capturing this data will also draw employment partners who could hire students.
I do like the second metric, “students in the top 10% of their high school class”; this is a good proxy for the ambition and rigor of classmates.
Thankfully, acceptance rates were dropped this past year. They should stay off.
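As for the “average portfolio” idea, a minimal sketch of how it might be scored is below. The categories and weights are entirely my own assumptions, not any published methodology:

```python
# Hypothetical portfolio scoring -- categories and weights are illustrative.
PORTFOLIO_WEIGHTS = {"clubs": 1.0, "projects": 2.0, "publications": 3.0}

def portfolio_score(student: dict[str, int]) -> float:
    """Weighted count of one student's portfolio items."""
    return sum(PORTFOLIO_WEIGHTS.get(kind, 0.0) * count
               for kind, count in student.items())

def average_portfolio(students: list[dict[str, int]]) -> float:
    """School-level metric: mean portfolio score across students."""
    return sum(portfolio_score(s) for s in students) / len(students)

students = [
    {"clubs": 2, "projects": 3, "publications": 0},  # score 8.0
    {"clubs": 1, "projects": 5, "publications": 1},  # score 14.0
]
print(average_portfolio(students))  # 11.0
```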
6. Average Alumni Giving Rate
This, again, favors schools whose graduates tend to have more money to give. The metric likely says more about how effective the development office is at sourcing money than about whether alumni actually think favorably of their college.
7. Graduate Indebtedness
This metric clearly matters to students. But schools without wealthy students who can self-fund their education get a lower rank. A more equitable metric would be the average cost per student.
New Rankings
Coincidentally, the college rankings are written by another Morse: Robert Morse. Since my name is Dan Morse (no relation), I’m going to name my ranking system “The Other Morse’s College Ranking”.
Here is my summary of a college ranking system that is more equitable, less entrenched, and truer to the student experience.
The Other Morse’s College Rankings:
- 👩🏽‍🔬 Student career outcomes — employment and/or grad-school enrollment rates and salaries, inferred from LinkedIn data six months after graduation and from the college’s alumni surveys. This is a lagging indicator of the effectiveness of pedagogy and career services.
- 😃 Student satisfaction — measured from data sourced from the college, third-party student surveys, and Rate My Professor. This is a lagging indicator of students’ subjective experience and school culture. It can also reveal whether the school’s marketing matches the actual experience.
- 💸 Resources used per student — money put directly toward education and auxiliary programs benefiting students. This is a leading indicator of school-provided opportunities and support for students.
- 📈 Social mobility & finances — average total cost per student to complete the program: standard tuition minus average financial aid, adjusted for the local cost of living. This is a more accurate indicator of what students actually pay.
- ⭐ Student engagement — how many students participate in clubs, extracurriculars, and mentorship, and make connections with alumni.
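To show how these five metrics could roll up into a single rank, here is a minimal sketch that min-max normalizes each metric across schools and averages them. The equal weights, the metric names, and the sample numbers are all my assumptions, not a finished methodology:

```python
# Composite score sketch for the five metrics above. Equal weighting and
# min-max normalization are illustrative assumptions.
HIGHER_IS_BETTER = {"career_outcomes", "satisfaction",
                    "resources_per_student", "engagement"}
LOWER_IS_BETTER = {"net_cost"}  # social mobility & finances

def normalize(values: dict[str, float], higher_better: bool) -> dict[str, float]:
    """Rescale one metric to [0, 1] across schools, flipping cost-like metrics."""
    lo, hi = min(values.values()), max(values.values())
    span = (hi - lo) or 1.0
    return {school: ((v - lo) / span if higher_better else (hi - v) / span)
            for school, v in values.items()}

def composite_scores(schools: dict[str, dict[str, float]]) -> dict[str, float]:
    """Average of normalized metrics; higher composite = better rank."""
    metrics = HIGHER_IS_BETTER | LOWER_IS_BETTER
    totals = {school: 0.0 for school in schools}
    for m in metrics:
        normed = normalize({s: data[m] for s, data in schools.items()},
                           higher_better=m in HIGHER_IS_BETTER)
        for school, v in normed.items():
            totals[school] += v / len(metrics)  # equal weights
    return dict(sorted(totals.items(), key=lambda kv: -kv[1]))

schools = {
    "School A": {"career_outcomes": 0.92, "satisfaction": 4.5,
                 "resources_per_student": 18_000, "engagement": 0.70,
                 "net_cost": 64_000},
    "School B": {"career_outcomes": 0.85, "satisfaction": 4.1,
                 "resources_per_student": 30_000, "engagement": 0.55,
                 "net_cost": 232_000},
}
print(composite_scores(schools))  # School A comes out on top here
```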
These new metrics will create fertile ground for innovation and equity across our country.