For most people, early fall brings happy thoughts of a new school year, cooler temperatures and the return of an infamous pumpkin-spiced beverage. For those of us in higher education, this season also brings the pain and anguish that can only come with the annual release of college rankings.
Few things unite education policy experts quite like a shared hatred of lists that suggest there’s a “best” college or university. They rightly complain that the methodology includes — or excludes — the wrong components and quickly point out that the term “best” is highly subjective. They’ll also probably tell you that rankings rarely, if ever, drive funding or accountability, and that students almost never use them to make decisions about which schools they apply to.
So it’s surprising, when rankings are released, how quickly university marketing departments trumpet where they land. I mean, if you’re number one or two, do you really need to tout that? If you’re number 37 or 53, do you really want to tout that?
The quest for better rankings is a quest to find a better way to flag good schools and steer clear of the “bad” ones — because suggesting that there’s some meaningful difference between being ranked fifth and sixth is just silly. Is there a way to give schools a quantitative tool to tout how great their offerings are, all while still giving consumers meaningful data to make thoughtful enrollment decisions?
How about crowdsourcing former students’ quality reviews of different aspects of the college experience and aggregating them into something that consumers will value and use?
Think Yelp, but for higher education.
Imagine a website where schools are measured on six to eight indicators that rate the students’ experiences, rather than a researcher-selected combination of inputs or outputs. Think things like affordability, availability of night/weekend offerings, availability of student support services like childcare or tutoring, learning experience, overall customer service and job placement.
Obviously, only currently enrolled students with a school-issued email account would be allowed to rate their school, and they could be given the flexibility to adjust their ratings continuously until they are no longer enrolled. Have a bad experience? Let students leave a gripe, but give institutions the ability to respond to those comments publicly.
In the end, a Yelp-like solution benefits both institutions and prospective students. Colleges still get composite and individual ratings to tout (we’re a 4.9-star institution for affordability!), while students and families get access to the kind of practical information that actually drives college choices.
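The aggregation described above is simple to sketch. The snippet below is a minimal illustration, assuming a hypothetical data shape in which each review maps rating categories (affordability, support services, job placement, and so on) to a 1–5 star score; the function names and example numbers are invented for the sketch, not taken from any real system.

```python
from statistics import mean

# Hypothetical review records: each maps rating categories to 1-5 stars.
reviews = [
    {"affordability": 5, "support_services": 4, "job_placement": 4},
    {"affordability": 5, "support_services": 3, "job_placement": 5},
    {"affordability": 4, "support_services": 4, "job_placement": 4},
]

def category_ratings(reviews):
    """Average each category's stars across all reviews that rated it."""
    categories = {cat for review in reviews for cat in review}
    return {
        cat: round(mean(review[cat] for review in reviews if cat in review), 1)
        for cat in categories
    }

def composite_rating(reviews):
    """Overall score: the mean of the per-category averages."""
    return round(mean(category_ratings(reviews).values()), 1)

ratings = category_ratings(reviews)   # e.g. {"affordability": 4.7, ...}
overall = composite_rating(reviews)   # e.g. 4.2
```

A real site would of course layer on verification, weighting and spam filtering, but the touting use case ("we're a 4.7-star institution for affordability!") really is this straightforward.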
More than anything, it’s just a less controversial and more reliable measurement. Who are you more apt to trust: a researcher’s opinion on what a good college is, or the aggregated experiences of thousands of current and former students, captured in a simple five-star rating mechanism the public already knows and uses everywhere else?
Obviously, any solution will still have its own set of challenges to manage. Shoppers care about a wide range of service experiences, which means there will still be critics who feel the “right” categories are not being included. We also know there’s likely to be selection bias in the results, since people who have negative experiences are more likely to leave reviews and complete ratings like these.
These are real issues, but they are not insurmountable, if for no other reason than that Yelp already exists and consumers use it. For all the institutionally collected data that finds its way into the public domain, from magazine rankings to the U.S. Department of Education’s own College Scorecard, the fact that people overwhelmingly prefer to complain about where those tools fall flat says a lot about how impractical existing rankings and ratings really are.
Let’s create tools that allow institutions to tout the quality and value they provide, all while moving beyond being told what the best institution is. It’s time to give stakeholders the right data to make that distinction themselves.