How social media pressure is watering down higher education analysis
Scroll through any education outlet’s news feed and you’ll likely read about a survey asking students about enrolling in college or paying for college or staying in college or even dropping out of college.
I’d also be willing to bet the survey results it describes — tucked into a longform piece about higher education’s future — suggest the entire industry is teetering on the brink of catastrophe.
As someone trained in research methods, I can tell you that a disproportionate share of these hastily assembled surveys are poorly designed. Some questions are badly worded, others are leading, and sampling sometimes seems to be forgotten altogether.
I once saw survey results showing students overwhelmingly describing their loan debt as “crushing.” Only in the fine print could a reader discover that every respondent had opted in through a website that actively advocates for student loan cancellation.
The ballooning of low-grade analyses isn’t coming primarily from scholars. Academic research follows well-defined steps that take time, and peer review slows the process further. Instead, the flood comes mainly from advocacy groups and education-related businesses that often lack professionally trained research staff.
In the rush to be first as the world moved online during the pandemic, there were big misses. Some surveys predicted fall enrollment would crash, until it turned out it didn’t (unless you were a community college). Others claimed students would rebel against paying brick-and-mortar tuition rates for online instruction, until it turned out they didn’t do that, either.
Then there are the persistent myths, like loan balances that supposedly break borrowers’ backs, even as research shows that defaulters often carry the lowest balances, not the highest. Billions of federal grant dollars are reported as “left on the table” until you dig into the math and realize the calculations wrongly assume that every high school graduate in America would go to college (in fact, only about 69 percent do).
Social media amplifies and rewards eye-catching data, and people re-share content with mixed intentions. Social media also tends to democratize bad analysis: individuals with large followings can share distorted facts, and even outright falsehoods, and see those opinions treated as fact in public discourse, even after questions about the data’s validity come to light.
To be clear, there is good data out there and experts know the sky isn’t falling on the higher education sector. And none of this is to say higher education doesn’t have some very real and very pressing problems: Too many students are starting college but not finishing, and the price families must pay is absolutely a burden for the folks who need the most support.
But bad data (and bad analysis) does shape consumer choices and policymakers’ actions, and that’s a big problem.
How many students today see findings like these and wrongly think higher education is a scam or too expensive and opt out of enrolling?
And it’s not just students and families. How many schools adjusted their recruiting or financial aid discounting last fall because they believed students were unwilling — or unable — to show up?
What about members of Congress who actively support hundreds of billions of dollars of student loan forgiveness, when studies increasingly show current proposals would overwhelmingly benefit the wealthiest Americans?
At the end of the day, the biggest losers are students and good data.
The arms race to shock and awe on social media only hurts colleges’ efforts to find and retain best-fit students. It will continue to hinder consumer decision-making for those who probably need higher education most.
Unfortunately, it’s a practice that’s not going away anytime soon. We are conditioned to amplify seemingly shocking content shared by people whose opinions we value or believe can be trusted. Whether out of convenience or sincere belief, we lean remarkably far toward assuming that whoever shared something first has vetted its accuracy.
A good example: a recent piece on forgiving student loan debt reports that the average student loan payment today is $393 and links not to the source, but to another article from 18 months earlier that lists the same $393 payment. That article also doesn’t link to the source, but to yet another article citing the same $393 figure, which thankfully and finally does reach the actual source: the Federal Reserve.
Of course, the story doesn’t end there. That eye-catching $393 average monthly payment? The very same sentence in the source also reports that the median monthly payment is just $222.
Combating this is going to be an enormous challenge, but not an insurmountable one.
We researchers have an ethical responsibility to publicly call out bad data and bad analyses, even if it will almost always occur after the fact.
Consortia of experts serving as fact-checkers would do much good as a first stop for journalists barraged with requests to highlight these results.
Highly visible labels on posts, flagging for lay readers that experts dispute what’s being said or reported, would be just as valuable.
I’m not the first person to observe problems like these. Sites like Twitter, for example, have begun nudging users to read articles before re-sharing them based on a headline alone.
Small steps like these matter, especially when we know slick graphics, catchy headlines and the promise of a good read about a higher education catastrophe are just a click away.