We're number 1, 2, 3, or 4, or at least we were in 2005, we're reasonably sure.
September 28, 2010 11:48 AM   Subscribe

After five years of number-crunching and methodological controversy, the NRC's rankings of US graduate programs were released today, three years after the target date and fifteen years since the previous ranking. Peruse the results at phds.org. Instead of numerical ratings, the NRC released two rankings, the "R-ranking" and the "S-ranking", each one with a wide error bar around it. Confused yet? Brian Leiter thinks the philosophy rankings "qualify as somewhere between 'odd' and 'inexplicable.'" The University of Washington's CS department says their ranking of 15-32 is "clearly erroneous." Obviously, the only appropriate response is to compute asymptotic formulae for the number of possible fuzzy rankings.
posted by escabeche (40 comments total) 15 users marked this as a favorite
 
So basically you need to have gone to graduate school in order to figure out which graduate school you should go to.
posted by exogenous at 11:53 AM on September 28, 2010 [2 favorites]


Anyone care to take a crack at which letter ranking the school of piracy earned?
posted by Christ, what an asshole at 12:19 PM on September 28, 2010 [1 favorite]


And it takes twice as long to finish!

(Gosh, other countries must think we're stoopid.)
posted by iamkimiam at 12:20 PM on September 28, 2010


My department is simultaneously ranked 1-7 for the NRC's mysterious regression-based quality rating, and 72-110 for student outcomes (i.e. graduating and getting a job). That makes sense.
posted by oinopaponton at 12:20 PM on September 28, 2010


I'm not really having a problem understanding this. It seems like a useful tool if you're willing to extract the information you want from it.
posted by lizarrd at 12:22 PM on September 28, 2010


Ha ha, academics now have their own version of the BCS to fight over.
posted by Xoebe at 12:22 PM on September 28, 2010 [6 favorites]


It seems like a useful tool if you're willing to extract the information you want from it.

That's assuming that it doesn't act in garbage in, garbage out mode, as UW CSE is claiming.
posted by grouse at 12:23 PM on September 28, 2010


Who cares what the Nuclear Regulatory Commission thinks of my library school?
posted by Pope Guilty at 12:29 PM on September 28, 2010 [1 favorite]


I almost thought the Inside Higher Ed article was a joke, but everyone knows academics aren't funny.
posted by goodglovin77 at 12:30 PM on September 28, 2010


It seems like a useful tool if you're willing to extract the information you want from it.

That's assuming that it doesn't act in garbage in, garbage out mode, as UW CSE is claiming.


Yeah, my bad. I should have made it clearer that I was assuming it wasn't doing that. I meant more that the system itself is workable, as opposed to the data being non-erroneous.

Although, yeah, that does seem worrying.
posted by lizarrd at 12:32 PM on September 28, 2010


Well, I can already tell you that there are some major factual errors in these reports.

For example, the in-state tuition listed for my university and program is incorrect (off by several hundred dollars), as are the fee rates and per-hour cost for part-time students. They aren't even LAST year's data, let alone this year's. They also appear to be conflating student information for the actual department with our combined track with another department. That is seriously skewing a lot of our numbers as well.

Also, some of the very core data is wildly out of date. They are pulling average publications-per-faculty figures from 2006 for my department. WTF? Some of the information is even more out of date than that!
posted by strixus at 12:38 PM on September 28, 2010


Anyone care to take a crack at which letter ranking the school of piracy earned?

Need you ask? It's R.
posted by Kirth Gerson at 12:39 PM on September 28, 2010 [2 favorites]


Also, some of the very core data is wildly out of date. They are pulling average publications-per-faculty figures from 2006 for my department. WTF? Some of the information is even more out of date than that!

The data collection was done in 2005 and 2006 and the ranking is meant to apply to that point in time.
posted by escabeche at 12:43 PM on September 28, 2010


escabeche, if that is the case, then why is some of the other ranking data being pulled from anywhere between 1999 and 2009? They seem to be very haphazard about their time frame. The tuition and funding data listed for my department is supposedly from 2009, while the job placement data for it are from 1999-2005. If you are going to rank a department by how it was in 2006, then only data from 2006 should be used.
posted by strixus at 12:48 PM on September 28, 2010


The quality of this data is embarrassing. Ranking academics using substandard and poorly executed processes is not going to end well.

Thanks for the link escabeche.
posted by onalark at 12:57 PM on September 28, 2010 [1 favorite]


I should say thanks for the well-researched post, this is the kind of post that makes Metafilter Best of the Web for me.
posted by onalark at 12:57 PM on September 28, 2010


Excellent job on the post. I've been poring over this data for the last week, and just got the entire report in my hands (we chairs got our departmental data a week early).

The report is full of absurdities in my field, although my department does quite well and wound up *exactly* where I would have predicted (and did predict) we would on the R ranking. Even the range is reasonable. The S ranking is totally nonsensical. But worse than that, the programs that rank highest in my field include several that in no way deserve to be there according to easily available objective data already searchable on phds.org. And plenty of very good programs were very obviously underrated. Plus the report compares apples and oranges rather absurdly even in my small field. Departments are really quite different, but lumped together in odd ways.

In my opinion, rating PhD programs in the humanities and social sciences, at least, depends on only two variables, and perhaps that's because the program I helped build would rank number one on both of them relative to our competition:

1) What percentage of your grads are working, and at what quality of jobs?
2) How much external grant money have you taken in relative to size?

Both of those are simple, hard numbers that are only good if everything else is working.

Over, done, we're number one. Booyah. What else do you need to know?
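
As a back-of-the-envelope sketch (the departments and all the numbers below are invented, purely to show the arithmetic of ranking on those two measures; this is not how the NRC or anyone else actually computes anything), a two-variable ranking along those lines could look like:

# Toy illustration of a two-variable ranking: placement rate (fraction of
# grads in relevant jobs) and external funding per faculty member.
# Every figure here is made up for illustration.
departments = {
    "Dept A": {"placement": 0.90, "funding_per_faculty": 250_000},
    "Dept B": {"placement": 0.75, "funding_per_faculty": 400_000},
    "Dept C": {"placement": 0.60, "funding_per_faculty": 150_000},
}

def rank_by(metric):
    """Return each department's rank (1 = best) on a single metric."""
    ordered = sorted(departments, key=lambda d: departments[d][metric], reverse=True)
    return {dept: pos + 1 for pos, dept in enumerate(ordered)}

placement_rank = rank_by("placement")
funding_rank = rank_by("funding_per_faculty")

# Combine the two ranks with equal weight; a lower combined score is better.
combined = sorted(departments, key=lambda d: placement_rank[d] + funding_rank[d])
for pos, dept in enumerate(combined, start=1):
    print(pos, dept, placement_rank[dept], funding_rank[dept])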
posted by fourcheesemac at 1:09 PM on September 28, 2010


Who cares what the Nuclear Regulatory Commission thinks of my library school?

So long as you librarians aren't building any nucular power plants upwind of me, it's all good.
posted by Devils Rancher at 1:20 PM on September 28, 2010 [2 favorites]


Hold on. Can anybody explain exactly why they're upset about these numbers?

It seems like most people are angry about the presence of error bars (gasp!), or that their Ivy League schools didn't perform particularly well under these criteria.

Fourcheesemac's got an excellent point too. If graduate programs are a means to an end (research results, employment, happiness), it makes the most sense to research how their students perform in these areas.

However, because grad school is alternately used as a cash cow or as a source of cheap (and indefinite!) labor for universities, we'll gloss over those facts.
posted by schmod at 1:22 PM on September 28, 2010


So long as you librarians aren't building any nucular power plants upwind of me, it's all good.

I can neither confirm nor deny the rumours regarding a nuclear power plant and public library near your house. Don't drink the coffee, though.
posted by Pope Guilty at 1:24 PM on September 28, 2010


Update: the Chronicle of Higher Education has posted an interactive tool for messing around with some of the NRC data.

fourcheesemac: One problem with relying too much on measures of external funding is that, for example, a social science department which is stronger in NSF-fundable areas will do better than one that's stronger on the history/philosophy side of the subject.
posted by escabeche at 1:27 PM on September 28, 2010 [1 favorite]


schmod, the data appears to be very poorly collected and potentially erroneous enough to be useless. See the criticism from the University of Washington CS department.
posted by onalark at 1:31 PM on September 28, 2010


I feel really dumb looking at this. I have no idea what the researchers are trying to say (scales, descriptive titles and axes would help), can't find my department (or figure out if my subsection is included in another group), clicked around for way too long to figure out how I could link to my school.

Maybe when I finish my PhD it will make sense (in either 3.7 or 5.6 years).
posted by hydrobatidae at 1:33 PM on September 28, 2010


Any criticism that the multivariate ranking is too complex is laughable. I can see arguing that the actual variables themselves are incorrect, but the complexity of having multiple criteria is entirely valuable and useful.

The ranking of large computer science PhD programs seems reasonable to me. Top 5 are CMU, MIT, Stanford, UIUC, and Princeton. I'd quibble that maybe Berkeley belongs above Princeton, and I'm surprised not to see UMich higher, but it's roughly correct. Anything more subtle requires digging into details, which is what this ranking is all about.
posted by Nelson at 1:35 PM on September 28, 2010


Who cares what the Nuclear Regulatory Commission thinks of my library school?


You'll understand why when you get a job at the Regenstein.

/UofC in-joke
posted by TheWhiteSkull at 1:36 PM on September 28, 2010 [2 favorites]


The data here are suspicious, the methods are inscrutable, and the panel themselves will not endorse their conclusions. However, my program does surprisingly well on this, so it must be correct.

I remember when I was looking at grad schools and visiting a couple that had accepted me. Sitting down with a professor from university X for lunch, she asked me where else I was looking.

Me: "Oh, university Y and Z."
Her: "Then you're definitely coming here."
Me: "Why is that?
Her: "Oh, we're higher in the such-and-such ranking this year."
Me: "Yeah, but only marginally. And I don't know how much stock to put in those anyway. I mean, you're taking dozens of variables there - who's going to be on the faculty, who's going to work with you, how well they're going to work with you, how much financial support you and the department can expect - and then they're pretending you can crunch all of that down to a single number that determines a course of action for everyone. I just don't think much of the such-and-such ranking. We ought to take all that stuff with a grain of salt."
Her: "But we're ranked marginally higher in a couple of those things."
Me: "... So, a bunch of things that don't reliably tell me anything useful are all telling me the same thing?"
Her: "Yes!"

I did not go to University X.
posted by el_lupino at 1:48 PM on September 28, 2010 [3 favorites]


An odd thing occurs to me.

In looking at the top bracket rankings for a few subjects, the order seems to be what one would expect. As one looks lower, the rankings get less ... predictable.

I wonder very much if somehow either the data or the methodology has been fudged to produce very desirable top-ranking results, with little or no attention to how those choices would affect the rankings of less brand-recognized schools and departments in various fields.
posted by strixus at 1:49 PM on September 28, 2010


Nelson, I suggest you read the University of Washington Computer Science link.
posted by onalark at 1:50 PM on September 28, 2010


Hold on. Can anybody explain exactly why they're upset about these numbers?


I've found them intriguing, not upsetting, but they are definitely odd in some cases. For instance, the "R"-based rankings and the "S"-based ones correlate pretty well in some fields and badly in others, or for some institutions; the same goes for the R-5 and R-95 numbers. (See, e.g., this in-depth University of Colorado-centric analysis.)

The NRC site summarizes this all pretty well, but basically the difference between the "R" and "S" rankings has to do with the relative weights assigned to each of a number of different metrics of faculty, student, or department success. The individual metrics are things like how many publications each faculty member has, what percentage have grants, how long students take to graduate, average entering GRE scores, etc, etc. So if you care about one of those things specifically, this data really can be ordered quantitatively (with the caveats that it's mostly from before 2006, plus whatever real errors crept in). The trick to assigning a more general ranking is in how much weight you give each of these partly-independent metrics of success. The "S" weights were basically determined by surveying faculty and asking them to rate how important each metric was in determining the quality of a program; for the "R" weights, the NRC team instead asked people to subjectively rate the quality of various programs in their field, and then did some kind of regression to assign weights to each metric that correlate well with the subjective rankings.

To take a specific example, the University of Colorado astrophysics program has an "R"-based ranking somewhere between 4 and 21 (5th and 95th percentile confidence interval) and an "S"-based ranking between 17 and 33. What, exactly, do you make of that? There were only 33 graduate astrophysics programs ranked, so depending on which of these you take you could either conclude that Colorado is great (only 11 programs had a "best" R-ranking of 4th or higher!) or terrible (it's the worst in something!). Mostly, I conclude that the notion of forming a one-dimensional ranked list is not terribly meaningful for astrophysics, but that seems like a disappointing conclusion after a study that took so long.
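
To see how a range like 4-21 can arise at all, here is a rough sketch with made-up programs, metrics, and weights (not the NRC's actual data or their resampling procedure): if you re-run a weighted ranking under many plausible weight vectors, each program ends up with a distribution of ranks, and reporting the 5th and 95th percentiles of that distribution gives an interval rather than a single number.

import random

# Invented data: each program has a value for each metric (e.g. publications
# per faculty, fraction of faculty with grants, median years to degree).
programs = {
    "Program A": [3.1, 0.70, 5.5],
    "Program B": [2.4, 0.55, 6.0],
    "Program C": [1.8, 0.80, 5.0],
    "Program D": [2.9, 0.40, 7.2],
}
metric_names = ["pubs/faculty", "frac. with grants", "years to degree"]
direction = [1, 1, -1]  # +1 if bigger is better, -1 if smaller is better

def zscores(values):
    """Standardize one metric across programs so weights are comparable."""
    mean = sum(values) / len(values)
    sd = (sum((v - mean) ** 2 for v in values) / len(values)) ** 0.5 or 1.0
    return [(v - mean) / sd for v in values]

def rank_once(weights):
    """Rank programs (1 = best) by a weighted sum of standardized metrics."""
    names = list(programs)
    cols = list(zip(*programs.values()))        # one tuple per metric
    std = [zscores(col) for col in cols]
    scores = {
        name: sum(w * d * std[m][i]
                  for m, (w, d) in enumerate(zip(weights, direction)))
        for i, name in enumerate(names)
    }
    ordered = sorted(names, key=lambda n: scores[n], reverse=True)
    return {name: pos + 1 for pos, name in enumerate(ordered)}

# Pretend each "respondent" gives slightly different weights; redoing the
# ranking under many plausible weight vectors gives a range of ranks.
random.seed(0)
rank_samples = {name: [] for name in programs}
for _ in range(500):
    weights = [random.uniform(0.5, 1.5) for _ in metric_names]
    for name, r in rank_once(weights).items():
        rank_samples[name].append(r)

for name, ranks in rank_samples.items():
    ranks.sort()
    lo = ranks[int(0.05 * len(ranks))]       # 5th percentile rank
    hi = ranks[int(0.95 * len(ranks)) - 1]   # 95th percentile rank
    print(f"{name}: ranked {lo}-{hi}")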

That said, there's enough wiggle room in these ratings for most institutions to spin them however they'd like.
posted by chalkbored at 3:14 PM on September 28, 2010


I don't know whether to be outraged or amused that my own spectacularly dysfunctional department, from which I am frantically trying to transfer, is ranked anywhere above dead last. I've managed to keep a number of prospective students from attending, but I can only do so much.
posted by LastOfHisKind at 4:07 PM on September 28, 2010


It seems like most people are angry about the presence of error bars (gasp!), or that their ivy league schools didn't perform particularly well under these criteria.


Did you read the articles? Even the people involved with putting the report together think the methodology is not the best way they could have done it.
posted by LobsterMitten at 4:16 PM on September 28, 2010


My graduate program is ranked 72-135 in its field (with a bullet!). I'm sort of amused by this, since the vast majority of the Ph.D.s from there belong to people who get actual academic jobs (even in the last few years) or go do some other sort of interesting job that is still relevant to their degree. Then again, it's known for being a pretty chill, low-pressure department (at least for the students; intra-faculty relations are something else entirely), which at times makes it come across as a little Not Serious Enough to outsiders.
posted by heurtebise at 4:42 PM on September 28, 2010


The "Outcomes" figures for my field (history) are pretty much useless, since they come from the height of the economic boom. Since then the job market has tanked completely, which has caused "blue-chip" programs to be more successful at landing the few positions that remain. Anyone choosing where to go based on these numbers would likely be making a bad decision.
posted by nasreddin at 4:46 PM on September 28, 2010


Why does phds.org require a username and password from me?
posted by phliar at 5:48 PM on September 28, 2010 [1 favorite]


fourcheesemac: One problem with relying too much on measures of external funding is that, for example, a social science department which is stronger in NSF-fundable areas will do better than one that's stronger on the history/philosophy side of the subject.

You say this like it's a bad thing, or like such variables couldn't be controlled for. Among the significant variables the NRC did not control for in my field, subdisciplinary offerings and the balance of subfields is the big one. My field has five major divisions. They accounted for one by breaking it out of the comparison (because it's not research-driven), and then curiously didn't bother to break out those departments (half a dozen or so) where one subfield has become its own department, let alone distinguish between departments strong in one or two or three subfields and those (like mine) that have relatively uniform strength across all four (which is a lot of work to maintain).

I'm of the belief that research which can't be externally funded needs to be replaced by research that can, or else identifying new sources of external funding needs to be first priority in those fields. As it is, universities maintain the non-entrepreneurial humanities departments in a very weak condition, dependent almost fully on the blessing of the administration and the transfer of wealth from professional and grant/royalty-funded programs.

Sure, colleagues say there is no hope of a major external funding increase in the humanities because the sources aren't there. But I'm a humanities professor whose research is funded by the NSF. It is indeed possible to recalibrate our research so that it does more directly useful things in the world, things people are interested in paying for.

A real ranking of grad programs in any field would try to identify the ones that are thinking ahead of the current malaise. The model is shifting rapidly, and I predict that on the basis of imperfect data such as this report, we are about to see a wave of humanities departments (and some social science departments) being closed or sharply reduced in size, especially at public universities. Top 10 or 20 programs at private (elite) universities won't be dropped as readily, but who do you think employs many if not most of our PhD graduates?

Hey, they said the New York Times would never feel the internet at its throat either.
posted by fourcheesemac at 7:50 PM on September 28, 2010 [1 favorite]


My department said that median debt at graduation was $0.


Bwhahhhhhhahahahha.
posted by k8t at 11:41 PM on September 28, 2010


I hope it wasn't the statistics department.
posted by smackfu at 6:04 AM on September 29, 2010


I'm of the belief that research which can't be externally funded needs to be replaced by research that can, or else identifying new sources of external funding needs to be first priority in those fields.

There is valuable knowledge that should be supported even if it happens that no external grantmaker wants to fund it. Knowing how to read ancient dead languages, etc. Why build a new structure where you have to convince some new agency/private group that those things are valuable? Universities are the way we support that kind of knowledge. It's not terribly costly compared to scientific research - the main thing we need is books, and happily universities are also the main way we maintain major libraries. There is a system that works for supporting basic humanities research; why build a new one that's less well-suited? I agree the humanities should be producing fewer grad students, but that's neither here nor there on the question of who should fund humanities research.
posted by LobsterMitten at 9:08 AM on September 29, 2010


The people saying they don't understand what the problem is need to read the linked articles. The starting data is quite error-prone, in mysterious and less-mysterious ways. (For instance, citation data comes from Web of Science, which has different degrees of quality and relevance in different fields; they chose fairly arbitrary heuristics to try to counter some of this.) I haven't heard of anyone who has actually looked at their department's data and hasn't found an error, really.

The calculation is completely inscrutable. People saying "oh, it's just a multivariate analysis" or whatever clearly haven't tried to figure out the precise details. Here's an overview of some of the problems with the analysis, by a statistician, written for a general audience; there's a bunch more issues raised in the Leiter Reports discussion and links.

In philosophy, where there is a real, reasonably useful ranking system (the Philosophical Gourmet Report) that is updated regularly, you can see clear and inexplicable mismatches between it and the NRC rankings (also discussed on the Leiter blog). I don't think anyone would expect an exact match between the rankings, but what you see is a few top programs ranked in the NRC system just wildly out of line (much too low) with their PGR rankings. In my field I can see a few variables that are orthogonal to quality skewing the rankings (basically, how many experimental vs. theoretical faculty a department has, which drastically impacts publication rate and grant availability, and also how well Web of Science covers the publication venues), with similar effects.

I'm not even upset at the particular result for me, as my department did spectacularly better than we would have imagined (I don't believe we had a ranking in the 1995 version, as the department was barely in place then). But to believe that these numbers have a reliable bearing on program quality is clearly a joke.
posted by advil at 2:20 PM on September 29, 2010 [1 favorite]


This whole thing has been blowing up in academic circles.

See Leiter's deranker and the links Leiter posted to yet more criticisms.
posted by onalark at 11:51 AM on September 30, 2010 [1 favorite]



