Misperception of incentives for publication
There's been a lot of conversation lately about negative incentives in academic science. A good example is Xenia Schmalz's nice recent post. The basic argument is that professional success comes from publishing a lot and publishing quickly, while scientific values are best served by doing slower, more careful work. There's perhaps some truth to this argument, but it overstates the misalignment between scientific and professional incentives. I suspect that many people believe quantity matters more than quality, even when the facts are the opposite.
Let's start with the (hopefully uncontroversial) observation that the number of publications will be correlated, to some degree, with scientific progress. That's because, for the most part, if you haven't done any research you're not likely to be able to publish, and if you have made a true advance it should be relatively easier to publish.* So there will be some correlation between publication record and theoretical advances.
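To make that concrete, here's a minimal toy simulation (my own illustration with made-up parameters, not anything measured): if stronger work is somewhat easier to publish but luck also matters, publication counts come out positively, yet far from perfectly, correlated with underlying quality.

```python
# Toy model: publication count as a noisy signal of scientific progress.
# All numbers here are invented for illustration.
import random

random.seed(1)

def simulate_candidate():
    quality = random.gauss(0, 1)  # hypothetical "true" scientific contribution
    # Better work is somewhat easier to publish, but noise (luck, field, timing) matters too.
    pubs = max(0, round(3 + 1.5 * quality + random.gauss(0, 2)))
    return quality, pubs

def corr(xs, ys):
    """Pearson correlation."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / n
    vx = sum((x - mx) ** 2 for x in xs) / n
    vy = sum((y - my) ** 2 for y in ys) / n
    return cov / (vx * vy) ** 0.5

qs, ps = zip(*[simulate_candidate() for _ in range(10_000)])
print(f"corr(quality, publications) = {corr(qs, ps):.2f}")  # positive, well below 1
```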
Now consider professional success. When we talk about success, we're mostly talking about hiring decisions. Though there's something to be said about promotion, grants, and awards as well, I'll focus here on hiring.** Getting a postdoc requires the decision of a single PI, while faculty hiring generally depends on committee decisions. It seems to me that many people believe these hiring decisions come down to the weight of the CV. That doesn't square with either my personal experience or the incentive structure of the situation. My experience suggests that the quality and importance of the research is paramount, not the quantity of publications. And more substantively, the incentives surrounding hiring also often favor good work.***
At the level of hiring a postdoc, what I personally consider is the person's ideas, research potential, and skills. I will have to work closely with this person for the next several years, and the last person I want to hire is someone sloppy and concerned only with career success. Nearly all postdoc advisors that I know feel the same way, because our incentive is to bring in a strong scientist. When a PI interviews a candidate for a postdoc, they talk with the person about ideas, listen to them present their own research, and read their papers. They may be impressed by the quantity of work the candidate has accomplished, but only when that work is well done and on an exciting topic. If you believe that PIs are motivated at all by scientific goals (and perhaps that's a question for some people at this cynical juncture, but it's certainly not one for me), then I think you have to believe that they will hire with those goals in mind.
At the level of faculty hiring, the argument is similar. I have never sat on a hiring committee whose actions or articulated values were consistent with finding the person with the longest CV. In fact, the hires I've seen have typically had fewer publications than competing candidates. What we are looking for instead is the most exciting, theoretically deep work in a particular field. In the committees I've been on, we read people's papers and argue about them in depth, discussing whether we got excited, whether they were well written, or whether they made us fall asleep. Could we read more? Definitely. But we do read, and that reading is the basis for our decision-making. That's because the person we hire will be our colleague, will teach classes for our students, and will be our collaborator. Again, the incentives point toward quality.****
A critic here could argue that the kind of exciting work we respond to is more often false or wrong. I'd reply that evaluating the soundness of scientific work is precisely what we are trained to do when we watch a talk or read a paper by a candidate. Are there confounds? Are the constructs well identified? Are the measures good? Is the work statistically sound? These are exactly the kinds of questions I and others ask when we see a job talk. When was the last time you walked out of a talk and said, "that research was terrible, but I love that there was so much of it"? Quality is a prerequisite.
Now, the critical point about misperceptions: hiring decisions are (often) made by people with deep stakes in the outcome, e.g. the potential postdoc advisor or future colleagues, while observers of the decision almost always have less at stake. So whatever level of engagement the original decision-makers have with the substance of the research (reading papers, reading the literature for context, asking experts), observers will have less. But observers can still see the CV, which will have some correlation with the actual record of achievement. Hence, observers will likely be biased to use the knowledge they have (the number of publications) to explain the PI's or committee's decision, even if that decision was made on the basis of independent, causally-prior criteria, namely the actual quality of the work.
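Here's a quick sketch of that inference trap (again purely illustrative, with invented numbers): even if a committee hires on assessed quality alone and never glances at the CV, the hired group will still have more publications on average, so an observer who sees only CVs can easily misread the cause of the decision.

```python
# Committee hires purely on assessed quality; publication count is never consulted.
# Yet publication count still "predicts" hiring, because both track quality.
import random

random.seed(2)

hired_pubs, rejected_pubs = [], []
for _ in range(5_000):
    quality = random.gauss(0, 1)
    pubs = max(0, round(3 + 1.5 * quality + random.gauss(0, 2)))  # CV, unseen by committee
    assessed = quality + random.gauss(0, 0.5)  # committee reads the actual work
    if assessed > 1.0:                         # hire on substance alone
        hired_pubs.append(pubs)
    else:
        rejected_pubs.append(pubs)

print(f"mean pubs, hired:    {sum(hired_pubs) / len(hired_pubs):.1f}")
print(f"mean pubs, rejected: {sum(rejected_pubs) / len(rejected_pubs):.1f}")
# Hired candidates average more publications, so an outside observer who sees
# only CVs could conclude, wrongly, that the count drove the decision.
```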
In sum: from the outside, publication numbers look like the driver of professional success, but that is, at least in part, because CVs are easy to observe and scientific progress is hard to assess. In many cases, decision-makers know more and care more about the candidate's actual work than external observers do, and so they tend to decide more on the substance.
Could we do better? Of course. There are plenty of biases and negative incentives! And we need to work to decrease them. For example, there was a recent Twitter discussion of "n-best" evaluations, in which committees consider only a candidate's n best papers. Such evaluations might help committees focus more explicitly on reading a few papers in depth and assessing their impact. What I've tried to argue here, though, is that counteracting the perception that quantity matters more than quality may be just as important. Quality really does matter; it's a shame more people don't know that.
---
* I'm not trying to suggest that scientific publication is perfect. It isn't. I'm not even arguing here that it's unbiased. I'm only claiming that publication record carries some signal about scientific success. Hopefully that shouldn't be a controversial claim, even for people who are quite skeptical of our current publication model.
** Actually, on this model, grants and awards might be much more biased by CV weight, since A) the consequences for the person doing the granting/awarding are more limited, and B) they are less likely to be expert in the area. And to the extent that these grants and awards are weighed in hiring decisions, this could be an additional source of bias. Hmm...
*** There's plenty to say here about people's ignorance of what good work is. That's a problem! But let's assume for a second that at least someone knows what good work looks like.
**** I actually think it's more or less a threshold model. If a candidate has more than N papers (where N is small), then the committee's question becomes, "is the work solid and exciting?" If fewer than N, that typically means the person is not far enough along for the committee to evaluate their contributions.