
Friday, June 14, 2013

Drowning in Reviews, Part 2

Should we crowdsource the review of research proposals as a way to fix the peer review system? This question may seem absurd. After all, how does one even begin to implement such a scheme? The first problem is that releasing proposals to the commons would strip away any protection for the ideas they contain and hand them to your competitors. Even if you solve this problem, you then have to identify a mechanism through which the crowd would participate, perhaps motivated to learn about untested hypotheses. Likely it would mean that proposals include even more ready-to-publish material than they do now. Regardless, these obstacles make it seem unlikely that granting agencies are headed down this path soon.

Or are they? Research grants are funded in part according to past accomplishment as measured, for example, by the publication record of the investigator. Those principal investigators who publish in online journals are building capital through crowdsourcing. Moreover, through citation indexes and other network measures, weights on the impact of the work—whether published through traditional or open access mechanisms—are already being assigned and used by all sorts of parties interested in the so-called impact of their investments. Thus renewal proposals could, in principle, be awarded based on how the crowd has weighed in. The trouble is that the crowd is larger in some fields than others, and such a funding mechanism would tend to make large subfields more crowded. Therein lies the problem with crowdsourcing reviews: whether directly or indirectly, the sparsely populated research areas of today might not receive the attention necessary to remain alive to solve tomorrow's problems.


Wednesday, June 12, 2013

Drowning in Reviews, Part 1...


During the past few weeks, I've reviewed over twenty grant proposals for three different agencies and several journal articles. It's enough work that it reminds me of Yuan Lee's advice: "you can read articles or you can write them; it's your choice." The problem is that we're not paid to review articles or grants, but the system of peer review collapses if we don't. That is, the social contract among scientists rests on all of us volunteering as reviewers. As a rough measure, we should review in proportion to what we submit. Since the reviewing task is not blind to the editors or the grant program officers, we build some kind of social capital that may affect the reviewing process of our own submissions. However, the system is presumably "fair" because other reviewers aren't aware of our contributions to the reviewing process. So, as in the tragedy of the commons, not all scientists are motivated to contribute their fair share. This, in turn, has driven journals to create mechanisms of recognition for reviewers—like advisory board memberships or awards. But the scofflaws win because they spend more time writing and publishing.

One solution is to publish nearly everything, just like a blog, and let the crowd decide what's important. That's what the arXiv does. The trouble is that it creates an enormous volume that could leave everyone drowning in articles all the more. Another solution, taken by many online journals like PLoS ONE, is to review articles against a minimal standard, such as that they be valid and novel. This still leads to article creep, but perhaps not much more than what we are already seeing in standard journals. However, if the validation is crowdsourced without being manicured in some way, there is the danger that it will exacerbate fads and obscure advances outside of them. So far I have published in traditional journals, and paid my dues by just keeping my nose above the waterline of reviews! But is the grass greener on the other side?