Friday, June 14, 2013

Drowning in Reviews, Part 2

Should we crowdsource the review of research proposals as a way to relieve the overloaded peer review system? The question may seem absurd. After all, how would one even begin to implement such a scheme? The first problem is that releasing proposals to the commons would strip away any protection for the ideas they contain and hand them to competitors. Even if you solve that problem, you still have to identify a mechanism through which the crowd would participate, perhaps motivated by the chance to learn about untested hypotheses. More likely, proposals would simply come to include even more ready-to-publish material than they do now. Regardless, these obstacles make it seem unlikely that granting agencies are headed down this path anytime soon.

Or are they? Research grants are funded in part according to past accomplishment, as measured, for example, by the investigator's publication record. Principal investigators who publish in online journals are, in effect, already building capital through crowdsourcing. Moreover, citation indexes and other network measures already assign weights to the impact of published work, whether it appears through traditional or open-access channels, and those weights are used by all sorts of parties interested in the so-called impact of their investments. Renewal proposals could thus, in principle, be awarded based on how the crowd has weighed in. The trouble is that the crowd is larger in some fields than in others, so such a funding mechanism would tend to make large subfields even more crowded. Therein lies the problem with crowdsourcing reviews: whether directly or indirectly, the sparsely populated research areas of today might not receive the attention they need to stay alive to solve tomorrow's problems.
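To make the citation-index idea concrete, here is a minimal sketch in Python of the h-index, one standard crowd-derived weight that a funder could, in principle, consult. The citation counts are invented for illustration; nothing here reflects any real investigator or any agency's actual formula.

def h_index(citations):
    """Return the largest h such that h papers have at least h citations each."""
    counts = sorted(citations, reverse=True)
    h = 0
    for rank, cites in enumerate(counts, start=1):
        if cites >= rank:
            h = rank
        else:
            break
    return h

# Hypothetical citation counts for one investigator's papers (illustrative only).
papers = [42, 17, 9, 6, 3, 1, 0]
print(h_index(papers))  # prints 4: four papers with at least 4 citations each

A metric like this is exactly the kind of crowd signal already folded into funding decisions indirectly, which is why its field-size bias matters.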

