During the past few weeks, I've reviewed over twenty grant proposals for three different agencies, along with several journal articles. It's enough work to remind me of Yuan Lee's advice: "you can read articles or you can write them; it's your choice." The problem is that we're not paid to review articles or grants, but the system of peer review fails if we don't. That is, the social contract among scientists rests on all of us volunteering as reviewers. As a rough measure, we should review in proportion to what we submit. Since reviewing is not blind to the editors or the grant program officers, we build a kind of social capital that may affect the reviewing of our own submissions. The system is presumably "fair," though, because other reviewers aren't aware of our contributions to the reviewing process. So, as in the tragedy of the commons, not all scientists are motivated to contribute their fair share. This, in turn, has driven journals to create mechanisms of recognition for reviewers, like advisory board memberships or awards. But the scofflaws still win, because they spend more of their time writing and publishing.
One solution is to publish nearly everything, just as a blog does, and let the crowd decide what's important. That's what the arXiv does. The trouble is that this creates an enormous volume, which could leave everyone drowning in articles all the more. Another solution, taken by many online journals such as PLoS ONE, is to review articles against a minimal standard, for example that the work be valid and novel. This still leads to article creep, but perhaps not much more than what we are already seeing in standard journals. However, if the validation is crowdsourced without being manicured in some way, there is the danger that it will exacerbate fads and obscure advances outside of them. So far I have published in traditional journals and paid my dues by just keeping my nose above the waterline of reviews! But is the grass greener on the other side?