March 30, 2005

Don't give me an excuse

On his weblog, Steve Vinoski mentions some experiences that are too familiar for me to simply let pass. Like Steve, I have been reviewing a stack of research papers over the past two weeks (ICWS, HotDep), and just like him I am enjoying the good stuff and am very interested in helping the works that are "almost there" get better. But I was also once again surprised by the number of papers that don't even come close to being acceptable by any standard.

Last year, in a posting on "Evaluating systems papers", I wrote that I estimated that about 70% of the papers I vote to reject fail to meet the bar of communicating basic concepts to their intended audience. I also gave some advice about the things I look for in a systems paper. Steve quotes a few other sources of good advice.

Over the years I have become very good at spotting fake, bad, and incomplete papers. I think this skill is the first tool most experienced reviewers use to sift through their assignments. So Steve's generalized version of Armando Fox's advice resonates very strongly with me:

Good reviewers are overloaded and are looking for an excuse to stop reading your paper. Don't give them one.

Soon my next stack of papers (Middleware 2005) will arrive; please force me to read all of your papers by delivering very high-quality material.

PS. After this I won't be available for reviewing for a while, so no need to ask.

Posted by Werner Vogels at March 30, 2005 01:41 AM

Comments

I have a comment for the other side of the lens: a recommendation for reviewers, rather than authors.


As a program chair of a few conferences, I've occasionally gotten back feedback from authors saying a reviewer got a point wrong, or misunderstood, or couldn't possibly have meant to give a particular score. Sometimes they're right, and sometimes they're not. It would sure make life easier for both authors and program chairs if reviewers made sure their reviews are internally consistent. For instance, don't give something a score of "don't even think about accepting this paper" while you simultaneously give it decent marks for technical merit, originality, and so on, unless you also make it clear in the comments to authors what the "fatal flaw" in the paper is. Otherwise you're inviting the authors to scratch their collective head, and maybe probe the program chair to confirm that the review is correct as it stands.

Posted by: Fred on March 31, 2005 07:36 PM