The PoPETs Experiment (2019)

After many months of conversations with members of our community regarding the health of the reviewing process in security and privacy conferences, we think it would be a good idea to repeat the famous NIPS consistency experiment.

For those who are not familiar, the NIPS experiment was carried out at the 2014 NIPS conference with the goal of quantifying randomness in the review process. The organizers split the program committee into two independent committees. 90% of the submissions were assigned to one of the two PCs, and the remaining 10% were reviewed by both, which made it possible to measure how consistent the committees were. The results were (not?) surprising: of the 166 papers reviewed by both PCs, the committees disagreed on the decision for roughly 25%.
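To make the measurement concrete, here is a minimal sketch of how a disagreement rate like the one above can be computed from the two committees' decisions on the shared papers. The decision lists are made-up illustrative data, not the actual NIPS 2014 outcomes.

```python
# Hypothetical sketch: compute the fraction of shared papers on which
# two independent program committees reached different decisions.
# The data below is invented for illustration only.

def disagreement_rate(decisions_a, decisions_b):
    """Fraction of shared papers on which the two PCs disagree."""
    assert len(decisions_a) == len(decisions_b)
    disagreements = sum(a != b for a, b in zip(decisions_a, decisions_b))
    return disagreements / len(decisions_a)

# Example: 8 shared papers, True = accept, False = reject.
pc1 = [True, True, False, False, True, False, False, True]
pc2 = [True, False, False, True, True, False, False, False]
print(disagreement_rate(pc1, pc2))  # → 0.375 (3 of 8 papers)
```

In the real experiment the shared set was of course much larger (166 papers), which is what makes the ~25% figure statistically meaningful.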

We feel that PoPETs is in a great position to repeat this experiment and gain insight into the randomness of reviewing in the security and privacy domain.

We will organize the experiment as follows:

To ensure that the duplication of papers does not impose too high a load on the PC, we have assembled a larger PC than in previous years. If the number of submissions grows too large, we will suspend the experiment or reduce the number of papers reviewed by both committees.
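The assignment step described above can be sketched as follows. This is an illustrative outline, not the actual assignment procedure used by the chairs; the 10% overlap fraction follows the original NIPS design.

```python
# Hypothetical sketch of the assignment step: each paper goes to one of
# two committees, while a random 10% sample is sent to both.
import random

def assign_papers(paper_ids, overlap_fraction=0.10, seed=2019):
    rng = random.Random(seed)          # fixed seed for reproducibility
    papers = list(paper_ids)
    rng.shuffle(papers)
    n_shared = round(len(papers) * overlap_fraction)
    shared = papers[:n_shared]         # reviewed by both PCs
    rest = papers[n_shared:]           # split evenly between the PCs
    half = len(rest) // 2
    pc1 = rest[:half] + shared
    pc2 = rest[half:] + shared
    return pc1, pc2, shared

pc1, pc2, shared = assign_papers(range(100))
print(len(shared), len(pc1), len(pc2))  # → 10 55 55
```

With 100 submissions, each committee reviews 55 papers, of which 10 are shared; the decisions on those 10 are what the experiment compares.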

Authors will be informed of the experiment upon submission so that they can withdraw if they wish. They will know whether their paper will receive two sets of reviews (as they will have to write two rebuttals anyway). We thank the authors for participating in the experiment and for tolerating the extra work caused by the duplicate reviews.

We hope that you find this experiment as exciting as we do. Please share your feedback and questions with us; we want to make this a great experience for everyone.

Looking forward to a successful PoPETs 2019!
Kostas and Carmela