Monday, March 14, 2016

Spinning the Wheel of Fortune: Some quasi-humorous consequences of endless conference deadlines..

Not too long ago, there were only a couple of real conference deadlines per year in AI. You would work on your papers through the year, submit them, wait for the reviews, and if those didn't work out, revise and resubmit for the next cycle, which was six months or a year away.

These days things have clearly changed, especially in AI, where a whole variety of conferences offer endless and sometimes overlapping deadlines.

There are many things that can be said about this brave new world, but I want to use this post to share a couple of quasi-humorous consequences..

[Withdrawal after author response]: The author response period for IJCAI-16 ended this Saturday, and we have been getting a steady trickle of mails from authors asking us to help them withdraw their papers. Interestingly, most of them seem to have suddenly realized a "lethal error" [sic] in their experiments and thus want to withdraw the paper, as urgently as possible! The fact that a new set of conference deadlines is around the corner (e.g. ACL on 3/18) is, of course, a mere coincidence.

Interestingly, submitting to another conference in the middle of the current conference's review period (perhaps because of poor reviews) is considered a violation; but if the authors first send a mail requesting withdrawal of their paper, it magically becomes "legal" (but, does it become ethical?..).


[Reviews ready before papers!]: On the night we sent the assignments to the 2000 program committee members, I knew I still had to fine-tune a couple of features of the IJCAI review form. However, as it had been an exhausting weekend, I thought I would do that in a couple of days and went to sleep. After all, reviewers would need time to read the papers, right? All I needed to do was finalize the review form before they started submitting reviews. Little did I know!

By the time I woke up from a brief 4-hour nap, my mailbox already had some 30 messages from EasyChair about newly submitted reviews! While scrambling to finalize the review form pronto, I was curious how this superhuman feat of near-instantaneous reviewing was possible. It turned out that several of those papers had been resubmitted from a couple of conferences whose cycles had just ended, and those papers happened to have overlapping reviewers! So reviewers can play the game as well as the authors: submit the same papers, get the same reviews! ;-)

To some extent the conferences are all complicit in this, given the excessive interest in touting the number of submissions a conference receives. After all, the bigger the denominator, the lower the acceptance rate, and thus the higher the perceived selectivity. Apparently the Ivy Leagues are not the only ones running this racket..
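
To make that arithmetic concrete, here is a minimal sketch (in Python, with purely hypothetical submission counts of my own invention) of how padding the submission pool with recycled papers drives the advertised acceptance rate down even when the number of accepted papers stays exactly the same:

    # Minimal sketch with hypothetical numbers: the accepted-paper count is fixed,
    # but adding recycled resubmissions to the pool shrinks the advertised acceptance rate.
    def acceptance_rate(accepted, submitted):
        return accepted / submitted

    accepted = 500                  # hypothetical number of accepted papers
    organic_submissions = 2000      # hypothetical "fresh" submissions
    recycled_submissions = 1000     # hypothetical papers spun around from other deadlines

    print(acceptance_rate(accepted, organic_submissions))                          # 0.25
    print(acceptance_rate(accepted, organic_submissions + recycled_submissions))   # ~0.17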

We collectively bemoan the quality of submissions and reviews. Maybe we need to put our money where our mouths are, and work on designing mechanisms that don't incentivize the wrong things.. For starters, I hope we start caring more about the impact of a conference (measured by citations, for example) than about its "selectivity".

Enough chit-chat--I hear another conference deadline approaching... time to give the wheel another whirl..

Rao

8 comments:

  1. Rao, at ECAI (European Conf on AI) we have decided to take up the issue of frequent deadlines. ECAI, with its April 15th deadline, will happily consider submissions that include not only a revised paper rejected from IJCAI, but also a reasoned response letter explaining what, and how, the revised paper addresses the IJCAI reviewers' concerns. Alas, we use a different conference management system (confmaster), so we could not simply pass the reviews along directly. But this is where things are heading -- conferences passing papers from one to the next, so that a paper is shepherded through the process of publication, improving with time until, perhaps, it is ready.
    This will reduce the load on reviewers and remove some of the randomness that is ever-so-present in conference reviews.

  2. Hello ECAI ;-)

    I was a big proponent of such a paper-passing system, and supported its trial between AAAI-14 and AAAI-15. It was also used between AAAI-15 and IJCAI-15. Despite this, I decided not to use it for IJCAI-16, partly because I wasn't convinced it was working right. The acceptance rates of such "revise-and-resubmit" papers have consistently been lower than the rate for the conference as a whole.

    Further, since it is just too hard to ensure any overlap in reviewers (given our current burning-man-style program committee formation; see my other post http://ijcai-16-pc.blogspot.com/2016/03/burning-man-mandala-ijcai-style-or.html ), the authors have also realized that the revise/resubmit option is not all that appealing, since it mostly gives additional ammunition to the reviewers ;-)

    Rao


  3. This comment has been removed by the author.

  4. Does Prof. Rao mean that just because a paper is rejected once, the authors are supposed to give it up and stop working on it? Most of your reviewers are very poor in technical knowledge. The negative reviews on my papers either have no technical substance, or they contain something technically wrong (I can prove that). If that is not enough, let me show how low your reviewers can stoop. I submitted a paper to AAAI 2016, and then submitted a revised version to the AI & Web track. Although the two versions are significantly different (you can ask the track chairs regarding this), the reviewer simply copy-pasted his review based on the title of the paper. And this is what the review says, both times: "The second section of the paper describing the derivation of the model according to the method of moments, is totally unclear to me. Although I have read it several times it doesn’t make sense to me at all. So no matter if it is correct or not it is probably very hard to follow for the average reader. Therefore, I must recommend to reject it in spite of the nice performance lifts reported in the evaluation section."

    I could understand the review if it happened once. But if the reviewer does not understand my paper, why is he bidding to review it again? It simply shows he is biased against my paper and wants to get it rejected by hook or by crook. Although I do not know the source of his bias, it is very likely that he is racially prejudiced. And this is going on without any repercussions; rather, I can see that Prof. Rao is encouraging these things.

    Replies
    1. This comment has been removed by the author.

  5. Sayantan -- I don't know anything about your specific paper, of course. However, my experience as a frequent reviewer is exactly the reverse. More than once I have reviewed a conference paper where I and the other reviewers gave detailed, specific suggestions for improvement; and then at the next conference I got the same paper to review, character-for-character identical, with some font fiddling to fix the page length, and with not even trivial misspellings corrected. - Ernie Davis

  6. Hello Ernest, my papers are not identical; the second version has a completely new section added to it. Also, the kind of review I posted the excerpt from has nothing constructive to add to my paper, except for the fact that the reviewer actually acknowledges the results. There are several different fields in AI/ML, and a reviewer may not understand a paper that is well beyond his area of research. Clearly the reviewer does not understand anything in my paper. But still the reviewer decides to literally stalk and hunt my paper down.

  7. I was looking for this type of post while writing an abstract for a conference paper. Your post helped me a lot.
