ACL 2010: The 48th Annual Meeting of the Association for Computational
Linguistics

Review form for LONG SURVEY papers

This review form is appropriate for papers that present a survey
of either an area of computational linguistics or an area that
is relevant to computational linguistics research.


APPROPRIATENESS (1-5)

Does the paper fit in ACL 2010? (Please answer this question in light
of the desire to broaden the scope of ACL.)

5 = Certainly.
4 = Probably.
3 = Unsure.
2 = Probably not.
1 = Certainly not.


CLARITY (1-5)

For the reasonably well-prepared reader, is it clear what area is
being surveyed and why? Is the paper well-written and well-structured?

5 = Very clear.
4 = Understandable by most readers.
3 = Mostly understandable to me with some effort.
2 = Important questions were hard to resolve even with effort.
1 = Much of the paper is confusing. 


ORIGINALITY (1-5)

Does this paper address an area that has not been adequately addressed
in previous survey papers?

5 = Novel: A high-quality survey of this area does not exist.
4 = Noteworthy: Surveys of this area do exist, but they are either
    outdated or this survey takes a different approach that is
    enlightening.
3 = Respectable: A nice survey of an area that provides some extra
    information not available in other survey papers.
2 = Marginal: Minor additions or improvements to previous surveys.
1 = Little to be gained from this survey paper.


COMPREHENSIVENESS / CORRECTNESS (1-5)

Does the paper provide a comprehensive overview of the area? Are all
of the most important contributions included? Does the paper
appropriately compare and contrast the different work? Is the survey
correct in its discussion of approaches, methodologies, etc.?

5 = The survey is thorough and manages to cover all of the most
    important work in the area. The survey correctly analyzes the
    different approaches and provides a critical analysis that is
    enlightening.
4 = Generally solid work, although there are some aspects of the
    survey that could be improved.
3 = Fairly reasonable work. The survey is good, but overlooks some
    important work in the area or is missing a strong critical
    analysis of the work that is described.
2 = Troublesome. The survey is very limited and does not provide a
    good overview of the area.
1 = Fatally flawed. The survey incorrectly describes the projects in
    the area.


IMPACT OF THE SURVEY (1-5)

How influential will the survey be? Will it serve as a useful resource
for other researchers? Will it be essential background reading for
someone just beginning research in an area for which it is relevant?

5 = A superb survey that will be read and highly recommended to
    others. Will be heavily cited.
4 = A useful survey for new researchers, but of little utility for
    those already working in the area. Will have a reasonable number
    of citations.
3 = Some impact on computational linguistics research but will not be
    viewed as important.
2 = Marginally interesting. May or may not be cited.
1 = Will have no impact on the field.


RECOMMENDATION (1-6)

There are many good submissions competing for slots at ACL 2010; how
important is it to feature this one? Will people learn a lot by
reading this paper or seeing it presented?

In deciding on your ultimate recommendation, please think over all
your scores above. But remember that no paper is perfect, and remember
that we want a conference full of interesting, diverse, and timely
work. If a paper has some weaknesses, but you really got a lot out of
it, feel free to fight for it. If a paper is solid but you could live
without it, let us know that you're ambivalent. Remember also that the
author has a few weeks to address reviewer comments before the
camera-ready deadline.

Should the paper be accepted or rejected?

6 = Exciting: I'd fight to get it accepted; probably would be one
              of the best papers at the conference.
5 = Strong: I'd like to see it accepted; it will be one of the
            better papers at the conference.
4 = Worthy: A good paper that is worthy of being presented at ACL.
3 = Ambivalent: OK but does not seem up to the standards of ACL.
2 = Leaning against: I'd rather not see it in the conference.
1 = Poor: I'd fight to have it rejected.


REVIEWER CONFIDENCE (1-5)

5 = Positive that my evaluation is correct. I read the paper very
    carefully and am familiar with related work.  
4 = Quite sure. I tried to check the important points carefully. It's
    unlikely, though conceivable, that I missed something that should
    affect my ratings.
3 = Pretty sure, but there's a chance I missed something. Although I
    have a good feel for this area in general, I did not carefully check
    the paper's details, e.g., the math, experimental design, or novelty.
2 = Willing to defend my evaluation, but it is fairly likely that I
    missed some details, didn't understand some central points, or can't
    be sure about the novelty of the work.
1 = Not my area, or paper is very hard to understand. My evaluation is
    just an educated guess.


RECOMMENDATION FOR BEST LONG PAPER AWARD (1-3)

3 = Definitely.
2 = Maybe.
1 = Definitely not.