Crackpot report bloomers ...

[BW08]
pdf R
War on tuples:

Following on from the Hilbertian work, the authors slip unannounced in to category theory (fst etc are not "usual" operators elsewhere, I think?). I have yet to meet anyone in this community who can speak knowledgeably about category theory, so a little more background is needed.

Software Language Engineering (SLE 2008), Chairs: Dragan Gasevic, Mark van den Brand, Jeff Gray.

[BBKW09]
pdf R
Rejecting a given formal proof:

My understanding of Isabelle/HOL is that it is a validation tool and not a real "proof". (An "expert" with level 3).

Testcom/FATES 2009, Chair: Manuel Nunez.

[BKW09]
pdf R
A masterpiece of well-phrased inconsistency:

The problem tackled by the paper is interesting. Indeed, the lack of proper semantics for null (or any other special value, for that matter) can be a hassle [...]. The paper [...] presents a nice exercise in formalizing OCL semantics, building on the previous work of the authors on the subject. [...] My main concern is that very little conclusion can be drawn from the work [...]

Models 2009, Chairs: Andy Schürr, Bran Selic.

Why am I doing this: Collecting Crackpot Referee Reports

2009-now

The formal refereeing process lies at the heart of what makes a science: in the end, a collection of "good" conference or journal papers advances academic careers and research grants, i.e. it controls the future directions of a research field. The formal refereeing process of a scientific conference consists in giving a paper to three or four "referees" who are hopefully experts on it. The process is fundamentally un-democratic and un-Wikipedia-like: 1) science is ideally based on truth, not on majorities, and 2) not everyone can contribute, and decisions cannot level out over a long time period. Instead, the process is founded on something exclusive and apparently vague like "authority" and "expertise".

Being a referee of a scientific conference therefore carries a particular responsibility; a responsibility which is reflected in a number of written and unwritten rules, in evaluation criteria (such as significance, relevance, technical correctness, ...), in a formal refereeing process technically enforced by web-based services like EasyChair, and in institutions such as program committees, their chairs, and scientific organizations.

Admittedly, refereeing is not easy: in my own practice as a referee (about 200 reports), I encountered a few papers which I did not understand at all. In most cases, it turned out that nobody understood them; such cases end in a search for formal arguments why the paper is actually incomprehensible (or "does not meet formal standards"). I encountered papers where my own expertise and background were indeed insufficient; this happens in particular when the conference does not use a "bidding phase" (where members of the program committee indicate their interest in refereeing submitted papers), or where I simply misunderstood the abstract on which I based my bid. I remember a very painful discussion with two very respected colleagues, who knew, through personal contacts, more about the impressive work behind a certain paper than could actually be inferred from it; they wanted to force it through although we all agreed that the presentation was poor.

In a well-organized conference (with a reasonably balanced program committee, a good bidding phase, and active chairs who stimulate the debate and criticize insulting, vague or inconsistent referee reports), about 70% of the reports are roughly uncontroversial and differ only in degree on necessarily subjective criteria such as "significance for a scientific community". Above all, this shows that science exists and is not just tribal behaviour, as philosophers like Kuhn or Feyerabend suggest. Unfortunately, there are certain temptations that may prevent a conference from turning into a "scientific event": being considered an expert may inflate egos, and being a program committee member may advance a career.

To stir a debate and to make referees and chairs a bit more aware of their responsibility, I put my most spectacular crap referee reports on this web page, for fairness together with the originally submitted paper, the complete report, and sometimes the reports of the other referees (which are not necessarily criticized here and are not necessarily obvious factual nonsense).

Note that what I attack in this list is strictly obvious factual nonsense, inconsistency, or a lack of common computer-science background, not (necessarily subjective) opinions on significance, relevance, or originality.