Multiple Simultaneous Hypothesis Testing
Submission deadline: March 31, 2008
The Journal of Machine Learning Research invites authors to submit papers for the Special Issue on Multiple Simultaneous Hypothesis Testing. This special issue follows the MSHT 2007 workshop on the same topic, but is also open to contributions that were not presented there.
Multiple Simultaneous Hypothesis Testing is a central issue in many areas of information extraction.
In this setting, a type I error is to extract an entity that does not satisfy the considered constraint, while a type II error is to miss an entity that does satisfy it. Estimating, bounding, or (even better!) reducing type I and type II errors are the goals of the proposed challenge.
VC theory, empirical processes, and various approaches related to simultaneous hypothesis testing are fully relevant, as are specific approaches based, e.g., on simulation, resampling, or probes. The challenge consists of extending previous results to the field of simultaneous hypothesis testing, or of proposing new results specifically related to this topic.
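As one concrete instance of the kind of procedure the issue targets, the Benjamini-Hochberg step-up procedure controls the false discovery rate, i.e., the expected proportion of type I errors among the extracted entities. The sketch below is illustrative only (the function name and example p-values are not from the call):

```python
import numpy as np

def benjamini_hochberg(p_values, alpha=0.05):
    """Benjamini-Hochberg step-up procedure.

    Returns a boolean mask marking which hypotheses are rejected,
    controlling the false discovery rate at level alpha.
    """
    p = np.asarray(p_values, dtype=float)
    m = p.size
    order = np.argsort(p)                          # indices sorting p ascending
    thresholds = alpha * np.arange(1, m + 1) / m   # BH critical values alpha*k/m
    below = p[order] <= thresholds                 # sorted p-values passing their threshold
    reject = np.zeros(m, dtype=bool)
    if below.any():
        k = np.nonzero(below)[0].max()             # largest k whose p-value passes
        reject[order[: k + 1]] = True              # reject all hypotheses up to rank k
    return reject

# Five tests, two of which have genuinely small p-values:
print(benjamini_hochberg([0.001, 0.008, 0.20, 0.40, 0.90]))
```

The step-up structure (rejecting everything up to the largest passing rank, not only the individually passing tests) is what distinguishes this from a simple per-test threshold such as Bonferroni's.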
We welcome survey papers related to type I and type II errors, as well as papers presenting new results, whether theoretical bounds or well-designed empirical experiments. In the latter case, the experimental setting, the algorithmic principles, and the explicit criteria must be carefully described and discussed; the use of publicly available software will be greatly appreciated.
Results combining type I and type II risks are particularly welcome, as are both asymptotic and non-asymptotic results.
Keywords: Empirical processes, learning theory, multiple hypothesis testing, rule extraction, bioinformatics, statistical validation of information extraction.
To submit a paper, follow the standard JMLR submission procedure described in the information for authors and send your submission to email@example.com (not to the editors directly), indicating in the subject header that the submission is intended for the Special Issue on Multiple Simultaneous Hypothesis Testing.
Tips for a successful paper:
- The abstract should state: background, method, results, conclusions.
- The introduction should not paraphrase the abstract but rather develop the background and motivate the method.
- The conclusion should summarize the results, the main advantages and disadvantages, contrast with other methods, and propose further directions. There should always be a conclusion.
- An algorithm is often best described by pseudo-code or a flow chart.
- Avoid putting too many details in the main text. Create appendices for algorithmic details and derivations, and a discussion section for alternative approaches, side remarks, open questions, and connections to other methods.
- Avoid redundancy. Favor conciseness and precision, and refer to a technical memorandum or a web site for further details.
- Place figures or tables that must be compared on the same page.
- In general, be "nice" to the reader: be as clear as possible.
Review form:
A. Summary - Summarize briefly the contents of the paper.
B. Questions - Provide answers in text and grades on a scale of 0 to 2 (0 = worst, 2 = best).
1. Scope: Is the paper relevant to MSHT? (0 1 2)
2. Novelty: Does the material constitute a novel, non-obvious contribution to the field or, if it is tutorial in nature, does it review the field appropriately? (0 1 2)
3. Usefulness: Are the methods, theories, and/or conclusions particularly useful (usefulness should be well supported by results)? (0 1 2)
4. Sanity: Is the paper technically sound (good methodology, correct proofs, accurate and sufficient result analysis)? (0 1 2)
5. Quantity: Does the paper contain enough interesting material? (0 1 2)
6. Reproducibility: Are the methods introduced or considered described in sufficient detail to be implemented and/or to reproduce the results? (0 1 2)
7. Demonstration: Have the efficiency, advantages, and/or drawbacks of the methods introduced or considered been sufficiently and convincingly demonstrated, theoretically and/or experimentally? (0 1 2)
8. Comparison: Has a sufficient comparison with other methods been performed? (0 1 2)
9. Completeness: Is the paper self-contained, rather than referring extensively to other publications? (0 1 2)
10. Take-aways: Does the paper clearly state its objectives (in the title, abstract, and introduction) and deliver on them (in the abstract, body of the text, and conclusion)? (0 1 2)
11. Bibliography: Is the background properly described in the introduction and/or discussion, with an adequate bibliography? (0 1 2)
12. Outlook: Are the results critically analyzed and further research directions outlined in a discussion or conclusion section? (0 1 2)
13. Data availability: Are the data made available to other researchers? (0 1 2)
14. Code availability: Is the implementation made available to other researchers? (0 1 2)
15. Readability: Is the paper easily readable by machine learning experts or statisticians interested in MSHT? (0 1 2)
16. Notations: Are the notations clear? (0 1 2)
17. Figures: Is the paper well and sufficiently illustrated by figures? (0 1 2)
18. Formalism: Are the methods clearly formalized as a step-by-step procedure (e.g., algorithm pseudo-code or flow charts provided)? (0 1 2)
19. Density: Is the length appropriate relative to the contents? (0 1 2)
20. Language: Is the English satisfactory? (0 1 2)
C. Comments - Add other detailed comments, corrections, and suggestions.
Notification of acceptance: August 15, 2008 (delayed, sorry!).
Final papers: October 30, 2008
The schedule may be subject to revision. Prospective authors are invited to make themselves known to the editors ahead of time, to help harmonize the issue and to ensure that they are informed of any changes.