


Call for Papers - Special issue on Reproducibility in Information Retrieval 
ACM Journal of Data and Information Quality (ACM JDIQ)
Submission Deadline: 8 September 2017

Guest editors

Nicola Ferro, University of Padua, Italy
Norbert Fuhr, University of Duisburg-Essen, Germany
Andreas Rauber, Technical University of Vienna, Austria


Information Retrieval is a discipline that has been strongly rooted in experimentation since its inception. Experimental evaluation has always been a strong driver for IR research and innovation, and these activities have been shaped by large scale evaluation campaigns such as TREC, CLEF, NTCIR and FIRE.
IR systems are becoming more and more complex. They need to cross language and media barriers; they span unstructured, semi-structured, and highly structured data; and they face diverse and complex user information needs, search tasks, and societal challenges. As a consequence, evaluation and experimentation, which have remained a fundamental element, have in turn become increasingly sophisticated and challenging.
In this context, repeatability, reproducibility, and generalizability of experiments and results cannot be taken for granted. Indeed, we need to emphasize these aspects as key requirements if we wish to continue to reliably and durably advance research and technology in the field. In turn, we need to actively pursue them as a core part of our experimental methodology and practice.
In this special issue of JDIQ, we aspire to provide an overview of innovative research at the intersection of information retrieval and data quality, from theory to practice, with a focus on challenges, solutions, and experiences in reproducibility of IR experimental results.


Specific topics within the scope of the call include, but are not limited to, the following:
- Analysis of reproducibility challenges in system-oriented evaluation.
- Analysis of reproducibility challenges in user-oriented evaluation.
- General reproducibility frameworks for IR.
- Lessons learned in reproducing third-party experiments.
- Reproducibility of query results.
- Reproducibility challenges on private or proprietary data.
- Reproducibility challenges on ephemeral data, like streaming data, tweets, etc.
- Reproducibility challenges on online experiments, e.g., A/B testing.
- Reproducibility in evaluation campaigns.
- Evaluation infrastructures and Evaluation as a Service (EaaS).
- Experiment data management, data curation, and data quality.
- Data models, semantic or not, for IR experimental data.
- Reproducible experimental workflows: tools and experiences.
- Quality of IR experimental data.
- Data Citation: citing experimental data, dynamic data sets, samples, and statistical analyses.

Expected contributions

We welcome the following two types of contributions:
- Research manuscripts reporting mature results [25+ pages].
- Experience papers that report on lessons learned from addressing specific issues towards improved quality and reproducibility of experimental results [12+ pages plus an optional appendix].
If a submission extends prior published work, the manuscript must contain at least 30% new material, and the significant new contributions must be clearly identified in the introduction.
Submission guidelines with LaTeX (preferred) or Word templates are available here: 

Important dates

- Initial submission: Friday September 8, 2017
- First review: Thursday December 7, 2017
- Revised manuscripts: Friday March 9, 2018
- Second review: Friday May 11, 2018 
- Camera-ready manuscripts: Friday July 13, 2018
- Publication: Late October 2018

PROMISE: Participative Research labOratory for Multimedia and Multilingual Information Systems Evaluation

Large-scale worldwide experimental evaluations provide fundamental contributions to the advancement of state-of-the-art techniques through common evaluation procedures, regular and systematic evaluation cycles, comparison and benchmarking of the adopted approaches, and the spreading of knowledge. In the process, vast amounts of experimental data are generated that call for analysis tools to enable interpretation and thereby facilitate scientific and technological progress.

PROMISE will provide a virtual laboratory for conducting participative research and experimentation to carry out, advance and bring automation into the evaluation and benchmarking of such complex information systems, by facilitating management and offering access, curation, preservation, re-use, analysis, visualization, and mining of the collected experimental data. PROMISE will:

  • foster the adoption of regular experimental evaluation activities;
  • bring automation into the experimental evaluation process;
  • promote collaboration and re-use of the acquired knowledge base;
  • stimulate knowledge transfer and uptake.

Europe is unique: a powerful economic community that politically and culturally strives for equality among its languages and an appreciation of the diversity of its citizens. New Internet paradigms are continually extending the media and the tasks where interaction in multiple languages must be supported. PROMISE will direct a worldwide research community to track these changes and deliver solutions so that Europe can achieve one of its most cherished goals.