Evaluating Information Retrieval and Access Tasks : NTCIR's Legacy of Research Impact.

Bibliographic Details
Main Author: Sakai, Tetsuya.
Other Authors: Oard, Douglas W., Kando, Noriko.
Format: eBook
Language: English
Published: Singapore : Springer Singapore Pte. Limited, 2020.
Edition: 1st ed.
Series: The Information Retrieval Series
Table of Contents:
  • Intro
  • Foreword
  • Preface
  • Contents
  • Acronyms
  • 1 Graded Relevance
  • 1.1 Introduction
  • 1.2 Graded Relevance Assessments, Binary Relevance Measures
  • 1.2.1 Early IR and CLIR Tasks (NTCIR-1 Through -5)
  • 1.2.2 Patent (NTCIR-3 Through -6)
  • 1.2.3 SpokenDoc/SpokenQuery&Doc (NTCIR-9 Through -12)
  • 1.2.4 Math/MathIR (NTCIR-10 Through -12)
  • 1.3 Graded Relevance Assessments, Graded Relevance Measures
  • 1.3.1 Web (NTCIR-3 Through -5)
  • 1.3.2 CLIR (NTCIR-6)
  • 1.3.3 ACLIA IR4QA (NTCIR-7 and -8)
  • 1.3.4 GeoTime (NTCIR-8 and -9)
  • 1.3.5 CQA (NTCIR-8)
  • 1.3.6 INTENT/IMine (NTCIR-9 Through -12)
  • 1.3.7 RecipeSearch (NTCIR-11)
  • 1.3.8 Temporalia (NTCIR-11 and -12)
  • 1.3.9 STC (NTCIR-12 Through -14)
  • 1.3.10 WWW (NTCIR-13 and -14) and CENTRE (NTCIR-14)
  • 1.3.11 AKG (NTCIR-13)
  • 1.3.12 OpenLiveQ (NTCIR-13 and -14)
  • 1.4 Summary
  • References
  • 2 Experiments on Cross-Language Information Retrieval Using Comparable Corpora of Chinese, Japanese, and Korean Languages
  • 2.1 Introduction
  • 2.2 Outline of Cross-Language Information Retrieval (CLIR)
  • 2.2.1 CLIR Types and Techniques
  • 2.2.2 Word Sense Disambiguation for CLIR
  • 2.2.3 Language Resources for CLIR
  • 2.3 Test Collections for CLIR from NTCIR-1 to NTCIR-6
  • 2.3.1 Japanese-English Comparable Corpora in NTCIR-1 and NTCIR-2
  • 2.3.2 Chinese-Japanese-Korean (CJK) Corpora from NTCIR-3 to NTCIR-6
  • 2.3.3 CJKE Test Collection Construction
  • 2.3.4 IR System Evaluation
  • 2.4 CLIR Techniques in NTCIR
  • 2.4.1 Monolingual Information Retrieval Techniques
  • 2.4.2 Bilingual Information Retrieval (BLIR) Techniques
  • 2.4.3 Multilingual Information Retrieval (MLIR) Techniques
  • 2.5 Concluding Remarks
  • References
  • 3 Text Summarization Challenge: An Evaluation Program for Text Summarization
  • 3.1 What is Text Summarization?
  • 3.2 Various Types of Summaries
  • 3.3 Evaluation Metrics for Text Summarization
  • 3.4 Text Summarization Evaluation Campaigns Before TSC
  • 3.5 TSC: Our Challenge
  • 3.5.1 TSC1
  • 3.5.2 TSC2
  • 3.5.3 TSC3
  • 3.6 Text Summarization Evaluation Campaigns After TSC
  • 3.7 Future Perspectives
  • References
  • 4 Challenges in Patent Information Retrieval
  • 4.1 Introduction
  • 4.2 Overview of NTCIR Tasks
  • 4.2.1 Technology Survey
  • 4.2.2 Invalidity Search
  • 4.2.3 Classification
  • 4.2.4 Mining
  • 4.3 Outline of the NTCIR Tasks
  • 4.3.1 Technology Survey Task: NTCIR-3
  • 4.3.2 Invalidity Search Task: NTCIR-4, NTCIR-5, and NTCIR-6
  • 4.3.3 Patent Classification Task: NTCIR-5, NTCIR-6
  • 4.3.4 Patent Mining Task: NTCIR-7, NTCIR-8
  • 4.4 Contributions
  • 4.4.1 Preliminary Workshop
  • 4.4.2 Technology Survey
  • 4.4.3 Collaboration with Patent Experts
  • 4.4.4 Invalidity Search
  • 4.4.5 Patent Classification
  • 4.4.6 Mining
  • 4.4.7 Workshops and Publications
  • 4.4.8 CLEF-IP and TREC-CHEM
  • References
  • 5 Multi-modal Summarization
  • 5.1 Background
  • 5.2 Applications Envisioned
  • 5.3 Multi-modal Summarization on Trend Information
  • 5.3.1 Objective
  • 5.3.2 Data Set as a Unifying Force
  • 5.3.3 Outcome
  • 5.4 Implication
  • References
  • 6 Opinion Analysis Corpora Across Languages
  • 6.1 Introduction
  • 6.2 NTCIR MOAT
  • 6.2.1 Overview
  • 6.2.2 Research Questions at NTCIR MOAT
  • 6.2.3 Subtasks
  • 6.2.4 Opinion Corpus Annotation Requirements
  • 6.2.5 Cross-Lingual Topic Analysis
  • 6.3 Opinion Analysis Research Since MOAT
  • 6.3.1 Research Using the NTCIR MOAT Test Collection
  • 6.3.2 Opinion Corpus in News
  • 6.3.3 Current Opinion Analysis Research: The Social Media Corpus and Deep NLP
  • 6.4 Conclusion
  • References
  • 7 Patent Translation
  • 7.1 Introduction
  • 7.2 Innovations at NTCIR
  • 7.2.1 Patent Translation Task at NTCIR-7 (2007-2008)
  • 7.2.2 Patent Translation Task at NTCIR-8 (2009-2010)
  • 7.2.3 Patent Translation Task at NTCIR-9 (2010-2011)
  • 7.2.4 Patent Translation Task at NTCIR-10 (2012-2013)
  • 7.3 Developments After NTCIR-10
  • References
  • 8 Component-Based Evaluation for Question Answering
  • 8.1 Introduction
  • 8.1.1 History of Component-Based Evaluation in QA
  • 8.1.2 Contributions of NTCIR
  • 8.2 Component-Based Evaluation in NTCIR
  • 8.2.1 Shared Data Schema and Tracks
  • 8.2.2 Shared Evaluation Metrics and Process
  • 8.3 Recent Developments in Component Evaluation
  • 8.3.1 Open Advancement of Question Answering
  • 8.3.2 Configuration Space Exploration (CSE)
  • 8.3.3 Component Evaluation for Biomedical QA
  • 8.4 Remaining Challenges and Future Directions
  • 8.5 Conclusion
  • References
  • 9 Temporal Information Access
  • 9.1 Introduction
  • 9.2 Temporal Information Retrieval
  • 9.2.1 NTCIR-8 GeoTime Task
  • 9.2.2 NTCIR-9 GeoTime Task Round 2
  • 9.2.3 Issues Discussed Related to Temporal IR
  • 9.3 Temporal Query Analysis and Search Result Diversification
  • 9.3.1 NTCIR-11 Temporal Information Access Task
  • 9.3.2 NTCIR-12 Temporal Information Access Task Round 2
  • 9.3.3 Implications from Temporalia
  • 9.4 Related Work and Broad Impacts
  • 9.5 Conclusion
  • References
  • 10 SogouQ: The First Large-Scale Test Collection with Click Streams Used in a Shared-Task Evaluation
  • 10.1 Introduction
  • 10.2 SogouQ and Related Data Collections
  • 10.3 SogouQ and NTCIR Tasks
  • 10.4 Impact of SogouQ
  • 10.5 Conclusion
  • References
  • 11 Evaluation of Information Access with Smartphones
  • 11.1 Introduction
  • 11.2 NTCIR Tasks for Information Access with Smartphones
  • 11.2.1 NTCIR 1CLICK
  • 11.2.2 NTCIR MobileClick
  • 11.3 Evaluation Methodology in NTCIR 1CLICK and MobileClick
  • 11.3.1 Textual Output Evaluation
  • 11.3.2 From Nuggets to iUnits
  • 11.3.3 S-Measure
  • 11.3.4 M-Measure
  • 11.4 Outcomes of NTCIR 1CLICK and MobileClick
  • 11.4.1 Results
  • 11.4.2 Impacts
  • 11.5 Summary
  • References
  • 12 Mathematical Information Retrieval
  • 12.1 Introduction
  • 12.2 NTCIR Math: Overview
  • 12.2.1 NTCIR-10 Math Pilot Task
  • 12.2.2 NTCIR-11 Math-2 Task
  • 12.2.3 NTCIR-12 MathIR Task
  • 12.3 NTCIR Math Datasets
  • 12.3.1 Corpora
  • 12.3.2 Topics
  • 12.3.3 Relevance Judgment
  • 12.4 Task Results and Discussion
  • 12.4.1 Evaluation Metrics
  • 12.4.2 MIR Systems
  • 12.5 Further Trials
  • 12.5.1 ArXiv Free-Form Query Search at NTCIR-10
  • 12.5.2 Wikipedia Formula Search at NTCIR-11
  • 12.5.3 Math Understanding Subtask at NTCIR-10
  • 12.6 Further Impact of NTCIR Math Tasks
  • 12.6.1 Math Information Retrieval
  • 12.6.2 Semantics Extraction in Mathematical Documents
  • 12.6.3 Corpora for Math Linguistics
  • 12.7 Conclusion
  • References
  • 13 Experiments in Lifelog Organisation and Retrieval at NTCIR
  • 13.1 Introduction
  • 13.2 Related Activities
  • 13.3 Lifelog Datasets Released at NTCIR
  • 13.4 Lifelog Subtasks at NTCIR
  • 13.4.1 Lifelog Semantic Access Subtask
  • 13.4.2 Lifelog Insight Subtask
  • 13.4.3 Other Subtasks (LEST, LAT and LADT)
  • 13.5 Lessons Learned
  • 13.5.1 Conclusions and Future Plans
  • References
  • 14 The Future of Information Retrieval Evaluation
  • 14.1 Introduction
  • 14.2 First Things First
  • 14.3 The Shared Task Evaluation Ecosystem
  • 14.4 A Brave New World
  • 14.5 Trendlines
  • 14.6 An Inconclusion
  • References
  • Index