Experiments in Finding Chinese and Japanese Answer Documents at NTCIR-7

Journal article by Stephen Tomlinson
This paper was not found in any repository; the policy of its publisher is unknown or unclear.

Full text: Unavailable

Preprint: policy unknown
Postprint: policy unknown
Published version: policy unknown

Abstract

We describe evaluation experiments conducted by submitting retrieval runs for the natural language Simplified Chinese, Traditional Chinese and Japanese questions of the Information Retrieval for Question Answering (IR4QA) Task of the Advanced Cross-lingual Information Access (ACLIA) Task Cluster of the 7th NII Test Collection for IR Systems Workshop (NTCIR-7). In a sampling experiment, we found that, on average per topic, the percentage of answer documents assessed was less than 65% for Simplified Chinese, 32% for Traditional Chinese and 41% for Japanese. However, our preferred measure for this task, Generalized Success@10, only considers the rank of the first answer document retrieved for each topic, as one good document answering the question is all that a user needs for this task. We experimented with different techniques (words vs. n-grams, removing question words and blind feedback) and found that the choice of technique can have a substantial impact on the rank of the first answer document for particular questions.
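The abstract names Generalized Success@10 as the preferred measure but does not define it. As a minimal sketch, the function below uses the formulation 1.08^(1 - r), where r is the 1-based rank of the first answer document; this matches the definition used in Tomlinson's earlier retrieval evaluation papers, but since the abstract gives no formula, it should be read as an assumption rather than the paper's exact definition.

```python
def generalized_success_at_10(first_answer_rank):
    """Score a single topic from the rank of its first answer document.

    first_answer_rank: 1-based rank of the first retrieved answer
    document, or None if no answer document was retrieved.

    Assumed formulation (from Tomlinson's earlier work, not stated in
    this abstract): 1.08 ** (1 - r), which gives 1.0 at rank 1 and
    decays smoothly, so an answer just past rank 10 still earns a
    small credit rather than being cut off hard.
    """
    if first_answer_rank is None:
        return 0.0
    return 1.08 ** (1 - first_answer_rank)


# A run's score would be the mean over topics (hypothetical ranks):
ranks = [1, 3, 12, None]
run_score = sum(generalized_success_at_10(r) for r in ranks) / len(ranks)
```

Because only the first answer document counts, this measure matches the task's user model (one good answer suffices) and is less sensitive than recall-oriented measures to the incomplete assessment rates reported in the sampling experiment.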