Now showing items 1-4 of 4
Crowd vs. Expert: What can relevance judgment rationales teach us about assessor disagreement?
(ACM, 2018, Conference Paper)
While crowdsourcing offers a low-cost, scalable way to collect relevance judgments, lack of transparency with remote crowd work has limited understanding about the quality of collected judgments. In prior work, ...
Re-ranking web search results for better fact-checking: A preliminary study
(Association for Computing Machinery, 2018, Conference Paper)
Even though Web search engines play an important role in finding documents relevant to user queries, little attention has been given to how useful they are for fact-checking claims. In this paper, ...
When rank order isn't enough: New statistical-significance-aware correlation measures
(Association for Computing Machinery, 2018, Conference Paper)
Because it is expensive to construct test collections for Cranfield-based evaluation of information retrieval systems, a variety of lower-cost methods have been proposed. The reliability of these methods is often validated ...
Mix and match: Collaborative expert-crowd judging for building test collections accurately and affordably
(CEUR-WS, 2018, Conference Paper)
Crowdsourcing offers an affordable and scalable means to collect relevance judgments for information retrieval test collections. However, crowd assessors may show higher variance in judgment quality than trusted assessors. ...