Published in

Association for Computing Machinery (ACM), ACM Transactions on Internet Technology, 17(3), pp. 1-21, 2017

DOI: 10.1145/3053371

Experimental Assessment of Aggregation Principles in Argumentation-Enabled Collective Intelligence

This paper is available in a repository.


Preprint: archiving allowed
Postprint: archiving allowed
Published version: archiving forbidden
Data provided by SHERPA/RoMEO

Abstract

On the Web, there is a constant need to aggregate opinions from the crowd (in posts, social networks, forums, etc.). Different mechanisms have been implemented to capture these opinions, such as Facebook's Like, Twitter's Favorite, thumbs-up/down voting, flagging, and so on. However, in more contested domains (e.g., Wikipedia, political discussion, and climate change discussion), these mechanisms are not sufficient, since they treat each issue independently without considering the relationships between different claims. We can view a set of conflicting arguments as a graph in which the nodes represent arguments and the arcs between these nodes represent the defeat relation. A group of people can then collectively evaluate such graphs. To do this, the group must use a rule to aggregate their individual opinions about the entire argument graph. Here we present the first experimental evaluation of different principles commonly employed by aggregation rules presented in the literature. We use randomized controlled experiments to investigate which principles people consider better at aggregating opinions under different conditions. Our analysis reveals a number of factors, not captured by traditional formal models, that play an important role in determining the efficacy of aggregation. These results help bring formal models of argumentation closer to real-world application.
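A minimal sketch of the phenomenon motivating whole-graph aggregation: in standard abstract-argumentation labellings (each argument marked "in", "out", or "undec"), taking a per-argument majority over individually legal labellings can produce an outcome that is not itself legal. The graph, voter opinions, and tie-breaking rule below are illustrative assumptions, not taken from the paper.

```python
from collections import Counter

# Defeat graph (illustrative): a and b attack each other; both attack c.
attackers = {"a": {"b"}, "b": {"a"}, "c": {"a", "b"}}

def legal(lab, attackers):
    """A labelling is legal iff every 'in' argument has all attackers
    'out', every 'out' argument has an 'in' attacker, and every 'undec'
    argument has neither all attackers 'out' nor any attacker 'in'."""
    for arg, atts in attackers.items():
        att_labels = [lab[x] for x in atts]
        if lab[arg] == "in" and any(l != "out" for l in att_labels):
            return False
        if lab[arg] == "out" and "in" not in att_labels:
            return False
        if lab[arg] == "undec" and (
            "in" in att_labels or all(l == "out" for l in att_labels)
        ):
            return False
    return True

# Three voters, each submitting a legal labelling of the whole graph.
voters = [
    {"a": "in", "b": "out", "c": "out"},
    {"a": "out", "b": "in", "c": "out"},
    {"a": "undec", "b": "undec", "c": "undec"},
]

def argwise_majority(votes):
    """Aggregate each argument independently by majority;
    ties fall back to 'undec' (an illustrative tie-breaking choice)."""
    outcome = {}
    for arg in attackers:
        counts = Counter(v[arg] for v in votes)
        top, n = counts.most_common(1)[0]
        tied = [lbl for lbl, c in counts.items() if c == n]
        outcome[arg] = top if len(tied) == 1 else "undec"
    return outcome

outcome = argwise_majority(voters)
# Every input labelling is legal, yet the argument-wise majority
# labels c 'out' with no 'in' attacker, so the outcome is not legal:
print(outcome, legal(outcome, attackers))
```

Because c's "out" votes were justified by different "in" attackers (a for one voter, b for another), aggregating c in isolation loses that dependency. This is exactly why the aggregation rules studied in the paper operate on opinions about the entire graph rather than on each issue independently.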