Retrieving Collective Distribution of Crowd Annotation by Deliberative Estimation on How Others Would Annotate


Crowdsourcing has been widely used to label the categories of data, aggregating varied and error-prone responses into a single accurate category. However, some data, such as audio-visual expressions of emotion, can be interpreted in diverse ways. For such data, the crowd's overall perception conveys the range of possible interpretations. Yet how to collect this distribution of perceptions efficiently remains under-explored, and naive crowdsourcing can be expensive. In this paper, we explore how to crowdsource the crowd's overall perception efficiently by having workers interact with each other and estimate how other people would annotate the data. We introduce a workflow that not only makes workers converge in their estimates through collaborative deliberation, but also has them reconsider neglected options through deliberative prompting.
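To make the idea of a collective distribution concrete, here is a minimal sketch: each worker, instead of picking one label, estimates a probability distribution over categories for how other people would annotate the item, and these estimates are averaged. The category names, example numbers, and the simple averaging rule are illustrative assumptions, not the actual aggregation method of the paper.

```python
# Illustrative sketch only: category names, estimates, and the
# averaging rule are assumptions, not the paper's actual method.
CATEGORIES = ["happy", "sad", "angry", "neutral"]

# Each worker reports an estimated distribution over the categories
# for how *others* would annotate the same audio-visual clip.
worker_estimates = [
    {"happy": 0.6, "sad": 0.1, "angry": 0.1, "neutral": 0.2},
    {"happy": 0.5, "sad": 0.2, "angry": 0.1, "neutral": 0.2},
    {"happy": 0.7, "sad": 0.1, "angry": 0.0, "neutral": 0.2},
]

def collective_distribution(estimates):
    """Average the workers' estimated distributions into one
    collective distribution over the categories."""
    n = len(estimates)
    return {c: sum(e.get(c, 0.0) for e in estimates) / n
            for c in CATEGORIES}

dist = collective_distribution(worker_estimates)
```

With the toy numbers above, the averaged distribution still sums to 1 and reflects the spread of interpretations (e.g. mass on "sad" and "neutral" survives) rather than collapsing to a single majority label.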