An experimental characterization of workers' behavior and accuracy in crowdsourced tasks

publication date

  • June 2021

start page

  • 1

end page

  • 14

issue

  • 6 June

volume

  • 16

International Standard Serial Number (ISSN)

  • 1932-6203

abstract

  • Crowdsourcing systems are evolving into a powerful tool of choice for dealing with repetitive
    or lengthy human-based tasks. Prominent among these is Amazon Mechanical Turk (MTurk), in
    which Human Intelligence Tasks (HITs) are posted by requesters and afterwards selected and
    executed by (human) workers subscribed to the platform. These HITs often serve research
    purposes. In this context, a very important question is how reliable the results obtained
    through these platforms are, given the limited control a requester has over the workers'
    actions. Various control techniques have been proposed, but they are not free from shortcomings,
    and their use must be accompanied by a deeper understanding of the workers'
    behavior. In this work, we attempt to interpret the workers' behavior and reliability level in
    the absence of control techniques. To do so, we perform a series of experiments with 600
    distinct MTurk workers, specifically designed to elicit each worker's level of dedication to a
    task according to the task's nature and difficulty. We show that the time required by a
    worker to carry out a task correlates with its difficulty, and also with the quality of the outcome.
    We find that there are different types of workers: while some are willing to invest a
    significant amount of time to arrive at the correct answer, we also observe a significant
    fraction of workers who reply with a wrong answer. For the latter, the difficulty of the task
    and the very short time they took to reply suggest that they intentionally did not even
    attempt to solve the task.

subjects

  • Business
  • Industrial Engineering