Special Session: Quality of Experience and User Experience Research with and for Crowdsourcing

Title

Quality of Experience and User Experience Research with and for Crowdsourcing

Organizers

Babak Naderi, Matthias Hirth, and Katrien De Moor

Motivation and objectives

Crowdsourcing has become a valuable tool in Quality of Experience (QoE) and User Experience (UX) research. In contrast to lab-based studies, crowdsourcing enables researchers to easily and quickly collect a large number of subjective ratings from a diverse set of participants. This diversity further enables researching cultural effects, influence factors generated by different end-user devices, and effects of different surrounding environments. However, these opportunities come at a cost. Experimenters have only limited control over the test settings and the environmental conditions in which the study takes place. Additionally, the remote test procedure can be error-prone, as the test participants are not under the direct supervision of the test conductor.

To fully utilize the advantages and benefits of crowdsourcing in the context of subjective testing, these challenges need to be addressed. In particular, technical monitoring methods can help to gain a better understanding of the hardware and environmental settings in which a test takes place. Here, recent developments and the increasing availability of Internet of Things, smart metering, and wearable devices open up new opportunities to obtain insights into the users' surroundings and the users themselves, and thereby to identify currently hidden influence factors. This additional information may also be used to increase the reproducibility of subjective crowdsourcing studies, another not yet fully solved research challenge. The results of repeated subjective crowdsourcing studies sometimes differ significantly, due to the diversity of the crowdsourcing users and their unknown surroundings. This calls for the development of new methodologies and test procedures that enable consistent test results across multiple studies and crowds. A first step in this direction is the recent ITU-T standard P.Crowd, which focuses on subjective audio tests as an initial use case. However, it remains unclear how the recommendations given in the standard can be generalized to other types of subjective studies.

With the increasing interest in subjective crowdsourcing studies, the field of application has also broadened. Starting with simple assessments of standard-definition image and video quality, the complexity of crowdsourcing tasks and workflows has steadily increased, now even supporting the assessment of 3D and virtual reality content. This, however, also results in increasingly complex task interfaces with which the workers have to cope. Considering the short amount of time workers have to familiarize themselves with a task interface, the usability and design of those interfaces play an important role. Still, to the best of our knowledge, the usability and the actual user experience of crowd workers have not been addressed yet.

In this context, the aim of the special session is twofold. On the one hand, we want to foster contributions following the traditional approach of optimizing and designing crowdsourced subjective studies for Quality of Experience and User Experience research. This includes novel methodologies for quality assurance and replicability, new fields of application such as assessing the QoE of IoT devices in a crowdsourced fashion, and the use of new technologies such as wearables to collect additional environmental and user signals. On the other hand, we also want to raise awareness and foster research in a new research direction: the Quality of Experience and User Experience of crowdsourcing workers. Crowdsourcing has matured in academic and business usage, and much effort is put into cost-efficient and quality-optimized task design. However, little or no effort is made to improve the working experience of the workers themselves.

Topics of interest

  • Crowdsourcing for subjective studies
    • Novel applications
    • Limitations of current crowdsourcing systems
    • Quality control mechanisms / Monitoring of test and environmental conditions
    • Large-scale crowdsourcing studies
  • Reproducibility of results
    • Cross platform studies / Repeated studies
    • Assessment of hidden influence factors / Impact of hidden influence factors
    • Bias estimation and bias reduction
    • Standardization of crowdsourcing test methods
  • Usability and User Experience of crowdsourcing tasks
    • Optimization of task interfaces and task workflows
    • Relation to result quality and worker motivation
    • Enhancing workers’ UX (e.g., by means of gamification of tasks)
    • Quality of complex crowdsourcing workflows (e.g., combinations of AI and crowds)
  • Interconnection of crowdsourcing and lab-based tests, e.g.,
    • Studies comparing results from lab and crowdsourcing
    • Adaptations of established lab-test standards to the crowdsourcing environment

Please find more detailed information on the submission procedure here: