Conference proceedings article

Unleashing the Potential of Conversational Agents for Course Evaluations: Empirical Insights from a Comparison with Web Surveys



Publication Details
Authors:
Wambsganss, T.; Winkler, R.; Schmid, P.; Söllner, M.
Editor:
Rowe, Frantz; El Amrani, Redouane; Limayem, Moez; Newell, Sue; Pouloudi, Nancy; van Heck, Eric; El Quammah, Ali
Place:
Marrakech, Morocco

Publication year:
2020
Pages range:
Research Papers 50
Book title:
Proceedings of the 28th European Conference on Information Systems (ECIS) - Liberty, Equality, and Fraternity in a Digitizing World


Abstract
Recent advances in Natural Language Processing (NLP) offer the opportunity to design new forms of human-computer interaction with conversational interfaces. However, little is known about how these interfaces change the way users respond in online course evaluations. We aim to explore the effects of conversational agents (CAs) on the response quality of online course evaluations in education compared to the common standard of web surveys. Past research indicates that web surveys come with disadvantages, such as poor response quality caused by inattention, survey fatigue, or satisficing behavior. We propose that a conversational interface will have a positive effect on response quality through its different mode of interaction. To test our hypotheses, we design an NLP-based CA and deploy it in a field experiment with 176 students in three different course formats, comparing it with a web survey as a baseline. The results indicate that participants using the CA showed higher levels of response quality and social presence compared to the web survey. These findings, along with technology acceptance measurements, suggest that using CAs for evaluations is a promising approach to increase the effectiveness of surveys in general.
