Stupid computer! - Human-like chatbots are insulted

Virtual assistants, so-called chatbots, have become an integral part of many company websites and are playing an increasingly important role. A study by TU Dresden has investigated whether errors made by chatbots lead to aggressive behavior among users, and what influence the perceived humanness of these virtual assistants has on user reactions.

Apr 10, 2024, 5:00:49 PM
Karl J. Donath, Technische Universität Dresden

Chatbots are designed to make it easier for internet users to find the information they need quickly by responding directly to questions and requests. However, reactions to chatbots are not always positive, says Professor Alfred Benedikt Brendel from TU Dresden. "If a chatbot gives incorrect or confusing answers, this can trigger aggression from users towards the digital conversation partner," explains the holder of the Chair of Information Systems, in particular Intelligent Systems and Services. In the worst case, aggression towards the virtual assistant, including verbal abuse, can have further repercussions, for example increasing users' negative attitude towards the provider or the website itself.

In their study, the international research team led by Alfred Brendel hypothesized that the design of a chatbot influences how users react to unsatisfactory answers. If a chatbot is given human attributes, it can be assumed that aggressive behavior occurs less frequently than with a neutrally designed chatbot. "In our experiments, some of the participants used a human-like chatbot that was given a name, gender and picture. It answered questions in a very friendly manner and reinforced its messages with appropriate emojis." The neutral chatbot, with which the other participants interacted, contained no such design elements.

The results of the study show, first and foremost, that the human-like chatbot generally increases user satisfaction, which in turn reduces the occurrence of frustration. However, if a chatbot gives unsatisfactory answers, this leads to frustration and aggression even towards a chatbot with human attributes, which contradicts the researchers' original assumption. Overall, around 10 percent of users showed aggressive behavior towards the virtual assistants.

Compared to a neutral chatbot, however, the intensity of aggressive behavior is reduced when users face a more human-like chatbot. Users were less likely to use offensive language when interacting with a human-like chatbot, for example.

The results have far-reaching consequences, particularly for practice, explains Alfred Brendel. "I would advise software developers to be cautious in their approach to human-like design and think carefully about the positive and negative effects that additional human-like design elements such as gender, age or certain names can have."

Contact:
Prof. Alfred Benedikt Brendel
TU Dresden
Faculty of Business Administration and Economics
Chair of Information Systems, esp. Intelligent Systems and Services
Tel.: +49 351 463-33082
Email: alfred_benedikt.brendel@tu-dresden.de
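To make the contrast between the two experimental conditions described above easier to picture, here is a minimal Python sketch that toggles the kind of design elements the study mentions (name, gender, picture, friendly tone, emojis). It is purely illustrative: all names, values, and data structures are assumptions made for this article, not the researchers' actual implementation.

    # Illustrative sketch only: two chatbot persona configurations,
    # loosely modeled on the study's human-like vs. neutral conditions.
    # All identifiers and values here are hypothetical.

    HUMAN_LIKE = {
        "name": "Lena",        # hypothetical persona name
        "gender": "female",
        "avatar": "lena.png",  # profile picture shown in the chat window
        "friendly_tone": True,
        "use_emojis": True,
    }

    NEUTRAL = {
        "name": None,          # no name, gender, or picture cues
        "gender": None,
        "avatar": None,
        "friendly_tone": False,
        "use_emojis": False,
    }

    def render_reply(persona: dict, answer: str) -> str:
        """Wrap a raw answer in the persona's presentation style."""
        if persona["friendly_tone"]:
            answer = f"Happy to help! {answer}"
        if persona["use_emojis"]:
            answer += " 🙂"
        return answer

    print(render_reply(HUMAN_LIKE, "Your order ships tomorrow."))
    # -> Happy to help! Your order ships tomorrow. 🙂
    print(render_reply(NEUTRAL, "Your order ships tomorrow."))
    # -> Your order ships tomorrow.

The point of the sketch is that the underlying answer is identical in both conditions; only the presentation layer differs, which mirrors how the study isolated the effect of human-like design elements on user reactions.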

Original Publication:

Brendel, A.B.; Hildebrandt, F.; Dennis, A.R.; Riquel, J. (2023): The Paradoxical Role of Humanness in Aggression Toward Conversational Agents, in: Journal of Management Information Systems. https://doi.org/10.1080/07421222.2023.2229127

Source:

https://idw-online.de/de/news831660