Hoe halen chatbots de kink uit de kabel? | Amsterdam University Press Journals Online
2024
Volume 52, Issue 3
  • ISSN: 1384-6930
  • E-ISSN: 1875-7286

Abstract

Chatbots are increasingly deployed in customer service, but they are far from error-free. When chatbots make mistakes, various repair strategies are available for communicating the non-understanding. This article provides an overview of the literature on this topic and presents two experimental studies showing that chatbots communicate non-understanding better with an accommodating repair strategy than with a defensive strategy.
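
As an illustration of the contrast described in the abstract (an accommodating versus a defensive way for a customer service chatbot to signal non-understanding), here is a minimal sketch in Python. It is not taken from the article; the message wording and the helper function are hypothetical.

# Illustrative sketch only (not from the article): two possible wordings for a
# customer service chatbot's fallback reply when it does not understand the user.
# The accommodating/defensive labels follow the article's contrast; the exact
# phrasing and this helper function are hypothetical.

REPAIR_MESSAGES = {
    # Accommodating: the bot takes responsibility and offers a way forward.
    "accommodating": (
        "Sorry, my mistake: I did not understand your question. "
        "Could you rephrase it, or shall I connect you with a colleague?"
    ),
    # Defensive: the bot places the burden on the customer.
    "defensive": "Your question is unclear. Please formulate it differently.",
}


def repair_reply(strategy: str = "accommodating") -> str:
    """Return the fallback reply for the chosen repair strategy."""
    return REPAIR_MESSAGES[strategy]


if __name__ == "__main__":
    print(repair_reply("accommodating"))
    print(repair_reply("defensive"))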

DOI: 10.5117/TCW2024.3.003.LIEB
