PERSONALquarterly 2/2023

… for example, with regard to the question of how exactly AI-related expectations that differ between team members affect interactions in human-AI teams. To address these and similar questions, laboratory experiments are indispensable; above all, however, partnerships between practice and research can actively contribute to generating knowledge in this field. Practitioners are therefore not only invited to apply the recommendations for a human-centered design of AI team members, but also have the opportunity to advance the field decisively through research collaborations.

SUMMARY
Research question: Artificial intelligence (AI) is increasingly positioned as a team member. We therefore investigate what expectations people place on AI team members and outline how these expectations affect human-AI collaboration.
Methodology: We summarize the state of research and derive seven observations.
Practical implications: AI-related expectations should be actively addressed in AI development, implementation, and human-AI collaboration. We illustrate ways to realize this kind of expectation management in practice.

PROF. DR. ANNA-SOPHIE ULFERT-BLANK
Assistant Professor of Organizational Behavior and Artificial Intelligence; Human Performance Management Group, Department of Industrial Engineering & Innovation Sciences, Technische Universität Eindhoven
E-Mail: a.s.ulfert.blank@tue.nl
www.research.tue.nl

SOPHIE KERSTAN
Doctoral candidate at the Chair of Work and Organizational Psychology, Department of Management, Technology, and Economics, ETH Zürich
E-Mail: skerstan@ethz.ch
www.wop.ethz.ch

PROF. DR. ELENI GEORGANTA
Assistant Professor of Work and Organizational Psychology; Programme Group Work and Organizational Psychology, Faculty of Social and Behavioural Sciences, University of Amsterdam
E-Mail: e.georganta@uva.nl
www.uva.nl

REFERENCES
Andrews, R. W./Lilly, J. M./Srivastava, D./Feigh, K. M. (2022): The role of shared mental models in human-AI teams: A theoretical review. Theoretical Issues in Ergonomics Science, pp. 1-47.
Bhattacherjee, A./Premkumar, G. (2004): Understanding changes in belief and attitude toward information technology usage: A theoretical model and longitudinal test. MIS Quarterly, 28(2), pp. 229-254.
Demir, M./McNeese, N. J./Cooke, N. J. (2018): The impact of perceived autonomous agents on dynamic team behaviors. IEEE Transactions on Emerging Topics in Computational Intelligence, 2(4), pp. 258-267.
Grimes, G. M./Schuetzler, R. M./Giboney, J. S. (2021): Mental models and expectation violations in conversational AI interactions. Decision Support Systems, 144.
Kaplan, A. D./Kessler, T. T./Brill, J. C./Hancock, P. A. (2021): Trust in artificial intelligence: Meta-analytic findings. Human Factors.
Lyons, J. B./Wynne, K. T./Mahoney, S./Roebke, M. A. (2019): Trust and human-machine teaming: A qualitative study. In: Artificial Intelligence for the Internet of Everything. Elsevier, pp. 101-116.
McNeese, N. J./Demir, M./Cooke, N. J./Myers, C. (2018): Teaming with a synthetic teammate: Insights into human-autonomy teaming. Human Factors, 60(2), pp. 262-273.
Merritt, S. M./Unnerstall, J. L./Lee, D./Huber, K. (2015): Measuring individual differences in the perfect automation schema. Human Factors, 57(5), pp. 740-753.
O'Neill, T./McNeese, N./Barron, A./Schelble, B. (2020): Human-autonomy teaming: A review and analysis of the empirical literature. Human Factors.
Parker, S. K./Grote, G. (2020): Automation, algorithms, and beyond: Why work design matters more than ever in a digital world. Applied Psychology, pp. 1-45.
Pinquart, M./Endres, D./Teige-Mocigemba, S./Panitz, C./Schütz, A. C. (2021): Why expectations do or do not change after expectation violation: A comparison of seven models. Consciousness and Cognition, 89.
Rieth, M./Hagemann, V. (2022): Automation as an equal team player for humans? A view into the field and implications for research and practice. Applied Ergonomics, 98.
Seeber, I./Bittner, E./Briggs, R. O./de Vreede, T./de Vreede, G.-J./Elkins, A./Maier, R./Merz, A. B./Oeste-Reiß, S./Randrup, N./Schwabe, G./Söllner, M. (2020): Machines as teammates: A research agenda on AI in team collaboration. Information & Management, 57(2).
Zhang, R./McNeese, N. J./Freeman, G./Musick, G. (2021): "An ideal human": Expectations of AI teammates in human-AI teaming. Proceedings of the ACM on Human-Computer Interaction, 4, pp. 1-25.

RkJQdWJsaXNoZXIy Mjc4MQ==