8 SCHWERPUNKT_INTERVIEW PERSONALquarterly 02 / 23

gulatory deficiencies. A key example is moving too quickly toward critiquing ideas when the team is trying to ideate. Or, in the decision-making space, we know that teams are notoriously overfocused on common and redundant information and tend to underexplore the importance of members' uniquely held information. These are two examples where AI could contribute to the team: for creative tasks, by helping steer the discussion away from critique and toward idea generation; for problem-solving tasks, by saying simple things that help the team shift toward identifying unique perspectives and ideas. However, when we ran experiments in which the AI did exactly these kinds of things, people didn't like it. Teammates appreciate AI when it contributes to the task, but they're not really willing to let the AI tell them how to interact socially within a team.

PERSONALquarterly: How does collaboration between humans and AI work? What factors need to be considered? How can these be taken into account during the development of AI so that the right decisions can be made and team goals achieved?

Leslie DeChurch: In human teams, we think about the big three factors that help people work together: team affect, team cognition, and team behavioral processes. Team affect and cognition are thought of as emergent states: the emotional and mental processes that provide a framework for people to work together. Behavioral processes involve the timing and sequencing of joint actions. Research on teams shows that affect and cognition are core enabling conditions for teams: people have to emotionally connect and trust one another to be willing to work together (affect), and they have to have enough compatibility in their mental understanding of the task (cognition). These same affective and cognitive states will be critical to human-AI interfaces as well.
On the affect side, people need to trust the AI and feel a sense of identity with the team. That drives their motivation to engage and to remain committed. At the same time, people need to understand tasks in ways that are compatible with how the AI views the task. That's one of the fascinating prospects, because when two people don't see the task the same way, the solution is to talk to each other. As people discuss a problem or a task, through planning activities for example, they are able to converge their mental understanding of the work in ways that reduce the need for future direct communication. How will this work with a person and an AI? Because people and machines think and learn in fundamentally different ways, and ways that are not always transparent to humans, that's one of the critical barriers we will need to overcome. What we don't want are people learning to live with the incompatibility, or disengaging. Designers need to develop mechanisms by which people can synchronize mental processes with AI without being computer scientists.

PERSONALquarterly: We know that shared mental models (an understanding of what is done by whom, when, and how) are essential for decision making, coordination, and goal achievement in teams. Given the complexity and black-box issues of self-learning systems, the question of how AI can be helpful in a team context may not be so easy to answer. What do we know about whether and how these mental models also relate to AI, and what does this mean for team success?

Leslie DeChurch: Mental models have been one of the most robust predictors of team performance. Fundamentally, mental models in teams are about developing a way to predict how people are going to act in particular situations. Having similarity in your schemas about your task, how the team works, or how you should interact is a critical foundation for two people being able to work together. Mental models will be all the more important in human-AI teams.
I think one of the big challenges, though, is understanding how we can develop effective shared mental models working with AI, given the complexity and the differences in how people and machines learn.

PERSONALquarterly: What are success factors for AI to provide value as a teammate?

Leslie DeChurch: Respected contributions to the task are the sine qua non for AI teammates. One of the most important things our early research has shown is that AI is viewed as a valuable teammate only to the degree that it is making foundational contributions to the work. Anything else, such as providing leadership or helping to regulate the team's social climate, is going to be totally negated if the AI isn't contributing to the task.

PERSONALquarterly: What do you see as the practical benefits of seeing AI as a teammate rather than just an assistance system or a tool, and integrating it into teams?

Leslie DeChurch: One of the practical benefits of seeing AI as a teammate is that people will grant it far greater latitude in providing input into teamwork and taskwork. When people view it as an assistant, it can make only limited contributions to the team. So realizing the full potential of AI is going to require this transition to viewing AI not just as an assistant but as a full-fledged member of the team. There are other benefits as well: we know that teams provide an important source of social support. People exhibit greater motivation when they work in successful teams than when they work alone. AI has the benefit of going with a person and working with them anywhere. It doesn't require travel or colocation to be effective. This could greatly contribute to meaningful work.

PERSONALquarterly: What dangers do you see when AI is no longer understood just as a tool, but as a teammate?

Leslie DeChurch: Like many technologies, one of the potential dangers is suppressing human connections. If people come to view
RkJQdWJsaXNoZXIy Mjc4MQ==