AI as their teammate, and we can imagine experiencing less conflict and having an AI that is easier to work with than a human teammate, which might ultimately devalue the contributions of human teammates. Another potential danger is "overtrust." One of the things we've seen as a result of automation within the aviation industry is that when automation gets really good, people sometimes come to rely on it too much and their own skills degrade. I'm a little concerned when I think about AI as a decision aid: people may rely on it too much, not ask critical enough questions, and ultimately their ability to make highly complex decisions may degrade.

PERSONALquarterly: In your experience, does the use of AI as teammates differ across organizations and industries? How about intercultural differences?

Leslie DeChurch: I think that some industries have been more receptive to AI than others. Certainly in data-intensive fields, or in fields that already have other forms of automation in place, AI teammates have had an easier time gaining acceptance. The creative sector has been a bit slower to see the benefits, but that is certainly changing.

PERSONALquarterly: People do not want to be patronized by a machine. People have a basic need for autonomy. One question, therefore, is who actually decides – the robot or the human being? Do humans always want to act in the role of decision-makers or as the final authority?

Leslie DeChurch: People don't want to be patronized, but you would be shocked how quickly they will cede authority to a machine. I think that's the bigger worry, personally. I welcome the contributions of AI to teams, but it shouldn't substitute for thoughtful, deliberative, and critical decision-making by people. That's not to say that the human should be the only one to decide, but I don't think the goal is a situation where humans are taken out of the loop.

PERSONALquarterly: It is becoming increasingly apparent that work is being done on or with machines that not only behave humanly but also look human – in some cases even lifelike, with artificial hair, silicone skin, and natural micro-movements to imitate humans as closely as possible. Are these machines actually already in development, or is the human look still more of a vision? What is the impetus behind the development of these extremely human-like robots? Is this embodiment always important, or can AI be effective without it?

Leslie DeChurch: Humanoid robots have been in development for quite some time and are becoming extremely realistic. Google "Sophia" (Hanson Robotics)! Many researchers have found that embedded AI can be just as accepted as a teammate as embodied AI, which has a physical form. One of the early theories in this area was the uncanny valley: the idea that the more human an embodied AI or robot looked, the more people would like it, but as it became more and more human without being quite perfect, there would be an uncanny valley where people wouldn't like interacting with it. That has been used to explain the ick factor with some of the humanoid robots that look almost, but not quite, human. You can compare this to other robots like Pepper (Aldebaran) that have an almost childlike, cute appearance but are clearly not human. The important thing to consider with embodiment is that people use physical features to make attributions about the functioning of the underlying system. Social perception research finds that people use physical features to make attributions of, for example, agency and communion.
When designing AI, incorporating physical features adds a whole other layer of complexity, because the designer has to be aware of the psychology of social perception and of how people will take cues from physical form and make attributions about the motives and intentions of the underlying system. In some ways it's simpler to begin with an embedded system that doesn't have a physical form.

PERSONALquarterly: What are the "critical hurdles" that need to be overcome when implementing hybrid teams? What challenges do organizations face?

Leslie DeChurch: There are plenty of challenges that computer scientists are working on; I think a lot about the social challenges. Teamwork is one of the critical hurdles that has to be overcome, and it is important that we study how people react to intelligent machines as teammates now, even before the technology advances. We see this with the recent release of ChatGPT.

PERSONALquarterly: What is your personal conclusion: why do we need hybrid teams in the future world of work? How will our work change as a result?

Leslie DeChurch: There is no question that we need hybrid teams in the future world of work. I like the vision of AI taking on the work tasks that people dislike, freeing them up for more meaningful work, engaged in the parts of their jobs they enjoy most. There is great potential in humans working alongside intelligent machines. I can think of many aspects of jobs that people would be happy to offload, and being able to partner with machines means there is an opportunity for people to do more of what they like. There are also major hurdles: a large percentage of the workforce will need the skill set to make valuable contributions and engage in meaningful work as an increasing number of tasks become automatable through AI.