Anna Shields

Consensio

Co-founder

Can AI resolve employee conflict?

With the rapid expansion of AI technology, Anna Shields of Consensio considers whether it can help build conflict management skills.

The rise of artificial intelligence (AI) has happened at breakneck speed. We have gone from occasionally interacting with a chatbot at the bank, to asking sophisticated generative AI tools, such as ChatGPT, to prepare speeches, generate surveys and write complex reports. In the workplace, AI can automate some time-consuming tasks, which frees up the time of busy HR and L&D teams. But could AI resolve employee conflict – one of the organisation’s most complex issues? 

Conflict architecture 

Today’s AI and machine learning bots have evolved to learn quickly from experiences and input. According to its creators, ChatGPT is explicitly programmed to “answer follow-up questions, admit mistakes, challenge incorrect premises, and reject inappropriate requests.” 

In contrast, when it comes to handling conflict, people often default to their ingrained responses – the ways we are 'programmed' to react. These attitudes and behaviours stem from experiences in childhood or earlier in our careers, and they can lead us to repeat unhealthy reactions rather than learn and evolve, as AI would.

Whilst we may be hardwired to respond to conflict situations in a certain way, the challenge for AI is that it is only as good as its inputs. Let’s explore some of the opportunities and challenges for staff using AI to build conflict management skills and resilience.

Conflict communication skills

AI could potentially be used to challenge the preconceptions that intensify conflict situations. It can absorb a wealth of data (such as emails, text inputs and even facial expressions from a video) and offer a more objective view of the issues underlying the conflict.

Although early AI bots tended to be programmed with scripted actions based on rigid rules, or simple 'yes' and 'no' responses, they can now interact far more naturally and offer more nuanced solutions. They can ask questions and prompt parties to consider a range of views they might otherwise have ignored.

That said, AI has key limitations in this area, too. Empathy, for example, is a vital component of conflict resolution. While AI is growing in sophistication, technology cannot compete with humans in terms of how we understand each other.

When people are in conflict, they have a core need to be heard, and to have their feelings recognised and validated. When we engage with someone in a difficult conversation, we convey our emotions through words, body language, tone of voice and pace. When conversing with AI, workers won't get that same level of emotional recognition. Yet that recognition is fundamental to understanding each other and our different perspectives, which helps us to resolve conflicts (or indeed recognise when conflict is useful).

Dealing with bias

Another common feature of AI is its inherent bias. Studies have shown that AI can suffer from programming bias due to developers’ demographic and cultural backgrounds. This means that outcomes can be at risk of age, gender or racial discrimination.

Likewise, algorithms are limited to their range of inputs, and many only analyse text or verbal input. In a human, face-to-face conversation, the parties would react to movement and visual cues, such as facial expressions and body language. Although AI is progressing in leaps and bounds, it still has a long way to go. 

Of course, as emotional beings, we also suffer from our own biases, which shape our approach to conflict. The question is whether AI, with its inbuilt biases, could help us challenge our own – or whether, at some point in the future, it might approach a situation without bias and take all perspectives into account.

Machine learning 

One of the most valuable lessons we can take from AI is the ability to learn from feedback. Machine learning means that technology can analyse its actions, test different approaches and identify where improvements can be made.

AI could, therefore, help staff apply a similar approach to learning from their own conflicts.

Rather than letting us feel relieved when a conflict has been resolved and simply move on, AI could prompt us to ask for feedback. It could also, some time later, ask those involved in the conflict how they felt the situation was handled.

This can help embed essential skills and build “conflict resilience”, enabling us to manage workplace conflict better in the future. Building conflict management skills could then become a continual process, rather than something offered as a one-off mediation or training course.

The future of AI resolving conflict

AI will get better at reducing bias, and it is likely to learn to respond more empathetically to complex situations. But it is unlikely to ever become a viable solution for complex interpersonal workplace conflicts.

Instead of relying on AI, L&D managers can implement training approaches that support the workforce's own human learning: increasing self-awareness, considering other people's perspectives and building skills for effective conflict conversations.

This will help people make real human connections in a technology-driven world, strengthening relationships at work and minimising conflict in the future. 
