
Volker Patent

The Open University

Lecturer in Psychology


Could AI coachbots replace human coaches?

We’ve already seen an explosion of AI-enabled services in every area of business, education and training that have the potential to radically change how we work. Coaching is one such area that is following suit... but how effective could a coachbot really be?

AI has almost become synonymous with ChatGPT and thus needs little introduction. ChatGPT and other generative AI systems are now generally quite good at simple tasks out of the box, without much further training: for example, generating short ads, descriptions or summaries of text.

However, asking ChatGPT to do more complex things, such as coaching, requires more elaborate prompting.

A brief primer on AI technology

There are a variety of challenges in developing effective prompts, including the tendency of the AI to ‘make up’ information that seems to fit the request but cannot be verified (e.g. statistics, sources). Another issue is that, similar to how unconscious biases operate in humans, AIs may respond in biased ways, for example through the assumptions implicit in their responses.

Getting ChatGPT to perform conversational tasks may also be difficult without careful consideration of the rules of the game of conversation, such as turn-taking and responding in ways that are unambiguous and appropriately focused.  
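To illustrate what more elaborate prompting can look like, here is a minimal sketch of a coachbot system prompt that encodes some of these conversational rules. It is an illustration only, assuming the OpenAI Python SDK; the model name and the wording of the rules are placeholders, not a tested coaching design.

    # A minimal sketch of prompting a generative AI to follow coaching
    # conversation rules (one open question per turn, no advice).
    # Assumes the OpenAI Python SDK and an API key in the OPENAI_API_KEY
    # environment variable; the model name is illustrative.
    from openai import OpenAI

    client = OpenAI()

    COACH_RULES = (
        "You are a non-directive performance coach. "
        "Ask exactly one open question per turn. "
        "Never give advice or information about the topic. "
        "Keep each response under 40 words and wait for the coachee's reply."
    )

    history = [{"role": "system", "content": COACH_RULES}]

    def coach_turn(coachee_message: str) -> str:
        """Send one coachee turn and return the coachbot's reply."""
        history.append({"role": "user", "content": coachee_message})
        reply = client.chat.completions.create(
            model="gpt-4o-mini",  # illustrative model choice
            messages=history,
        ).choices[0].message.content
        history.append({"role": "assistant", "content": reply})
        return reply

    print(coach_turn("I keep missing deadlines and I'm not sure why."))

Even with explicit rules like these, the model may still drift, which is one reason a coachbot's behaviour needs systematic testing rather than trust.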

Taking into consideration the capabilities and limitations of generative AI, is it possible for an AI to serve as an effective performance coach?

Key issues with creating AI coachbots

First, let’s remind ourselves of what coaching is.

In summary, coaching is a goal-oriented conversation aiming to produce optimal work performance and develop the skills required to produce such performance (see the CIPD’s full definition here).

Coaching is non-directive and may touch upon more personal aspects of the coachee’s life. In person-centred coaching, the coach views each client as unique, creating potentially large numbers of use cases due to the different experiences and contexts coachees bring and the unique ways they talk about them.

Coaching is a language game

Given that every individual is unique, for a coachbot to provide a meaningful coaching session, it would need to be consistently responsive to the context and language meanings provided by the coachee.

Coaching is effectively a language game, and one clear problem with current AI bots is their tendency to ask closed questions, fire off several questions at once, or overwhelm the coachee with information.

AI coaches also tend to slip into advice mode, giving information about a subject, and as a result can appear quite directive. When instructed to use a specific coaching model (such as GROW or OSCAR), they can be too rigid in their responses or fail to adhere to the structure.
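One possible mitigation, sketched below, is to manage the coaching structure in application code rather than relying on the model to pace itself: the application tracks the current GROW stage and supplies a stage-specific instruction on each turn. This is an illustrative pattern, not an established coachbot design, and the stage prompts are invented for the example.

    # Illustrative sketch: enforcing the GROW structure in application
    # code rather than relying on the model to pace itself. The stage
    # prompts are invented placeholders, not a validated coaching script.
    GROW_STAGES = [
        ("Goal",    "Help the coachee state one specific goal for this session."),
        ("Reality", "Explore the coachee's current situation with open questions."),
        ("Options", "Invite the coachee to generate their own options."),
        ("Will",    "Ask what the coachee will commit to doing, and by when."),
    ]

    def stage_instruction(turn_count: int, turns_per_stage: int = 3) -> str:
        """Return the system instruction for the current GROW stage."""
        index = min(turn_count // turns_per_stage, len(GROW_STAGES) - 1)
        name, instruction = GROW_STAGES[index]
        return f"You are in the {name} stage of the GROW model. {instruction}"

    # On each turn, this instruction would be added to the system prompt
    # before the model generates its reply.
    for turn in (0, 3, 6, 9):
        print(turn, "->", stage_instruction(turn))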

Currently, we do not fully understand how an AI decides what to say in an interaction. Its reasons for choosing certain types of response are largely opaque. This has implications for identifying, understanding and preventing bias and fairness issues in the interactions between AIs and humans.

Data protection considerations for coachbots

For coaching to be effective, clients often need to expose vulnerabilities to progress toward their goals. Confidentiality, data protection and who has access to the user's input are thus critical issues to consider.

Depending on the service agreements, the AI provider could technically use any user input from a coachee to train its models further or use the data in other ways unbeknownst to the user.  When used in organisations, personal information may create GDPR issues and present potential cyber security risks.
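One partial safeguard is to redact obvious personal identifiers before a coachee's input ever leaves the organisation, as in the minimal sketch below. Simple pattern matching of this kind catches only the most obvious identifiers; real GDPR compliance would also require proper anonymisation, data processing agreements and legal review.

    # Minimal sketch: stripping obvious personal identifiers from coachee
    # input before it is sent to a third-party AI service. Regexes catch
    # only simple patterns; GDPR compliance needs far more than this.
    import re

    REDACTIONS = [
        (re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"), "[EMAIL]"),
        (re.compile(r"\+?\d[\d\s-]{7,}\d"), "[PHONE]"),
    ]

    def redact(text: str) -> str:
        """Replace simple email and phone patterns with placeholders."""
        for pattern, placeholder in REDACTIONS:
            text = pattern.sub(placeholder, text)
        return text

    print(redact("Call me on 07700 900123 or email jo.smith@example.com"))
    # -> Call me on [PHONE] or email [EMAIL]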

Do not underestimate the need for human-to-human connection

Alongside questions on how well coaching bots perform (and the cost of use and data risks), adopters should also not underestimate the potential backlash from the use of AI. AI may contribute to losing the relational qualities that human-to-human (h2h) coaching fosters.

It is possible to give AI a voice or even telepresence using computer-generated avatars. But how willing employees will be to entrust their vulnerabilities to such technology is presently unknown.

An over-reliance on such technology may also drive a decline in the human relationships that currently facilitate employee support. The risk here is that new norms emerge in which hard-pressed managers defer to an AI coachbot on issues that ideally require a human touch.

Organisational opportunities with AI coachbots

AI offers scope for making coaching available to many more employees than is currently cost-effective. This promises to help more employees work more healthily and live healthier lives.

Beyond the obvious cost and accessibility benefits of using AI coachbots, huge potential could also come from the data. For instance, organisations could gain powerful insights into employees and the organisation through analysing coaching interactions at an organisational level using generative AI. 

In workplace coaching, it is reasonable to assume that some form of feedback from the coach to the organisation is desirable, if not essential, for supporting talent development.

Similarly, having access to all the data from coaching interactions may provide an organisation with individual and global summaries of themes and issues flowing out of the coaching. Such data could be used to provide reports on each coachee or to train another AI to provide predictions for future leadership performance (based on the coaching interactions).  
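As a sketch of what such organisational-level analysis might involve, the code below pools anonymised transcripts and asks a generative model for recurring themes. The prompt and model name are illustrative assumptions, and any real pipeline of this kind would need the consent and data protection safeguards discussed above.

    # Illustrative sketch: summarising recurring themes across anonymised
    # coaching transcripts with a generative model. Assumes the OpenAI
    # Python SDK and an API key; prompt and model name are placeholders.
    from openai import OpenAI

    client = OpenAI()

    def summarise_themes(transcripts: list[str]) -> str:
        """Ask the model for recurring themes across pooled transcripts."""
        pooled = "\n---\n".join(transcripts)
        response = client.chat.completions.create(
            model="gpt-4o-mini",  # illustrative model choice
            messages=[
                {"role": "system",
                 "content": "List the three most common themes across "
                            "these anonymised coaching transcripts. "
                            "Do not mention any individual."},
                {"role": "user", "content": pooled},
            ],
        )
        return response.choices[0].message.content

    print(summarise_themes([
        "Coachee discussed workload and unclear priorities...",
        "Coachee raised conflict with a colleague over deadlines...",
    ]))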

Considering how AI is already used in selection and recruitment, it is not inconceivable that succession planning processes may, in future, train AI models on such data to identify and predict individuals’ leadership potential.

Ethical considerations of AI coaching bots

The coaching industry is largely unregulated. Although there are some standards, for example in leadership coaching (such as the UK's level 5 and level 7 ICM standards in executive coaching), in practice not all coaching is necessarily of this standard. 

On the one hand, ensuring that coaching of employees by AIs meets these standards may require considerable work on evaluating the performance of AI coaches. On the other, the lack of regulation may drive start-ups to offer coaching AIs that fall short of such standards.

Transparency and comprehensibility, along with the use and security of data, are all relevant to the ethical practice of coaching by AIs. Careful evaluation of AI coaches is therefore highly recommended, both to cut through the hype likely to surround AI coaching and to ensure their performance meets ethical standards before they are deployed at scale.

An example of the potential complexities can be found in coach supervision arrangements. While good practice in coaching is that coaches are supervised by an appropriate coach supervisor, it is unclear who would supervise the AI, or even whether such supervision would be practicable.

Final takeaways and next steps

Once AI coaching moves beyond the novelty stage, questions about the developmental relationships between humans and AI will become increasingly important to understand and manage.

Potentially, AI coaching is a serious proposition for organisations but also carries unfamiliar and unpredictable risks that require forethought and clear thinking.

Here are a few ways to explore the questions raised about AI coaching in the context of AI in general:

  1. Engage groups/committees with reviewing the implications of AI in talent management. This could be part of a wider framing of AI integration within an organisation or focused specifically on talent management.
  2. Ensure stakeholders with relevant experience and knowledge are part of the review.
  3. Thoroughly research tools and services offering AI coaching/talent management solutions. Be prepared to be critical and look beyond the sales pitch.
  4. Identify business needs for training and recruitment of internal AI experts who can engage at a technical level and lead evaluation of third party products including AI coaching software.
  5. Evaluate the use of AI in talent management and elsewhere against sustainability goals. AI energy use and carbon footprints are likely to increase rapidly and could negatively affect an organisation's carbon budget.
  6. Engage with the data protection and cyber security aspects of AI coaching to ensure deployment does not lead to breaches and security threats. 
  7. Consider the potential positive and negative impacts of using AI coaching on company culture and employee experience. 
  8. Develop a clear understanding of how the use of AI in coaching serves and aligns with the values and mission of the organisation.

