Tom Calvard

University of Edinburgh Business School

Senior Lecturer in HRM


From skills erosion to complex resilience: How AI will transform L&D

How will AI disrupt the skills and development landscape? Professor Tom Calvard of the University of Edinburgh Business School examines the latest academic research, highlighting key insights for L&D professionals. From the realities of complex systems to the potential demise of human skills, this analysis explores what the future holds – and how to prepare for it.

Technological changes have long affected employee skills and development. However, over the past few years, the rise of Generative AI has led to renewed commentary about the future of work. Where will these latest changes take us? Through an academic lens, here we’ll examine how AI will transform L&D. We’ll explore why automation and augmentation are an intricate balancing act, the realities of complex resilient systems, and the potential demise of human skills.

Automation, augmentation and other technological impacts

Typically, discussions on technological impacts focus on two main effects:

  1. Automation, where Gen AI fully performs tasks previously performed by humans.
  2. Augmentation, where Gen AI partially performs tasks in ways that support or enhance human capabilities. 

It’s important to note here that automation and augmentation cannot and should not be neatly separated from each other. Rather than taking a one-sided focus, we should recognise how the two affect each other over different timeframes and levels of scale.

The automation–augmentation balancing act

For instance, we can use automation to go beyond human cognitive abilities, such as large-scale information searches. But we should then switch to augmentation to reintegrate human capabilities for intuitive and holistic judgements – for example, choosing options and evaluating contexts.

Balancing automation and augmentation in work is not an easy task, though. We will need to scrutinise their effects critically and regularly in context to fully understand potential futures of work.

Anxieties and aspirations

Another consideration is how automation versus augmentation makes us feel. Automation can generate future-of-work anxieties about many jobs being replaced by AI. Meanwhile, augmentation can generate future-of-work aspirations around learning and development, requiring future-proof social and emotional skills.

Additionally, there is still a need to better understand how AI will affect existing gender, race and class inequalities across different occupations, such as white-collar, blue-collar and pink-collar workers.

Adopting technology – a broader lens

Beyond automation and augmentation, technological adoption can also have a variety of other impacts. These include:

  • Transferring skills between people.
  • Creating new tasks or jobs.
  • Enabling work to be done remotely.
  • Requiring more work to be done in the time allocated.

The way technological impacts are addressed by managers and HR has implications for ensuring ‘good work’ with dignity, autonomy, equality, development and community. ‘High-road’ approaches keep people’s wellbeing and contributions to productivity centre-stage. Contrastingly, ‘low-road’ approaches risk creating poor-quality work, harming individuals and societies.

Deskilling and degradation of human skills

Across many advanced economies, academics and commentators alike have raised concerns that while technological change has increased overall economic growth and productivity, income and job prospects for many typical workers have not been keeping pace.

Relevant here is political economist Harry Braverman’s ‘deskilling thesis’ from the 1970s. He argued that powerful organisations and managers, combined with technological advancements, tend toward greater control over the planning and design of work. This reduces employees to executing narrow, specific tasks, leaving them with degraded work and eroded skills.

Growing technological dependence

Let’s also consider our dependence on widespread and complex forms of computing and automation, such as GPS, autopilots, Google searches and drones. Writing on this topic, journalist Nicholas Carr argues we risk being trapped in a ‘glass cage’ of electronic screens and data, while our cognitive and manual skills fall further into disuse.

These concerns persist alongside the evolution of AI, with various evidence suggesting that our growing exposure to devices, content and screens is straining our brains, skills, attentional capacity and mental health.

GenAI and the demise of critical thinking

More specifically, recent research has suggested that greater reliance on GenAI may diminish problem-solving and critical thinking skills. We should therefore consider how we are using it, seeking to challenge and interrogate its ‘frictionless’ outputs so our thinking skills are not blunted or flattened.

Lack of human oversight

These changes also have implications for human oversight (or the lack of it) regarding the safety, resilience and sustainability of complex technological systems. We run the risk, albeit a relatively rare one, of causing disruptive and potentially catastrophic incidents of error and failure, with harmful impacts on many stakeholders.

Designing and engineering resilient systems

We may initially consider Generative AI as a simple technological tool supporting employees in their daily lives. Yet it will eventually underpin more pervasive, autonomous, networked and large-scale complex systems.

We can learn lessons from existing complex technological systems and contexts with reliable safety and automation records, such as nuclear reactors and aviation. The designers, operators and other users of these systems come to regard this complex technology as ‘ultra-safe’ or ‘almost totally safe’. 

Insights from the 2009 Air France plane crash

But the very sophistication of these technologies gives rise to paradoxes, ironies and surprises, as my collaborative work on the 2009 Air France plane crash shows. In unusual situations outside of normal limits, technology can behave in unexpected ways. In such scenarios, human operators find themselves struggling to comprehend and respond.

We also need to understand that the ‘ultra-safety’ of these transportation systems will not transfer straightforwardly to other technologies and environments, such as self-driving cars.

The red flags of co-pilots and auto-pilots

Not all user contexts are so dramatic of course. But the language of AI ‘co-pilots’ or humans ‘being on auto-pilot’ in relying on technologies is telling. The vision of AI ‘freeing people up’ from work is also concerning, if it comes at the expense of human connection, relationships and skills.

Technological developments do not happen in a vacuum: they are connected with and layered on top of many other devices, processes, datasets and systems, the legacy of which is difficult, if not impossible, to uninvent.

Ultimately, how well we deal with technological safety depends on top-down and bottom-up interactions and trade-offs between different technologies, people and goals. 

Final thoughts

Perhaps in the future, we will all need to think more like operators working within complex systems. What does this mean for HR and L&D professionals working with AI technologies in different workplace contexts? Here are some key next steps:

  • Join forces with early adopters, enthusiasts and sceptics to work closely together, testing your respective definitions and assumptions of the technology.
  • Focus on fostering a culture of open, balanced discussion about where AI can/can’t help with workload, understanding problems, reaching goals and improving experiences.
  • Avoid a ‘rip and replace’ approach to implementation. Instead, carefully evaluate what might be ‘lost’ through AI, and what human capabilities will remain crucial, grounding evaluations in risk and trade-off analyses.
  • Use approaches based on resilience engineering and sociotechnical systems principles. These offer established frameworks and tools for understanding how to bring tasks, people, technology and structures together as part of a successful system.
  • Ensure principles of ‘good work’ are upheld during technological adoption, in support of human productivity and wellbeing.
  • Balance the need for human comprehension and oversight of technology, with the need to make AI environments as safe as possible.