Robin Hoyle

Huthwaite International

Head of Learning Innovation at Huthwaite International

Will AI send us hurtling towards ‘learning magnolia?’

At the Learning Technologies Conference and Exhibition last week there was much excited talk about Artificial Intelligence (AI). I suspect many of those in attendance have a vague inkling of what AI is, but precious little else.

What really is AI?

If you’re not entirely sure, Artificial Intelligence is the ability of computers to ‘learn’ how to do things.

This kind of machine learning requires two things – massive amounts of data and algorithms.

The data is sliced and diced and inferences are drawn using the algorithms – essentially a set of rules about what is important and what isn’t.

From these rules and the data which has been crunched in superfast time by powerful computers, we have ‘learning’.

At their most complex, AI programmes can beat chess grandmasters and experienced players of the fiendishly complicated game of Go.

AI can also make recommendations – helping us by auto-completing our Google search terms, or directing us to the next book we might want to read or the next box set we may wish to binge watch.

The £1.4m system replacing 34 insurance claims workers

As a group of Japanese insurance workers recently found out, the march of AI has business implications too. Next month, 34 insurance claims workers will be replaced by a £1.4m AI system.

The expenditure on the system is designed to achieve a Return on Investment within a little over a year.

Now, I’m not across all the numbers, but this seems to mean that these workers are not low-wage folks undertaking repetitive, mind-numbing jobs.

If the system is to pay for itself in around a year, the salaries it replaces must roughly cover what it costs: £1.4million plus running costs divided by 34 salaries gives an average annual wage of around £47,000.
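For anyone who wants to check my arithmetic, here is that back-of-the-envelope sum spelled out. The £1.4m price tag and the 34 workers come from the reported case; the annual running cost below is a purely hypothetical figure added for illustration.

```python
# Back-of-the-envelope payback arithmetic.
# The £1.4m system cost and the 34 workers come from the reported case;
# the annual running cost is a hypothetical, illustrative figure.
system_cost = 1_400_000                 # one-off cost of the AI system, in GBP
assumed_annual_running_cost = 200_000   # assumed figure - not reported
workers_replaced = 34

# If the project is to break even in roughly a year, the salaries saved
# must roughly equal the system's first-year cost.
first_year_cost = system_cost + assumed_annual_running_cost
implied_average_salary = first_year_cost / workers_replaced

print(f"Implied average salary: £{implied_average_salary:,.0f}")  # roughly £47,000
```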

The move from low skill, low wage jobs to judgement-based roles

Up until now, discussions of the employment implications of AI systems have focused on low skill, low wage jobs.

The general assumption has been that building systems to replace workers doing jobs with a high degree of routine drudgery would be the natural progression of things.

But as the experience of the Fukoku Mutual Life Insurance company employees shows, it is the reasonably well remunerated, judgement-based roles which are the most financially attractive targets for AI.

Put simply, unless the jobs you are replacing are pretty expensive to fill, AI just isn’t viable.

The future redundancies won’t be among those in production-line jobs in massive factories, but among those performing cognitively intensive tasks – roles which require people to make sense of information, synthesise that data with knowledge and experience, and determine the best next steps or a range of options.

You might want to think about how much of your role matches this description!

Trump and his US factory push...

Routine factory jobs have already been automated.

Despite Donald Trump’s commitment to bring American jobs back from overseas, the reality is that three times as many US factory jobs have been automated out of existence as have been moved to Mexico, China or other lower-wage economies.

In learning, AI may be able to gather data from learners or from people doing the jobs we want learners to do. Using an algorithm, the machine can then accurately predict what learning might be needed next and serve up appropriate content before employees know they need it.
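To make that concrete, here is a deliberately naive sketch (with entirely made-up course names and completion histories) of the kind of ‘what comes next’ logic being described: look at what other people did after the same activity and serve up the most common answer.

```python
from collections import Counter, defaultdict

# A deliberately naive sketch of 'predict the next piece of learning'.
# Course names and completion histories are entirely made up.
completion_histories = [
    ["induction", "data_protection", "negotiation_basics"],
    ["induction", "data_protection", "presentation_skills"],
    ["induction", "negotiation_basics"],
]

# Count which course most often follows each course across all learners.
followed_by = defaultdict(Counter)
for history in completion_histories:
    for current_course, next_course in zip(history, history[1:]):
        followed_by[current_course][next_course] += 1

def recommend_next(last_completed):
    """Serve up whatever most often came next for other people in the past."""
    counts = followed_by.get(last_completed)
    if not counts:
        return None  # no history, so the algorithm has nothing to offer
    return counts.most_common(1)[0][0]

print(recommend_next("induction"))           # 'data_protection' - the most common past path
print(recommend_next("negotiation_basics"))  # None - nothing has ever followed it here
```

Notice how little it actually ‘understands’: it simply replays the most common past path – the averaging tendency I come back to below.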

The fact that so much of what we now do involves interacting with machines means that the data gathered is not simply from the LMS or similar, but includes granular performance data about how efficiently we interact with various bits of connected kit.

If all this is beginning to sound a bit Big Brother-ish, it may get worse.

The ability to crunch massive amounts of data has been gained relatively recently, so we’re not too sure what we’ve got, nor what we can use it for.

In L&D we’ve been pretty good in the past at according importance to things we can easily measure rather than measuring the things which are really important.

If we rush helter-skelter to crunch whatever numbers we can get hold of, we run the risk of magnifying that tendency – valuing the things that can be counted instead of things which are more difficult to quantify.

The inappropriate use of small data

However, I think there is another worrying option. I expect that organisations will try to implement big data actions on small data sets.

Imagine if you will that an average employee interacts with a Learning Management System or Virtual Learning Environment once a month.

The amount of data that is generated is unlikely to be massive.

Small variations – perhaps attributable to the one individual who is a really keen and heavy user of your company’s library of online courses, or the entire department which can’t gain access because of a technological black hole – massively skew the limited data available.
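A toy illustration of that skew, with made-up numbers: the same enthusiastic user who barely nudges a big data set drags a small one a long way.

```python
from statistics import mean

# A toy illustration of how one keen user skews a small data set.
# All figures are made up.
ordinary_users = [1] * 29      # 29 employees each log in to the LMS once a month
keen_user = [40]               # one enthusiast logs in 40 times

small_sample = ordinary_users + keen_user
large_sample = [1] * 9_999 + keen_user   # the same enthusiast lost in a much bigger crowd

print(f"Small sample average: {mean(small_sample):.2f} logins per month")   # 2.30
print(f"Large sample average: {mean(large_sample):.3f} logins per month")   # 1.004
```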

Can we be nuanced when it comes to data?

Algorithms tend to average things out. Nuance is only possible where you have massive amounts of data and even then it isn’t that sophisticated. It makes assumptions about what you will want based on past choices.

So, if you once bought a book about the Victorians to assist your child with a school project, Amazon will assume you have a life-long interest in the 19th Century verging on the obsessional.

If you once watched a TV show with a visiting niece about horse-riding, your digital TV supplier will decide that you will be imminently in the market for saddles and jodhpurs.

The recommendations made by these AI systems, which learn about us and predict our every need in order to earn our brand loyalty, are often pretty crass.

But they have more personal data to crunch than your average L&D team by a factor of several hundred. AI in retail gives us the outcome predicted by late 20th century Thinker and Philosopher, The Jam’s Paul Weller: “And the public want what the public get”.

Supplicant consumers v organisational learners

Ceaseless targeting works when we become supplicant consumers. Not, in my mind, a recommendation for the role of AI in learning.

AI is also used to predict our desires and design things we might want.

So Netflix commissions new shows entirely determined by the programmes individuals watch most often. I personally can’t wait for the digital spawn of Bake Off and Call the Midwife – “The Great British Babe Off” – or Game of Thrones meets House of Cards in a new mini-series called ‘Donald for President’.

Perhaps there will be a combination of The Second Best Marigold Hotel and Casualty?

A story featuring famous actors playing ageing British ex-pats using hospital services in India?  Oh, hang on – that one’s on ITV on Sunday evenings.

The neutrality of AI?

There is another very disturbing idea about AI. This was voiced very eloquently by Euan Semple in his Learning Live session in 2016.

He said: “Algorithms are not neutral”. He is, of course, right. Whether they are predicated on intentional biases and beliefs or merely find new ways of delivering up unintended consequences, we need to be wary of algorithms. 

The tale of Microsoft's AI Twitterbot

Microsoft is pretty smart when it comes to all things technological.

However, when they launched an AI Twitterbot in March 2016 they hoped it would communicate positively with, and learn from, 16 to 25-year-olds on Twitter.

The idea was that the experimental AI programme would soon communicate so naturally with this demographic that users would not even realise that Tay (the name they gave their creation) was just a computer programme. Perhaps they should have read Mary Shelley’s Frankenstein?

Within 24 hours Tay had gone from innocently proclaiming “Hello world” to its followers to expressing the opinion that Jews were responsible for 9/11 and Hitler did nothing wrong! 24 hours! You can create an AI Anti-Semite in less than a day! Miraculous.

The unintended consequences of big data learning

There have been other cases of dodgy opinions emerging from machine learning.

In the US, an AI system designed to assist judges in sentencing policy gave white men significantly lower sentence recommendations than African American men or women when similar offences were reviewed.

By basing its algorithm on past sentencing practice, the (perhaps) unconscious bias of members of the judiciary became a permanent feature of the programme.
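In stripped-down form, the mechanism looks something like this – if your ‘model’ is really just the average of past decisions, any disparity in those decisions comes out the other end as a rule. The figures here are invented purely to show the mechanism, not taken from the real system.

```python
from statistics import mean

# A stripped-down illustration of how past bias becomes a rule.
# The sentence lengths (in months) are invented for illustration only.
past_sentences = {
    "group_a": [12, 14, 13, 11],
    "group_b": [20, 22, 19, 21],
}

# A naive 'model': recommend the historical average for each group.
recommended_sentence = {
    group: mean(sentences) for group, sentences in past_sentences.items()
}

print(recommended_sentence)
# {'group_a': 12.5, 'group_b': 20.5} - the historical disparity is now baked in.
```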

Can you cheat an intelligent system?

An additional challenge is that algorithms can be cheated. We know this. Search Engine Optimisation (SEO) is the art and science of ensuring that product websites game the Google algorithm.

If you have the resources to push a particular product or political belief, you can make it appear far more mainstream, admired or desirable than it actually is.

If this can happen on Google, which has massive amounts of data, think how it could be manipulated in an environment using significantly less data – such as that relied on by L&D departments.

Will we hurtle towards "learning magnolia?"

Using algorithms to predict what people want or need actually reduces choice rather than expands it.

AI in learning potentially reduces the range of options available and pushes everyone towards a kind of learning magnolia – a safe, bland alternative which limits our experience rather than expanding our horizons.

It is, in a real sense, like driving a car by only ever looking in the rear-view mirror. If you have done something in the past, the chances are you’ll want to do something similar in the future. Haven’t we moved beyond such mechanistic categorisation?

In a world where we are increasingly questioning simplistic categorisations of individuals – from MBTI to Learning Styles – it seems to me to be profoundly sad that we are in danger of replacing one set of boxes and labels with an additional set of personality types dreamed up by the most emotionally intelligent of humans, West Coast geeks.

This use of past data ignores the most important part of learning: that learning is transformative.

I have been changed unutterably by learning activities in which I have participated. I hope I have helped change the lives or careers of some of those who have experienced learning activities in which I was involved. 

I am sufficiently humble to know that I have emerged from some learning experiences as a different person from the one who went in. Good learning experiences change us.

Despite this hope for growth, change and development, the idea that we shall forever be defined by a set of numbers and data points about things we have done in the past is deeply depressing. A great leap forward it is not.

It must be resisted.
