
Ross Garner

GoodPractice Ltd

Online Instructional Designer




What is the total cost of all of the bills that you pay? How much do you spend each week on food? Where could you cut back if you suddenly found yourself out of work?

For most people, these are difficult questions to answer. Like telling the doctor how much alcohol we drink, or how often we exercise, our answers are subjective and full of bias - particularly when we think that we know the answer being sought.

I recently had to answer these questions because I’m buying my first home. The process involved a three-hour appointment with an incredibly helpful mortgage advisor, throughout which I couldn’t help but think: don’t you know the answers to these questions?

Surely 'The Bank' knows exactly how much I spend each month? Can't it tell me whether or not I will be able to stick to my mortgage payments if my wife or I fall ill, or if one of us loses our job? Apparently not yet, but it seems increasingly likely that in a few years' time this automated approach will be business-as-usual.

The slow creep of AI

While the world has waited for the arrival of ‘hard AI’ - an artificially intelligent machine that can do everything a human can do - ‘soft AI’ has crept into our lives. And like all great technology, it’s made life easier while remaining virtually invisible.

Soft (or ‘weak’) AI isn’t intelligent in the way that a human is intelligent, but it’s smart enough to solve a specific problem. Most usefully, it’s smart enough to solve a problem that humans can’t solve on their own.


For example, Google search trawls the web in seconds to deliver results tailored to your browsing habits. Netflix and Amazon show you recommendations based on your previous visits. The bad guys in Call of Duty shoot you when they see you, and run away when they get injured. 

If you’ve ever had a call from your bank's fraud department, then you’ve experienced how useful AI can be. The machine noticed an unusual transaction on your account and issued a command: ‘Human, call this customer and check if it was fraud’.

So how long will it be until 'The Bank' is recommending the best financial products for my needs? And using all of the data at its fingertips to ensure that they really are the most appropriate?

The next step

Imagine a site that tells you: '80% of customers matching your income and spending habits take out this mortgage product, and these are the risks based on past performance' - would this AI make my mortgage advisor redundant? Not yet, I don't think.

For now, I don’t even feel comfortable buying a bin off Groupon because I think I’m being tricked by the small print. (Note that it’s not the site I don’t trust, it’s my inability to grasp the implications of fine details.)


When taking out a mortgage - probably the most expensive product I’ll ever buy - I still want to have a relationship with a human. I just want that relationship to be more effective: AI will cut out my bias and subjectivity, while giving the mortgage advisor more time to check that I’m happy, respond to my calls and fit more appointments in.

There are industries where this could be devastating. Driverless vehicles, for example, could obliterate driving jobs. But, for the mortgage advisor, the machine is not taking her job; it's augmenting it so that she can focus on the relationship side.

Think about your own job. Are there aspects of it that could be performed better by a machine?

Fresh opportunities

At GoodPractice, where I work, our core product is an online toolkit filled with leadership and management content. We already use AI to make content recommendations. Now, we’re starting to use it to write abstracts of business books - a task traditionally done by our team of content editors.

Does that mean we’re cutting down on our content team? Absolutely not: they’re now able to spend more time scripting animations, writing more creative pieces and designing infographics.

The challenge for any organisation, however, will be to create a culture where the intervention of AI is seen as a positive and not as a threat. AI will make our jobs easier and our time at work more effective, but we are going to have to embrace change as a constant feature of our working lives. 


If an AI took over part of your job, what tasks would you do instead? What new skills would you require? What new opportunities are created?

And, for anyone working in Learning & Development, how will you support your colleagues throughout the organisation as they are challenged to develop alongside the machines?

In time, huge swathes of our jobs might be eroded away. We may even find that work is no longer necessary. But in the near future, we do have a choice to make: Will we resist AI until we come in one morning and find that we’ve been replaced by a hard drive? Or will we embrace AI as a tool that can help us do our jobs better?

I discussed AI and jobs on the GoodPractice Podcast with our COO Owen Ferguson, our technical director Jonny Anderson and the CIPD’s David D’Souza. You can listen to our podcast here.
