
Erica Farmer

Quantum Rise Talent Group Ltd

Co-Founder & Business Director, Digital Learning & Apprenticeship Expert, Speaker & Facilitator



Why managers must consider ethics when upskilling with AI

Don’t rush into AI implementation without considering ethics.

‘Not another article about AI!’ I hear you say. Many of us are feeling a level of fatigue when it comes to this topic, such is its prevalence, and that includes me, even though talking about it is part of my job!

What I’m most tired of is hearing about only one aspect, or ‘subspecies’, of AI (generative AI), and only one aspect of that. Yes, you guessed it: prompting.

Prompt engineering, or prompting, is absolutely the next important skill that folks in business need, in my opinion. It’s a skill that functions and teams in learning and development, HR, IT and operations should all be thinking about, upskilling in and building into their plans right now.

I’m not here to talk about that today, though. You can go onto any of the platforms and start learning about minimal prompts, monster prompts and everything in between right now. You can also get sucked into adverts telling you how you can make £5,000 a day using ChatGPT. (Warning: approach with care. If that were true, we’d all be millionaires!)

What I want you to take from this article is that nothing should happen in your organisation without a digital ethics and integrity strategy and supporting behavioural framework.

‘What on earth is this?’ I hear you say. Well, if AI prompting is the tidal wave, then digital ethics is a tsunami in the background that we need to prepare for and get ahead of. Drastic language, I know, but I’m just trying to get your attention as this is going to be the most important foundation that we must get right before we do anything in the AI space.


Ten ethical issues with AI use

According to ChatGPT 4, digital ethics and integrity pertain to the moral principles and guidelines that govern the use of technology, information and digital media. It breaks this down into the following ten items:

1. Privacy

The right to privacy involves protecting personal information and ensuring that it is collected, stored, and shared responsibly. This includes adhering to regulations like the General Data Protection Regulation (GDPR) and other privacy laws, and ensuring that individuals have control over their data.

2. Security

Digital integrity requires safeguarding information from unauthorised access and potential harm. This includes implementing proper cybersecurity measures to protect against hacking, data breaches, and other cyber threats.

3. Accessibility

Digital ethics also includes making sure that digital technology and information are accessible to all, including individuals with disabilities. This ensures equal access and opportunities for everyone, regardless of physical or cognitive challenges.

4. Transparency and accountability

Being open and transparent about data collection practices, algorithms, and business operations fosters trust and helps stakeholders understand how and why decisions are made. Accountability is about taking responsibility for actions and their consequences.

5. Intellectual property rights

Respecting copyrights, trademarks, and patents ensures fair use of intellectual property and promotes innovation and creativity.


6. Fake news and misinformation

Ensuring the accuracy and integrity of information is vital in the digital era. Fake news and misinformation can spread rapidly online, causing panic, division, or even harm. Media literacy and verification processes are essential to combat this issue.

7. Inclusivity and diversity

Promoting inclusivity and diversity in technology means recognizing and addressing biases and ensuring that products and services are designed with all users in mind, not just a specific demographic.

8. Environmental impact

Digital ethics also encompasses the environmental impact of technology, such as energy consumption and electronic waste. Sustainable practices and responsible e-waste management are important aspects to consider.

9. Human rights

Technology should be developed and used in a way that respects and upholds human rights. This includes ensuring that technology does not infringe on freedom of expression, discriminate against groups or individuals, or otherwise cause harm.

10. Algorithmic bias and fairness

Artificial intelligence and machine learning models can unintentionally perpetuate societal biases if not designed and trained with care. Ensuring fairness and eliminating biases in algorithms is a significant aspect of digital ethics.

Interesting, right? I actually took that directly from GPT’s response to my prompt.*

A pervasive issue

So, you might ask, does that mean I’m not acting digitally ethically or with integrity? You decide.

I’ve ‘owned up’ and cited the source, just as I would in an academic paper or when referencing a book or website.

How can you tell it’s from generative AI? No personalised examples? Very script-like text? Quite unlike the first part of my blog? All of the above. Look how easy it was, though. This is one of the main challenges with generative AI and digital ethics. 

People are going to use the output of these systems to create content, use it through partners, build learning, create new graphics, author stories and much more. You can’t control this, so don’t even try. This is the mistake I’m seeing in many academic organisations and institutions. People have ChatGPT on their phones and are using it just like Google. You. Can’t. Stop. It.

So, what can you do? There is no silver bullet or operations manual for this stuff yet and, well, it sounds cheesy, but if you build it, they will come.

Mapping your ethical approach

Your organisation may have a set of standards which outlines expectations of employees and helps you to measure their digital adherence and behaviours. This is a great place to start.

It could be a standalone policy, or it could be included in your social media, information security or HR policies. Have a look to see what you can find and where the gaps are.


If there’s nothing there, start with what the Mission Group does with any AI-related work; I love this simple yet impactful approach. You ask three fundamental questions that position the integrity of any work that could involve AI:

  • Is it legal?
  • Is it ethical?
  • Does it add value?

Shared by Brad Stacey, Head of Data Science and Creative Technology, Mission.

What’s your current digital framework?

So how does this translate to communicating with your team? We need to ensure team members at all levels are doing what they should be, as poor digital behaviour could have huge data security and brand impacts.

How about starting with a simple set of questions to assess current approaches to working digitally? Something like this:

  • Do team members consider risks when using digital technology and weigh up the possible hazards? How do they mitigate them?
  • According to your team members, do your policies and frameworks encourage a flexible approach in using digital resources for the right task at the right time?
  • Do team members consider the brand, reputation and the impact to the customer, and represent the organisation with care when using social media and AI?

I know there is a lot to consider, but hopefully this article has ‘prompted’ the grey matter to get this sorted for your own organisation.

Seek out the right people to follow in this space, as there is a lot of noise online about AI. You can start with our AI for the Average Joe podcast, which covers many elements in AI adoption. Find the playlist here.

If you enjoyed this article, read: AI for the ‘average Joe’.


*Here is the reference: (OpenAI’s ChatGPT, accessed 07/08/2023).
