Talking AI ethics at work

About this trend

When gen AI arrived a couple of years ago, there was an explosion of new tools. There has been much excitement about these technologies, but also much concern, among employers and employees alike, about how to use them and what could go wrong. It’s a big area – the EU recently brought in the AI Act, new regulation that classifies AI systems by the level of risk they pose.

Fears around AI can manifest in many ways. Some people worry that they will become unemployable and be left behind if they don’t jump on the AI bandwagon, or that AI will make their skills redundant. This may make them overly keen to adopt AI, without exercising due caution or thinking through the ethics.

Others are held back by fear of what could go wrong, having read and heard about bias, discrimination and hallucinations. They may be wary of new technology generally, and so reluctant to embrace it in case they make mistakes.

Organisations and L&D need to help employees through this transition stage and whatever comes next as AI tools keep evolving.

There may be generational differences in how people approach gen AI. In the Innovating Pedagogy report, the focus is on talking AI ethics with young people, mostly from a rights perspective.

It is worth bearing in mind that because young people are so tech-savvy and so engaged with technology in their everyday lives, they are more likely to get stuck in and use AI to do what they want to do. They might focus on the output – how gen AI can achieve a certain task or goal at work – without stopping to consider the ethics.

Older workers, however, might take a more cautious approach to using gen AI, partly because they are not digital natives, but also because their approach to work in general may be more considered. People who have been in the workplace for several years will have experienced or witnessed the fallout when actions have unintended consequences.

Some organisations use mentoring and reverse mentoring to help bridge differences in knowledge and experience. With reverse mentoring, younger workers can help older workers understand the relevance and application of emerging technologies in the workplace, for example. Everyone, young and old, needs to be involved in the debate.

The expert view

Jon Fletcher, Fractional HR AI strategist

I’m having a lot of conversations with people and organisations where they’re saying “What should I be doing? How should I be using AI?”

Individuals need to learn how to use these technologies properly and effectively. There is a really strong requirement to support the adoption of AI in organisations. L&D needs to help with that. It needs to be partnering with implementation teams and deployment teams to make sure employees can use the technology correctly. This is a really big opportunity for L&D.

But I’m concerned that L&D is getting sidetracked, focusing the majority of its attention on how it uses AI, when it should also be thinking about how to support employees to use AI and driving that conversation.

The two big skills we need to be training are critical thinking and fact-checking. ‘Is what the AI is producing accurate? Does it make sense? Is there bias or discrimination in the output? Should I even be using an AI solution to do this?’ People need to understand how these solutions are creating their outputs.

The biggest concern I have, from an ethical perspective, is that people are asking AI to do tasks when they do not have the knowledge and experience to review and understand what it produces, often without doing their own research. And a key reason why they’re using gen AI rather than doing their own research is probably time.

People sometimes lack an understanding of the ethical considerations – the bias and discrimination that can occur. Images, for example, are sometimes created by AI in a way that doesn’t represent the culture, society or population within an organisation.

Or a person could ask a generative AI solution to create a description or a piece of content, and within it there are unnoticed biases that sit at odds with the global organisation they work in. It might seem right to somebody based in the UK, but from a cultural perspective it’s not okay for somebody based in the Middle East, for example.

When we talk about ethics, it’s largely about bias, discrimination and outputs, but there’s also the huge environmental impact. One widely cited estimate is that each prompt put into ChatGPT uses, on average, about half a litre of water to cool the servers. Do that day in, day out – maybe that’s 100 prompts a person – and you’ve used 50 litres of water to cool the servers that generate the responses to your prompts. This needs to be considered if you’re driving Net Zero policies in your organisation. Should we be doing this?
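
As a back-of-the-envelope check of those figures (both numbers are rough estimates, and the 1,000-person organisation in the second line is a purely hypothetical illustration):

\[
100 \ \text{prompts} \times 0.5 \ \text{litres/prompt} = 50 \ \text{litres per person}
\]
\[
50 \ \text{litres/person} \times 1000 \ \text{people} = 50\,000 \ \text{litres}
\]

It is totals at that scale that make gen AI use relevant to an organisation’s Net Zero reporting.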

Top tips:

  • Educate yourself – and don’t forget to educate employees on how to use these technologies properly.
  • Human capability development is just as important as training on the technology itself. Support employees from a technology perspective (how they are going to use the technology) and from a human-centric skills perspective: empathy, emotional intelligence, adaptability, a personal growth mindset, ethical reasoning and sensitivity, and social and cultural awareness. Human skills generally are going to become even more important.
  • Stay informed about developments in AI ethics.
  • Be at the forefront. If conversations about the ethics of AI aren’t being had in your organisation, start them.

AI ethics has been an emerging topic over the past few years: how to use AI in an ethical manner, and whether AI itself is ethical. It is important that people think about and understand these ethical issues.

Professor Agnes Kukulska-Hulme
Institute of Educational Technology, The Open University

Resources:

Read the original report behind Trends in Learning

The Innovating Pedagogy report is an annual report written by academics at the OU's Institute of Educational Technology, this year in collaboration with researchers from the Centre for Innovation in Learning and Teaching at the University of Cape Town.

Read Innovating Pedagogy

