Learning in conversation with AI is increasingly popular. The style of communication with gen AI tools such as ChatGPT is often likened to Socratic questioning (Socrates being the ancient Greek philosopher who explored topics with students through a process of questions and answers). Gen AI works in a similar way: learners ask questions (called prompts), seeking answers and clarification, with the AI tool replying and prompting the learner to take their learning further. Rather than a static process, it is a dialogue, replicating a human-to-human conversation, or a trainer-learner one.
Gen AI chatbots are dialogue-based online systems that receive prompts (generally in text form, although images and spoken questions are increasingly being used) and generate human-like answers in real time. Those answers can be short and simple, or lengthier and more complex. And learners can keep refining their prompts, with the AI responding and checking their comprehension as the dialogue progresses.
The idea of having gen AI as a 24/7 virtual tutor is appealing to a lot of learners (and to a lot of organisations). It means people can access learning at any time, at the point of need, with questions that are specific to their context. They can ask gen AI to challenge them, and to produce case studies or examples to help their understanding.
Organisations are also increasingly interested in using gen AI tools for interactive learning, such as simulations and role play. This enables learners to practise skills and behaviours in a safe space and receive real-time feedback. Some learners express a preference for honing their interpersonal skills in an AI environment, as they feel less worried about making mistakes or being judged on their performance.
However, there are still many concerns around the use of gen AI tools, in particular around data privacy and accuracy. Continuing problems with hallucinations – when an AI model generates a response that is incorrect or misleading but presents it as fact – mean many users, and potential users, are concerned that the responses they receive cannot be trusted.
Detlef Hold, Global Strategy Lead People & Organisational Capabilities at Roche Diagnostics
When the gen AI hype started, I believed ‘this is going to transform learning and how we think about learning’. Many people still do think that, and I agree, there’s a lot of potential. However, it seems to me that we’re still in the exploration phase of how gen AI can add business value in L&D, much more so than I would have thought a year ago.
For some capabilities, such as content generation, learning format and ways of learning, the use of gen AI is very appealing (it sounds so simple and quick, for example) and it has lots of transformational potential, yet I am less sure now about how long it will take to realise its value.
Current exploration is guiding us towards an interesting, additional design element for learning effectiveness, personalisation and technology-enabled “human-like” learning – because it is a dialogue and not one-sided information sharing.
After the initial hype and the phase of experimenting, we are in a different phase now. There’s a lot of tension between regulation and innovation, opportunity and risk, people being sceptical and seeing inaccuracies, failures and the investment of time, budget and resources it takes to benefit from the technology. There’s tension between the US driving innovation at all costs and the EU, which is going more towards regulation and safety first.
I see lots of energy for experimentation though and very interesting use cases – AI tutors for feedback skills or an AI simulation for coaching, for example. There are trailblazers who are trying to really show value and move it forward. And there are others who are saying ‘Is it really worth it? Do we have time for it? What’s the cost benefit and how long will it take?’
Currently, gen AI is not replacing L&D. I see it as an augmentation, maybe a process acceleration in some cases. I don’t see any scalable additional capabilities beyond what vendors claim AI can do – more personalised, more specific content recommendations, faster generation of content and higher quality of multi-modality learning products. We haven’t solved the data problem – data standards, quality, availability and the volume of data needed to capitalise on gen AI. There’s a huge legacy gap in learning capability there and access to relevant data as the fuel for gen AI has become a major challenge.
Do L&D teams have the deep understanding to drive this forward in the most efficient, effective way? I wish we could accelerate the skills shift needed to feel well-equipped for the changes to come. In organisations, I don’t think it’s L&D who is driving the conversation around the use of gen AI or AI – it’s other groups, like data scientists, LLM experts, IT experts. L&D could be a catalyst. We need to continue building our gen AI muscles while also improving core capabilities like performance consultancy, organisational effectiveness interventions or digital learning design. Beyond investing in L&D teams’ own capabilities, we could be saying ‘What does it mean for the business? Where can we tap in, use our business acumen and play a pivotal role in driving the transformation of our core business?’
Top tips:
It’s a skill we all need to develop – an understanding of the possibilities of conversations with AI and how you learn through conversation.
Professor Agnes Kukulska-Hulme
Institute of Educational Technology, The Open University
Resources:
The Innovating Pedagogy report is an annual report co-authored by academics at the OU's Institute of Educational Technology, and this year, together with researchers from the LIVE Learning Innovation Incubator at Vanderbilt University in the US.