The Power of 'Intent' in Designing AI-Powered Conversation Simulations
The use of AI and natural language processing (NLP) in learning is still fairly new. So when buying simulation capabilities or designing simulation-based training experiences, it's useful to take a step back and consider the 'art of the possible' in the current technology landscape.
One of the biggest predictors of success for AI-powered simulations is the 'lifelikeness' of the conversation or chat experience. Simply put: conversations that are too linear, restrictive, or script-based feel less 'real.'
If you're not familiar with AI-powered simulation experiences, here's the background: most tools deliver the experience through some version of branched routing. The system takes what a learner says, transcribes it, and then uses natural language processing to determine 1) how the customer/patient should reply and 2) what rating or coaching the learner should receive at that stage of the conversation.
The most tried-and-true capability, used by most tools, is phrase matching, in which the system checks the learner's response for key terms or their alternatives. While it has its downsides, this is incredibly effective in many scenarios. But it can also be difficult to configure and to think of enough alternatives to make the simulation dynamic. For example, how many ways are there to upsell your product? Can you really key-phrase your way to every alternative? Yes, but it takes real time and effort, and depending on your products or learner population that may not be a viable option.
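To make the mechanics concrete, here is a minimal sketch of phrase-based assessment. The phrase list and function name are illustrative assumptions, not any vendor's actual API; real tools typically also handle transcription noise and partial matches.

```python
# Toy sketch of phrase matching: pass the learner's (transcribed) utterance
# if it contains any pre-configured key phrase. Phrases and names here are
# illustrative only, not any product's real configuration.
ACCEPTED_CLOSINGS = {
    "have a great day",
    "have a nice day",
    "take care now",
}

def phrase_match(utterance: str, accepted: set) -> bool:
    """Return True if any accepted phrase appears in the utterance."""
    text = utterance.lower().strip()
    return any(phrase in text for phrase in accepted)

print(phrase_match("Thanks for calling - have a great day!", ACCEPTED_CLOSINGS))  # → True
print(phrase_match("Enjoy the rest of your afternoon!", ACCEPTED_CLOSINGS))       # → False
```

Notice the weakness the paragraph above describes: the second closing is perfectly friendly, but because the designer never anticipated that exact wording, it fails.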
Large language models have made a huge dent here by making it much easier to deliver conversation simulations based on intent.
Intent-based assessment analyzes the underlying meaning of a learner's statement in a simulated conversation with a customer or patient. There might be 50 ways to show empathy, attempt an upsell, or explain a complicated cancellation policy. But by using NLP models that assess intent, your company can dramatically reduce the time needed to build a simulation and introduce much more lifelike flexibility into the learning experience.
For example, if you were using phrase matching to design the closing of a conversation, you might have to enter multiple versions of "have a great day," "have a nice day," "take care now," and so on. Transcription errors or an incomplete set of acceptable alternatives can cause a perfectly good response to score poorly, frustrating you and your learners.
By using intent models, a Learning Experience Designer can simply enter samples of strong closing statements (like the ones above). The intent model can then automatically assess whether the learner's submission matches the underlying meaning of a "fond farewell."
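The idea can be sketched as scoring a learner's utterance against the designer's sample statements by similarity of meaning. A real system would use an LLM or sentence-embedding model for this; in the toy version below, a simple bag-of-words cosine similarity stands in so the example runs with no external dependencies, and the samples and threshold are illustrative assumptions.

```python
# Toy sketch of intent-style matching. A production system would embed the
# sentences with an NLP model; word-count cosine similarity is a stand-in
# here purely to illustrate the "compare against samples" idea.
from collections import Counter
import math

SAMPLE_CLOSINGS = [
    "have a great day",
    "have a nice day",
    "take care now",
]

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[w] * b[w] for w in a)
    norm = math.sqrt(sum(v * v for v in a.values())) * \
           math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def intent_score(utterance: str, samples: list) -> float:
    """Best similarity between the utterance and any sample statement."""
    u = Counter(utterance.lower().split())
    return max(cosine(u, Counter(s.lower().split())) for s in samples)

print(intent_score("have a wonderful day", SAMPLE_CLOSINGS))           # → 0.75
print(intent_score("your cancellation fee is very high", SAMPLE_CLOSINGS))  # → 0.0
```

Even this crude stand-in shows the payoff: "have a wonderful day" was never entered as a sample, yet it scores highly against the "fond farewell" examples, while an off-topic statement does not.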
There's one caveat worth mentioning.
Intent models introduce a certain amount of risk that the model will assess intent differently than your company would like. One real benefit of phrase-based assessment is that it gives the company absolute control and predictability. So additional testing to confirm that the intent model is performing as desired is critical. And in our experience at Bright, the best result is usually a mix of both approaches.
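One way that mix can work, sketched below under stated assumptions: check deterministic phrases first (full predictability), and only fall back to an intent model for everything else. The function names and the 0.6 threshold are hypothetical; the intent model is a placeholder, not a real call.

```python
# Illustrative hybrid assessment: predictable phrase matching first,
# intent-model fallback second. All names and thresholds are assumptions
# for illustration, not any product's API.
ACCEPTED_PHRASES = ("have a great day", "have a nice day", "take care now")

def intent_model_score(utterance: str) -> float:
    """Placeholder for an NLP model's confidence that the utterance
    matches the 'fond farewell' intent. A real deployment would call
    the model here."""
    return 0.0

def assess_closing(utterance: str) -> bool:
    text = utterance.lower()
    if any(p in text for p in ACCEPTED_PHRASES):   # predictable path
        return True
    return intent_model_score(text) >= 0.6         # flexible fallback

print(assess_closing("Have a great day!"))  # → True
```

The design choice: exact phrases guarantee the outcomes your company has signed off on, while the intent fallback catches the long tail of phrasings no one anticipated.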
At Bright, we enable our customers to build custom simulations using all of the above, giving them the broadest range of capabilities possible to train and upskill their people through practice. Interested in learning more? Reach out today!