Future-Proof: 3 Insights to Help Instructional Designers Become Performance Coaches
There are a number of articles out there right now analyzing what AI means for the future of various industries. While the long-term impacts of AI on the labor market are still unclear, most of these articles can be boiled down to at least one common takeaway:
Humans who know how to wield AI will replace those who don't.
One of the core pillars of Bright's mission is to help future-proof the L&D practitioner role by providing meaningful upskilling resources + experiences. (Check out other posts on Bright Academy for more on this!)
So it's important to us to answer what this trend means for corporate learning. Will AI take over teaching, instructional design, and coaching? Or will it fizzle out as another fad?
As a company helping some of the world's largest and best-recognized brands wield AI at scale, we can confidently say: no, AI won't take over these activities altogether. But it's not a fad, either. It will replace some of the more traditional L&D skillsets, and there are hard skills that trainers, instructional designers, and learning leaders should embrace to future-proof their roles. To understand how, let's walk through a few highlights from the experience of forward-leaning brands. To start:
If AI-powered experiences could do the following, how would it change the way you spend your day/week?
Convert an intake session from notes into a lesson outline
Answer common questions from learners after a course gets delivered
Design + deliver 10 fully-branching conversation simulations in under an hour
Walk learners through a new software tool, measure their data accuracy, and answer their questions about what to do next
Review hours of simulation results + recommend the next 3 practice/coaching experiences (note, we didn't say 'courses') a learner should take
Predict what a new hire's handle time, sales closure rate, and other key metrics will be
Here's what these opportunities have meant for companies that are already on their AI journey:
Less Live Training Time - often up to 50% less; but NOT 'no' live training time
Digitized Shadowing - which means less nesting and fewer veterans pulled off the floor, in favor of more consistent, interactive shadowing options
No More "Blind Leading the Blind" Role Play - learners weren't getting a fully authentic experience out of practicing with a fellow new hire anyway!
Mostly Digital Software Training - fully-guided tours, practice, and system training are difficult to deliver otherwise
Fully Branching Conversation Simulations that provide real-time rating + coaching - up to 50% of the program
Some of this obviously sounds exciting. But honestly, some of this also sounds scary. So let's balance the story. Here's what we're ALSO seeing:
Instructional designers who spend 20 fewer hours a week in Storyline or Captivate
Facilitators who have time every day to give personalized, one-on-one coaching to trainees, instead of generic feedback to groups of 30
Trainers who know how to read real-time simulation-based learning data and understand their cohort's shared development areas in order to personalize group sessions
L&D practitioners who know more about AI than their Ops + QA counterparts
Learning strategists who spend MUCH more time during strategic intake understanding current performance levels + what distinguishes high performers from average or low performers
Learning Leaders who ask WAY more questions about things like transcription quality, custom language models, and other topics that have historically been the domain of operations
For those willing and interested, the net effect of this trend is an elevated role for the L&D function.
And the good news is that this is a journey. AI is nowhere near the level where these kinds of impacts are an overnight change. So there's plenty of time to steward this in a way that helps L&D practitioners future-proof their roles. Here are 3 tips to support that journey:
Push Beyond 'Knowledge' to Identify Skills + Behavior: While instructional design in its best form is MUCH more than information delivery + quizzes, unfortunately, a huge portion of corporate L&D assets fall into this category. The key discussion with leaders is often 'what do we want people to know.' If you start asking questions such as 'what do we want people to do' and 'show me examples of people who are doing it well/poorly,' you'll be well on your way to getting the type of information you'll need to design AI-powered simulations + upskilling experiences.
The reason - in our experience - is that the most powerful use case for AI-powered learning is scenario-based practice. In order to give people practice, you need 1) scenarios and 2) a clear definition of 'what good looks like.' Conversations that build toward practice look different than the ones that lead toward courses.
Spend 30 Minutes a Week Writing Prompts for 1 Month: While many of the emerging AI tools out there (like Bright) won't make you write full-on AI prompts, if you don't know how AI works behind the scenes, you won't know how best to wield it. Here's a simple exercise you can do to build your understanding and intuition:
Identify the #1 skills gap for your company or division, and then write a short exercise asking a learner to reply and exhibit that skill in a specific scenario. For example, ask them to overcome an objection, de-escalate an angry customer, or explain a complex policy.
Then try to write a prompt in ChatGPT or Bard that can evaluate various learner responses. Because we don't want to rob you of learning, here's an 'ok-but-not-great' starting point:
"Assume you're assisting a major global corporation in the training and development of world class employees. For this exercise you'll be rating learner responses to scenarios that can impact revenue and customer experiences. To rate the learner submission, use the following rating scale...<insert stuff about how your company thinks about skills for the scenario> The scenario is...<insert a sample scenario for your topic> The learner's submission is...<make up all sorts of amazing/average/poor learner responses> Once your analysis is compete, use the rating scale to provide your overall rating for the learner's submission. Provide a rationale for your rating with examples from the learner's submission + any tips they could use to improve their attempt next time."
Again... this is far from a perfect/universal approach, but here's what you'll find if you spend time perfecting this prompt:
The model will get it right in ways that surprise you
The model will get it blatantly wrong, also in ways that will surprise you
You'll have to interact with the model to get it to correct itself when it's wrong, which means you want the model to have memory (this is why the sketch after this list keeps a running list of messages you can extend with follow-ups)
You'll probably have to change your rating scale a couple times
What works well for some types of learner attempts will work poorly for others
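Once you've iterated by hand in ChatGPT, you can speed up the loop by running a batch of sample learner submissions through the model with a few lines of code. Here's a minimal sketch in Python using OpenAI's chat completions API; the model name, the rubric, and the sample submissions below are all placeholder assumptions you'd swap for your own, and the same pattern works with other providers.

```python
# Minimal sketch: batch-test a rating prompt against sample submissions.
# Assumptions: the `openai` package is installed, OPENAI_API_KEY is set
# in your environment, and the model name below is a placeholder.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Placeholder rubric + scenario; replace with your company's own.
RATING_PROMPT = """You are rating learner responses to a customer scenario.
Rating scale: 1 = misses the skill entirely ... 5 = model response.
Scenario: A customer calls to cancel because a competitor is cheaper.
Give an overall rating, a rationale with examples from the submission,
and one tip the learner can use on their next attempt."""

# Made-up learner submissions spanning strong and weak attempts.
sample_submissions = [
    "I hear you on price. Before you decide, can I walk through what's "
    "included in your plan and see if there's a better fit?",
    "OK, I'll cancel that for you right away.",
]

for submission in sample_submissions:
    messages = [
        {"role": "system", "content": RATING_PROMPT},
        {"role": "user", "content": f"Learner submission: {submission}"},
    ]
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder; use whatever model you have
        messages=messages,
        temperature=0,  # more consistent ratings across runs
    )
    reply = response.choices[0].message.content
    print(reply)
    print("-" * 60)
    # To give the model "memory," append its reply plus your correction
    # to `messages` and call the API again, e.g.:
    # messages.append({"role": "assistant", "content": reply})
    # messages.append({"role": "user", "content": "Re-check your rating "
    #                  "against level 3 of the scale."})
```

Running the same rubric over a spread of amazing, average, and poor submissions is the fastest way to see the surprises described above, and to learn where your rating scale needs another revision.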
Make Space for Practice + Coaching: Take a step back and look at how you spend your day/week. What % of your time goes to course development, training facilitation, reporting, or cohort administration? For many L&D practitioners, the number is in the 50-80% range. If you got 10-20 hours back per week from these types of activities, what would you re-invest them in? What activities would make an even bigger impact on your company's bottom line + the lives and careers of your trainees? While there are many answers to that question, we'd like to suggest that hyper-personalized practice + coaching should be at least one of them. The more time you spend on these activities, the more your role will transition to what the industry is increasingly calling 'performance coaching.' If you'd like to learn more, be sure to reach out!