*** SPOILER *** Some early data from our 2025 LEADx Leadership Development Benchmark Report that I’m too eager to hold back: MOST leadership development professionals DO NOT MEASURE LEVELS 3 & 4 of the Kirkpatrick model (behavior change and impact).

- 41% measure level 3 (behavior change)
- 24% measure level 4 (impact)

Meanwhile, 92% measure learner reaction.

I mean, I know learner reaction is easier to measure. But if I have to choose ONE level to devote my time, energy, and budget to… and ONE level to share with senior leaders… I’m at LEAST choosing behavior change!

I can’t help but think: if you don’t measure it, good luck delivering on it. 🤷‍♂️

This is why I always advocate to FLIP the Kirkpatrick Model. Before you even begin training, think about the impact you want to have and the behaviors you’ll need to change to get there. FIRST, set up a plan to MEASURE baseline, progress, and change. THEN, start training. Begin with the end in mind!

___

P.S. If you can’t find the time or budget to measure at least level 3, you probably want to rethink your program. There might be a simple, creative solution. Or, you might need to change vendors.

___

P.P.S. AN EXAMPLE OF A SIMPLE WAY TO MEASURE LEVELS 3 & 4

Here’s a simple, data-informed example. You want to boost team engagement because it’s linked to your org’s goals to:

- improve retention
- improve productivity

You follow a five-step process (a rough sketch follows after this post):

1. Measure team engagement and manager effectiveness (i.e., a CAT Scan 180 assessment).
2. Locate top areas for improvement (e.g., “effective one-on-one meetings” and “psychological safety”).
3. Train leaders on the top three behaviors holding back team engagement.
4. Pull learning through with exercises, job aids, and monthly power hours to discuss with peers and an expert coach.
5. Re-measure team engagement and manager effectiveness.

You should see measurable improvement, and your new focus areas for next year. We do the above with clients every year...

___

P.P.P.S. I find it funny that I took a lot of heat for suggesting we flip the Kirkpatrick model, only to find that most people don’t even measure levels 3 & 4… 😂
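To make the pre/post idea concrete, here is a minimal sketch of steps 1 and 5 in Python: capture a baseline, re-measure after the program, and report the change per behavior (level 3) and on the linked business metric (level 4). All scores, behavior names, and retention figures are hypothetical; this is not LEADx’s actual assessment or scoring method.

```python
# Minimal pre/post comparison for a flipped-Kirkpatrick measurement plan.
# All data, field names, and thresholds are hypothetical examples.

baseline = {
    "effective one-on-one meetings": 3.1,  # average score on a 1-5 scale
    "psychological safety": 3.4,
    "recognition": 3.9,
}

post_program = {
    "effective one-on-one meetings": 3.8,
    "psychological safety": 3.9,
    "recognition": 4.0,
}

def report_change(before: dict, after: dict, min_meaningful_delta: float = 0.3) -> None:
    """Print the pre/post delta for each measured behavior (level 3)."""
    for behavior, pre_score in before.items():
        post_score = after[behavior]
        delta = post_score - pre_score
        flag = "improved" if delta >= min_meaningful_delta else "flat"
        print(f"{behavior}: {pre_score:.1f} -> {post_score:.1f} ({delta:+.1f}, {flag})")

report_change(baseline, post_program)

# Level 4 (impact): compare the business metric the program was tied to.
retention_before, retention_after = 0.82, 0.88  # hypothetical annual retention rates
print(f"retention: {retention_before:.0%} -> {retention_after:.0%}")
```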
How to Use Data to Measure Program Effectiveness
Explore top LinkedIn content from expert professionals.
Summary
Measuring program effectiveness with data involves assessing tangible outcomes, analyzing behavioral changes, and confirming that programs meet their intended goals. By identifying key metrics and tracking progress effectively, organizations can connect their initiatives to meaningful business results.
- Set clear objectives: Define your program's goals and identify specific metrics that will indicate success, such as engagement rates, behavior changes, or business outcomes.
- Track data consistently: Use accessible tools and create processes to measure progress over time, such as pre- and post-assessments or observational evaluations.
- Analyze for insights: Focus on finding connections between data points to understand why certain outcomes occur and use those insights to refine future programs.
-
Our tech stack is never going to be perfect when it comes to measuring learning outcomes. However, technology can give us back precious time to allocate toward measurement and evaluation … if we use it right!

ATD and Towards Maturity reports consistently reveal technology as a great barrier to learning measurement. Either our tech doesn’t report the right metrics, our tech stack isn’t integrated, or we don’t have the right tech at all. It’s time we stop letting tech hold us back!

The first step is to leverage tech in its highest and best use: managing admin work (registrations, reminders, time tracking, scheduling, repetitive tasks, data entry, transcription, meeting notes, etc.). Invest your freed-up time in exploring how to measure your program’s success!

Our latest industry leader chat with Arielle K., founder of Dado, suggests we focus on what makes measurement possible instead of what’s difficult. There’s always a way to measure outcomes. We simply need to adopt product thinking and an iterative process. Instead of asking, “What metrics does my tech stack provide?” lean into product thinking and ask: “What are my indicators of program success? What data do we need to demonstrate whether we were successful?” Data artifacts are everywhere! They don’t necessarily live in your LMS/LXP.

Arielle says there are two primary indicators of success: (1) engagement and (2) change.

(1) To create engagement indicators …
→ Answer: Where is engagement happening? This will likely be on Zoom, Slack, Teams, email, the intranet, the LMS, etc.
→ Answer: What does successful engagement look like? What are people doing? What are your targets? (For example: 80% are opening emails, 75% are showing up to Zoom workshops, 100% are sending Slack messages monthly.)

(2) To create change indicators …
→ Answer: How do we know change has occurred? What indicators tell us change has happened?
→ Select: 3 data points that tell you indirectly whether change has occurred. (For example: an initiative to increase productivity by limiting meetings to 2 days per week. Indicators of change in productivity might include: 100% of deadlines being met, a decrease in product delays, a decrease in errors, and an increase in employees taking personal time off.)

The power is in triangulating 3 indirect indicators of change (see the sketch after this post). If all three indicators improve, then you have a high level of confidence your initiative was successful.

Measurement is not easy. That doesn’t mean it’s not possible. Use an iterative approach:
→ Find the simplest way to evaluate change.
→ Select easy-to-access indicators.
→ Track those indicators for a short while and learn from the process.
→ Then decide: is the initiative worth continued investment?

Join next week’s industry leader talk with Chris Taylor, where we discuss using self-report to reliably measure change and outcomes of learning! https://lnkd.in/dphKqpGX

#learninganddevelopment #measurementmadeeasy
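As a rough illustration of Arielle’s two indicator types, the sketch below checks observed engagement against the targets you set and then triangulates three indirect change indicators. Every metric name, target, and value is a made-up example, not anything from Dado.

```python
# Illustrative engagement and change indicators with made-up numbers.

# (1) Engagement: compare observed engagement to the targets you set.
engagement_targets = {"email_open_rate": 0.80, "zoom_attendance": 0.75, "monthly_slack_posts": 1.00}
engagement_observed = {"email_open_rate": 0.84, "zoom_attendance": 0.71, "monthly_slack_posts": 1.00}

for metric, target in engagement_targets.items():
    status = "hit" if engagement_observed[metric] >= target else "miss"
    print(f"{metric}: {engagement_observed[metric]:.0%} vs target {target:.0%} -> {status}")

# (2) Change: triangulate three indirect indicators.
# Each entry is (baseline, current, True if higher is better).
change_indicators = {
    "deadlines_met_rate": (0.78, 0.95, True),
    "product_delays_per_quarter": (9, 5, False),
    "error_rate": (0.06, 0.04, False),
}

improved = [
    name for name, (before, after, higher_is_better) in change_indicators.items()
    if (after > before) == higher_is_better
]

if len(improved) == len(change_indicators):
    print("All three indicators improved: high confidence the initiative worked.")
else:
    print(f"Only {len(improved)}/{len(change_indicators)} indicators improved: keep iterating.")
```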
-
I was reviewing quarterly reports with a client last month when they asked me a question that stopped me in my tracks: "Scott, we have all this learning data, but I still don't know which programs are actually improving performance."

After 12 years as CEO of Continu, I've seen firsthand how organizations struggle with this exact problem. You're collecting mountains of learning data, but traditional analytics only tell you what happened - not why it matters.

Here's what we've learned working with thousands of organizations: the real value isn't in completion rates or assessment scores. It's in the connections between those data points that remain invisible without the power of tools like AI.

One of our financial services clients was tracking 14 different metrics across their onboarding program. Despite all that data, they couldn't explain why certain regions consistently outperformed others. When we implemented our AI analytics engine, the answer emerged within days: specific learning sequences created knowledge gaps that weren't visible in their traditional reports.

This isn't just about better reporting - it's about actionable intelligence:
- AI identifies which learning experiences actually drive on-the-job performance
- It spots engagement patterns before completion rates drop
- It recognizes content effectiveness across different learning styles

Most importantly, it connects learning directly to business outcomes - the holy grail for any L&D leader trying to demonstrate ROI.

What's your biggest challenge with learning data? Are you getting the insights you need or just more reports to review?

#LearningAnalytics #AIinELearning #WorkforceDevelopment #DataDrivenLearning
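As a generic illustration of connecting learning data to a business outcome (not Continu’s analytics engine), the short sketch below correlates a hypothetical learning metric with a hypothetical performance metric for the same learners.

```python
# Illustrative only: invented per-learner data linking a learning metric
# (hours of a given learning experience) to a performance outcome.

learning_hours = [1.0, 2.5, 4.0, 0.5, 3.0, 5.0, 2.0, 4.5]
performance = [0.62, 0.70, 0.83, 0.55, 0.76, 0.90, 0.68, 0.85]  # e.g., 90-day quota attainment

def pearson(xs: list[float], ys: list[float]) -> float:
    """Pearson correlation between two equal-length series."""
    n = len(xs)
    mean_x, mean_y = sum(xs) / n, sum(ys) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    var_x = sum((x - mean_x) ** 2 for x in xs)
    var_y = sum((y - mean_y) ** 2 for y in ys)
    return cov / (var_x * var_y) ** 0.5

r = pearson(learning_hours, performance)
print(f"correlation between learning hours and performance: {r:.2f}")
# A strong positive correlation is a reason to dig deeper, not proof of
# causation; weak or negative correlations flag programs to re-examine.
```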