these links provide resources to help you develop a comprehensive evaluation system to support the learning and development function. you should also look at the measurement and reporting resource links for a more specific discussion of which metrics to use and how they are calculated and reported. if you have any comments regarding these links, please hit the comment button at the end of this post and share them with everyone. if you have suggestions for links i should include here, please email dave.
an overview of kirkpatrick’s model
if you’re not familiar with the kirkpatrick levels of evaluation, this is a nice, concise overview. the kirkpatrick levels have been the dominant evaluation paradigm in corporate training for 35+ years. however, with the rise of the metrics movement, they are being called into question by some.
elearning: gaining business value through six sigma
an intriguing proposal to use the six sigma process to measure, and manage, the conduct of elearning. the author does, correctly in my mind, wonder whether elearning professionals are ready for this.
beyond roi – 7 levels of training evaluation – hr.com
in this article from hr.com, the author proposes adding two more levels to the kirkpatrick + phillips five – sustainability and sharing the benefit. an interesting proposal at a time when many are saying that kirkpatrick level 4 is not measurable.
metrics for elearning
this site provides a nice framework for developing and evaluating metrics, along with the why and how of getting to them. be sure to look at the decision model. it’s the last link provided just before the article gets going.
evaluating elearning – elearninghub.com
a wonderful primer on the evaluation of training. the author discusses the models and issues in evaluation, benchmarking, the four target populations, planning and execution, and findings and conclusions – all in about six pages.
measuring training roi and impact
as best as I can tell, this is a student paper. however, it is a very clear and competent summary of the criticisms being leveled against the kirkpatrick levels for the evaluation of training. if you are new to the criticism of kirkpatrick, this is a gentle start. the authors also summarize a proposed alternative to kirkpatrick.
start measuring your elearning programs now – linezine
this article by josh bersin appeared in the last issue of linezine. josh puts forward a five-level taxonomy of evaluation focusing on client satisfaction and business impact.
evaluation: the connection between learning and performance
a nice article by roger chevalier, cpt, that discusses both the positive and negative aspects of kirkpatrick. his conclusion is a bit too sweet for my taste – measure with instruments that already exist, and once you get started you will find all kinds of reasons to measure training. bag, please!
the myth of roi
in this article in the january 2004 chief learning officer, bob dust puts forward a very strong argument against roi as a measurement of training. along the way he also calls into question some of the other leading concepts of what training can do and how to measure it. he finishes by offering four measurements which he believes are solid measures of training.
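for readers new to this debate, the roi figure being argued over is usually the phillips-style calculation – my rough summary here, not a formula taken from dust’s article:

roi (%) = (net program benefits / program costs) x 100, where net program benefits = total program benefits minus program costs.

so a course costing $50,000 that can be credited with $75,000 in benefits shows an roi of 50%. the fight in articles like this one is mostly over whether that benefits number can be isolated and monetized credibly in the first place.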
what’s all the fuss about measurement
this article from performance improvement, october 2002, by alan ramias sets out four fundamentals for evaluation. pretty common sense when you read them, but until recently not many others had joined his chorus.
evaluating learning – nickols
a very thorough introduction to workplace training evaluation.
evaluation – the link between learning and performance
in this article from apqc’s website, roger chevalier argues that evaluation, when done properly, can have a significant impact on improving the quality of training and the performance of the trainee. unfortunately, he concludes, it is seldom done properly.
thinking differently about training evaluation
donna abernathy, in this article on careerjournal.com, argues that the two dominant forms of evaluation in corporate development – the kirkpatrick levels and the balanced scorecard – are out of sync with today’s evaluation needs. her suggestions for change are not very specific – outlining some possible best-in-practice cases and stressing the “intangibles” inherent in the new knowledge economy.