Training evaluation, an old challenge with new approaches


Training evaluation is an important process. The best definition I’ve seen is this: “comparing results with intentions”. Yet many consider it far from an easy task.

Let’s say, hypothetically, that your manager asks for budget cuts. They suggest looking at training costs, since that is a fairly large sum that could perhaps be trimmed. Your well-implemented needs analysis revealed that you need the training plan as it is, and you are confident about it. But at the next review, you will have to prove the training worked. You must demonstrate that it delivers a strong return on investment.

Measuring training ROI

It’s easier to measure the ROI of technical skills training. For example, you can look at productivity numbers before and after the training is implemented. But you need a good system to value training that has intangible results.
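The before-and-after comparison above reduces to a simple formula: ROI is the net benefit of the training divided by its cost, expressed as a percentage. A minimal sketch, using entirely hypothetical figures:

```python
def training_roi(total_benefits: float, total_costs: float) -> float:
    """Training ROI as a percentage: (benefits - costs) / costs * 100."""
    if total_costs <= 0:
        raise ValueError("Training costs must be positive")
    return (total_benefits - total_costs) / total_costs * 100

# Hypothetical example: the program cost 20,000 and the measured
# productivity gain was valued at 28,000.
roi = training_roi(28_000, 20_000)
print(f"ROI: {roi:.0f}%")  # ROI: 40%
```

The hard part is not the arithmetic but putting a credible monetary value on the benefits, which is exactly what the levels below help with.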

The first thing that usually comes to mind regarding training evaluation is the Kirkpatrick model. The initial version was published over 50 years ago and has been updated repeatedly since; implementing it is a good start. The model consists of four levels, each representing a more precise measure of the effectiveness of a training program.

The Kirkpatrick four-level training evaluation model

The four levels are reaction, learning, behaviour and results. Let’s look at each one in detail, explore how to apply it and consider its limitations.

1. Reaction – Did the participant like the training?

This type of evaluation is quick and fairly easy to obtain. Many companies use feedback forms, verbal reactions, satisfaction surveys or observation of participants’ body language during the training.
Next step: analyse the feedback and consider the changes that can be made. Share the information with the trainer, along with your own conclusions from the evaluation.

2. Learning – Did the trainee understand the information and gain some new knowledge?

Make sure you have already set the learning objectives for the training, and that all parties involved know and understand them. Then, before the session, test the participants to determine their skills, attitudes and knowledge, as well as their confidence and commitment, depending on the case. Measure again after the training. You can also use methods such as interviews or observation.
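The pre/post measurement described above boils down to comparing each participant’s scores before and after the session. A minimal sketch, with purely illustrative names and numbers:

```python
def average_learning_gain(pre_scores: list[float], post_scores: list[float]) -> float:
    """Average per-participant point gain between pre- and post-training tests."""
    if len(pre_scores) != len(post_scores) or not pre_scores:
        raise ValueError("Need matching, non-empty score lists")
    gains = [post - pre for pre, post in zip(pre_scores, post_scores)]
    return sum(gains) / len(gains)

# Hypothetical test scores (out of 100) for four participants.
pre = [55, 60, 48, 70]
post = [72, 78, 65, 80]
print(average_learning_gain(pre, post))  # 15.5
```

Tracking the gain per participant, not just the group average, also flags anyone who did not benefit and may need follow-up.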

3. Behaviour – Did the training make the participant better at their job?

You can assess change through observation and interviews over time. You can also integrate the use of new skills into participants’ tasks, so people have the chance to demonstrate what they know. This level requires the cooperation and skills of direct managers, who must assess and coach their team members.

4. Results – Did the company increase profits, sales, customer satisfaction, etc.? What is the ROI?

The measures for ROI are usually already in place through management systems and reporting. The challenge here is to relate them to the trainee. Some outcomes to consider are: fewer client or staff complaints, increased production, higher employee or client retention, increased sales, higher morale, reduced absenteeism and non-compliance, achievement of standards and accreditations, quality ratings, an increase in internal management promotions, etc.
Also, go back to the needs analysis that prompted the training. Which business performance factors was the training meant to improve? Use them to measure the organizational return achieved.

A new approach on the model

While I consider these four levels useful, I think they have some limitations. Numerous researchers have also found the model incomplete, arguing that it encourages many practitioners to focus too narrowly on evaluating training alone.

Expanding the application of the levels of evaluation beyond training is important. It enables a more integrated approach to evaluation, one that incorporates other performance improvement interventions.

Expanding evaluation beyond training hours

For example, initiatives such as strategic planning, career planning, integrated work teams, and mentoring can be included in evaluation designs. This helps assess how effectively they have been implemented (an aspect of level-3 evaluation) and whether they produce positive organizational results (level-4 evaluation). Learning does not end when participants step out of the training room, so the other learning opportunities the organization offers should also be taken into account in evaluation. Doing so helps improve them and maximize their results.

Roger Kaufman argued that a fifth level should be added to the model, concerned with societal impact. He believed that adding this level would ensure organizations that the changes were meaningful.

Our solution to the Kirkpatrick model limitations

What we usually do in instructional design is use this model backward: we first lay down the results we want.

You can do that by expressing the objectives desired for:

    1. individuals or small groups (microlevel),
    2. the organization (macrolevel), and
    3. external clients and society (megalevel).

Then select the learning solutions best suited to accomplish those results. It can be training, or a game-based activity. Sometimes the best solution might not be training at all: it can be a reward, a change in internal culture, integrated teamwork, a career planning session, or a coaching and mentoring program. Today, non-formal methods of organizational learning are often more popular and effective.


To sum up, we evaluate a training program to see whether it has achieved or failed its objectives. We need this to make sure that learners get exactly what they need. Secondly, we do it so we can justify our decisions regarding the training plan. In either case, the best way to go about it is to start with the classical model, but to think first about the results. Sometimes this will reveal that other methods could be useful, and would be the best choice for all beneficiaries involved.
