TNA and Evaluation: Two Sides of the Same Coin?


Martin Schmalenbach looks at how to link training needs analysis with evaluation. After all, he says, they're two sides of the same business coin.

Training Needs Analysis (TNA) is the process of identifying training needs for an individual or group of people. Its outcome is a clear set of training objectives to be met by whatever method (workshop, on-the-job training, e-learning etc.) is selected in the future. It takes place before any training is undertaken.

There are several traditional methods for undertaking a TNA:

  • Competency gap analysis – usually as part of an appraisal process, people’s current competencies are assessed against an existing competency framework, and any competencies where people fall below the required standard indicates a training need.
  • Performance gap analysis – usually carried out ‘as and when’: an individual is seen to be underperforming and it’s decided this is due to a lack of skills, knowledge, experience etc. This approach applies where the performance areas do not map obviously onto any predetermined competencies in the framework, or where there is no competency framework in place.

"The excuse that evaluation of impact might cost more than the training cost in the first place is frankly not proven and a ‘cop out’."

Martin Schmalenbach

There are other approaches but these two account for the majority of TNA activity in most organisations. Not every organisation has a formal competency framework, and not every competency framework covers all eventualities, so it is typical to see a combination of these methods in use.

There is no inherent guarantee that training undertaken as a result of only these approaches to TNA will have the required impact on the organisation, even though the training experience itself might be the best thing ‘since sliced bread’!

To ensure there is a desired impact on the organisation requires a firm link to be made between performance standards and what the organisation is actually trying to achieve. Competency frameworks can be seen as an attempt to make these links. Generally these links are either not present in all areas where they should be, or are not specific enough to ensure a sufficiently focused training intervention takes place to deliver the required outcome.

Finally, training doesn’t take place in isolation – it’s almost always part of a bigger picture and piece of work. If the objectives of this bigger piece can be achieved without the support of training, don’t train!


Evaluation of training is the process by which a training intervention is assessed for impact and value given the resources used and any disruption arising (e.g. having a person away from work to attend training).

Traditionally the evaluation is conducted once the training has taken place. Donald Kirkpatrick suggested in his famous article in 'Training Magazine' in 1959 that evaluation can comprise four levels:

Level 1: reaction of the learners to the experience

Level 2: the extent of the learning by each learner

Level 3: the changes to learners’ behaviours in the workplace

Level 4: the impact of the training on the organisation’s progress to achieving its objectives

It is rare that organisations make any credible attempt to determine if the training was actually worthwhile given its use of resources and the disruption caused (Level 4 evaluation). For me this suggests management is being at best ‘cavalier’ with its resources, and at worst negligent in discharging its responsibilities. The excuse that evaluation of impact might cost more to do than the training cost in the first place is frankly not proven and a ‘cop out’.

Linking TNA and evaluation

If you conduct a TNA to determine if training is needed, and this TNA is firmly and credibly linked to driving organisational performance, then any training it recommends is likely to have a desirable impact. Repeating the TNA after the training and demonstrating that the original training needs have been met (because the repeat TNA suggests no further training needs) seems like one reasonable approach to evaluating!

If you only evaluate the impact of the training after it has taken place, and discover the training has not had enough of the desired impact, it is too late to turn back time and fix things. All you can do is move on, either coping with the current situation or taking remedial action perhaps through further training. Either option consumes additional resources and can even mean that desired outcomes can never be reached. It doesn’t do the reputation of the training function much good either!

Surely it is better to evaluate the likely impact of the training before committing valuable resources to it? Some will argue that a good TNA process will do just this. I agree, provided the TNA is firmly, explicitly and robustly linked to achieving the objectives of the organisation, and provided the training it recommends directly addresses both the root causes preventing those objectives being met and the core drivers for achieving them.

"If you only evaluate the impact of the training after it has taken place, and discover the training has not had enough of the desired impact, it is too late to turn back time and fix things."

How can we do this?

  1. Clarify the problem and the desired outcomes – describe both in terms of specific and well-defined measures considered important to the organisation. Describe both also in terms of observable behaviours. Note the current values and behaviours as your ‘baseline’. Avoid any reference to solutions and training at this stage.
  2. Determine the root causes for the current situation, and identify also any core drivers that push towards the desired performance outcome.
  3. Do more of what works, and less (or none) of what doesn’t. By this I mean select a combination of root causes and core drivers that, when tackled, give you just a bit over the desired outcome. By only just achieving the desired outcome you get what you require, but probably for the smallest effort and resource usage. In tackling these root causes and core drivers, do more of what already works, do new things to close the gap, and use the time and effort saved by stopping what doesn’t work to do those new things. This means your workload should be largely unchanged! It also makes it easier to spot the areas where training is essential if the stated outcomes are to be achieved. This is, in effect, your TNA.
  4. Review progress, using your baseline and clear problem/goal definitions, to determine if you’ve achieved the required outcomes. Find out why any shortfalls have occurred and act on this information in the same way you’ve tackled the original problem. I think this might be called continuous improvement...!

Perhaps the surprise here is the realisation that in order to be sure you can demonstrate the value of any training, you have to identify explicit and clear descriptions of the current situation and desired outcomes, and develop a route from the current situation to the desired situation by way of root cause analysis for example. In doing this you gain both the means to evaluate before training – something all managers would ideally like to be able to do – and to develop the required TNA!

Are TNA and evaluation two sides of the same coin? I’d say they’re two sides of the same ‘business improvement’ coin!

