
Parkin Space: Metrics with Meaning


Struggling with your ROI figures? Godfrey Parkin suggests you put your pen down and try a different form of evaluation.


There is a common misconception in business that financial people, because they work with numbers all the time, actually understand them. They like to reduce everything to money: what did it cost, or what did it make? They insist on dealing in certainties and absolutes, where every column balances to the penny. But the real world does not work like that. The real world is characterised by imperfections, probabilities, and approximations. It runs on inference, deduction, and implication, not on absolute, irrefutable hard-wiring. Yet we are constantly asked to measure and report on this fuzzy, multi-dimensional world as if it were a cartoon or comic book, reducing all of its complexity and ambiguity to hard financial “data.”

We struggle for hours (often for days or weeks) to come up with the recipe for “learning ROI.” The formula itself is simple, but the machinations by which we adjust and tweak the data that go into it are anything but. Putting a monetary value on training’s impact on the business is fraught with estimation, negotiation, and assumption, and putting a monetary value on the cost of learning is often even less precise. Yet when was the last time you saw an ROI figure presented as anything other than an unqualified absolute? If you tried for statistical accuracy and said something like, “this project will produce 90% of the desired ROI, 95% of the time, with a 4% error margin,” you’d be thrown out of the boardroom. You simply can’t use real statistics on an accountant, because the average bean-counter can’t tell a Kolmogorov-Smirnov from an Absolut-on-the-rocks. The implicit instruction is: don’t tell us the truth, just give us numbers that conform to our unrealistic way of measuring the business.
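
To make the point concrete, here is a minimal sketch, in Python, of what an honestly hedged ROI statement might look like. It is an illustration, not a method anyone prescribes: every figure is invented, the benefit is treated as an estimate with a spread rather than a certainty, and the output is a range instead of a single absolute number.

```python
import random

# A sketch of an honestly hedged ROI figure. Every number here is a
# made-up illustration; the point is the range, not the values.
random.seed(42)

cost = 120_000            # estimated programme cost
trials = 10_000
rois = []
for _ in range(trials):
    # Treat the business benefit as an estimate with a spread,
    # not a certainty: here, roughly 180k give or take 30k.
    benefit = random.gauss(180_000, 30_000)
    rois.append((benefit - cost) / cost * 100)

rois.sort()
low = rois[int(trials * 0.025)]
high = rois[int(trials * 0.975)]
print(f"Estimated ROI: between {low:.0f}% and {high:.0f}% in 95% of simulations")
```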

We spend way too much time trying to placate financial people by contorting our world to fit their frame of reference, and we allow them to judge and often condemn our endeavours according to criteria that are irrelevant or inappropriate. Perhaps there is some comfort in knowing that the problem is not unique to training. In my years in marketing, I saw plenty of good brands ruined by ill-conceived financial policies, usually to the long-term detriment of the company as a whole.

But you don’t need to be a statistician or an accountant to make a strong business case based on logic and deduction, and there is no need to be pressured into using the preferred descriptive framework of a book-keeper. The pursuit of the measurement of ROI in training is often a red herring that distracts from the qualitative impacts that our work has on the performance of the business. ROI is typically not the best measure of that, and, after making all of the heroic assumptions and allocations needed to arrive at it, that magic ROI figure may well be a false indicator of impact. Unfortunately, the indicators that are useful and reasonably accurate are often hard to convert to financial data, so they do not get taken seriously. And, compounding the problem, training managers themselves often ignore these indicators because they are not captured at the course level. Our focus too often is on the quality of courses rather than on the quality of our contribution to the business in total.

We need to widen the focus. While learner satisfaction, test results, and average cost of bums-on-seats are useful metrics, it is only after our learners have returned to work that we can begin to see how effective the learning experience really was. What are some of the indicators that let us know how we are doing? Many of them are produced already, often by the financial people themselves, and tracking them over time gives good insights into where we are doing well and where we might need to pay more attention.

Some of those metrics include:
* Training costs per employee.
* Enrolment rates and attendance rates.
* Delivery mode mix, planned against actual.
* Percentage of target group that is “compliant”.
* Time from eligibility to compliance, or to proficiency.
* Percentage of workforce trained in particular skill areas.
* Learning time as percentage of job tenure.
* Availability, penetration, and usage rates of help systems.
* Skill gap analyses tracked over time.
* Productivity measures (for example, number of new clients per 100 pitches).
* Attrition rates.

There are many, many more. Metrics such as these let us put on the manager’s dashboard indicators covering operational performance, compliance, efficiency, effectiveness, and workforce proficiency, as well as harder-to-capture dimensions such as motivation and readiness for change. Training departments need to think “outside the course” and come up with ways to derive the right indicators inexpensively and unobtrusively.
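
As a purely hypothetical illustration of deriving such indicators from records most organisations already keep, the sketch below computes two items from the list above: the percentage of a target group that is compliant, and the average time from eligibility to compliance. All names and figures are invented.

```python
# Hypothetical illustration: two of the indicators above, derived from
# simple training records. Fields and values are invented.
employees = [
    {"id": 1, "eligible_day": 0, "compliant_day": 14},
    {"id": 2, "eligible_day": 0, "compliant_day": 45},
    {"id": 3, "eligible_day": 10, "compliant_day": 38},
    {"id": 4, "eligible_day": 0, "compliant_day": None},  # not yet compliant
]

compliant = [e for e in employees if e["compliant_day"] is not None]
pct_compliant = 100 * len(compliant) / len(employees)
avg_days = sum(e["compliant_day"] - e["eligible_day"] for e in compliant) / len(compliant)

print(f"Compliant: {pct_compliant:.0f}% of target group")
print(f"Average time from eligibility to compliance: {avg_days:.1f} days")
```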

One of my favourite recommendations is that training departments learn something from their marketing colleagues and set up the ability to run surveys and focus groups, to investigate learner satisfaction, customer attitudes, job impact, and manager perceptions. This skill is often absent in training departments, which is a pity, because these methods can produce great insights while saving money and time. If you build this capacity into your training organisation, getting a read on Levels 3 and 4 (Kirkpatrick’s behaviour and results levels) can become as much a part of your evaluation regimen as gathering smile sheets. You don’t have to interrogate the universe if you can pick a small sample. And you can produce real data and real trends that go down very well in the boardroom.
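
Here is a hypothetical sketch of the small-sample point: a modest survey, honestly reported, using the standard normal-approximation confidence interval for a proportion. The numbers are invented for illustration.

```python
import math

# Hypothetical sketch: a small survey, honestly reported.
sample_size = 150        # learners surveyed, not the whole workforce
reporting_impact = 96    # say they are applying the training on the job

p = reporting_impact / sample_size
margin = 1.96 * math.sqrt(p * (1 - p) / sample_size)  # ~95% confidence

print(f"{p:.0%} report on-the-job impact, give or take {margin:.0%}")
```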

* Read more of Godfrey Parkin's columns at Parkin Space.