ROI In Training: The Good, The Bad, And... The Real

The value of training often needs to be declared in monetary terms before a programme begins, and it may need to be demonstrated again after the training ends. Various members of an organisation (the training manager, upper management, the HR department, and others) can have different ideas about what that ROI means. Given that there are many points of view, ROI in training can become a convoluted topic. What happens when we try to keep it simple?

ROI In Training Is Hard

Calculating ROI in general is sometimes straightforward, following a traditional procedure; it is sometimes complex. In a general context, without reference to the type of organisation: when a training programme is proposed, an estimate of the ROI might be expected of the manager. This can be problematic, and the extent of the problem depends on the training area. In some cases, even the topic becomes controversial. A particularly difficult case is soft-skills training, where managers seem to fall into one of three camps: “It can be done,” “It is impossible,” and “I don’t have a good idea.”
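The “traditional procedure” itself is simple arithmetic: net benefits divided by costs, expressed as a percentage. A minimal sketch, with entirely hypothetical figures, might look like this:

```python
# Traditional ROI calculation: net benefits over costs, as a percentage.
# All figures below are hypothetical, purely for illustration.

def roi_percent(monetised_benefits: float, programme_costs: float) -> float:
    """ROI (%) = (benefits - costs) / costs * 100."""
    return (monetised_benefits - programme_costs) / programme_costs * 100

programme_costs = 20_000     # trainer fees, venue, materials, trainee hours
monetised_benefits = 26_000  # estimated value of improvements attributed to training

print(f"ROI: {roi_percent(monetised_benefits, programme_costs):.0f}%")  # ROI: 30%
```

The arithmetic is never the hard part. The hard part is arriving at a defensible figure for the benefits line, and that is the theme of everything that follows.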

The questions are simple enough. We find many examples, with only minor variations, posted online: “I need a method or procedure to calculate the ROI for a proposed leadership and behaviour training programme.” The answers vary widely.

Some take the line that it is indeed possible, with the starting point that all skills “have a clear purpose and will impact a business in a specific way.”[1] At the other end of the spectrum is Charles Green, who says that “subjecting soft-skills training to pure skills-mastery financial analytics is intellectually dishonest, foolish, wrong-headed, useless at best and counter-productive at worst.”[2]

Without diving into the topic of soft-skills training, let’s ask an easier question: how hard is it to justify a training programme using an ROI figure?

ROI Estimates: A Necessity

The success of a proposal for a training programme, or training intervention, sometimes depends directly on the manager’s ability to calculate the ROI. Further, the manager’s own continuing success can depend on demonstrating that ROI.

This sometimes leads to an artificial straitjacketing of people and events into provable numbers. Quantification towards such a purpose can be counterproductive, and in some cases unworkable. As an example, consider a leadership training course. After the completion of the course, what could the ROI metric(s) be? Here is a list of possibilities:

  • The number of trainees who subjectively rated themselves as having leadership qualities
  • The (subjective) performance of the concerned department
  • The (quantified) increase in productivity of the department
  • The efficiency of the individual trainees

There seem to be too many possible choices of metric, and each of them is difficult to measure. If this seems bad enough, consider that two people might not agree on the choice of metric and/or on the method of measurement.

Further, larger decisions might be based on the ROI calculation. Should the training be outsourced? Should it be instructor-led or purely electronic?

In a case where ROI as a metric is abandoned, the choice of training programme might suffer, or a programme might be conducted without anyone knowing who gained what, which eventually can rebound onto the training manager. Within what seems to be too much complexity, let’s look at a few underlying facts.

Learning Evaluation to ROI: The Leap

The most widely used system of evaluating training comes from Don Kirkpatrick[3]. His system did not mention ROI, but ROI calculations in training are often built upon it. The “levels” in this system are Reaction, Learning, Behaviour, and Results, which in simple language equate to:

  • How much the participants enjoyed the training
  • How much they actually learnt
  • How much they apply what they learnt
  • How far the goal of the training was achieved

When we move from this to ROI, we are faced with the obvious problem: the numbers might mean something, or they might not mean anything.

Take the context of a sales training programme. Regarding the application of what was learnt during the programme, we might ask: have sales increased? If the answer is “sales went down,” can we conclude that the programme had a negative effect on the trainees? If the answer is “sales did not increase,” can we reliably say that the course was a failure? The obvious answer to both questions is “No.” Going with that, we can see that even a positive result cannot, by itself, justify calling the programme a success.

This is a situation where we have a method, but cannot say whether it can be used. What next?

Hard And Soft Data

With the overarching theme being quantifiability, it is critical to demarcate “hard data” from “soft data.” Hard data relates to what we usually think of as “data” or “numbers.” It is objective and measurable. It comprises numbers with which monetary values can be associated, and it is credible across departments.

An easy example of hard data is the implementation of fingerprint scanners for employee entry and exit. The cost of the system can be easily calculated. The time savings (the rectification of incorrect manual entries) can be calculated. Precedents can give an idea of loss due to incorrect manual entries, and so on.

Soft data is whatever hard data is not: subjective, difficult to measure, and less universally credible. As one example, a programme geared towards increased employee-employee interaction might be in line with organisational beliefs, but its benefits are virtually impossible to quantify.

With that distinction in place, we can ask: what are the assumptions when a training event is proposed? The universal assumption is that training can increase performance. In a given situation, this might or might not be valid. That assumption itself is akin to soft data. When a soft-skills training programme is proposed, is it based on precedents where the programme was successful? Probably not. Even if it were, the question of whether the programme is suited to the proposed learners is often not considered.

Closer in spirit to hard data, and perhaps more valid, is the idea that training can address a problem. This would entail identifying the problem area(s), and then making a quantitative and/or qualitative assessment of those.

There is often the chance to create one’s own hard data for later use: informal, or “soft,” at first. Trainees might be asked, some time after completing a programme, whether they did anything differently as a result. Questions of this nature, in fact, bridge the “Reaction” and “Behaviour” levels of Kirkpatrick’s scheme.

In creating one’s own metrics, some managers might have the luxury of two groups of employees: one that underwent a training programme and one that did not. The hard data comes about by simply comparing the two groups, as sketched below.
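As a minimal sketch of that comparison (the metric and the figures here are invented; the article does not prescribe either), one could compare group averages:

```python
# Hypothetical sketch: comparing a trained group against an untrained one.
# "Productivity" stands in for whatever the organisation actually tracks
# (calls handled, units produced, tickets closed, and so on).

from statistics import mean

trained   = [104, 98, 110, 95, 102]  # productivity scores after training
untrained = [92, 97, 88, 95, 90]     # same metric, comparable group, no training

difference = mean(trained) - mean(untrained)
uplift_pct = difference / mean(untrained) * 100

print(f"Average uplift attributable to training: {uplift_pct:.1f}%")
# Average uplift attributable to training: 10.2%
```

For the number to count as hard data, the two groups must be genuinely comparable in role, tenure, and baseline performance; otherwise the “hard” figure is soft data in disguise.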

Addressing Problems Versus Assuming Benefit

Now, the first natural line to think along would be: what are the problem area(s) I wish to address using training?

This leads to the idea of a continually updated skills matrix. Such a matrix identifies problems instead of proposing universal benefit, so it can tell us why training is required, if it is required at all. The justification for a training programme can thus be shifted from an ROI proposal to evidence of need.
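The article does not prescribe a format for the matrix, but a minimal sketch might map each employee’s assessed skill levels against the levels their role requires, with any shortfall constituting the evidence of need. The skills, names, and 1-5 levels below are illustrative assumptions:

```python
# Hypothetical skills matrix: assessed level vs. required level per skill.

required = {"negotiation": 3, "product knowledge": 4, "reporting": 2}

assessed = {
    "Asha": {"negotiation": 2, "product knowledge": 4, "reporting": 2},
    "Ravi": {"negotiation": 3, "product knowledge": 2, "reporting": 1},
}

# A training need exists wherever an assessed level falls short of the requirement.
for employee, skills in assessed.items():
    gaps = {s: required[s] - lvl for s, lvl in skills.items() if lvl < required[s]}
    if gaps:
        print(f"{employee}: needs training in {gaps}")

# Asha: needs training in {'negotiation': 1}
# Ravi: needs training in {'product knowledge': 2, 'reporting': 1}
```

Kept up to date, a report of this kind justifies a programme by pointing at a named gap, rather than by promising a return that nobody can verify later.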

KPIs And More

The idea of KPIs (key performance indicators) is useful in the current context. Tony Dunk[4] says: “Starting with the KPI, identify the key activities that have to happen for successful achievement of the measure.” With this idea, we can tackle the issue of quantification in regard to soft-skills training.

Suppose one KPI for a call-centre employee were customer satisfaction, which can be determined by means of a survey. The credible necessary activities on the part of the employee would be subject knowledge and personal communication skills, the latter traditionally called a soft skill. A personal-communication learning programme could be instituted, and customer satisfaction measured before and after; the programme could then also form part of a group of precedents. (A minimal sketch of such a comparison closes this section.)

An important point, in the context of addressing need, is alignment with business goals, needs, and values. Donald Taylor[5], once strategic alliances director at InfoBasis, says[1]: “It's crucial that alignment to business needs is done up front with training, so that when you are asked retrospectively whether there has been value added, you have clear answers.”

Finally, we need to remember that some aspects of learning are best not subjected to an ROI analysis. These include, for example, organisational awareness, and employees’ subject knowledge in areas pertinent to the organisation.
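Returning to the call-centre example, a before-and-after comparison of the KPI is straightforward once the survey is in place. The survey scores below are invented for illustration:

```python
# Hypothetical before/after comparison for a soft-skills KPI.
# Scores are weekly average customer-satisfaction ratings on a 1-5 scale.

from statistics import mean

before = [3.1, 3.4, 2.9, 3.2, 3.0]  # weeks preceding the communication programme
after  = [3.6, 3.8, 3.5, 3.9, 3.7]  # same survey, same employees, after the programme

change = mean(after) - mean(before)
print(f"Satisfaction moved by {change:+.2f} points on a 5-point scale")
# Satisfaction moved by +0.58 points on a 5-point scale
```

Even here, causality is not guaranteed; the comparison is credible precisely because the KPI, and the key activities behind it, were identified before the training, as Dunk suggests.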

References

  1. Redford, K. 2007. How to measure the impact of soft skills training. Retrieved 14 July 2010 from http://www.personneltoday.com/articles/2007/07/17/41446/how-to-measure-t...
  2. Green, C. 2008. Stop Measuring ROI on Soft Skills Training. Retrieved 14 July 2010 from http://trustedadvisor.com/trustmatters/425/Stop-Measuring-ROI-on-Soft-Sk...
  3. Kirkpatrick Partners. 2009. Training that delivers measurable business results. Retrieved 14 July 2010 from http://www.kirkpatrickpartners.com
  4. See Change Consulting. 2008. Measuring ROI on Soft Skills Training. Retrieved 14 July 2010 from http://www.seechangeworld.in
  5. Taylor, D. 2010. Donald H Taylor. Retrieved 14 July 2010 from http://donaldhtaylor.wordpress.com

What has your experience been? Have you needed to demonstrate ROI beforehand? Have you used KPIs to advantage? Comment below, or mail the author: connect@focalworks.in.