When can we consider compliance training as “effective” and what KPIs make sense to measure its effectiveness?
Compliance teams’ presentations often report how many hours of training they delivered in a given period. But does this tell us anything about the effectiveness of that training? Beyond indicating that the compliance team was busy doing its work, it yields very little useful insight.
What can better measures look like?
If we turn to the U.S. DOJ’s updated guidance on evaluating the effectiveness of compliance programs, we get a useful high-level framework built on three main questions:
- Is the training program designed effectively?
- Is it implemented effectively?
- Is it working in practice?
While the first two questions express the long-established concepts of design effectiveness and operating effectiveness of internal controls, the third one is perhaps the most important addition.
Applying this to compliance training, one quickly sees how little sense it makes to report on “hours of training given”.
A slightly better but still insufficient indicator is “percentage of employees trained in a particular policy”. It says nothing about the quality of the training or whether participants actually learned anything, but it is at least a starting point: it shows that the organization tracks which employees received which training and, perhaps, whether they signed a certification to comply with the policy in question. In some cases it works mainly as a CYA (cover your a…) KPI that lets the organization put the blame for misconduct on a “bad apple”. This kind of metric is still fairly widely used, and one only has to read a daily newspaper to see how well the approach has worked in practice over the past years.
So when is compliance training effective?
In my opinion, we can call training effective if, in each important risk area:
- the people are identified who could make decisions or behave in ways that pose a compliance risk;
- these people are provided training with content relevant to their area of responsibility;
- they understand what is expected of them;
- they have the knowledge and capability to apply the expected behavior in practice;
- they recognize the situations where the learning from the training applies;
- and they then actually behave as expected of them (i.e., they are compliant).
Such a training approach satisfies all three criteria from the DOJ guidance.
How can we measure this? I haven’t found a single KPI that integrates all of these, but a combination of KPIs could work:
- Design effectiveness: risk-based approach to selecting employees to be trained in specific areas
- Operating effectiveness: percentage of these designated people actually trained (here we go with our classic)
- Practical effectiveness: results of knowledge tests before and after the training (showing the knowledge gain); a decrease in questions to Compliance about situations covered by the training (indicating participants actually understood something); and a decrease in monitoring findings in the respective area (indicating people actually comply more).
For the last part we have no proof of causation: reduced cases of non-compliance don’t necessarily follow from the effectiveness of training alone. But if a positive trend in knowledge uptake coincides with a decrease in basic questions and a decrease in non-compliant behavior, we at least have a chain of arguments pointing in the right direction.
Note that, unfortunately, we are left measuring decreases in misconduct and other non-compliant behavior, since we usually cannot directly observe increases in compliant behavior.
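To make the KPI combination above concrete, here is a minimal sketch of how the three measures could be computed from training records. All names, fields, and figures are hypothetical illustrations, not a prescribed data model:

```python
from dataclasses import dataclass

# Hypothetical per-employee record for one risk area
@dataclass
class TrainingRecord:
    employee_id: str
    designated: bool   # selected via the risk-based approach (design effectiveness)
    trained: bool      # completed the assigned training
    pre_score: float   # knowledge test score before training (0-100)
    post_score: float  # knowledge test score after training (0-100)

def operating_effectiveness(records):
    """Percentage of risk-designated employees actually trained (our 'classic')."""
    designated = [r for r in records if r.designated]
    if not designated:
        return 0.0
    return 100.0 * sum(r.trained for r in designated) / len(designated)

def avg_knowledge_gain(records):
    """Average post-minus-pre test score among trained employees."""
    trained = [r for r in records if r.trained]
    if not trained:
        return 0.0
    return sum(r.post_score - r.pre_score for r in trained) / len(trained)

def trend(values):
    """Crude trend over periodic counts (e.g. quarterly questions to Compliance,
    or monitoring findings): a negative result indicates the hoped-for decrease."""
    return values[-1] - values[0]

records = [
    TrainingRecord("A", True, True, 40, 85),
    TrainingRecord("B", True, True, 55, 90),
    TrainingRecord("C", True, False, 0, 0),
    TrainingRecord("D", False, False, 0, 0),  # not in scope for this risk area
]

print(operating_effectiveness(records))  # share of designated staff trained
print(avg_knowledge_gain(records))       # average knowledge gain
print(trend([12, 9, 7, 5]))              # basic questions to Compliance per quarter
```

Note that the trend function deliberately stays crude: in practice one would also normalize these counts for headcount and reporting-channel changes before reading anything into a decrease.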
To be clear, this is not a perfect system, but an example to illustrate a way of thinking about compliance training and possible measures of its effectiveness.