The IIA’s Three Lines Model

The Institute of Internal Auditors has recently updated its Three Lines of Defense Model, which is now called the “Three Lines Model”.

Last year’s review and my comments

Last year, when the IIA initiated its review of the existing Three Lines of Defence Model, it asked me to comment on the main weaknesses of the current model. I replied:

The main weaknesses in the Three Lines of Defence Model

The model is totally artificial. It does not represent how organisations and the people in them make decisions to help them achieve their organisation’s purpose. It wrongly promotes technical silos (the 2nd line) that take on responsibilities that rightly should be management’s. While this may inflate the egos and incomes of those heading up the silos, the net effect is detrimental for the organisation.

My greatest criticism is that the model is not about the monitoring that should take place to ensure organisations pursue success, through the normal process of decision making, in order to achieve their purpose. Rather, the focus has become the organisational structure and the labels attached to support departments that, in truth, should always be under the direction of management and exist to support decision makers. This is particularly true of Internal Audit, which really does not need to exist as a separate, autonomous organisation: management may need some independent reviews of decisions it has made, and of the context and changes to that context, but an internal regulator is not needed and is counter-productive.

The language of the model also supports the defunct concepts of risks and controls – ‘things’ that somehow ‘exist’ and have to be recorded and tabulated. In reality, controls are just ordinary aspects of an organisation that exist because someone decided (often at some time in the distant past) that they were needed to ensure that the outcomes of a particular decision were as desired. In other words, they were secondary elements of some past decision.

However, these labelled ‘things’ gain a life of their own, disassociated from the original decision, listed and checked – even if their true rationale has been lost.  As a consequence, enormous resources are wasted and mis-directed monitoring these ‘things’ that often don’t really matter anymore, while others that do are ignored.

Similarly with risk and risks: two terms that are used to stir up fear and concern when, in reality, there is almost no agreement on what they are and what they mean, even among (so-called) risk management experts. The terms are so ambiguous and have become so discredited that it’s simply better to move on and leave them behind. Labelling a particular advisory department the Risk (or Risk Management) Department only allows decision makers to abrogate their responsibility to ensure that, with the decisions they make, there is sufficient certainty that the desired outcomes will be achieved.

Calling such advisors (if they are that) a line of defence both removes management accountability and allows large amounts of resources to be misdirected to the confections and processes those advisors and their silos think are important (to them). If you doubt this is true, ask anyone in the rest of an organisation how they think the ‘risk department’ creates value; even those who can answer will, if pressed, express little belief in their own response.

What the model lacks is a fundamental recognition that it should simply be about the strategies for monitoring (not who does it). Specifically, how those who make decisions check whether:

  1. implementation of the primary element of a decision does not proceed as assumed or intended;
  2. the secondary elements of a decision are not properly implemented, malfunction or deteriorate over time; or
  3. over the life of a decision (i.e. the duration in which its outcomes will continue to be experienced), changes in context occur that were not allowed for in the decision, with the result that the actual outcomes change and/or the decision no longer provides the best response to the opportunity it was intended to exploit.

The primary elements of a decision are those features intended to exploit an opportunity in order to realise an organisation’s purpose. Secondary elements are those that make it more likely that the primary purpose will be realised. Secondary elements include: ensuring those implementing the decision correctly understand what is required of them and what the decision is intended to achieve; ongoing monitoring to detect change; and contingent arrangements intended to be activated in foreseeable circumstances that could otherwise disrupt or thwart achievement of the primary purpose.

What issues should have been addressed

When asked what “issues had to be addressed in the refresh”, I replied:

Drop all artificial language and concepts like controls and risks. Focus solely on the processes for monitoring and how these should be deployed in response to the needs of decision makers to ensure that decisions are sufficiently certain of achieving the desired outcomes – and that past decisions remain valid and continue to support the organisation achieving its purpose.

The new model

However, sadly, on reviewing the new model it is clear that the IIA has missed another opportunity to write something clear and simple. Instead, we have another word soup of jargon and confused ideas.

As soon as I read the introduction, I realised that instead of clear ideas carefully expressed, this document relies more on sophistry than on common sense and practicality. How, for example, can you say that organisations can enable the achievement of objectives “while” supporting strong governance and “risk management”? Surely we have a Venn diagram here of three concentric circles: “strong governance” must include good “risk management”, and governance can only be about the way the organisation makes decisions that allow it to achieve its purpose (“objectives”?).

Of course, ambiguity is ensured (and hence consultancy income) by the document not defining what it means by ‘governance’ and ‘risk management’, let alone ‘risk’. Also, the ‘r’ word is used variously as a noun, a verb and an adjective.

We also have ‘managing risk’, ‘risk management’ (which seems to be an “action”) and also ‘risk-based decision making’ – a variant of the made-up term ‘risk-based thinking’ in ISO 9001.

The more I read this, the more confused I become. For example, I’m told that the “objectives” of ‘risk management’ are “compliance with laws, regulations, and acceptable ethical behaviour; internal control; information and technology security; sustainability; and quality assurance”. Is that it? No mention of making decent decisions here. And how can “objectives” be processes such as ‘quality assurance’ or vector qualities such as ‘sustainability’ – and what do all these terms mean, anyway?

When I get to the short section called “Applying the Model”, I realise the authors have both run out of intellectual steam and begun to cotton on that none of what they have written before makes much sense in the real world. Despite the firmness of the preceding advice, it seems you can adapt it however you like according to your “objectives” and “circumstances”.

So rather than being some fundamental truth of life, all this document really amounts to is a web of interconnected, ambiguous words and half-formed thoughts.

Consultants and internal auditors will love this – as it justifies their existence.

Should internal audit perform a risk assessment?

Assuming that there is a credible ERM function/process in place, IA needs to provide assurance that the ERM processes are effective. IA also needs to validate that the key controls that management assumes to be working in its RA are in fact effective. In addition, IA should be able, to some extent, to opine on whether the company-wide RA done by the ERM and management teams is reasonably stated and that it is not aware of any major discrepancies. To your point about continuous: according to the COSO framework, every internal audit should conclude on the adequacy of management’s RA process for the area/function reviewed.

Post by John Fraser on Norman Marks on Governance, Risk Management and Audit

John, stripped of all the jargon and acronyms, the ultimate purpose of whatever ‘ERM’ and ‘IA’ are meant to mean or be can surely only legitimately be that the organisation makes the best decisions it can. That is because the only way organisations can pursue their purpose is to make (and implement) decisions to take advantage of opportunities, and the only way they can achieve their purpose is by ensuring that those decisions are the best that can be made. This has always been so (long before anyone uttered the ‘a’ or ‘r’ word) and always will be so (long after the ‘a’ and ‘r’ words disappear from the business lexicon … hopefully, a milestone that is not too far away). And yet decision-making is not the focus of either ERM or IA, and never has been.

As Grant Purdy and I say in our recent book ‘Deciding’ (which Norman kindly introduced here in his 25 April blog), organisations will have more success in pursuing their purpose if they consistently make ‘even better’ decisions. In describing what we contend is a universal method of decision-making (i.e. the method used by all ‘Deciders’, whether they realise it or not), we have attempted to explain how to excel in applying each element of this method, as it is this – decision-making skill – that is the only way organisations and their ‘Deciders’ can determine their success.

All that those responsible for governance need to have confidence about is how well this method is being applied by the many Deciders across the organisation. In the same way that sales, RoI etc. are visible to the governance team (because they bother to look), so too is the quality of decision-making easily discernible, especially if those involved in governance and management themselves excel at it. Trying to contract in reassurance, rather than looking themselves, is an easy (but generally unsuccessful) cop-out.

Ensuring consistently good decision-making is analogous to manufacturers or service providers achieving the intended levels of performance of their product (i.e. delivering ‘quality’). In the 1980s, those efforts switched from checking the final product and throwing out the duds to focussing on the design and execution of each step of the process through which those products were made (in the understanding that dud products are the outcome of dud processes and good products are the outcome of good processes). So too must the focus of organisational performance monitoring switch to the quality of decision-making.

So the message, whether for the board and management (or the cop-out of using a hired gun, however they may be described), is: just look at the people who are making the decisions and at their decision-making skills. As we say in ‘Deciding’, it is a tricky business being a Decider, so helping Deciders, individually and institutionally, to make decisions is where the effort should go – not into irrelevant ‘r’ and ‘a’ stuff, with its associated focus on failure and checking for duds at the end of the decision production line.