Fallacy of human error

This article was published today on Bywater’s site

As professionals responsible for developing and auditing management systems, we often come across instances where the service delivered isn’t what it should have been, or where there are problems with product quality. As good professionals we investigate and identify the root cause as ‘human error’. How real is this, and how can we deal with these errors and stop them from hurting us?

Firstly, it is too easy to come up with human error as the root cause of a failure – so much so that some customer industries, including automotive, will not accept it as the final cause of a supplier failure. The logic being: people only make mistakes because they are allowed to!

To understand the real root cause you need to understand the nature of errors – often impossible in the heat of a customer complaint. People do make mistakes – rarely will you find an example of someone deliberately delivering a poor product or service – but there is normally a good reason why a mistake was made. An individual may be distracted, or under pressure to keep up with delivery schedules. Process documents may be unclear, or authority levels not sufficiently defined.

Resolving these issues needs further investigation, and to do this you will need the confidence of the people involved. The area is huge and something of a minefield. As with all complex systems, to understand how errors occur you need to look at a range of different aspects:

  • Leadership – how do your organisation’s leaders exemplify desired behaviours and the importance of satisfying customer requirements so people understand what is required of them?

  • Communications – how do you communicate organisation expectations, including customer requirements; how well do you listen to what employees are telling you about their jobs?

  • Competence – how do individuals within your system demonstrate they have the skills and knowledge required to do the job?

  • Empowerment – how are people authorised to develop and manage areas of their work?

  • Recognition – how are people’s efforts appreciated and good practice rewarded?

If you are able to answer the above questions satisfactorily then you will be a long way towards establishing a quality culture that seeks out and eliminates the root causes currently undiscovered and assigned to human error. There is guidance available from ISO TC 176 on the people aspects of management systems, a vital and often neglected area, in the form of ISO 10015 and ISO 10018, both of which are being revised as we speak. There are some great examples around of earlier work, including quality circles and the more recent self-directed work teams at the heart of Lean manufacturing and service.

W. Edwards Deming said that 85% of all quality problems are management problems – if you accept this then you are part way to accepting there is no such thing as human error.

Fail to control design = designed to fail?

I had a very specific query recently about the design process and effectiveness measures for Management Review in a Medical Devices environment.

The question was specifically about demonstrating that the outputs of the design process match the inputs, and whether this was acceptable to present to the Top Management team. The question was also about ISO 13485, the quality management standard for Medical Device manufacturers, but for this article I have broadened the field to cover any organisation looking to implement effective design control measures, and many of the points made read across to other sectors. In this article clause references are aligned with ISO 9001:2015 rather than ISO 13485, but again the principles apply wherever you use design control and can be applied to any core process.

Generally, design control is one of the least understood areas of how organisations go about providing products and services to market. Design plays the fundamental role in determining how well products and services operate and whether they deliver customer satisfaction, both at the point of delivery and throughout their useful life. You only have to follow media stories of product recalls and regulator intervention to see that product designers in the automotive, aerospace, consumer goods and other sectors, as well as service designers, particularly in the financial services sector, have ‘designed in’ risk and failure, leading to huge liabilities for their organisations. The individuals involved did not create these liabilities deliberately; they simply did not have effective controls in place for their work.

So, to be able to report on effectiveness to the top team, first you have to be clear what the design process is and what it gives your organisation – all covered in clause 4.4 of the standard, with further detail in clause 8.3. By looking at the design process and identifying the criteria and methods needed for effective operation (4.4.1 c) you should be able to identify critical success factors (CSFs) for design – generally covering the three areas of Quality, Cost and Delivery (QCD), as for any project management activity – but more of this later. You can do this as a quality specialist ‘looking in’, but it is far more effective if you work with those involved in designing products or services and gain their views of what ‘good’ design control looks like.

These CSF requirements are used by your organisation to monitor, measure and analyse the design process (9.1.1) and should help you to establish objectives (6.2.1) and design process measures to demonstrate the process is working effectively.

The original question suggested using the matching of design outputs to design input requirements, covered in clause 8.3.5 – a good starting point but what you actually report at a management review might need careful consideration.

Generally, design effectiveness is measured by how well the product meets requirements – covered in clause 8.3.5 of ISO 9001 – the ‘Q’ part of QCD.

Internal (design process) measures:

  • design review results

  • results of component and prototype testing (verification activities)

  • field (including clinical) trials (validation outputs)

Internal (company) measures, but after design:

  • Manufacturing:

    • right first time measures – how easy is it to make the product / deliver the service

    • scrap and rework at new product / new service introduction – a measure of how robust the new design is

External to the company:

  • Warranty

  • Complaints

  • Field data on product effectiveness (clinical use)

Design process efficiency could be reported by:

  • achievement of budget (the ‘C’ part)

  • on-time delivery of new products to market against the original timing plan (the ‘D’)

Altogether these measures would demonstrate how well the design process is working.

As for presenting this at the management review, the subject of the original question: with the best will in the world, top management (whose involvement is required by ISO 9001) have limited time to spend on reviewing subjects like quality, which they rarely see as “sexy”. Somehow you need to produce an edited-highlights version of the design measures that will hold their attention. A dashboard using a traffic light system, with the ability to drill down into the detail, should help; if you can assign pound notes to any of the measures, that should help even more!
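To make the dashboard idea concrete, here is a minimal sketch of how a traffic light (red / amber / green) roll-up of QCD design measures might be put together. The measure names, targets and the 5% amber margin are purely illustrative assumptions, not requirements of ISO 9001 or ISO 13485; the point is simply that each measure is compared with its agreed objective and summarised as a single status that the top team can drill into.

    # Hypothetical sketch of a traffic light (RAG) roll-up for design process measures.
    # Measure names, targets and thresholds are illustrative only, not taken from
    # ISO 9001 / ISO 13485 or any real reporting system.
    from dataclasses import dataclass

    @dataclass
    class Measure:
        name: str                      # e.g. "New products launched on time (%)"
        actual: float                  # latest reported value
        target: float                  # agreed objective (clause 6.2.1)
        higher_is_better: bool = True  # False for cost-type measures

    def rag_status(m: Measure, amber_margin: float = 0.05) -> str:
        """Return 'Green', 'Amber' or 'Red' for a single measure."""
        gap = (m.actual - m.target) if m.higher_is_better else (m.target - m.actual)
        if gap >= 0:
            return "Green"   # on or better than target
        if abs(gap) <= amber_margin * abs(m.target):
            return "Amber"   # within the agreed margin of target
        return "Red"         # outside the margin

    # Illustrative Q, C and D measures for a management review summary
    measures = [
        Measure("Design outputs meeting input requirements (%)", 97.0, 100.0),       # Q
        Measure("Design spend vs budget (%)", 104.0, 100.0, higher_is_better=False),  # C
        Measure("New products launched on time (%)", 88.0, 95.0),                    # D
    ]

    for m in measures:
        print(f"{rag_status(m):6} {m.name}: actual {m.actual} vs target {m.target}")

Each line of a summary like this then becomes a candidate for the drill-down detail behind the dashboard.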

Management Review – it’s all in the name

I was in a discussion with a very earnest young man a while back, and one of the topics we covered was Management Review as part of the ISO 9001 quality management system he had responsibility for. Let’s just say the conversation was a little heated in places, and that led me to write an article for the Chartered Quality Institute’s Quality World magazine, published in 2011. I’ve been following a couple of threads on social media and was discussing terms with fellow standards developers, and thought it might be interesting to revisit the subject.

My earnest friend was of the opinion that Management Review was something new and special – perhaps invented by those wise people in ISO – when it is in fact merely a term for planning, an activity that responsible businesses have been doing for years. Now, I understand the need for standard terms and for ISO to define them so that users of these standards have a common understanding. But here’s the rub: it is not the place of standard users, and in particular quality professionals, to continue to use these terms in their daily work. The more we use terms like Management Review, Management Representative and, my personal favourite, Product Realization in both work conversations and management system documents, the further we take these systems away from the people that matter – the users. So once we understand the term, we need to go back to our organisation and understand what process(es) we already have in place that satisfy the requirements.

You would expect the board to discuss the effectiveness of the organisation’s management system in ensuring it delivers products and services that meet customer requirements. Customer feedback, internal quality measures and the status of improvement plans and programmes would be topics of interest to any managing director. All well and good so far – these topics should also address the requirements of ISO 9001:2015 clause 9.3.2. But, rather than getting a regular slot on the board agenda where the responsible manager reports to the board, the poor old quality manager generally calls a one-off meeting called a ‘Management Review’ with a cut-and-paste agenda taken from the standard. The agenda is slavishly followed until the board is bored into submission and everyone can breathe a sigh of relief, go back to the ‘real’ job and drop quality until next year. Worse still, the board avoids the meeting as a waste of time – sometimes to the extent that records of reviews are fabricated for meetings that either didn’t take place or where the necessary participants couldn’t spare the time. I’ve lost count of the number of wry smiles seen when I float this seemingly ridiculous notion.

It is easy to criticise top management commitment in these situations, but the responsibility for making the review relevant to busy senior managers is ours. In a previous role, as the new quality manager I presented the plan for management review to the board of my ISO 9001 certified company and was faced not with hostility but with blank looks. It took a full eight months of one-to-one discussions and translation of ISO terms into activities and measures they were familiar with before we completed our agenda, but I am confident the outcome was much more relevant.

The real challenge for the quality professional is to keep it real and get quality up the agenda, so that quality performance is seen as a leading indicator of financial performance. Recent changes to ISO 9001 give us a real opportunity, with the requirements for organisation leaders to get involved in establishing meaningful objectives and for process measures to be part of regular quality monitoring – right up to board level. Until those objectives and measures are meaningful, and can be seen to be the main route to a sustainable business, we are condemned to a check-box approach to review.

Is your internal auditor role a dead-end job?

This article was published on the 13th February on Bywater’s site

With this article I’m hoping to prompt discussion about the auditor pool we select from, particularly internal auditors. This started after a comment from a third-party certification body, where I was told their issue was succession planning and how to deal with an auditor pool in which the majority of individuals were well past ‘normal’ retirement age. Now, as a ‘seasoned’ quality professional, I’m all for opportunity at the end of my career, and there is great logic in having a good selection of experienced people in your ranks, but, as in all things, there has to be balance.

Mirroring the suite of Management System Standards coming out of ISO’s technical committees, there seem to be endless ISO/IEC 17021-xx documents describing third-party certification auditor competence and, in the brave new world of demonstrated ability, these documents don’t define auditor competence by a minimum length of stay in a particular technical sector. As a colleague described it: 20 years’ experience on a CV can be demonstrated progression in a chosen field, or one year’s experience repeated 20 times. Nevertheless, it takes time to understand the sector context, industry tools and techniques, the jargon used and the regulatory requirements that apply – all things covered in the latest drafts of ISO 19011 and ISO/IEC 17021-3 – and hence you need some mileage on the clock to pick them up.

What about internal auditors – do the same rules apply? I say no. Auditors should be selected on personal capability and behaviours, in particular intelligence and curiosity. If you are lucky enough to be able to afford one, look no further than your graduate programme. These are the best of the new intake to the organisation, proven to have enthusiasm, perhaps tested on assessment days and identified as the ‘best of the best’.

When graduates join you on day one they are all desperate to get under the bonnet of your organisation and understand how it works. As an early part of their graduate scheme, get them auditing your management system. They bring with them the latest thinking and technologies in their field and an inquisitive nature. With the licence to follow their nose, communicate with people working at all levels in the process(es) you operate and challenge current thinking, they can provide a real ‘fresh eyes’ look at what you do.

Of course they’ll need some training in your audit process, and as they go they’ll make mistakes and tread on some toes, but my bet is that you will get some real nuggets from their insight and fresh thinking about your way of working, including how your system is portrayed in its documents. The individuals will gain a great deal, too. Having to challenge senior colleagues when poor practices or ineffective controls are found, and then to report back to function managers, is character-forming.

Give it a go, particularly if your current internal audit programme is seen as a check-box exercise and the results generally fall into the ‘And? So what?’ category. This way of holding a mirror up to your organisation is bound to give you value way beyond what you expend.