ISO 9001:2015 – risk and opportunities

Below is an article I added to CQI’s blog in November 2015.

In the third instalment of our guest blog series in collaboration with PMI, Paul Simpson asserts that, just as there are risks and opportunities that we respond to in daily organisational life, quality professionals should focus on the opportunities for improvement presented in ISO 9001:2015.

One of the big new ideas in the 2015 edition of ISO 9001 is ‘risk-based thinking’ and, if you believe the ‘Twitterati’, the concept has quality professionals resembling the subject of Edvard Munch’s painting ‘The Scream’, the quality management landscape turning vibrant orange behind them.

But before the hysteria needle hits ‘11’ let’s think back to the real world outside the quality manual.

Everyone involved in running an organisation looks at risk and opportunity – they are two sides of the same coin. When an entrepreneur starts their business, risk and opportunity are always front and centre in their mind.

Wherever they have come from, they have identified an opening to start a business, make a living and grow it to the point where it gives them an income with the opportunity of a pot of gold for their retirement. This future is, however, not certain. There will be difficulties along the way and these risks, left unmanaged, could lead to a loss of income and, ultimately, to their business failing.

The entrepreneur recognises these risks come in many forms and many are related to quality:
• Do I have the right products and services for my target customers?
• Can I control production and service delivery to consistently meet those customer needs?
• Can my suppliers keep up with my demands and maintain the quality levels I need?

If I can manage those risks at that level then the business will succeed and I can grasp all the opportunities, including that elusive pot of gold.

Moving forward in time as the business continues to thrive and grow, our entrepreneur has moved upstairs to the boardroom as CEO and has managers and teams dealing with day-to-day business while they buy in high-priced consultants to lead some ‘blue sky’ strategy sessions. Strategic risks haven’t really changed – an incorrect strategy still has the capability to bring down our grown-up start-up.

Tactically the business can cope more easily with risk as it has multiple customers buying a range of products. On the downside, tactical errors can lead to an erosion of hard-earned brand reputation as all our customers inhabit the same system and talk to one another – see the earlier blog on organisational context, Context is King.

Moving out of the boardroom to the shop floor and offices where ‘business as usual’ happens, ‘risk’ looks a little different, but it is just as important that it is recognised and managed.

With every order comes a risk that the organisation will misunderstand its customers’ needs, so at this process level there have to be checks and balances. Individuals, working with their CEO’s delegated authority, accept orders and enter into contracts, including the inherent risks that a legal contract carries.

At the same time on the shop floor, all employees are involved in managing risk. Some develop specifications and standards (perhaps in a separate design office), some manufacture products or deliver services that they believe meet those standards.

Throughout the process, managing risks leads to delivered products and services meeting specification, satisfying customer needs and customers paying their bills, thereby allowing the organisation to realise the sales opportunity and contributing to our entrepreneur’s vision of a pot of gold.

If the above risks and opportunities are present in daily organisational life, why do we have concerns for the quality professional’s ability to inhabit this space? Why do we have concerns over what our certification body auditors are going to ‘do to us’?

The revised clauses of ISO 9001 create an opportunity for us to revisit and realign our processes to ensure our systems deliver what our customers and stakeholders want. There are, of course, risks with changes to the standard, but perhaps we can focus on the opportunities presented and maximise them instead.

Change in the railway

Complex systems have inherent risk

Whichever one we think about – weather, ecosystems, financial markets or railways – each has hazards and risks that we are unable to control fully, due to the number of factors involved and the volume of activities that occur daily.

Changes to complex, man-made systems are managed to prevent catastrophic failures through formal and informal rules, captured in legislation and industry codes of practice. The UK rail industry has a long history of legislation and standards to control both railway operation and change management. Performance trends over recent years indicate a dramatic improvement in passenger safety. Indeed, UK rail remains among the safest means of transport in Europe, both in absolute terms and per kilometre travelled, according to the Office of Rail Regulation (ORR) and Eurostat data.

Sit up and take notice

So, how is ‘change’ changing? With evidence of improving performance, when the ORR announces changes to risk management across the UK rail system, it is time to listen.

Since 2006 the Railways and Other Guided Transport Systems (Safety) Regulations 2006 (ROGS) have required dutyholders – infrastructure managers such as Network Rail, and railway undertakings such as train or freight operating companies – to develop and maintain safety management systems. These systems include controlled change management of rail operations and infrastructure, and ROGS required competent, independent persons to verify safety before changes were placed into service.

From 2013, amendments to ROGS required dutyholders to apply a risk-based approach to assessing the significance of technical changes (structural assets such as buildings, track, signalling and vehicles) and operational changes (timetable, staffing, ways of working) in accordance with their safety management system. The amendments allowed dutyholders to apply significance testing to determine the level of scrutiny required. When dutyholders want to make a change, they assess its significance against six criteria (see the sketch after this list):

  • Failure consequence: realistically what could go wrong taking into account existing controls?

  • Novelty: how new is the proposed change for both the industry and the company?

  • Complexity of the change: how many sub-systems and groups are involved in the change?

  • Monitoring: how easy will it be to monitor the effect of the change throughout the asset lifecycle (including maintenance)?

  • Reversibility: how easy is it to revert to previous systems?

  • Additionality: how could this change interact with other recent (and not so recent) changes to the asset and operations involved?
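
To make the screening concrete, here is a minimal sketch in Python of how a significance test might be recorded. The six criteria names come from the list above; the scoring scale and threshold are invented for illustration – ROGS and the CSM leave the actual method to the dutyholder’s safety management system.

```python
# Illustrative only: ROGS/CSM do not prescribe a scoring scale or threshold.
# The six criteria names come from the regulations; everything else is assumed.

CRITERIA = (
    "failure_consequence",
    "novelty",
    "complexity",
    "monitoring",
    "reversibility",
    "additionality",
)

def is_significant(scores: dict, threshold: int = 3) -> bool:
    """Toy significance test: each criterion scored 0 (negligible) to 5 (severe).

    Flags a change as significant if any single criterion reaches the
    threshold. A real dutyholder defines its own method in its SMS.
    """
    missing = set(CRITERIA) - set(scores)
    if missing:
        raise ValueError(f"unscored criteria: {sorted(missing)}")
    return any(scores[name] >= threshold for name in CRITERIA)

# Example: a routine, well-understood piece of track renewal
change = {
    "failure_consequence": 2,
    "novelty": 0,
    "complexity": 1,
    "monitoring": 1,
    "reversibility": 1,
    "additionality": 2,
}
print(is_significant(change))  # False -> manage under normal SMS controls
```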

A different track

A change to a piece of track, for example, is rarely assessable on its own. The impact includes the track itself, perhaps buildings and civil structures along the line, signalling and train operations (speed and volume), passenger volumes and traffic at stations nearby. All of this falls under risk assessment and evaluation.

Similarly, operational changes involving significant variations in staffing or ways of working might affect safety. Historically the railways have relied on large numbers of standards mandating prescriptive working methods, but they are moving towards a risk-based system. This in turn relies on trained and competent staff making judgements on safety, based on key principles. This represents a significant safety-related change and should be assessed using the common safety method (CSM). Where the proposer determines the change is significant, they have to manage it through one or more of three methods:

  • application of a code (or codes) of practice

  • comparison with a similar (reference) system

  • explicit risk estimation.

In each case, whichever risk management method is used, the proposer must use an independent assessment body (AB) to evaluate their change management. The AB provides ‘an independent assessment of the correct application of the risk management process’ and a safety assessment report as evidence.

The AB role is a new requirement under ROGS and the underpinning EU directive. The proposer must decide whether to accept the change as safe for entry into service “based on the safety assessment report provided by the assessment body”. The regulations require several pieces of evidence:

  • Change description – including the system it relates to. Used as the basis for significance testing, the results of which are also recorded.

  • System definition – key information describing the system being changed.

  • Hazards identified as associated with the change, together with evaluation and assessment of their risk.

  • Risk control measures to be applied for each risk using one or more of the three management methods above.

  • A system hazard record maintained along with the system definition as the project progresses.

  • Evidence that risk management has been applied throughout the change project.

  • For significant risks, the AB report.

The proposer retains responsibility for system safety and decides whether the system change is safe to enter service based on the report.
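
To visualise the evidence pack, the sketch below models those records as a simple data structure. The field names mirror the bullet list above; the structure itself is my assumption for illustration, not a format prescribed by ROGS.

```python
from dataclasses import dataclass, field
from typing import List, Optional

# Field names mirror the ROGS evidence list above; the layout itself is an
# illustrative assumption, not a prescribed format.

@dataclass
class Hazard:
    description: str
    risk_evaluation: str   # outcome of hazard risk evaluation and assessment
    control_measure: str   # one of the three risk management methods above
    closed: bool = False

@dataclass
class ChangeEvidence:
    change_description: str            # including the system it relates to
    significance_result: bool          # recorded result of significance testing
    system_definition: str             # key information describing the system
    hazard_record: List[Hazard] = field(default_factory=list)
    risk_management_evidence: List[str] = field(default_factory=list)
    ab_report: Optional[str] = None    # required where the change is significant

    def safe_to_enter_service(self) -> bool:
        """Toy completeness check; the proposer still owns the actual decision."""
        hazards_closed = all(h.closed for h in self.hazard_record)
        ab_ok = (not self.significance_result) or (self.ab_report is not None)
        return hazards_closed and ab_ok
```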

Legal separation

Change proposers may use any support they require to manage change and ensure risks are kept in check. The railways use consultants to advise and independently assess compliance, but the new AB role requires independence from the parties involved in the change. The United Kingdom Accreditation Service (UKAS) programme for accreditation of ABs assesses legal separation, impartiality and independence. The initial pilot was open to accredited railway notified and designated bodies and was then extended to others. The accreditation regime for ABs comes into force in May 2015, when the last of the 2013 ROGS amendments takes effect, and the ORR maintains a watching brief to ensure accreditation is effective and change management remains controlled. In the meantime, dutyholders are expected to use existing safety management systems to verify the safety of changes to vehicles and infrastructure.

Industry bodies such as ORR and the Rail Safety and Standards Board have produced guidance on implementing new requirements, and conformity assessment bodies are working with new and existing clients to explain the new requirements. The overarching aim for managing change in the complex UK railway system is for “a better railway for a better Britain” and “everyone home safe, every day”.

This article is adapted from ‘Managing Change’ – published in the November 2014 edition of IIRSM’s Insight magazine.


Fallacy of human error

This article was published today on Bywater’s site

As professionals with responsibility for developing management systems and for auditing them, we often come across instances where the service delivered isn’t what it should have been, or where there are problems with product quality. As good professionals we investigate and identify the root cause as ‘human error’. How real is this, and how can we deal with these errors and stop them from hurting us?

Firstly, it is too easy to come up with human error as a root cause for failure – so much so that some customer industries, including automotive, will not accept it as a final cause for a supplier failure, the logic being: people only make mistakes because they are allowed to!

To understand real root cause you need to understand the nature of errors – often impossible in the heat of a customer complaint. People do make mistakes – rarely will you find an example of someone deliberately delivering a poor product or service – but there is normally a good reason why a mistake was made. An individual may be distracted or under pressure to keep up with delivery schedules. Process documents may be unclear or authority levels not sufficiently defined.

Resolving these issues needs further investigation, and to do this you will have to have the confidence of the people involved. The area is huge, and it is a minefield. As with all complex systems, to understand how errors occur you need to look at a range of different aspects:

  • Leadership – how do your organisation’s leaders exemplify desired behaviours and the importance of satisfying customer requirements so people understand what is required of them?

  • Communications – how do you communicate organisation expectations, including customer requirements; how well do you listen to what employees are telling you about their jobs?

  • Competence – how do individuals within your system demonstrate they have the skills and knowledge required to do the job?

  • Empowerment – how are people authorised to develop and manage areas of their work?

  • Recognition – how are people’s efforts appreciated and good practice rewarded?

If you are able to answer the above questions satisfactorily then you will be a long way towards establishing a quality culture that seeks out and eliminates root causes currently undiscovered and assigned to human error. There is guidance available from ISO TC 176 on the people aspects of management systems – a vital and often neglected area – in the form of ISO 10015 and ISO 10018, both being revised as we speak. There are some great examples of earlier work, including quality circles and, more recently, the self-directed work teams at the heart of Lean manufacturing and service.

W. Edwards Deming said that 85% of all quality problems are management problems – if you accept this then you are part way to accepting there is no such thing as human error.

Fail to control design = designed to fail?

I had a very specific query recently about the design process and effectiveness measures for Management Review in a Medical Devices environment.

The question was specifically about demonstrating that outputs of the design process match inputs, and whether this was acceptable to present to the Top Management team. The question was also about ISO 13485, the quality management standard for medical device manufacturers, but for this article I have broadened the field to cover any organisation looking to implement effective design control measures; many of the points made read across to other sectors. Clause references here are aligned with ISO 9001:2015 instead of ISO 13485, but again the principles apply wherever you use design control and can be applied to any core process.

Generally, design control is one of the least understood areas of how organisations go about providing products and services to market. Design plays the fundamental role in determining how well products and services operate and whether they deliver customer satisfaction, both at the point of delivery and throughout their useful life. You only have to follow media stories of product recalls and regulator interventions to see that product designers in the automotive, aerospace, consumer goods and other areas, as well as service designers, particularly in the financial services sector, have ‘designed in’ risk and failure, leading to huge liabilities for their organisations. The individuals involved did not create these liabilities deliberately, but they did not have effective controls implemented for their work.

So, to be able to report on effectiveness to the top team, first you have to be clear what the design process is and what it gives your organisation – all covered in clause 4.4 of the standard, with further detail in clause 8.3. By looking at the design process and identifying the criteria and methods needed for effective operation (4.4.1 c), you should be able to identify critical success factors (CSFs) for design – generally covering the three areas of quality, cost and delivery (QCD), as for any project management activity – but more of this later. You can do this as a quality specialist ‘looking in’, but it is far more effective if you work with those involved in designing products or services and gain their views of what ‘good’ design control looks like.

These CSF requirements are used by your organisation to monitor, measure and analyse the design process (9.1.1) and should help you to establish objectives (6.2.1) and design process measures to demonstrate the process is working effectively.

The original question suggested using the matching of design outputs to design input requirements, covered in clause 8.3.5 – a good starting point, but what you actually report at a management review might need careful consideration.
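
One way to demonstrate that matching is a simple traceability check: every design input requirement should map to at least one verified design output. The sketch below is hypothetical – clause 8.3.5 does not mandate any particular tooling, and the requirement IDs are invented.

```python
# Hypothetical traceability check for the clause 8.3.5 question: does every
# design input have at least one verified design output? Data is invented.

design_inputs = {"REQ-001", "REQ-002", "REQ-003"}

# Each output lists the input requirements it satisfies and whether
# verification (design review, component/prototype test, trial) has passed.
design_outputs = [
    {"id": "OUT-A", "satisfies": {"REQ-001"}, "verified": True},
    {"id": "OUT-B", "satisfies": {"REQ-002", "REQ-003"}, "verified": False},
]

covered = set().union(*(o["satisfies"] for o in design_outputs if o["verified"]))
gaps = design_inputs - covered

if gaps:
    print(f"Inputs without verified outputs: {sorted(gaps)}")  # REQ-002, REQ-003
else:
    print("All design inputs traced to verified outputs")
```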

Generally, design effectiveness is measured by how well the product meets requirements – covered in clause 8.3.5 of ISO 9001 – the ‘Q’ part of QCD.

Internal (design process) measures:

  • design review results

  • results of component and prototype testing (verification activities)

  • field (including clinical) trials (validation output)

Internal (company) measures, but after design – manufacturing:

  • right-first-time measures – how easy is it to make the product / deliver the service?

  • scrap and rework at new product / new service introduction – a measure of how robust the new design is

External to the company:

  • warranty

  • complaints

  • field data on product effectiveness (clinical use)

Design process efficiency could be reported by:

  • achievement of budget (the ‘C’ part)

  • on-time delivery of new products to market against the original timing plan (the ‘D’)

Altogether these measures would demonstrate how well the design process is working.

As for presenting this at the management review – the initial question – with the best will in the world, top management (whose involvement is required by ISO 9001) have limited time to spend reviewing subjects like quality (not seen by them as “sexy”). Somehow you need to produce an edited-highlights version of design measures that will hold their attention. A dashboard using a traffic-light system, with the ability to drill down into the detail, should help; if you can assign pound notes to any of the measures, that should help even more!
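
A minimal sketch of that kind of traffic-light roll-up, assuming invented measure names, values and thresholds: each QCD measure gets a red/amber/green status against its target, giving the top team a one-glance summary with the detail available underneath.

```python
# Toy RAG (red/amber/green) roll-up for a design dashboard.
# Measure names, targets and thresholds are invented; all measures are
# normalised here so that higher is better, purely to keep the sketch short.

def rag(actual: float, target: float, amber_margin: float = 0.05) -> str:
    """Green on/above target, amber within the margin below it, else red."""
    if actual >= target:
        return "GREEN"
    if actual >= target * (1 - amber_margin):
        return "AMBER"
    return "RED"

measures = {                                 # (actual, target)
    "Q: right first time (%)":    (96.0, 98.0),
    "C: budget adherence (%)":    (100.0, 100.0),
    "D: on-time to market (%)":   (88.0, 95.0),
}

for name, (actual, target) in measures.items():
    print(f"{name}: {rag(actual, target)}")
# Q -> AMBER, C -> GREEN, D -> RED
```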