Change in the railway

Complex systems have inherent risk

Whichever one we think about – weather, ecosystems, financial markets or railways – each carries hazards and risks that we cannot fully control, given the number of factors involved and the volume of activity that occurs daily.

Changes to complex, man-made systems are managed through formal and informal rules to prevent catastrophic failures – rules captured in legislation and industry codes of practice. The UK rail industry has a long history of legislation and standards controlling both railway operation and change management. Performance trends over recent years show a dramatic improvement in passenger safety. Indeed, UK rail remains among the safest means of transport in Europe, both in absolute terms and per kilometre travelled, according to Office of Rail Regulation (ORR) and Eurostat data.

Sit up and take notice

So, how is ‘change’ changing? With evidence of improving performance, when the ORR announces changes to risk management across the UK rail system, it is time to listen. Since 2006 the Railways and Other Guided Transport Systems (Safety) Regulations 2006 (ROGS) have required dutyholders – including infrastructure managers such as Network Rail, and railway undertakings such as train or freight operating companies – to develop and maintain safety management systems. These systems include controlled change management of rail operations and infrastructure, and ROGS required competent, independent persons to verify safety before changes were placed into service.

From 2013, amendments to ROGS required dutyholders to apply a risk-based approach to assessing the significance of technical changes (structural assets such as buildings, track, signalling and vehicles) and operational changes (timetable, staffing, ways of working) in accordance with their safety management system. The amendments allowed dutyholders to apply a significance test to determine the level of scrutiny required. When dutyholders want to make a change, they assess its significance against six criteria:

  • Failure consequence: realistically what could go wrong taking into account existing controls?

  • Novelty: how new is the proposed change for both the industry and the company?

  • Complexity of the change: how many sub-systems and groups are involved in the change?

  • Monitoring: how easy will it be to monitor the effect of the change throughout the asset lifecycle (including maintenance)?

  • Reversibility: how easy is it to revert to previous systems?

  • Additionality: how could this change interact with other recent (and not so recent) changes to the asset and operations involved?
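To make the screening concrete, the six criteria above can be sketched as a simple checklist. This is an illustrative sketch only – the numeric scoring scale, the threshold values and the trigger rule are assumptions invented for the example, not part of ROGS or the CSM; a real significance test is a documented judgement made within a dutyholder's safety management system.

```python
# Illustrative significance-screening sketch: score each of the six
# criteria from 0 (negligible concern) to 3 (major concern) and flag
# the change for the full risk-management process if any criterion is
# high or the combined score is. Scale and thresholds are assumptions.

CRITERIA = (
    "failure_consequence",  # what could realistically go wrong?
    "novelty",              # how new is this for industry and company?
    "complexity",           # how many sub-systems and groups involved?
    "monitoring",           # how hard to monitor through the lifecycle?
    "reversibility",        # how hard to revert to previous systems?
    "additionality",        # interaction with other recent changes?
)

def is_significant(scores: dict, high: int = 3, total_limit: int = 8) -> bool:
    """Return True if the change should be treated as significant."""
    missing = set(CRITERIA) - set(scores)
    if missing:
        raise ValueError(f"unscored criteria: {sorted(missing)}")
    # Any single 'high' criterion, or a high combined score, triggers
    # the full CSM treatment (assumed rule, for illustration only).
    return any(scores[c] >= high for c in CRITERIA) or sum(scores.values()) >= total_limit

# Example: a like-for-like renewal scoring low on every criterion.
renewal = {c: 1 for c in CRITERIA}
print(is_significant(renewal))  # False: total of 6, no high scores
```

The point of the sketch is that one dominant factor (say, a highly novel signalling technology) can make a change significant even when everything else scores low, which mirrors how the criteria are weighed in practice.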

A different track

A change to a piece of track, for example, can rarely be assessed on its own. Its impact takes in the track itself, perhaps buildings and civil structures along the line, signalling and train operations (speed and volume), and passenger volumes and traffic at nearby stations. All of this falls within the scope of risk assessment and evaluation.

Similarly, operational changes involving significant variations in staffing or ways of working might affect safety. Historically the railways have relied on large numbers of standards mandating prescriptive working methods, but they are moving towards a risk-based system. This in turn relies on trained and competent staff making safety judgements based on key principles. Such a shift represents a significant safety-related change and should be assessed using the common safety method (CSM). Where the proposer determines that a change is significant, they have to manage it through one or more of:

  • application of a code (or codes) of practice

  • comparison with similar (reference) systems

  • explicit risk estimation.

Whichever risk management method is used, the proposer must engage an independent assessment body (AB) to evaluate their change management. The AB must provide ‘an independent assessment of the correct application of the risk management process’ and a safety assessment report as evidence.

The AB role is a new requirement under ROGS and the underpinning EU Directive. The proposer must decide whether to accept the change as safe for entry into service “based on the safety assessment report provided by the assessment body”. The regulations require several pieces of evidence:

  • Change description – including the system it relates to. Used as the basis for significance testing, the results of which are also recorded.

  • System definition – key information describing the system being changed.

  • Hazards identified as associated with the change, together with evaluation and assessment of their risk.

  • Risk control measures to be applied for each risk using one or more of the three management methods above.

  • A system hazard record maintained along with the system definition as the project progresses.

  • Evidence that risk management has been applied throughout the change project.

  • For significant risks, the AB report.

The proposer retains responsibility for system safety and decides whether the system change is safe to enter service based on the report.

Legal separation

Change proposers may use whatever support they require to manage change and keep risks in check. The railways use consultants to advise and independently assess compliance, but the new AB role requires independence from the parties involved in the change. The United Kingdom Accreditation Service (UKAS) programme for accreditation of ABs assesses legal separation, impartiality and independence. The initial pilot was open to accredited railway notified and designated bodies, and was then extended to others. The accreditation regime for ABs came into force in May 2015, when the last of the 2013 ROGS amendments took effect, and the ORR maintains a watching brief to ensure accreditation is effective and change management remains controlled. In the meantime, dutyholders are expected to use existing safety management systems to verify the safety of changes to vehicles and infrastructure.

Industry bodies such as the ORR and the Rail Safety and Standards Board have produced guidance on implementing the new requirements, and conformity assessment bodies are working with new and existing clients to explain them. The overarching aim for managing change in the complex UK railway system is “a better railway for a better Britain” and “everyone home safe, every day”.

This article is adapted from ‘Managing Change’ – published in the November 2014 edition of IIRSM’s Insight magazine.


Fail to control design = designed to fail?

I recently had a very specific query about the design process and effectiveness measures for management review in a medical devices environment.

The question was specifically about demonstrating that outputs of the design process match inputs, and whether this was acceptable to present to the top management team. It also concerned ISO 13485, the quality management standard for medical device manufacturers, but for this article I have broadened the field to cover any organisation looking to implement effective design controls, as many of the points read across to other sectors. Clause references here are aligned with ISO 9001:2015 rather than ISO 13485, but the principles apply wherever you use design control and can be applied to any core process.

Design control is generally one of the least understood areas of how organisations bring products and services to market. Design plays a fundamental role in determining how well products and services perform and whether they deliver customer satisfaction, both at the point of delivery and throughout their useful life. You only have to follow media stories of product recalls and regulator intervention to see that product designers in the automotive, aerospace and consumer goods sectors, as well as service designers, particularly in financial services, have ‘designed in’ risk and failure, leading to huge liabilities for their organisations. The individuals involved did not create these liabilities deliberately; they simply did not have effective controls in place for their work.

So, to report on effectiveness to the top team, you first have to be clear what the design process is and what it gives your organisation – all covered in clause 4.4 of the standard, with further detail in clause 8.3. By looking at the design process and identifying the criteria and methods needed for its effective operation (4.4.1 c), you should be able to identify critical success factors (CSFs) for design – generally covering the three areas of quality, cost and delivery (QCD), as for any project management activity – but more of this later. You can do this as a quality specialist ‘looking in’, but it is far more effective to work with those involved in designing products or services and gain their views of what ‘good’ design control looks like.

These CSF requirements are used by your organisation to monitor, measure and analyse the design process (9.1.1) and should help you to establish objectives (6.2.1) and design process measures to demonstrate the process is working effectively.

The original question suggested matching design outputs to design input requirements, covered in clause 8.3.5 – a good starting point, but what you actually report at a management review needs careful consideration.

Generally, design effectiveness is measured by how well the product meets requirements – covered in clause 8.3.5 of ISO 9001 – the ‘Q’ part of QCD. Useful measures include:

Internal (design process) measures:

  • design review results

  • results of component and prototype testing (verification activities)

  • field (including clinical) trials (validation outputs)

Internal (company) measures, after design – manufacturing:

  • right-first-time measures – how easy is it to make the product / deliver the service?

  • scrap and rework at new product / new service introduction – a measure of how robust the new design is

External to the company:

  • warranty claims

  • complaints

  • field data on product effectiveness (clinical use)

Design process efficiency could be reported by:

  • achievement of budget (the ‘C’ part)

  • on time delivery of new products to market against original timing plan (the ‘D’)

Altogether these measures would demonstrate how well the design process is working.

As for presenting this at management review, as the initial question asked: with the best will in the world, top management (whose involvement ISO 9001 requires) have limited time to spend reviewing subjects like quality, which they rarely see as “sexy”. You need to produce an edited-highlights version of the design measures that will hold their attention. A dashboard using a traffic-light system, with the ability to drill down into the detail, should help; if you can assign pound notes to any of the measures, that helps even more!
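One way to build such an edited-highlights view is a simple red/amber/green roll-up of the QCD measures. The sketch below is purely illustrative: the measure names, targets and amber margin are invented for the example, not taken from any standard; in practice you would use your own CSFs and agreed limits.

```python
# Minimal traffic-light (RAG) dashboard sketch for design-process
# measures. Measure names, targets and the 10% amber margin are
# invented assumptions for illustration.

def rag(actual: float, target: float, amber_margin: float = 0.10) -> str:
    """Green if on/above target, amber if within the margin, else red.
    Assumes 'higher is better'; invert the comparison for cost-type
    measures where lower is better."""
    if actual >= target:
        return "GREEN"
    if actual >= target * (1 - amber_margin):
        return "AMBER"
    return "RED"

# (measure, actual, target) – a hypothetical quarter's results
measures = [
    ("Right first time %",       94.0, 95.0),    # Q
    ("Design reviews on plan %", 100.0, 100.0),  # Q
    ("Budget achievement %",     82.0, 100.0),   # C
    ("On-time delivery %",       97.0, 95.0),    # D
]

for name, actual, target in measures:
    print(f"{name:<26} {actual:6.1f} vs {target:6.1f}  {rag(actual, target)}")
```

The status column gives the top team the at-a-glance view; the underlying actual-versus-target numbers are the drill-down detail behind each light.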