Anyone who has been in the oil and gas sector for the last 20 years or so has seen vast improvements in the use of technology to advance operational efficiency and reliability. Improved assessment of the operational readiness of equipment and systems is one of the great stories of the 21st century. So why are we still using archaic risk assessment methodologies as attestation that we are ‘safe to start’ and ‘safe to operate’? These outdated methods include static, manual, and ‘just-in-time’ risk assessments and registers.
Here is a short self-assessment on operational risk management:
- Is your organisation still dependent on spreadsheets, studies, dashboards, etc. that were done six months ago, or longer?
- Has anything changed within your plant, facility or operations since these studies or reports were completed?
- Are you dependent on risk management specialists, such as risk engineers, to evaluate and update your operational risk assessments to qualify barrier health?
If you answered ‘no’ to all of the questions above, congratulations! From our interviews with prospective clients over the past ten years, we can say that you are in the extreme minority of oil and gas organisations. While many have embraced new hardware and software solutions to improve both the quality and quantity of the data used for predictive analytics, preventative and reparative maintenance, and operational efficiency, very few have invested in similar solutions to manage operational or process safety risks.
An Outdated, Inconsistent System
Enterprise software systems, such as those that are used to track training, competency and certification, are typically ‘stand-alone’ arrangements that are not integrated with the risk management process, and they certainly cannot transmit information back to a maintenance management system that says, “Hey, Joe just retired and the new guy has no experience with this equipment.”
Similarly, distributed control systems (DCS) typically provide data on the operational efficiency and health of the equipment they are part of. However, human factor data is not usually included in the data feed used to evaluate process safety risk and maintenance management systems, yet human error is widely recognised as one of the key contributory causes of process safety incidents. The systems that we have created to manage operational efficiency, competency, maintenance and training are not being used effectively to predict and prevent process safety incidents.
Risk is a cumulative assessment process, yet we continue to use stand-alone, outdated processes to determine the safety to start or operate. Why is this?
Perhaps the drivers that influence updating our other systems focus more on optimising operational efficiency, i.e. our ability to maximise productivity, than on evaluating the risk (severity x probability) of catastrophic failure due to a process safety incident. Almost certainly one of the factors is the plethora of methodologies and acronyms used across the oil and gas sector – bowties, HAZIDs, HAZOPs, ENVIDs, risk registers, pre-start up safety reviews (PSSR), project HSE and security reviews (PHSSER), human factors assessment tools (HFAT), process hazard analysis (PHA), to name just a few.
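The severity x probability calculation mentioned above can be made concrete with a minimal sketch. The 1-5 scales and the band thresholds here are illustrative assumptions, not drawn from any particular industry standard:

```python
# Minimal illustration of risk scoring as severity x probability.
# The 1-5 scales and band thresholds are illustrative assumptions,
# not taken from any industry standard or named methodology.

def risk_score(severity: int, probability: int) -> int:
    """Return risk as the product of severity and probability (each 1-5)."""
    if not (1 <= severity <= 5 and 1 <= probability <= 5):
        raise ValueError("severity and probability must be in 1..5")
    return severity * probability

def risk_band(score: int) -> str:
    """Map a 1-25 score onto a simple traffic-light band."""
    if score >= 15:
        return "high"
    if score >= 6:
        return "medium"
    return "low"

# A catastrophic (5) but fairly likely (4) scenario scores 20: high risk.
print(risk_band(risk_score(5, 4)))  # -> high
```

The point of even this toy version is that the score is only as current as its inputs: if probability was last estimated six months ago, so was the risk.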
We have seen a wide spectrum of processes being used, or not used, across the oil and gas sector, rather like an ‘a la carte’ breakfast bar. I believe this contributes to the confusion and the lack of standardisation of risk management processes across the sector. The importance of such standardisation is perhaps best understood by looking at two major process safety incidents from 2010, together with several key causal factors and recommendations provided by the United States Chemical Safety Board (US CSB).
So, what are the answers to these dynamic and complex situations that are present in the industry?
First, the use of standardised tools to evaluate risk across the sector would be a good starting point. This includes the suite of tools recommended by API and the International Association of Oil and Gas Producers. In addition, evaluating process safety barriers using methodologies such as James Reason’s barrier model, frequently referred to as the ‘Swiss cheese’ model, provides a standardised framework against which direct and contributory factors can be assessed using software applications.
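The intuition behind Reason’s model can be sketched numerically: an incident occurs only when the ‘holes’ in every barrier line up, so if barriers are assumed independent, the incident probability is the product of the individual barrier failure probabilities. The barrier names and numbers below are purely illustrative:

```python
import math

# Hypothetical sketch of the 'Swiss cheese' idea: an incident requires
# every barrier to fail, so (assuming independent barriers) the incident
# probability is the product of the individual failure probabilities.
# Barrier names and values are illustrative, not from any real facility.

def incident_probability(failure_probs):
    """Probability that all barriers fail at once, assuming independence."""
    return math.prod(failure_probs)

barriers = {
    "engineering_design": 0.01,   # inherent design controls
    "alarms_and_trips":   0.05,   # instrumented protection
    "procedures":         0.10,   # operating discipline
    "human_response":     0.20,   # operator intervention
}

p = incident_probability(barriers.values())
print(f"{p:.2e}")  # -> 1.00e-05
```

The multiplicative structure is why degraded barrier health matters so much: letting any one barrier’s failure probability drift upward raises the overall incident probability by the same factor, which is exactly what a static, six-month-old register cannot show.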
Through technology integration software, operators and producers can use risk assessment software that accepts millions of inputs, including static registers and studies, DCS data, daily operational inputs, and maintenance and enterprise software systems. The volume of data is tremendous, but the output, in the form of dashboards and registers, is simplified so that operations personnel can manage risk at the local level without needing a risk engineer to interpret the data.
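The roll-up described above can be sketched as a simple reduction: heterogeneous inputs are collapsed to a single status per barrier, so front-line staff see a traffic light rather than raw feeds. Everything here (input names, the worst-status rule) is a hypothetical illustration, not any vendor’s actual data model:

```python
# Hypothetical roll-up of heterogeneous inputs into per-barrier statuses.
# Input names and the 'worst input wins' rule are illustrative assumptions.

SEVERITY = {"green": 0, "amber": 1, "red": 2}

def barrier_status(inputs):
    """A barrier is only as healthy as its worst contributing input."""
    return max(inputs.values(), key=lambda status: SEVERITY[status])

feeds = {
    "alarms_and_trips": {"dcs_trip_test": "green", "deferred_maintenance": "amber"},
    "human_response":   {"competency_register": "red", "staffing_level": "green"},
}

dashboard = {barrier: barrier_status(inputs) for barrier, inputs in feeds.items()}
print(dashboard)  # -> {'alarms_and_trips': 'amber', 'human_response': 'red'}
```

Because each status is recomputed whenever an input changes, the dashboard stays current without a specialist re-running studies, which is the contrast with static registers drawn earlier in the article.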
The most sophisticated software applications are now cloud-based so that data can be accessed anywhere and at any time, as long as there is an internet connection. This enables front-line operation supervisors to objectively consult with senior level management because everyone is able to view the data together in real time from the cloud. Because inputs are continuous and based on actual operating conditions in the facility, risks can be viewed in real time and decisions can be made based on objective data versus subjective opinions.
RiskPoynt’s® safety software application is one such technology solution. While other software solutions exist, very few have designed their products to accept inputs that evaluate human factors, safety critical equipment, deferred maintenance, operational inputs, DCS, enterprise software feeds, and static risk data from risk registers, HAZIDS, HAZOPS, ENVIDS, PHAs and the rest. Based on conversations with major software and hardware developers, there is a consensus that we are currently in an augmented information technology (IT) space where humans make critical safety decisions based on the information provided through such systems.
But this will undoubtedly evolve into the next generation of information technology, cognitive IT, in which the software makes critical decisions, such as shutting down systems based on the data, without the need for human intervention. Before this can occur, though, the industry will need to evaluate the current standards, the hardware and software that can provide the data, and the human-machine interface upon which critical decisions, whether augmented or cognitive, are made. The first steps are to embrace the technology and improve existing processes and systems, using modern tools to present objective information so that competent people can make the right decisions.