There is growing concern among owners and operators regarding the quality and consistency of the Process Hazard Analyses (PHAs) of their facilities. When a PHA is conducted, whether by an internal or a contracted facilitator, many factors can strongly influence the resulting data. These factors affect the integrity of the data and can lead to increased risk exposure in the facility.
While PHAs have been conducted across the industry for many years, there are still distinct differences in the quality and completeness of the data generated in each session. Using data analytics and visualizations, the industry can move from inconsistent data quality to quality awareness. This knowledge can reduce risk exposure by helping remove inconsistencies within an organization's PHA data. It can also increase the percentage of critical scenarios captured during a PHA session, reducing the likelihood of missing a high-risk scenario and the corresponding recommendation required to adequately address the risk. The concept can be applied to a newly purchased or newly built facility, providing a guideline from which to start the baseline PHA, or to the revalidation of an existing PHA.
To achieve these visualized analytics, the PHA data must undergo a series of transformations during mining that bring it into a comparable state. The data must be generalized so that specifics such as tag numbers are removed and comparisons can be drawn. It is then organized into subsections of process safety vulnerabilities, which summarize the threats to which the facility is exposed.
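The generalization and grouping steps described above could be sketched as follows. This is a minimal illustration, not the authors' actual pipeline: the tag-number pattern, the vulnerability categories, and their keywords are all hypothetical assumptions chosen for the example.

```python
import re
from collections import defaultdict

# Hypothetical mapping from keywords to generalized vulnerability
# categories; real categories and keywords would come from the
# organization's own PHA taxonomy.
VULNERABILITY_KEYWORDS = {
    "loss of containment": ["leak", "rupture", "release"],
    "overpressure": ["overpressure", "relief", "psv"],
    "ignition source": ["ignition", "spark", "hot work"],
}

# Assumed tag-number format, e.g. PV-101 or T-2045A; actual facilities
# may use different conventions.
TAG_PATTERN = re.compile(r"\b[A-Z]{1,3}-?\d{2,5}[A-Z]?\b")

def generalize(text: str) -> str:
    """Remove facility-specific tag numbers so scenarios can be compared."""
    return TAG_PATTERN.sub("<TAG>", text)

def categorize(scenarios):
    """Group generalized scenario descriptions into vulnerability subsections."""
    buckets = defaultdict(list)
    for s in scenarios:
        g = generalize(s).lower()
        matched = False
        for category, keywords in VULNERABILITY_KEYWORDS.items():
            if any(k in g for k in keywords):
                buckets[category].append(g)
                matched = True
        if not matched:
            buckets["uncategorized"].append(g)
    return dict(buckets)
```

With this sketch, a scenario such as "Leak from PV-101 flange" would be generalized to "leak from &lt;TAG&gt; flange" and filed under the loss-of-containment subsection, allowing equivalent scenarios from different facilities to be compared despite different tag numbers.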
From this generalized information, the most critical vulnerabilities across multiple facilities can be extracted for assessment in the next PHA. With the aid of a subject matter expert, it can also be determined whether any vulnerabilities, such as those revealed by prior incidents, were not considered in the PHA. This creates a more thorough PHA with greater confidence that most scenarios have been analyzed.
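The cross-facility extraction step could be sketched as a simple ranking of generalized vulnerability categories by how many facilities report them. The facility names and category lists below are illustrative placeholders, and a real analysis would likely also weight by severity rather than count alone.

```python
from collections import Counter

# Hypothetical input: per-facility lists of generalized vulnerability
# categories observed in past PHA sessions.
facility_vulnerabilities = {
    "facility_a": ["loss of containment", "overpressure", "loss of containment"],
    "facility_b": ["overpressure", "ignition source"],
    "facility_c": ["loss of containment", "overpressure"],
}

def rank_vulnerabilities(per_facility, top_n=3):
    """Rank vulnerability categories by the number of facilities in which
    they appear, so the most widespread threats can be prioritized in the
    next PHA or revalidation."""
    counts = Counter()
    for categories in per_facility.values():
        counts.update(set(categories))  # count each facility at most once
    return counts.most_common(top_n)
```

A ranking like this gives the PHA team and the subject matter expert a starting checklist of widespread vulnerabilities to confirm were considered, against which omissions, such as scenarios from prior incidents, can be spotted.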