In plants and critical infrastructure, functional safety and cybersecurity have long moved in parallel lanes. Today they must intersect. While Safety Integrity Levels (SIL), HAZOP, and LOPA remain the foundation of process safety, adversary tactics now shape demand rates, independence of protection layers, and the credibility of safety functions. Bridging safety and cyber risk is no longer optional—it is central to risk targets, design choices, and operating discipline.

Why Cyber Belongs In Functional Safety Conversations
Functional safety frameworks were built to manage random hardware failures and foreseeable process deviations. Target SILs, proof test intervals, and LOPA credits assume components fail in statistically predictable ways and that independent protection layers (IPLs) behave as designed. Cyber risk changes those assumptions:
- Attackers can intentionally trigger hazardous states (increasing the effective demand rate on IPLs).
- Malicious actions can impair or bypass IPLs (reducing or nullifying LOPA credits).
- Adversaries can coordinate failures across layers, breaking assumed independence.
When cyber events drive process upsets or degrade safeguards, the classical inputs to SIL and LOPA need fresh scrutiny. Treating cyber as a separate risk silo invites blind spots; integrating threat modeling helps maintain credible risk reductions.
From HAZOP To Threat Models: A Shared Language
Safety engineers analyze deviations from intent; threat modelers analyze deviations from trust. The bridge is closer than it appears.
Map Guidewords To Cyber-Initiated Causes
HAZOP guidewords such as “No,” “More,” “Less,” “Reverse,” and “As well as” can be extended to include cyber-initiated scenarios. For example:
- “More flow” due to a compromised PID controller driving setpoint bias
- “No pressure relief” if the relief path’s SIS final element is taken offline through unauthorized configuration changes
- “Reverse flow” enabled by malicious override of interlocks
This translation preserves HAZOP discipline while acknowledging that the initiating cause can be adversarial, software-based, or network-borne rather than purely mechanical.
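The mapping can be maintained as a living worksheet alongside the HAZOP record. A minimal sketch in Python, where the guideword/parameter pairs and candidate causes are illustrative placeholders rather than a complete library:

```python
# Illustrative extension of a HAZOP worksheet with cyber-initiated causes.
# Guidewords, parameters, and causes below are examples, not a complete set.

HAZOP_CYBER_CAUSES = {
    ("More", "flow"): [
        "Compromised PID controller applies setpoint bias",
        "Unauthorized write raises controller output limit",
    ],
    ("No", "pressure relief"): [
        "Unauthorized configuration change takes SIS final element offline",
    ],
    ("Reverse", "flow"): [
        "Malicious override of anti-reverse-flow interlock",
    ],
}

def cyber_extended_deviations(guideword: str, parameter: str) -> list[str]:
    """Return candidate cyber-initiated causes for a HAZOP deviation."""
    return HAZOP_CYBER_CAUSES.get((guideword, parameter), [])
```

Keeping the mapping in version control lets the HAZOP team and the security team review the same artifact during revalidation.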
Align With Structured Threat Frameworks
Pair HAZOP deviations with threat techniques (e.g., MITRE ATT&CK for ICS), attack trees, or STRIDE. For each hazardous deviation, identify feasible adversary paths capable of:
- Increasing the frequency or magnitude of the deviation
- Degrading the detectability of the deviation
- Impairing mitigation or recovery functions
Documenting this tie-in keeps the safety case rooted in evidence and avoids abstract cyber fears.
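Attack trees lend themselves to simple tooling. The sketch below evaluates an illustrative tree for a “more flow” deviation, multiplying feasibility estimates through AND gates and taking the easiest branch at OR gates; the node names and probabilities are assumptions for demonstration, not measured data:

```python
from dataclasses import dataclass, field

@dataclass
class AttackNode:
    name: str
    gate: str = "LEAF"          # "AND", "OR", or "LEAF"
    p: float = 0.0              # leaf feasibility estimate in [0, 1]
    children: list["AttackNode"] = field(default_factory=list)

def feasibility(node: AttackNode) -> float:
    """Roll up leaf feasibility: AND multiplies (all steps required),
    OR takes the maximum (adversary picks the easiest path)."""
    if node.gate == "LEAF":
        return node.p
    child_ps = [feasibility(c) for c in node.children]
    if node.gate == "AND":
        out = 1.0
        for cp in child_ps:
            out *= cp
        return out
    return max(child_ps)

# Hypothetical tree for the "more flow" deviation
tree = AttackNode("drive more flow", "OR", children=[
    AttackNode("phish EWS then bias setpoint", "AND", children=[
        AttackNode("phish engineering workstation", p=0.3),
        AttackNode("write biased setpoint", p=0.8),
    ]),
    AttackNode("exploit exposed HMI", p=0.05),
])
```

Even rough numbers force the team to state which path dominates and why, which is exactly the evidence the safety case needs.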
LOPA Meets The Adversary: Quantifying Cyber-Influenced Risk
LOPA’s strength is its disciplined accounting of initiating event frequency and IPL credit. Cyber affects both.
Initiating Event Rates Under Cyber Pressure
If an attacker can drive a process variable into an unsafe range, the initiating event frequency rises above what historical failure data alone would predict. Two practical paths exist:
- Adjust initiating event frequencies to reflect credible cyber-initiated demands (using scenarios and likelihood bands derived from threat modeling).
- Introduce a separate “cyber-induced demand” stream for each relevant scenario and add it to the base rate.
Either method should be justified by threat feasibility, exposure windows, and control-system attack surface.
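The second path reduces to simple arithmetic: the historical base rate and the per-scenario cyber demand estimates are summed to give the frequency used in LOPA. A sketch, with illustrative scenario names and numbers:

```python
def total_initiating_frequency(base_rate: float,
                               cyber_demands: dict[str, float]) -> float:
    """Combine the historical initiating event rate (per year) with
    cyber-induced demand streams estimated from threat modeling.
    Scenario names and rates here are placeholders."""
    return base_rate + sum(cyber_demands.values())

f_total = total_initiating_frequency(
    base_rate=0.1,  # historical demands per year
    cyber_demands={
        "setpoint bias via compromised EWS": 0.02,
        "interlock override via exposed HMI": 0.01,
    },
)
```

Recording each stream separately keeps the justification auditable: when the attack surface changes, only the affected stream needs re-estimation.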
IPL Credit And Independence In A Connected Plant
Independence is not only physical—it is logical, administrative, and communication-based. An IPL controlled by the same network segment, engineering workstation, or credential store as the BPCS may not be independent under cyber pressure. Reassess credits when:
- Two layers share the same safety PLC family with identical security posture
- Common engineering tools or patch servers can reconfigure multiple layers
- One compromise path can impair sensing, logic, and final elements
Where independence is uncertain, reduce IPL credit or add compensating controls that restore separation.
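One conservative way to encode this reassessment: tag each IPL with its shared dependencies (network segment, engineering workstation, credential store) and, when two layers share a tag, credit only the stronger of them. A sketch with hypothetical layers and PFD values:

```python
def mitigated_frequency(f_init: float,
                        ipls: list[tuple[str, float, set[str]]]) -> float:
    """LOPA-style mitigated event frequency (per year).

    ipls: (name, PFD, shared-dependency tags). When two layers share a
    tag, only the layer with the lowest PFD keeps its credit -- a
    conservative stand-in for lost independence under cyber pressure."""
    credited: list[tuple[float, set[str]]] = []
    for _name, pfd, deps in sorted(ipls, key=lambda t: t[1]):  # best PFD first
        if any(deps & cdeps for _cpfd, cdeps in credited if deps and cdeps):
            continue  # shares a dependency with an already-credited layer
        credited.append((pfd, deps))
    f = f_init
    for pfd, _deps in credited:
        f *= pfd
    return f

# Hypothetical example: the BPCS alarm and the SIS trip share an
# engineering workstation, so only the SIS keeps its credit.
f = mitigated_frequency(0.13, [
    ("BPCS high-flow alarm + operator", 0.1, {"shared_ews"}),
    ("SIS trip", 0.01, {"shared_ews"}),
    ("relief valve", 0.01, set()),
])
```

The same calculation with three fully independent tags restores the full credit, which makes the cost of shared infrastructure visible in the numbers.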
SIL Targets Under Cyber Conditions
SIL targets arise from risk tolerability criteria, initiating frequencies, and IPL credits. If cyber raises demands or weakens credits, the residual risk may require a higher target or a redesign that restores true independence. Consider:
- Treat cyber impairment of a safety function as a systematic fault class. Design measures that make the SIS resistant to unauthorized logic changes, unsafe write operations, and spoofed inputs.
- The probability of failure on demand (PFDavg) may stay acceptable only if the SIS is kept verifiably independent—physically and logically—from the BPCS and external networks.
- Where independence cannot be guaranteed, revisit the allocation of safety functions across layers and consider adding a mechanical or hardwired layer that is out of band from digital routes.
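For low-demand functions, the arithmetic behind a re-derived target is straightforward: the required PFDavg is the tolerable event frequency divided by the (possibly cyber-inflated) demand rate, and the SIL band follows from the IEC 61508 PFDavg ranges. A sketch:

```python
def required_sil(f_demand: float, f_target: float) -> int:
    """Low-demand SIL band from required PFDavg (IEC 61508 bands).

    f_demand: demand rate on the SIF (per year), after other IPL credits.
    f_target: tolerable frequency of the hazardous event (per year)."""
    pfd_required = f_target / f_demand
    if pfd_required >= 1e-1:
        return 0  # no SIL-rated function needed
    for sil, band_floor in ((1, 1e-2), (2, 1e-3), (3, 1e-4), (4, 1e-5)):
        if pfd_required >= band_floor:
            return sil
    raise ValueError("required risk reduction exceeds SIL 4; redesign needed")
```

If cyber-induced demands push the rate from 0.1 to 0.13 per year, the required PFDavg tightens proportionally, and a function near a band boundary can move up a SIL.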
Architectural Independence And SIS Resilience
A credible safety case under cyber conditions depends on architecture. Common patterns include:
- Rigid network segmentation with minimal, tightly controlled conduits; no routable paths from corporate IT to SIS
- Unidirectional gateways for required data flows out of SIS zones
- Dedicated safety engineering workstations under strict access control and change management
- Cryptographic signing of safety logic, with enforced verification prior to download
- Sensor diversity and voting logic that can tolerate a subset of spoofed or biased inputs
These choices support both the independence assumptions in LOPA and the systematic integrity requirements in IEC 61508/61511, while aligning with ISA/IEC 62443 zones and conduits.
Verification, Validation, And Proof Under Attack
Traditional proof tests reveal dangerous undetected failures. In a connected environment, tests must also address malicious pathways and configuration drift.
Extend Test Coverage To Cyber-Relevant Failure Modes
- Verify logic and configuration integrity: compare running SIS logic against a signed baseline, validate safety-related parameters, and log variances.
- Exercise trip paths with cyber-affected sensors and logic in mind; confirm that spoof-resistant validation (e.g., plausibility checks, cross-sensor correlation) operates as intended.
- Incorporate secure boot, firmware signature checks, and tamper-evident commissioning steps into validation scripts.
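The first of these checks can be automated. A sketch of a baseline integrity comparison using HMAC-SHA256 with constant-time comparison; in a real deployment the signing key would live in an HSM or the vendor's signing service, never in application code:

```python
import hashlib
import hmac

def verify_logic_baseline(running_image: bytes,
                          signed_digest: str,
                          key: bytes) -> bool:
    """Compare the running SIS logic image against a signed baseline
    digest. Constant-time comparison avoids timing side channels.
    The key handling here is simplified for illustration."""
    expected = hmac.new(key, running_image, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, signed_digest)
```

Run on a schedule, logged variances become the trigger for the MOC and incident response processes described below.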
Patch And Vulnerability Management That Respects Safety
Not all patches can be immediate, but not all can wait. Use risk-based deferrals with compensating controls:
- Temporarily increase monitoring sensitivity and operator guidance while deferring changes
- Shorten proof test intervals on affected safeguards
- Apply virtual patching at the network boundary and tighten access controls until maintenance windows open
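Deferral decisions benefit from an explicit, auditable rule. The triage sketch below is illustrative only; the thresholds and outcome strings are assumptions, not guidance from any standard:

```python
def patch_decision(cvss: float,
                   asset_exposed: bool,
                   affects_safeguard: bool) -> str:
    """Illustrative OT patch triage. Inputs: CVSS base score, whether
    the asset is reachable from less-trusted zones, and whether the
    vulnerability touches a credited safeguard."""
    if affects_safeguard and (asset_exposed or cvss >= 9.0):
        return "emergency change with MOC"
    if asset_exposed and cvss >= 7.0:
        return "next maintenance window + virtual patch at boundary"
    return "defer with compensating monitoring"
```

Whatever the actual rule, writing it down makes each deferral defensible after the fact.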
Operations, Monitoring, And Incident Response
Operations teams now monitor for both process excursions and security events that might cause or mask those excursions. Practical steps include:
- Tie anomaly detection to process context: alarms on controller mode flips, sudden setpoint changes, or unusual write operations to safety parameters
- Establish safe-state triggers for security severity thresholds; if conditions suggest loss of view or control integrity, a conservative transition to a known safe state should be authorized and practiced
- Align incident response runbooks with operating procedures and MOC; rehearse cyber-to-safety escalation paths the same way emergency shutdowns are drilled
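The first step can be prototyped as a simple event filter. The event schema, tag names, and the "authorized_ews" source label below are assumptions for illustration:

```python
# Hypothetical tag names for safety-related parameters
SAFETY_PARAMETER_TAGS = {"SIS.TripSetpoint", "SIS.BypassEnable"}

def security_alarms(events: list[dict]) -> list[str]:
    """Flag OT events that warrant a joint safety/security alarm.
    Each event is assumed to carry 'type', 'tag', and 'source' fields."""
    alarms = []
    for e in events:
        if e["type"] == "mode_change" and e.get("source") != "authorized_ews":
            alarms.append(f"controller mode flip from {e.get('source')}")
        if e["type"] == "write" and e.get("tag") in SAFETY_PARAMETER_TAGS:
            alarms.append(f"write to safety parameter {e['tag']}")
    return alarms
```

The point is the coupling: the rule set is derived from the LOPA assumptions, so a triggered alarm maps directly to a threatened safeguard.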
Standards, Assurance, And Evidence
- IEC 61508/61511: lifecycle rigor for safety functions, including systematic capability and independence
- ISA/IEC 62443: security levels, zones/conduits, and policy/technical controls mapped to OT
- NIST SP 800-82: practical guidance for control-system security posture
Blend the evidence sets. Safety cases should reference security controls that protect the assumptions behind initiating frequencies and IPL credits; security assessments should cite safety consequences when prioritizing mitigations.
RACI And Roles
Bridging disciplines works best when responsibilities are explicit. A simple pattern often proves effective:
- Safety Manager: owns the safety case, SIL targets, and proof strategy
- Control Systems Lead: owns architecture, segregation, and SIS implementation integrity
- CISO/OT Security Lead: owns threat modeling, detection, and incident response that could affect safeguards
- Operations Manager: owns procedures, drills, and plant-wide adoption
In practice, the Safety Manager owns the safety case and the OT Security Lead owns the threat model, with a standing review to keep the two aligned. This shared accountability ensures that risk assumptions are consistent, evidence is traceable, and lifecycle changes keep both domains in sync.
Data, Logging, And Traceability
A defensible safety case under cyber scrutiny depends on trustworthy records:
- Immutable logs of safety logic downloads, configuration changes, and bypasses
- Time-synchronized process and security telemetry for post-event reconstruction
- Clear linkages from LOPA assumptions to monitoring rules and alarm philosophy
When an event occurs, the plant should be able to show what changed, by whom, through which access path, and how the safety case remains valid.
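A hash-chained log is one lightweight way to make such records tamper-evident: each entry carries the digest of its predecessor, so any retroactive edit breaks the chain. A sketch:

```python
import hashlib
import json

def append_entry(log: list[dict], record: dict) -> dict:
    """Append a record to a hash-chained audit log. Each entry embeds
    the SHA-256 of the previous entry's content."""
    prev_hash = log[-1]["hash"] if log else "0" * 64
    body = {"record": record, "prev": prev_hash}
    digest = hashlib.sha256(
        json.dumps(body, sort_keys=True).encode()).hexdigest()
    entry = {**body, "hash": digest}
    log.append(entry)
    return entry

def chain_valid(log: list[dict]) -> bool:
    """Recompute every digest; any edited or reordered entry fails."""
    prev = "0" * 64
    for entry in log:
        body = {"record": entry["record"], "prev": entry["prev"]}
        digest = hashlib.sha256(
            json.dumps(body, sort_keys=True).encode()).hexdigest()
        if entry["prev"] != prev or entry["hash"] != digest:
            return False
        prev = entry["hash"]
    return True
```

Production systems would add write-once storage and external anchoring, but even this structure turns "show what changed, by whom" into a mechanical check.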
Lifecycle And Change Control
Commissioning, expansion, and optimization projects are common sources of risk drift. Treat cyber-impacting changes as safety-relevant:
- Add threat model updates to MOC packages
- Reassess IPL independence when integrating new historians, remote access, or advanced analytics
- Revalidate SIS logic integrity after any tooling or firmware changes, not just logic edits
Vendor And Supply Chain Considerations
Safety and security posture extend to vendor tools, firmware, and cloud services:
- Require signed firmware, documented SBOMs, and vulnerability disclosure practices
- Confirm that vendor commissioning tools do not backdoor independence by bridging zones
- Establish clear patch delivery SLAs aligned to outage windows and safety priorities
Getting Started: A Practical Bridge
Plants do not have to rebuild from scratch. A focused sequence helps teams move quickly while protecting production:
- Pick one unit operation with a high-consequence SIF; extend its HAZOP with cyber-initiated causes and map those to threat techniques.
- Recalculate LOPA with cyber-induced demands and adjusted IPL credits where independence is uncertain.
- Identify two to three architectural or procedural changes that restore independence or add an out-of-band safeguard.
- Update the safety case and monitoring rules together, and run a joint drill that exercises both shutdown and incident response.
This pilot makes the link between theory and plant reality, creating a pattern that scales across units.
Conclusion: One Risk Picture, Many Specialists
Functional safety and cybersecurity remain distinct disciplines, but plants run better when they share the same risk picture. By extending HAZOP to include malicious causes, adjusting LOPA for cyber pressure, protecting SIS independence, and operating with joint procedures, safety engineers and security teams can meet risk targets with confidence.
The result is not just compliance—it is a system that keeps people, equipment, and production on track even when someone tries to push it off course.
