AI: The great enabler

Tuesday, 28 October 2025

NEBOSH

Able to review and help refine critical work documents, artificial intelligence (AI) assistants can enhance occupational safety and health (OSH) deliverables, engendering proactive safety practices, encouraging positive behaviours and freeing up managers' time so they can focus on improving safety where it counts – on the frontline. NEBOSH talks to early adopters about AI's benefits and the considerations for its responsible use.


For Lucian D'Arco, an early AI champion at international engineering and construction contractor Zauner Anlagentechnik GmbH, the application of this advanced technology has been revolutionary.

As HSE Manager, he oversees 700 operatives representing 42 different trades that are responsible for delivering mechanical and electrical pipework and other infrastructure in a fast-paced, high-risk environment located in Norway.

With a significant number of lengthy risk assessment method statements (RAMS) and lifting plans to review before any construction work commences, D'Arco has deployed a range of AI assistants, such as ChatGPT, Grok and Gemini, to augment his existing OSH tools. He has found that the technology's rapid analysis and processing have freed up critical time, enabling him to focus on frontline operatives' safety.

Whereas previously it would have taken him hours to read meticulously through these detailed and weighty documents to check that the data and procedures were correct and met Norway's legislative requirements, an AI assistant can review them in a fraction of that time and then provide pointers for improvement.


Provide specific instructions

"With Chat GPT, I have specific project folders, and by providing very precise framing instructions, this allows the AI to run almost autonomously in terms of what I want," he says.

"My AI knows what I do because I edit all the time. It understands me, and it has that contextual memory that grows."

To give an example, D'Arco says he can ask his AI assistant (Aeris) to scan the work schedule and offer its thoughts on emerging risks and hazards.

"In Norway, we're slowly heading towards the coldest part of the year, so I'll ask it, 'How do you feel about our winter working plan? Is it robust?' It indexes everything."

David Towlson, Director of Learning & Assessment at NEBOSH, who recently delivered a webinar on responsible AI in health and safety, concurs that OSH professionals can build an "expert system" using AI to inform decisions positively. He caveats this, however, by insisting that managers must continually challenge the assistant's outputs.

"AI can add insights, but it can also miss things, so that means that when you delegate to [the technology] without any degree of oversight, you're not a participant in what's going on," he warns.

"The key to being human is your critical analysis skills [so] you shouldn't delegate everything. You can easily over rely on AI and believe everything it tells you, so you need to be accountable and that means checking it."

As D'Arco explains, "AI hallucinations" occur when the predictive technology tries to push an answer into an incorrect shape. "That's why your prompting must be so specific," he insists.

To give an example of how this works, he talks about giving a set of RAMS to two different safety professionals, each using AI. One drops the documents into the AI tool blindly without issuing any instructions, and the other informs the assistant by building up a library of framework expectations and loading it with relevant industry standards.

"I've loaded about 150 sets of RAMS that we've reviewed, and that's all developed into Aeris' understanding of what is good, what is bad and what is missing," he says.

"It gives me seven areas to focus on. Then once it's done, I'll go back into the RAMS and, for example, if it hasn't reviewed hot works correctly even though it was in there, I'll pop it back in and it'll redo the assessment. You have to set up your AI to output these grey areas."

Proactive safety practices

Karl Simons OBE is the co-founder and Chief Futurist for FYLD, a startup business that has deployed its AI solution with more than 270 organisations worldwide. One of the benefits he associates with the technology's deployment is a shift away from reactive safety practices. He cites the way AI motivates individuals to move to the actual point of work to record activities.

"Through having a system that enables the fieldworker to record what they say and what they see, AI algorithms can then analyse the unstructured data received, through language models and computer vision, to then auto-populate a report that is sent back to the fieldworker to aid them in completing their assessment," he explains.

Because AI often "brings risks to the surface" that would otherwise be missed, Simons argues, the tangible benefit of making the operative's work easier encourages adoption of the digital system and its continued use.
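
FYLD does not publish its internals, so the following is only a sketch of the general shape Simons describes (unstructured speech and video in, a structured draft assessment out); every function and field name here is hypothetical:

```python
# Illustrative sketch of the pipeline shape Simons describes; the
# function names and fields are hypothetical, not FYLD's implementation.
from dataclasses import dataclass, field

@dataclass
class DraftAssessment:
    transcript: str
    observed_hazards: list[str] = field(default_factory=list)
    surfaced_risks: list[str] = field(default_factory=list)

def transcribe_audio(recording_path: str) -> str:
    # Stand-in for a speech-to-text / language model service.
    return "Excavation next to live carriageway, wet ground underfoot."

def detect_hazards(video_path: str) -> list[str]:
    # Stand-in for a computer-vision model run over site footage.
    return ["open excavation", "vehicle movement", "standing water"]

def surface_risks(transcript: str, hazards: list[str]) -> list[str]:
    # Stand-in for the analysis step that "brings risks to the surface".
    risks = []
    if "standing water" in hazards and "excavation" in transcript.lower():
        risks.append("Trench collapse risk raised by waterlogged ground.")
    return risks

def build_draft(recording: str, footage: str) -> DraftAssessment:
    transcript = transcribe_audio(recording)
    hazards = detect_hazards(footage)
    return DraftAssessment(transcript, hazards, surface_risks(transcript, hazards))

# The draft goes back to the fieldworker to confirm, edit and sign off.
print(build_draft("site_audio.wav", "site_video.mp4"))
```

Whatever the real models involved, the pattern is the same: the machine drafts, and the fieldworker confirms.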

This all sounds very positive, but what about small-to-medium-sized enterprises that don't have the budgets of the larger operations that have embraced AI? What practical steps can they take to assess any AI-related risks, especially if they lack in-house technical expertise?

As a starting point, Towlson points these smaller operations to the EU Artificial Intelligence Act (https://artificialintelligenceact.eu), which applies a risk-based framework and is similar to the EU's approach to product safety legislation in that it specifies different risk categories and the required controls.

"It's about using it proportionately," he adds, "but each organisation has to really think about what they are trying to assess at the end of the day."

One of the big "game changers" is AI's ability to potentially prevent incidents from happening. Simons points to the intelligence drawn from wearable devices, equipment sensors, digital risk assessments and human factors, and how AI algorithms can identify risk precursors before any harm occurs.

"Through the use of Application Programming Interface (API) integrations, linking into internal and external systems of record, software solutions can connect, surface and push to fieldworkers information that influences their perception of the risk they face in real-time," he says.

Simons points to a fieldworker's handheld device and its GPS as an example. By connecting this to the Met Office's weather monitoring system, the AI tool can assist the operative in identifying any important changes in the weather during the day, such as sudden high winds, which would affect the safety of any planned lifting operations.
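
In code terms, that integration is essentially a threshold check over a forecast feed. The sketch below uses Python's requests library against a placeholder endpoint; the URL, JSON field and wind limit are illustrative assumptions, not the Met Office's actual API:

```python
# Sketch of a weather-aware alert for planned lifting operations.
# The endpoint, JSON field and wind threshold are placeholders; a real
# integration would use the Met Office's documented, authenticated API.
import requests

WEATHER_ENDPOINT = "https://example.invalid/forecast"  # placeholder URL
WIND_LIMIT_MS = 10.0  # example limit; actual limits are equipment-specific

def check_lifting_conditions(lat: float, lon: float) -> str | None:
    """Return an alert string if forecast gusts exceed the lifting limit."""
    resp = requests.get(WEATHER_ENDPOINT, params={"lat": lat, "lon": lon}, timeout=10)
    resp.raise_for_status()
    gust = resp.json()["wind_gust_ms"]  # placeholder field name
    if gust > WIND_LIMIT_MS:
        return f"Forecast gusts of {gust} m/s exceed the {WIND_LIMIT_MS} m/s lifting limit."
    return None

# Example usage (once a real endpoint is configured). In the scenario
# Simons describes, the coordinates would come from the handheld's GPS
# and any alert would be pushed straight to the fieldworker's device:
#     alert = check_lifting_conditions(59.91, 10.75)  # e.g. Oslo
#     if alert:
#         print(alert)
```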

"Rather than treating them as passive data points, leading AI systems involve workers in shaping the models, providing input on task complexity, environmental conditions and perceived hazards," he argues.
"This two-way relationship not only improves model accuracy, but it also deepens worker engagement. It creates a feedback culture in which frontline experience informs safety strategy."

For D'Arco, there is no going back on AI's role in OSH decisions. "As a health and safety professional, I don't know everything. I know what I don't know, but I know where to look, and having AI as my sidekick has optimised everything I do," he says.

"AI isn't going to replace humans. A human using AI will replace the human not using AI. It's as simple as that."