2026 EHS Tech Consensus: Interrogating the Vendor AI Data

Apr 30, 2026

Stack of three EHS survey reports from Cority, Enablon, and Quentic with minimalist brand-coded covers

Enablon, Quentic, and Cority just released their 2026 EHS maturity and technology surveys. They present a unified front: the industry is adopting AI, consolidating platforms, and increasing budgets.

But when you cross-examine their own data, the narrative collapses.

The data shows a massive gap between what people say they are doing and what they have actually built. While EHS professionals are enthusiastic about AI, they are not talking about the same thing. The surveys mix three completely different technologies under one label:

  • The Catalyst: A strategic tool for predicting and mitigating risk (identified as the top AI benefit by 30% of Enablon respondents).
  • The Copilot: An efficiency tool for low-value administrative tasks (valued by 26% of respondents in the Enablon survey).
  • The Agent: An autonomous system that makes and acts on decisions without waiting for a human to approve each step (which Cority found only 15% of respondents trust).

Because the surveys treat these three tools as the same thing, the results contradict themselves. Here is what the data actually says about our readiness to deploy predictive safety technology.

1. The Adoption-Maturity Paradox

The biggest contradiction in the data is the distance between "Usage" and "Integration."

While 97% of EHS professionals report using AI in some form, only 5% of organizations have AI fully embedded across their workflows (Cority). These figures come from different survey questions, so the 92-point gap isn't exact math. But it sends a clear signal: the distance between testing AI and actually embedding it is massive. If AI isn't embedded in your core system, the "usage" is likely superficial — like using ChatGPT to summarize a meeting.

These surveys measure excitement, not capability. The 97% usage figure lumps everything together. The professional running a live incident-prediction tool, the one looking at an automated dashboard, and the one asking a chatbot to write an email are all counted exactly the same. You cannot claim 97% adoption when Cority's own data shows 85% of the industry still relies on manual or disconnected tools. Because the survey mixes basic administrative automation with high-value risk prediction, you cannot use the data to prove that AI actually prevents incidents.

2. Who They Actually Asked

These reports reveal a massive disconnect between boardroom expectations and shop-floor reality.

Because the surveys primarily ask senior leaders (Cority surveyed 2,000 executives), they measure what executives believe, not what their organizations can actually do. This creates a blind spot: leadership assumes digital maturity is high, while the shop floor is still using paper.

Senior leaders report "strategic priorities," but frontline workers handle the actual data entry. When leaders call AI a "top priority," but Cority's own data shows 85% of the industry still relies on manual tools, the disconnect is obvious. The clean data required for AI is generated at the frontline, but these surveys reflect the optimistic vision of the executive suite. Any safety prediction is only as good as the data workers actually enter. If workers aren't using the tools, the leaders' claims about AI readiness are just noise.

3. Lagging Inputs for Leading Outputs

The industry claims it wants to predict the future, but it is exclusively digitizing the past.

According to Enablon, 30% of respondents say "Predict and Prevent" is the top benefit of AI. But there is a definition problem here: someone who calls an automated incident dashboard "predictive AI" will endorse this goal, even though they are just describing a lagging-indicator reporting tool. The 30% figure is inflated because people confuse basic automation with actual incident prediction. Even so, Enablon's own data exposes the reality: the assets being digitized first are Safety Data Sheets (58%) and static compliance records. Digitalization lags significantly in behavioral controls, such as permit-to-work systems (30%).

You cannot predict future incidents using static compliance documents. As any safety professional knows, lagging indicators—incidents, near-misses, and audit findings—only record what already went wrong. Predictive models require leading indicators: behavioral observations, permit-to-work compliance, and real-time equipment conditions. To actually predict incidents, organizations need dynamic, high-frequency behavioral data. You are effectively trying to build a weather forecasting model using only history books.
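To make the lagging-vs-leading distinction concrete, here is a deliberately toy sketch in Python. The feature names and weights are hypothetical illustrations, not any vendor's model: the point is only that a lagging report describes the past, while a predictive input has to consume current behavioral data.

```python
# Illustrative only: hypothetical feature names and arbitrary placeholder weights.

def lagging_report(incidents_last_quarter: int) -> str:
    """A lagging 'dashboard' can only describe what already happened."""
    return f"{incidents_last_quarter} incidents last quarter"

def leading_risk_score(observations_per_100_hours: float,
                       permit_compliance_rate: float,
                       overdue_equipment_checks: int) -> float:
    """A predictive input needs current behavioral data, not past incident counts.
    The weights are placeholders for whatever a trained model would learn."""
    score = 0.0
    score += max(0.0, 5.0 - observations_per_100_hours) * 0.2  # fewer observations -> more blind spots
    score += (1.0 - permit_compliance_rate) * 0.5              # permit drift is a leading signal
    score += min(overdue_equipment_checks, 10) * 0.03          # deferred maintenance accumulates risk
    return round(score, 2)

print(lagging_report(3))
print(leading_risk_score(observations_per_100_hours=2.0,
                         permit_compliance_rate=0.8,
                         overdue_equipment_checks=4))
```

Notice what the second function requires as inputs: none of it can be recovered from a static compliance document after the fact.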

Without this data, AI is just a high-priced reporting tool. The real markers of risk live in the messy data of daily worker behavior—exactly the data these platforms fail to capture. The problem isn't that modern EHS software lacks the features to capture this data. The problem is that Cority's data shows 85% of the workforce relies on manual tools instead of using the software. The barrier isn't the technology; it's that the tools are too hard to use in the field.

4. The Shadow AI Problem

So where is this "97% usage" actually coming from? The answer is Shadow AI — workers using public tools like ChatGPT outside the systems their employer has approved. According to Cority, 95% of leaders say their teams do exactly this. Yet only 5% of organizations have restricted it. This proves the point: you cannot compare someone asking ChatGPT to rewrite a sentence with an organization running a live incident-prediction tool, but the surveys count them exactly the same.

This lack of control creates massive legal risks, and the severity depends entirely on what workers are typing into the prompt. A worker using ChatGPT to rephrase a generic procedure is a usability problem. A worker feeding sensitive incident reports or contractor liability data into a public model is a major legal exposure. You have no record of where the data went, who owns it, or how the model is using it. The surveys don't distinguish between these cases. They call this "rapid adoption." In reality, it just means you have lost control of your data.

This "Shadow AI" usage isn't malicious hacking. It is a desperate cry for better software. Workers bypass approved tools because typing a 500-word hazard description into a clunky enterprise app takes ten minutes, while dictating it to ChatGPT takes ten seconds. But when a worker uses a public AI tool to rewrite a messy field observation, the critical details of the physical hazard are often deleted. The AI replaces facts with confident, fabricated text just to make the grammar flow. The "AI Copilot" isn't in your enterprise dashboard. It is in the worker's pocket, quietly changing your safety data.

5. The Budget vs. Infrastructure Problem

Organizations are buying the idea of AI, not the infrastructure to run it.

Quentic's data shows 82% of organizations expect AI budget increases, but Cority reports that 85% of the industry still relies on manual tools.

Organizations are attempting to build the roof before pouring the foundation. They are throwing money at AI models without fixing the connections between their data systems. You cannot run predictive safety models on a fractured foundation of spreadsheets and disconnected databases; any AI deployed on top of disconnected tools will just generate unreliable results. In any data project, roughly 80% of the work is cleaning the data before any analysis can start, and these reports suggest the EHS industry is trying to spend 100% of the budget on the final 20% of the work.

This problem won't last forever: AI will eventually be capable of reading messy, unstructured field notes directly. But that capability is not here at scale yet, and organizations that fix their data foundations now will be positioned to use those advanced tools when they arrive.
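What does that foundation work actually look like? Here is a minimal sketch, assuming CSV exports from two hypothetical disconnected systems (the column names and formats are invented for illustration). Before any model runs, someone has to map inconsistent dates, site names, and schemas onto one record format:

```python
import csv
import io

# Hypothetical exports from two disconnected systems with inconsistent schemas.
SPREADSHEET_A = "Date,Site,Description\n2026-03-02,Plant 1,slip near dock\n"
SPREADSHEET_B = "incident_date;location;details\n02/03/2026;plant-1;Slip near loading dock\n"

def normalize_date(raw: str) -> str:
    """Map both export formats onto ISO 8601 (assumes DD/MM/YYYY for system B)."""
    if "/" in raw:
        d, m, y = raw.split("/")
        return f"{y}-{m}-{d}"
    return raw

def load_a(text):
    for row in csv.DictReader(io.StringIO(text)):
        yield {"date": normalize_date(row["Date"]),
               "site": row["Site"].strip().lower().replace(" ", "-"),
               "description": row["Description"].strip().lower()}

def load_b(text):
    for row in csv.DictReader(io.StringIO(text), delimiter=";"):
        yield {"date": normalize_date(row["incident_date"]),
               "site": row["location"].strip().lower(),
               "description": row["details"].strip().lower()}

records = list(load_a(SPREADSHEET_A)) + list(load_b(SPREADSHEET_B))
# Crude de-duplication key: same site, same day. Real pipelines need fuzzier matching.
unique = {(r["date"], r["site"]): r for r in records}
print(f"{len(records)} raw records -> {len(unique)} after de-duplication")
```

Two rows describing the same slip collapse into one only after the dates and site names are normalized. Multiply this by every site, every legacy spreadsheet, and every vendor export, and the 80% figure stops looking like an exaggeration.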

Finally, there is a massive trust problem. AI use is near-universal, but only 15% of respondents trust autonomous AI to make decisions (Cority). If you deploy AI at scale but require human experts to verify 85% of its outputs, you haven't gained any efficiency. You have just created a new administrative bottleneck.

Summary Table of Conflicts

Metric | High-Level Claim | The "Hidden" Reality | What It Actually Means
AI Adoption | 97% usage (Cority) | 5% embedded (Cority) | The usage number is inflated because it mixes three different tools. The 5% figure is the only reliable baseline because it measures actual system integration.
Tooling | 82% expect budget growth (Quentic) | 85% rely on manual tools (Cority) | A massive infrastructure gap: you cannot run predictive AI on spreadsheets and paper forms.
Governance | 95% use outside tools (Cority) | 5% restrict outside use (Cority) | A total lack of control over where safety data goes and how public models use it.
Trust | 30% want prediction (Enablon) | 15% trust autonomous AI (Cority) | The "Black Box" problem: users want predictive insights but don't trust math they can't verify.

Conclusion: "AI-Aware" but not "AI-Ready"

These surveys show an industry that is "AI-Aware" but not "AI-Ready." The 97% adoption figure just measures excitement. If you want to know what the industry has actually built, look at the 5% integration figure. That is the true baseline.

The Fix: Design the Information, Then Buy the Software

These reports show an industry racing to buy "intelligence" without doing the hard work of information design — deciding what data gets captured, by whom, and in what form. Stop buying AI for your databases. Start designing how your workers physically collect data before you buy the software.

Safety culture isn't a training problem; it's an information design problem. Stop funding abstract "AI budgets" and start funding data structure. Do not use AI to predict fatalities on messy data. Use it as an extraction tool to organize the chaotic, real-world notes generated by the workers who bypass your official system (the behavior 95% of leaders say they see).
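As a sketch of that extraction pattern, here is a toy rule-based extractor in Python standing in for whatever model does the real work (the schema and keyword rules are hypothetical). The point is structural: forcing the output into named fields makes missing facts visible, instead of letting a rewrite smooth them over.

```python
import re

REQUIRED_FIELDS = ("hazard", "location", "equipment")  # hypothetical target schema

def extract(note: str) -> dict:
    """Toy rule-based extractor standing in for an LLM extraction step.
    Whatever does the extraction, the output contract is what matters:
    every required field is either filled from the note or explicitly None."""
    patterns = {
        "hazard": r"(leak|spill|exposed wiring|missing guard|slip)",
        "location": r"\b(bay \d+|dock|line \d+)\b",
        "equipment": r"\b(forklift|press|compressor|conveyor)\b",
    }
    record = {f: (m.group(1) if (m := re.search(patterns[f], note.lower())) else None)
              for f in REQUIRED_FIELDS}
    record["missing"] = [f for f in REQUIRED_FIELDS if record[f] is None]
    return record

note = "hyd leak under the forklift by bay 3, smells bad"
print(extract(note))
# A generic AI rewrite ("area required attention") would lose "leak", "forklift", "bay 3".
```

The extractor preserves the worker's specifics; a free-text rewrite optimizes for grammar and routinely discards them.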

You don't need an IT degree to design this. You just need to ensure the system forces the worker to capture the exact physical hazard before they can click submit. No software update will fix a workforce that doesn't accurately report what they see. Fix the data collection first. A centralized platform cannot fix a chaotic reporting culture.
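That "forced capture" rule can be sketched in a few lines. The thresholds and phrase list below are illustrative placeholders, not a standard; the real gate belongs in the form logic of whatever system you buy.

```python
# Illustrative submit-time gate: thresholds and word lists are placeholders.
VAGUE_PHRASES = ("unsafe condition", "needs attention", "issue found")

def can_submit(hazard_text: str):
    """Gate the submit button on a concrete description of the physical hazard."""
    text = hazard_text.strip().lower()
    if len(text) < 15:
        return False, "Too short: describe the specific physical hazard."
    if any(p in text for p in VAGUE_PHRASES):
        return False, "Too generic: name what the hazard physically is and where."
    return True, "OK"

print(can_submit("needs attention"))
print(can_submit("exposed wiring on conveyor panel, bay 3"))
```

A rule this crude will produce false rejections; the design principle it illustrates is that the cheapest place to enforce data quality is the moment of capture, not a cleanup project downstream.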

In most organizations, EHS professionals don't control how platforms get built — IT and procurement do. Your first task is not to design the system yourself, but to make the business case to the people who control the build. EHS data quality is not a reporting inconvenience; it is a safety-critical infrastructure problem. If EHS doesn't have a seat at the IT purchasing table, getting one is your prerequisite to everything else on this list.