How it works
Step 1 — Lag correlation

For each metric, correlation (Pearson or Spearman) is computed between the metric at month t and the outcome at month t + lag, for lags 0 through your chosen maximum.
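In pandas/SciPy terms, this step might look like the sketch below; the DataFrame df, the column arguments and the function name are illustrative, not the tool's internals.

```python
import pandas as pd
from scipy.stats import pearsonr, spearmanr

def lag_correlations(df, metric, outcome, max_lag=3, method="pearson"):
    """Correlate the metric at month t with the outcome at month t + lag, for lag = 0..max_lag."""
    corr_fn = pearsonr if method == "pearson" else spearmanr
    results = {}
    for lag in range(max_lag + 1):
        # shift(-lag) moves the outcome series up, pairing metric[t] with outcome[t + lag]
        paired = pd.DataFrame({"x": df[metric], "y": df[outcome].shift(-lag)}).dropna()
        r, _ = corr_fn(paired["x"], paired["y"])
        results[lag] = r
    return results
```

For example, lag_correlations(df, "Safety Walk Frequency", "Recordable Incidents", max_lag=3) would return one r value per lag from 0 to 3.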

Step 2 — Classification

A metric is classified as one of four types:
  • Leading if its peak correlation is at lag ≥1, at least 0.08 stronger than at lag 0, and negative (higher activity predicts fewer incidents).
  • Forewarning if the same temporal conditions apply but the correlation is positive: the metric rises before incidents rise, signalling risk accumulation rather than prevention.
  • Concurrent if it correlates but peaks at lag 0, or the gain over lag 0 is below 0.08.
  • Weak if |r| < 0.30 at every lag.

The 0.30 floor follows standard weak-correlation conventions. The gain threshold (default 0.08) prevents a trivially stronger correlation at a later lag from overriding a dominant lag-0 signal. For datasets under 24 months, consider raising it to 0.10–0.12 to reduce false positives.
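A minimal sketch of these rules, assuming corrs maps each lag to its correlation r for a single metric (e.g. the output of the lag_correlations sketch above); the defaults mirror the thresholds described here.

```python
def classify(corrs, weak_floor=0.30, gain=0.08):
    """Apply the Step 2 rules to one metric's lag -> r mapping."""
    if all(abs(r) < weak_floor for r in corrs.values()):
        return "Weak"
    peak_lag = max(corrs, key=lambda lag: abs(corrs[lag]))
    # "at least `gain` stronger than lag 0" compares absolute correlation strengths
    beats_lag0 = abs(corrs[peak_lag]) - abs(corrs[0]) >= gain
    if peak_lag >= 1 and beats_lag0:
        return "Leading" if corrs[peak_lag] < 0 else "Forewarning"
    return "Concurrent"
```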

Step 3 — Ranking

Results are sorted: Leading first, then Forewarning, then Concurrent, then Weak. Within each group, metrics are ordered by absolute correlation strength, strongest first.
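Expressed as a single sort key, the ordering might look like this (the classification and peak_r field names are illustrative, not the tool's output schema):

```python
GROUP_ORDER = {"Leading": 0, "Forewarning": 1, "Concurrent": 2, "Weak": 3}

def rank(screened):
    # Group order first; within a group, largest |r| first
    return sorted(screened, key=lambda m: (GROUP_ORDER[m["classification"]], -abs(m["peak_r"])))
```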

Step 4 — Review

Check Concurrent metrics against domain knowledge: does the metric exist to prevent incidents, or does it react to incidents that have already happened? Metrics that react after incidents (e.g. investigation closure rates) will appear Concurrent but are actually lagging.

Upload your dataset
One row per month, one column per metric. Name your first column Month — values can be in any consistent date format (e.g. "Jan 2024", "2024-01"). Minimum 12 months recommended.

Drag & drop your CSV here or click to browse

Accepts .csv files · UTF-8 encoding
No file yet? Load sample EHS dataset
How to prepare your CSV
Month    | Safety Walk Frequency | PPE Compliance Rate (%) | Recordable Incidents
Jan 2024 | 10                    | 94.2                    | 3
Feb 2024 | 8                     | 91.5                    | 5
  • Column 1 — Month: one row per month, in any consistent format (e.g. "Jan 2024", "2024-01"). At least 12 months recommended.
  • Metric columns: any numeric safety activity metrics you track — counts, rates, percentages, scores. All in a single file. Tip: exclude reactive metrics that exist because of incidents (e.g. investigation closure rates, first aid counts) — these will classify as Concurrent by definition.
  • Outcome column(s): the injury or incident metric you want to predict — e.g. Recordable Incidents, LTIR. Include it as a regular column; you'll select it below.
Not sure about the format? Load the sample dataset to see a real example.
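If you want to sanity-check a file before uploading, a minimal pandas check might look like the following; the filename is a placeholder and the column layout follows the example above.

```python
import pandas as pd

df = pd.read_csv("your_data.csv")                 # placeholder filename
df["Month"] = pd.to_datetime(df["Month"])         # accepts "Jan 2024", "2024-01", etc.
assert len(df) >= 12, "at least 12 months recommended"
# every non-Month column should be numeric (counts, rates, percentages, scores)
non_numeric = [c for c in df.columns
               if c != "Month" and not pd.api.types.is_numeric_dtype(df[c])]
assert not non_numeric, f"non-numeric metric columns: {non_numeric}"
```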

Run against your primary incident metric first. Lag distances may differ across outcomes.

Lags 1–3 are standard. Treat lags 4–6 as exploratory.

For <24 months of data, consider 0.10–0.12.

Use Spearman if your incident column averages <1 per month or contains significant outliers.
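As a rough illustration of that rule (the outcome column name comes from the example above, and the three-standard-deviation outlier check is an assumption, not the tool's test):

```python
outcome = df["Recordable Incidents"]                                   # example outcome column
rare = outcome.mean() < 1                                              # fewer than ~1 incident per month
outliers = (abs(outcome - outcome.mean()) > 3 * outcome.std()).any()   # illustrative outlier check
method = "spearman" if (rare or outliers) else "pearson"
```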

Processing…

Screening Results

Leading — peaks at lag ≥1, negative r. Moves before outcomes. Actionable predictor.
Forewarning — peaks at lag ≥1, positive r. Rises before incidents rise. Early warning of risk accumulation — not a control measure.
Concurrent — peaks at lag 0. Moves with outcomes. Review with domain knowledge.
Weak — |r| < 0.30 at all lags. No predictive signal detected. Learn more →
Negative r = more metric activity, fewer incidents — the desired direction for most safety metrics. Positive r warrants domain review.
Concurrent may include true lagging indicators. A metric that logically precedes incidents but peaks at lag 0 may reflect reporting delays, not simultaneity. Check the series against domain knowledge before treating it as a coincident metric.
Short datasets increase false positives. Requires ≥12 paired observations per lag. With <36 months of data, random noise can push |r| above 0.30 by chance — treat borderline results (0.30–0.45) as provisional until confirmed on additional data.
Multi-site organisations: run per site, not on aggregated data. Pooling data across sites with different risk profiles, headcounts, or incident rates will suppress or distort lag signals. Each site dataset should be screened independently.

What to do with your results

  • Leading Metrics: These are your actionable predictors. Prioritise them on your safety dashboard and set intervention thresholds. Consider building a composite Safety Performance Index (SPI) by weighting each one by its correlation strength (a sketch follows this list).
  • Forewarning Metrics: These metrics rise before incidents rise — they signal risk accumulation, not prevention. Do not treat them as controls. Use them as early warnings: a sustained rise gives you the lag window to reduce exposure before the system fails.
  • Concurrent Metrics: Do not discard them yet. Apply domain knowledge: if a metric logically precedes incidents (e.g. inspection completion rates), the lag-0 peak may reflect data collection timing rather than true simultaneity. Re-examine the series or extend your dataset.
  • Weak Metrics: No statistical signal at any lag. Either the metric is genuinely uninformative for your context, it is measured too inconsistently to carry a signal, or your dataset is too short to detect one. Retire it from your predictive set — not necessarily from your programme.
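A minimal sketch of such a composite, assuming df holds the monthly metric columns and leading maps each Leading metric name to its peak correlation. The particular scheme here (z-score each metric, then weight by normalised |r|) is one reasonable choice, not a prescribed method.

```python
import pandas as pd

def composite_spi(df, leading):
    """Weight each Leading metric by |peak r|; z-score first so different units are comparable."""
    weights = {m: abs(r) for m, r in leading.items()}
    total = sum(weights.values())
    cols = list(weights)
    z = (df[cols] - df[cols].mean()) / df[cols].std()
    return sum((w / total) * z[m] for m, w in weights.items())
```

Because Leading metrics correlate negatively with incidents, a rising SPI indicates more preventive activity, which this screen associates with fewer incidents in later months.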