5 answers · 342 views

How do we bring advanced analytics into safety work in a way that actually helps predict serious incidents without creating new problems because the data isn’t perfect?

I'm more interested in how AI can play a big part in calculating safety controls before or while an accident is underway. Just because AI can be useful doesn't mean it will make the right decision based on human input.

#Fall25


Nitasha’s Answer

Hi,
There is huge investment across industry in AI guardrails and security measures, and given how quickly the field is moving, we should expect remarkable innovation in this area soon. For advanced analytics and data engineering, treat AI as a partner that simplifies your workflows and makes your day-to-day efficient, so your role can focus on driving product strategy and business decisions instead of on the operational overhead of maintaining analytical work. Analytics is on its way to becoming a commodity, but what won't change in the end-to-end workflow is data governance and quality. There is a real need for data engineers and analytics engineers to fix core data assets; otherwise AI will hallucinate and become less reliable.

As an analytics engineer, I would advise treating AI as a pair programmer or partner that makes your day-to-day productive, so you can focus more on driving product innovation in whatever domain you work in. Hope this helps!
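The "fix core data assets first" point above can be made concrete with a minimal sketch: a data-quality gate that counts basic problems in safety records before they feed any analytics or AI model. The field names (`id`, `date`, `severity`) and the checks are illustrative assumptions, not a real schema.

```python
def quality_report(records):
    """Count basic data-quality problems in a list of incident records (dicts).

    Checks are illustrative: missing severity, missing date, duplicate IDs.
    """
    issues = {"missing_severity": 0, "missing_date": 0, "duplicate_id": 0}
    seen_ids = set()
    for rec in records:
        if rec.get("severity") is None:
            issues["missing_severity"] += 1
        if not rec.get("date"):
            issues["missing_date"] += 1
        rec_id = rec.get("id")
        if rec_id in seen_ids:
            issues["duplicate_id"] += 1
        seen_ids.add(rec_id)
    return issues

records = [
    {"id": 1, "date": "2025-01-03", "severity": 2},
    {"id": 1, "date": "2025-01-03", "severity": 2},  # duplicate entry
    {"id": 2, "date": "", "severity": None},         # incomplete report
]
print(quality_report(records))
# {'missing_severity': 1, 'missing_date': 1, 'duplicate_id': 1}
```

If a report like this shows a high error rate, that is the governance work to do before trusting any model trained on the data.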

Sumitra’s Answer

Dear Sergio,
AI can support safety work, but it can’t replace human judgment, especially because safety data is often incomplete or collected after incidents. The best way to use advanced analytics is to treat AI as an early-warning assistant, not a decision-maker. It can look for patterns humans miss, spot risky combinations of factors (fatigue, near-miss logs, equipment issues), and alert teams before something becomes serious. But humans still interpret those alerts, verify the situation, and choose the right action. In other words, AI predicts where to look, while people decide what to do. If organizations pair AI insights with strong reporting systems, safety culture, and human oversight, it becomes a powerful tool without creating new problems from imperfect data.
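The "early-warning assistant, not decision-maker" idea above can be sketched in a few lines: combine fatigue, near-miss counts, and open equipment issues into human-readable flags that say where to look, while people decide what to do. The thresholds and field names here are illustrative assumptions.

```python
def risk_flags(shift):
    """Return a list of reasons a shift deserves human review.

    Thresholds are illustrative; the output is a prompt for verification,
    not an automated decision.
    """
    reasons = []
    if shift["hours_since_rest"] > 12:
        reasons.append("crew fatigue")
    if shift["near_misses_last_30d"] >= 3:
        reasons.append("near-miss cluster")
    if shift["open_equipment_faults"] > 0:
        reasons.append("unresolved equipment issues")
    return reasons

shift = {"hours_since_rest": 14, "near_misses_last_30d": 4, "open_equipment_faults": 1}
print(risk_flags(shift))
# ['crew fatigue', 'near-miss cluster', 'unresolved equipment issues']
```

Because every flag names its reason, a supervisor can verify the situation on the ground before acting, which is the human-in-the-loop step the answer describes.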
Hope this helps! ☺️

Sandeep’s Answer

Hello Sergio,

Integrating AI into safety starts by accepting that data is never perfect. Success comes from using diverse, high-velocity data to find patterns and generate quantified risk scores that predict serious incidents.

To avoid problems interpreting human behavior, AI must be a decision-support tool, not the decision-maker. Its recommendations must be transparent and focused on measurable safety violations, and a human must always remain in the loop to apply ethical judgment to the final high-stakes decision.
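A minimal sketch of what a "transparent, quantified risk score" could look like: every factor's contribution is returned alongside the total, so a human reviewer can audit why a score is high. The factor names and weights are illustrative assumptions, not a validated model.

```python
# Illustrative weights; a real system would calibrate these against incident history.
WEIGHTS = {"near_misses": 2.0, "overdue_inspections": 3.0, "sensor_alarms": 1.5}

def risk_score(site):
    """Return (total score, per-factor contributions) for a site record."""
    contributions = {k: WEIGHTS[k] * site.get(k, 0) for k in WEIGHTS}
    return sum(contributions.values()), contributions

score, why = risk_score({"near_misses": 2, "overdue_inspections": 1, "sensor_alarms": 4})
print(score)  # 13.0
print(why)    # {'near_misses': 4.0, 'overdue_inspections': 3.0, 'sensor_alarms': 6.0}
```

The breakdown is what makes the tool decision support rather than a black box: the person in the loop sees exactly which measurable factors drove the number.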

Nathalie’s Answer

AI is amazing at handling large amounts of data quickly, like spotting tiny changes in sensors almost instantly. However, ensuring safety is ultimately up to us as humans. Here's why we need to stay in control:

1. Data isn't always accurate: AI relies on sensors, which can sometimes give false readings if they malfunction. Humans can notice odd results and check the actual equipment to confirm.

2. Understanding context: AI is great with numbers but doesn't grasp situations like a busy shift or bad weather. It can provide probabilities but not understand the reasons behind events.

3. Handling new situations: AI learns from past data, so it might struggle with completely new or chaotic situations. Humans can adapt and think on their feet, while AI is better with routine tasks.

4. Responsibility: You can't hold a computer program accountable for mistakes. Important decisions need a person who understands their impact.

AI is great for sorting through data and pointing out risks, but humans should make the final decisions. Let the machines handle the calculations while we do the critical thinking.
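Point 1 above (sensors can give false readings, and humans should check the equipment) can be sketched as a simple cross-check: flag any reading that disagrees sharply with the median of the others, as a candidate fault for a person to inspect, not a verdict. The tolerance value is an illustrative assumption.

```python
import statistics

def suspect_readings(values, tolerance=5.0):
    """Flag indexes where a reading differs from the median of the other
    readings by more than `tolerance` — a candidate sensor fault to check."""
    flagged = []
    for i, v in enumerate(values):
        others = values[:i] + values[i + 1:]
        if abs(v - statistics.median(others)) > tolerance:
            flagged.append(i)
    return flagged

temps = [70.1, 70.4, 69.8, 120.0, 70.2]  # one reading looks like a glitch
print(suspect_readings(temps))  # [3]
```

The machine does the fast comparison across readings; the human walks over to sensor 3 and confirms whether the equipment or the sensor is the problem.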