Can safety make sense and still fail to manage risk?
CRUNCH TIME FOR THE SAFETY INDUSTRY
We are at a critical point within our industry. We seem to have more safety than ever before, but I don’t think we are proportionally safer. Some data suggest we are having fewer injuries each year, but we also appear to have more mental ill health than ever before. Most large organisations have more data, systems, training, meetings, and documents than ever, and yet the workforce often reports being more disconnected than ever from what it means to do their work.
I’m not sure I have ever seen so much discrepancy between what the workforce thinks of us as an industry, and what we think we do as an industry.
Of course I’m not suggesting everything we are doing is wrong, but somewhere along the way we stopped getting value out of doing more. We have doggedly held on to the ideas that risk is bad, that systems are what keep people safe, and that people need to be fixed and told how to be safe.
BUT WANTING TO BE SAFE MAKES SENSE
Of course it makes sense that we want to prevent harm, but somewhere safety moved from being about the output to being about the process. We lost the ability to discern, and to engage in dialogue about what works and what doesn’t. It’s as if any attack on a safety strategy (like zero harm) is taken to be an attack on safety itself (if you’re against zero, you must be for harm). It’s a great argument strategy, but it’s a really bad one when we are trying to learn and understand.
SAFETY IS A WICKED PROBLEM
I’ve talked before about wicked problems. These are problems without solutions. They are problems we tackle instead of solve, and because of their complexity they often create unexpected trade-offs and byproducts as they are being tackled, and so require ongoing dialogue and reflection to stay on track. Safety doesn’t have a solution; it’s a wicked problem, but we just don’t seem to have space for this type of thought.
Let’s look at just a few issues:
- Safety as the expert – The problem with thinking you are an expert is that you think you are an expert (google Dunning-Kruger effect). It means we must be the holders of all safety knowledge. It might be OK in law, engineering or medicine, but maybe safety isn’t like those industries. Maybe safety isn’t a science.
- No space for uncertainty – Within safety there is no box for “I’m not sure”. We always have to find a reason and corrective action. It makes sense of course because we hate the idea that we can’t control everything, even though that is what life is like. Imagine being a “zero” organisation and accepting that there are some things you can’t control… like people.
- People as the problem – People as the problem makes sense (particularly in the ideology of behaviour-based safety). It solves so many problems, because to fix it all you need to do is get people to realise how their unsafe acts were wrong, and then they will change. I know, it sounds funny when you say it out loud, right? But what if people are actually the source of safety? Just think about how many incidents and injuries a work crew prevents each day. How many little (or big) things do they unconsciously notice and avoid or fix as they do their work? Assuming they aren’t totally distracted with all the safety stuff they have to think about, of course.
So is safety the absence of harm, or is it something that is created within the loosely coupled social arrangements between people? If you want some aspirational goals around safety, try striving for greater connection, understanding and engagement (or wisdom even), instead of not harming anyone (the absence of something). I’m pretty sure you have a lot more chance of getting closer to those things than closer to zero, and that the journey along the way has a lot more meaning.
This is where the models of high reliability organisations (Karl Weick) and generative safety culture (Patrick Hudson) are pointing us: towards a genuine understanding of cultural and sub-cultural (Rob Long) influences. These are organisations with increased mindfulness, trust and connection that do “crazy” things like removing formal safety processes so that the people doing the job can be more connected to what they actually do, and to the people around them. They do this even though it reduces their ability to control and monitor from above, because they also know that the systems and data provide a false sense of protection anyway.
More safety clearly makes sense on the surface, but it will also increase disconnection and disengagement, and therefore risk.