A range of discussions on ethics and artificial intelligence is starting to occur across the mining industry. The need to determine risk-based protocols for machine decision making is fast becoming a point of interest for the safety and information technology personnel who support the industry.
The industry’s increasing rate of automation is no doubt driving the need to specify ‘risk-based’ decision criteria for machines; however, there are currently no known standards or regulations governing how ethical decisions will be made by automated machines.
Consider a scenario in which an automated mine vehicle is presented with multiple accident scenarios at a single instant. An automated haul truck approaches an intersection in which there is a person stopped in a light vehicle, another vehicle on the wrong side of the road, and a mine worker crossing the haul road on foot to examine some ground control issues.
While it is improbable that this exact scenario would arise, we have seen many recent events that defied traditional safety logic and probabilities.
The problem for the automated haul truck is that it is confronted with a decision based on the range of instantaneous sensory information it has gathered. If it is travelling too fast to stop in time and all collision targets are potentially within range, which one will it choose to avoid, and which one will it choose to hit if it is left with no alternative? A human would traditionally make a rapid decision on the basis of ethics, if sufficient time were available.
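To make the problem concrete, here is a minimal sketch of what a risk-weighted decision rule for that intersection might look like. Everything in it is a labelled assumption: the hazard classes, the harm weights, the candidate trajectories and the choose_trajectory helper are illustrative inventions for this article, not any vendor’s actual collision-avoidance logic.

```python
# Hypothetical sketch of a risk-weighted trajectory choice for an
# automated haul truck. The hazard classes, weights and candidate
# trajectories are illustrative assumptions only -- they do not
# represent any real vendor's collision-avoidance system.
from dataclasses import dataclass

# Someone has to assign these numbers: that assignment IS the
# ethical decision this article is talking about.
HARM_WEIGHTS = {
    "pedestrian": 1.0,          # mine worker on foot
    "light_vehicle": 0.6,       # occupant protected by a cab
    "wrong_side_vehicle": 0.6,  # occupant protected by a cab
    "infrastructure": 0.1,      # berm, signage, the truck itself
}

@dataclass
class Trajectory:
    name: str
    collision_targets: list       # hazard classes this path may strike
    collision_probability: float  # chance of impact on this path

def expected_harm(t: Trajectory) -> float:
    """Expected harm = impact probability x summed harm weights."""
    return t.collision_probability * sum(
        HARM_WEIGHTS[target] for target in t.collision_targets
    )

def choose_trajectory(options: list) -> Trajectory:
    """Pick the candidate path with the lowest expected harm."""
    return min(options, key=expected_harm)

# The intersection scenario above, expressed as candidate paths:
options = [
    Trajectory("brake straight", ["pedestrian"], 0.9),
    Trajectory("swerve left", ["light_vehicle"], 0.7),
    Trajectory("swerve right", ["wrong_side_vehicle"], 0.8),
    Trajectory("run off into berm", ["infrastructure"], 1.0),
]
print(choose_trajectory(options).name)  # -> "run off into berm"
```

The uncomfortable part is not the code; it is agreeing on the numbers in HARM_WEIGHTS. At present, no mining standard or regulation tells a designer what those weights should be, which is precisely the gap at issue here.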
The need for an ethical standpoint on making life-threatening decisions has recently been highlighted by the Moral Machine, a study commenced by MIT in 2014. Over the last four years, the study has collected data from millions of participants worldwide. Its premise was to create a game-like platform that would crowdsource people’s decisions on how self-driving cars should prioritize lives in different variations of the “trolley problem.”
The data generated by the study would then provide insights into the collective ethical priorities of different cultures globally.
The study, published in Nature in October last year, drew some conclusions about how culture may skew the ethical decisions made by artificial intelligence.
In the introduction, the authors highlighted that “We are entering an age in which machines are tasked not only to promote well-being and minimize harm, but also to distribute the well-being they create, and the harm they cannot eliminate. Distribution of well-being and harm inevitably creates tradeoffs, whose resolution falls in the moral domain. Think of an autonomous vehicle that is about to crash, and cannot find a trajectory that would save everyone. Should it swerve onto one jaywalking teenager to spare its three elderly passengers?”
The study has interesting implications for countries like Australia, which is currently testing self-driving road vehicles and using autonomous vehicles in mining applications. Preferences may shape the design and regulation of automated vehicles, but we are not yet at the point of defining what those preferences actually are.
The study implied that carmakers may find that Chinese consumers would more readily enter a car that protected its occupants over pedestrians, whereas consumers in Western nations may prefer alternative courses of action.
The study also highlighted a key issue for society and regulators alike:
“Never in the history of humanity have we allowed a machine to autonomously decide who should live and who should die, in a fraction of a second, without real-time supervision. We are going to cross that bridge any time now, and it will not happen in a distant theatre of military operations; it will happen in that most mundane aspect of our lives, everyday transportation. Before we allow our cars to make ethical decisions, we need to have a global conversation to express our preferences to the companies that will design moral algorithms, and to the policymakers that will regulate them.”
It is clear that safety decision protocols for autonomous vehicles in the mining industry should be considered a component of regulation, but we would guess that mining regulators have significant work to do in coming to terms with the future before it arrives.
For now, though, applying ethical principles appears to be left to the machine designers who program artificially intelligent equipment. Company directors, managers, supervisors and safety professionals need to start asking the tough questions and modelling the scenarios with software designers and engineers. Who lives? Who dies? These are questions we must ask.
My take on it (for what it’s worth) is that the identification and specification of ethical protocols for risk-based decision making by autonomous mining equipment should be on the national mining safety agenda before it ends up on the agenda of a coronial inquiry.