How system safety can make Machine Learning systems safer in the public sector

News - 26 September 2024 - Webredactie

Machine Learning (ML), a form of AI in which patterns are discovered in large amounts of data, can be very useful. It is increasingly used, for example in the chatbot ChatGPT, facial recognition, and speech software. However, there are also concerns about the use of ML systems in the public sector. How do you prevent such a system from, for example, discriminating or making large-scale mistakes with negative effects on citizens? Scientists at TU Delft, including Jeroen Delfos, investigated how lessons from system safety can contribute to making ML systems in the public sector safer.

“Policymakers are busy devising measures to counter the negative effects of ML. Our research shows that they can rely much more on existing concepts and theories that have already proven their value in other sectors,” says Jeroen Delfos.

Learning from other sectors

In their research, the scientists used concepts from system safety and systems theory to describe the challenges of using ML systems in the public sector. Delfos: “Concepts and tools from the system safety literature are already widely used to support safety in sectors such as aviation, for example by analysing accidents with system safety methods. However, this is not yet common practice in the field of AI and ML. By applying a system-theoretic perspective, we view safety not only as a result of how the technology works, but as the outcome of a complex interplay of technical, social, and organisational factors.” The researchers interviewed professionals from the public sector to see which factors are already recognised and which still receive too little attention.

Bias

There is still room for improvement in making ML systems in the public sector safer. Bias in data, for example, is still often treated as a technical problem, even though the origin of that bias may lie far outside the technical system. Delfos: “Consider, for instance, the registration of crime. In neighbourhoods where the police patrol more frequently, more crime is logically recorded, which leads to those areas being overrepresented in the crime statistics. An ML system trained to discover patterns in these statistics will replicate or even reinforce that bias. The problem, however, lies in the way crime is recorded, not in the ML system itself.”
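To make that mechanism concrete, here is a minimal sketch with hypothetical numbers (not taken from the study): two neighbourhoods have identical underlying crime rates, but one is patrolled more intensively, so a larger share of its incidents ends up in the recorded statistics. A model that simply learns from those recorded counts then scores the heavily patrolled neighbourhood as riskier.

```python
# Toy illustration (hypothetical numbers): the bias comes from how crime is
# recorded, not from the model itself.

true_crime_rate = {"A": 0.05, "B": 0.05}   # identical true rates
patrol_intensity = {"A": 0.9, "B": 0.3}    # fraction of incidents actually observed
population = {"A": 10_000, "B": 10_000}

# "Recorded" crime counts: the data a naive ML system would be trained on.
recorded = {
    n: true_crime_rate[n] * patrol_intensity[n] * population[n]
    for n in population
}

# A frequency-based model simply reproduces the recorded rates, so neighbourhood
# A appears three times as risky as B despite identical true rates. If patrol
# allocation were then steered by these scores, the skew would persist or grow.
predicted_risk = {n: recorded[n] / population[n] for n in population}

print(recorded)        # {'A': 450.0, 'B': 150.0}
print(predicted_risk)  # {'A': 0.045, 'B': 0.015}
```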

Reducing risks

According to the researchers, policymakers and civil servants involved in the development of ML systems would do well to incorporate system safety concepts. For example, it is advisable to identify in advance what kinds of accidents one wants to prevent when designing an ML system.
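As an illustration of what such an up-front exercise could look like, the sketch below lists accidents to prevent, together with related hazards and possible mitigations, for an imagined benefit-fraud risk scorer. The structure and all entries are hypothetical and not drawn from the paper; they only show the system-safety idea of naming unacceptable outcomes before the system is designed.

```python
# Hypothetical sketch of an up-front "accidents and hazards" inventory for a
# public-sector ML system (here: an imagined benefit-fraud risk scorer).

from dataclasses import dataclass, field

@dataclass
class Hazard:
    description: str
    mitigations: list[str] = field(default_factory=list)

@dataclass
class Accident:
    description: str
    hazards: list[Hazard] = field(default_factory=list)

accidents = [
    Accident(
        "Citizens wrongly labelled as fraudsters at scale",
        hazards=[
            Hazard(
                "Model trained on biased or outdated registration data",
                mitigations=["periodic data audits", "human review before enforcement"],
            ),
            Hazard(
                "Risk scores acted on without human judgement",
                mitigations=["scores are advisory only", "appeal procedure for citizens"],
            ),
        ],
    ),
]

# Print the inventory so it can be discussed before any model is built.
for acc in accidents:
    print(f"Accident to prevent: {acc.description}")
    for hz in acc.hazards:
        print(f"  hazard: {hz.description} -> mitigations: {', '.join(hz.mitigations)}")
```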

Another lesson from system safety, for instance in aviation, is that systems tend to become riskier in practice over time, because safety becomes subordinate to efficiency as long as no accidents occur. “It is therefore important that safety remains a recurring topic in evaluations and that safety requirements are enforced,” says Delfos.

Read the research paper.