Trust, safety and security in robotic control

Robots coupled with AI constitute complex and adaptable systems that hold great potential for society and industry. But how do we ensure that these systems are safe, secure and trustworthy? In this blog we will have a closer look at these issues.

Written by: David Eklund, Mimer

Why is this important for your business?

At Mimer we are committed to trustworthy, safe and secure AI development in compliance with current regulations. As AI systems make their way into energy, transport, healthcare and autonomous machines, it is of the utmost importance to ensure their reliability and trustworthiness. In this blog we look at these issues in the context of robotic control.

Beyond compliance with current regulations, safety and trust may provide competitive advantages for your AI business case. Are you or your organization engaged in safety and security issues for AI applications? Please contact our Trustworthy AI team for support, guidance and tools that can help you on your journey.

Robots and AI

Our AI and domain experts at Mimer are actively engaged in many areas of AI research. In this blog we focus on one of these areas: AI for robotic control. How do we ensure that AI models used to control robots are safe and secure? How do we ensure that their behavior is predictable and reliable? These are some of the questions we will dive into below.

A common method used to teach robots to carry out tasks is reinforcement learning (RL). This is an AI technique that can improve robotic control by enabling systems to learn complex behaviors through interaction with their environment, rather than through explicit programming. This adaptability makes RL useful for tasks in logistics, manufacturing, healthcare, disaster response and even social tasks and household chores. However, as machines become more autonomous, it is also important to ensure that they are safe and secure and that they comply with laws and regulations.
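
To make the learning-by-interaction idea concrete, here is a minimal sketch of the loop that RL builds on, written in Python against the Gymnasium API (an assumption made purely for illustration). The random actions stand in for a learned policy; a real training setup would also update the policy from the observed rewards.

# Minimal interaction loop, assuming the Gymnasium library; CartPole stands in
# for a robotic control task, and random actions stand in for a learned policy.
import gymnasium as gym

env = gym.make("CartPole-v1")
obs, info = env.reset(seed=0)
for step in range(1000):
    action = env.action_space.sample()          # a trained policy would choose this
    obs, reward, terminated, truncated, info = env.step(action)
    # a learning agent would update its policy here using (obs, action, reward)
    if terminated or truncated:
        obs, info = env.reset()
env.close()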

Reinforcement learning for robotic control is an active research area, and alongside it run efforts to develop tools and methods for validating that trained AI models behave as expected. In this context, one should also not forget the critical importance of sound cybersecurity practices in robotics.

Both opportunity and risk

Traditional control systems rely on transparent models and rules, which makes them attractive from a safety standpoint but limits their flexibility in dynamic environments. RL changes this by allowing robots to learn policies that map observations to actions, often represented by neural networks. These policies enable robots to adapt to uncertainty and optimize their performance. Yet this flexibility comes at a cost: AI controllers can behave unpredictably, which raises serious safety concerns and can lead to damaged equipment, violated operational constraints, or even danger to humans.
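
As a rough illustration of what such a policy looks like in practice, the sketch below defines a small neural network that maps an observation vector to an action vector. It assumes PyTorch, and the dimensions are illustrative rather than tied to any particular robot.

# A small neural-network policy: observation vector in, action vector out.
# PyTorch is assumed here purely for illustration.
import torch
import torch.nn as nn

class Policy(nn.Module):
    def __init__(self, obs_dim: int, act_dim: int):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(obs_dim, 64), nn.Tanh(),
            nn.Linear(64, 64), nn.Tanh(),
            nn.Linear(64, act_dim),   # e.g. joint torques or velocities
        )

    def forward(self, obs: torch.Tensor) -> torch.Tensor:
        return self.net(obs)

policy = Policy(obs_dim=12, act_dim=4)
action = policy(torch.zeros(12))      # one observation in, one action out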

Cybersecurity and connected robots

Modern robots are rarely isolated; instead, they communicate with networks, cloud services, and other devices. This connectivity introduces vulnerabilities that attackers can exploit. Remote hacking is one of the most severe threats, where poor network security may allow malicious actors to take control of a robot, access its sensors, or install persistent malware. Another threat is sensor spoofing, where someone tricks the robot into misinterpreting its surroundings by feeding it false data. Yet another threat is denial-of-service attacks, which can overwhelm and cut off communication. This emphasizes the need to follow established cybersecurity practices when developing robotic systems.
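
As a hypothetical illustration of one defence-in-depth measure, a controller can sanity-check incoming sensor data before acting on it. The Python sketch below (our own example, not a complete defence against spoofing) rejects range readings that change faster than the robot could physically move.

# A simple plausibility check on range readings, as one illustrative mitigation
# against sensor spoofing; real systems need layered, properly engineered defences.
def plausible(prev_range_m: float, new_range_m: float, dt_s: float,
              max_speed_m_s: float = 2.0) -> bool:
    """Return True if the new reading is physically consistent with the old one."""
    return abs(new_range_m - prev_range_m) <= max_speed_m_s * dt_s

print(plausible(5.0, 4.9, dt_s=0.05))   # True: consistent with normal motion
print(plausible(5.0, 0.5, dt_s=0.05))   # False: suspicious jump, discard or stop safely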

Validating safety and correctness

Cybersecurity alone is not enough. We must also ensure that RL-trained controllers behave safely and reliably under a wide variety of operating conditions. Simulation and testing are good first steps toward this end. However, AI models such as neural networks can display adverse behavior in scenarios that are very hard to find manually or with random tests. An interesting complement to testing is counterexample-guided optimization, a powerful technique for finding problematic scenarios where the robot controller does not behave as expected. In addition, formal verification can provide mathematical proof of correctness, although it should be said that this is a computationally demanding process.
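
The idea behind counterexample search can be sketched as follows: treat the simulator as a function from a candidate scenario (here, an initial state) to a safety margin, and let a numerical optimizer push that margin toward a violation. The simulate function below is a made-up placeholder; in practice it would run the trained controller in a physics simulator and return, for instance, the minimum distance to an obstacle over the episode.

# Sketch of counterexample search by optimization, assuming SciPy.
# `simulate` is a hypothetical placeholder for a rollout of the trained controller.
import numpy as np
from scipy.optimize import minimize

def simulate(initial_state: np.ndarray) -> float:
    # placeholder safety margin; a real version would run the controller in simulation
    return float(1.0 - 0.1 * np.linalg.norm(initial_state))

# search for the initial state that minimizes the safety margin
result = minimize(simulate, x0=np.zeros(3), method="Nelder-Mead")
if result.fun < 0.0:
    print("Potential counterexample found at initial state", result.x)
else:
    print("No violation found in this search; smallest margin was", result.fun)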

Regulation and legislation

Safety and security are both technical and regulatory challenges. Under the EU Artificial Intelligence Act, many applications of robotic control systems incorporating AI are classified as high-risk AI systems. This imposes strict requirements on, for example, risk management, technical documentation, robustness, accuracy, cybersecurity, and human oversight.

Conclusion

Reinforcement learning brings adaptability to robotic systems, but it also amplifies the challenges of safety, security, and regulatory compliance. Addressing these issues requires a holistic approach that combines rigorous testing, optimization, formal methods, and real-time monitoring with robust cybersecurity practices and adherence to emerging legislation. As robots become more autonomous, ensuring their reliability is essential for the safe operation of the next generation of intelligent machines.


Get access to AI development support and infrastructure!

Mimer offers both expert support for your AI projects and GPU infrastructure. Browse our site to see what we offer, or contact us!