Linux and Security for Today’s Embedded Medical Devices

Scot, how can Linux open-source software be used for medical device safety?

Linux has been deployed safely in a wide variety of medical devices, but to use Linux in a medical device that has a safety requirement, embedded developers need to follow the process defined by the relevant certification standard to achieve compliance and certification.

So, can Linux be pre-certified for use in a medical device?

Not really. Certain real-time operating systems (RTOSs), such as the Nucleus RTOS from Mentor, can be acquired pre-certified, as can other embedded software components from a number of vendors. To achieve this kind of pre-certification, the vendor must be able to show that the complete software development process (requirements, design, development, testing, and verification) has been performed to medical industry standards such as ISO 13485 and/or IEC 62304. Linux and other open-source components aren't developed to these standards, so they're not pre-certified.

There have been efforts to show conformance of Linux to the over-arching concepts of functional safety, such as mapping to IEC 61508, from which many industry standards are derived, including IEC 62304. While that approach hasn't been successful, a current effort, Project ELISA, is showing promise by improving the processes used for open-source software development and by mapping the higher-quality output to these standards. However, this promise is likely years away from being fully realized.

What are embedded developers relying on now to ensure the safety of their Linux-based medical devices?

Instead of pre-certification, for today's medical devices Linux is generally handled using a concept from IEC 62304 called Software of Unknown Provenance, or SOUP. Under these guidelines, Linux is considered as part of the risk assessment of the overall device, and potential failures of Linux as used in the device must be considered, and mitigated, if they might cause harm to a patient. This risk assessment must meet the requirements of the FDA's premarket and postmarket guidance.

So, on the front end, the use of Linux must be considered in the design, implementation, testing, and verification of the device. Then, for Linux and all open-source software, developers must plan for the possibility that issues will be found after product release. Certifiers are taking a very close look at this aspect of open source, especially where security issues are concerned.

Both safety and security are necessary when we’re looking at medical devices. We hear that you can’t have safety without security, but why is that?

Security is something that can be looked at as standalone. Even in medical devices, not all aspects of security are tied to safety. For example, protecting someone's personal information is an aspect of security that doesn't overlap with safety. But when we talk about safety, the things that could go wrong will impact the patient's health. If the device isn't secure, those negative impacts can happen accidentally, or bad actors can make them happen purposefully.

Does the use of Linux and other open-source software help protect these devices?

Linux is the most heavily used operating system for devices, and it has a large, worldwide developer base. This global developer community focuses on ensuring that Linux works as expected in all conditions and that it's as secure as possible.

Linux is the most studied operating system in the world, and the vast majority of its developers are conscientiously working on improving it and other open-source packages. But there's also a small number of actors looking for ways to break into Linux for their own purposes. The security of applications that use open source is a constant tug-of-war between these opposing forces. Without security, you can't have safety.

How is it possible that Linux can have so many security flaws that we’re always finding more?

Linux and other major open-source packages like OpenSSL or SQLite are large pieces of software that can have unpredictable interactions with other software running in the same system. On top of that, many flaws are hard to find in code reviews, normal testing, or static analysis, and some only become visible when components are combined in a running system with task switching and inter-process communication. Best practices will not identify every possible flaw or exploit, and much of the open-source software that we rely on was not originally developed with today's best practices in place.

However, the most important pieces of open-source software used in devices worldwide are much more stable and secure now than they were five years ago. This is mainly due to the hard work and diligence of engineers all over the world in identifying avenues of exploitation and fixing them when they are found, and to the worldwide community looking for similar issues in their own projects. The work will never be complete, but it's becoming harder and harder to find exploitable flaws in this important infrastructure software.

What happens when a security issue is found in Linux?

Security issues in Linux and other important software like OpenSSL are found by engineers either by happenstance, such as a bug they uncover during a project, or through concerted efforts to find exploits, like "white hat" hacking. Occasionally, an exploit is found during a post-mortem analysis of an attack, but that's uncommon.

In any case, the discoverer notifies the community that maintains the affected open-source component. Then, the discoverer or a community member reports the issue to the Common Vulnerabilities and Exposures (CVE) program, run by MITRE, which in turn feeds the U.S. National Vulnerability Database (NVD) managed by the National Institute of Standards and Technology (NIST).

Once a vulnerability is understood and a fix is available, the CVE is publicized by inclusion in these lists. If the exploit is sufficiently serious, the issue is discussed by the security community worldwide. This is the point where devices are potentially most vulnerable: since most vulnerabilities are found by the "good guys," the bad actors find out about them at the same time as the rest of the world, and they can then deploy attacks that take advantage of the newly disclosed vulnerability.

That said, this publicity is very important, since it alerts the worldwide community to both the issue and the fix. An organization can then determine whether a particular exploit might affect its devices and, if it does, mitigate the issue before it can be attacked. Of course, not everybody will be able to update their devices, which will leave them open to attacks. But since there are no real secrets in the world, this openness prevents more issues than it causes.
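
One practical consequence is that device makers need a way to watch the CVE feeds for the components in their software bill of materials. As a minimal sketch, assuming libcurl is available and using the NVD's public 2.0 REST API, the following C program fetches recent CVE entries that match a component keyword. The keyword "openssl" and the result limit are placeholders; a real tool would parse the JSON response and match it against exact package versions rather than just printing it.

```c
/* Minimal sketch: query the public NVD 2.0 REST API for CVE entries that
 * match a keyword from the device's software bill of materials. The keyword
 * and result limit are placeholders; a real tool would parse the JSON and
 * match exact package versions. Build with: cc nvd_check.c -lcurl
 */
#include <stdio.h>
#include <curl/curl.h>

/* Write callback: dump the raw JSON response to stdout. */
static size_t on_data(char *buf, size_t size, size_t nmemb, void *userdata)
{
    (void)userdata;
    return fwrite(buf, 1, size * nmemb, stdout);
}

int main(void)
{
    const char *url =
        "https://services.nvd.nist.gov/rest/json/cves/2.0"
        "?keywordSearch=openssl&resultsPerPage=5";

    curl_global_init(CURL_GLOBAL_DEFAULT);
    CURL *curl = curl_easy_init();
    if (!curl)
        return 1;

    curl_easy_setopt(curl, CURLOPT_URL, url);
    curl_easy_setopt(curl, CURLOPT_WRITEFUNCTION, on_data);
    CURLcode rc = curl_easy_perform(curl);
    if (rc != CURLE_OK)
        fprintf(stderr, "request failed: %s\n", curl_easy_strerror(rc));

    curl_easy_cleanup(curl);
    curl_global_cleanup();
    return rc == CURLE_OK ? 0 : 1;
}
```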

Back to safety. How safe is Linux?

An operating system like Linux doesn't directly do anything to make a device safer. The operating system doesn't prevent a failure from occurring, nor does it make the system recover when a failure occurs. When you put Linux on a system with no other application and turn it on, Linux boots; however, it just sits there at a login prompt. It's not doing anything until applications that leverage Linux are running, and it's those applications that contribute to the overall safety of the device. While an operating system isn't a safety mechanism itself, it enables those mechanisms and is considered to be a safety element.
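
To make that concrete, here is a minimal sketch of one common application-level safety mechanism on embedded Linux: a monitoring process that pets the hardware watchdog only while its health checks pass. It assumes a board whose watchdog is exposed through the standard /dev/watchdog interface; the health check itself is a placeholder for device-specific monitoring.

```c
/* Minimal sketch of an application-level safety mechanism on Linux:
 * pet the hardware watchdog only while the device passes its health check.
 * If this process hangs or the check fails, petting stops and the watchdog
 * hardware resets the board. Assumes the standard /dev/watchdog interface.
 */
#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>
#include <sys/ioctl.h>
#include <linux/watchdog.h>

static int device_healthy(void)
{
    return 1;   /* placeholder: sensor checks, task heartbeats, etc. */
}

int main(void)
{
    int fd = open("/dev/watchdog", O_WRONLY);
    if (fd < 0) {
        perror("open /dev/watchdog");
        return 1;
    }

    int timeout = 10;                          /* seconds, best effort */
    ioctl(fd, WDIOC_SETTIMEOUT, &timeout);

    while (device_healthy()) {
        ioctl(fd, WDIOC_KEEPALIVE, 0);         /* "pet" the watchdog */
        sleep((unsigned int)timeout / 2);
    }

    /* Health check failed: stop petting and let the hardware reset, or
     * start an application-level recovery path before the timeout expires. */
    fprintf(stderr, "health check failed; awaiting watchdog reset\n");
    pause();
    return 0;
}
```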

With today’s advanced and affordable microprocessors, how does a multiprocessor system affect safety?

Today's microprocessors are powerful and complex, designed to support heterogeneous multiprocessing. They combine powerful general-purpose cores that run an OS like Linux with more specialized cores that handle other functions. Designing for safety is an integrated systems issue, not just a hardware or software one.

To take full advantage of the board BOM cost savings and higher integration of components that an advanced multiprocessor offers in a safety-sensitive design, the safety-critical and non-safety-critical applications must be kept separate, an approach known as mixed-safety criticality.

Meanwhile, the safety-critical portion of the system runs on a separate cluster dedicated to real-time processing. That cluster has features like tightly coupled data and instruction memory with very low fetch latency, highly deterministic performance, and a lockstep mode for error detection.

Advanced multiprocessing systems contain hardware-enforced isolation that keeps the application world and the safety-critical world separate. However, the software designer must use middleware such as the Mentor Hypervisor or Mentor Multicore Framework to take advantage of those hardware features. These software packages make important system-level functions possible, such as secure inter-processor communication (IPC) between the processor clusters.
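
As an illustration of how the Linux side of such a system might look, here is a minimal sketch using the mainline remoteproc and RPMsg interfaces rather than the Mentor middleware itself, which covers similar conceptual ground. The sysfs paths, the /dev/rpmsg0 device node name, and the firmware file name are platform-specific assumptions and will differ from board to board.

```c
/* Minimal sketch of the Linux-side control path on a heterogeneous SoC,
 * using the mainline remoteproc and RPMsg interfaces. The sysfs paths, the
 * /dev/rpmsg0 node name, and the firmware file name are platform-specific
 * assumptions.
 */
#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>

/* Write a string to a sysfs attribute. */
static int write_str(const char *path, const char *val)
{
    int fd = open(path, O_WRONLY);
    if (fd < 0) {
        perror(path);
        return -1;
    }
    ssize_t n = write(fd, val, strlen(val));
    close(fd);
    return n < 0 ? -1 : 0;
}

int main(void)
{
    /* 1. Load and start the real-time firmware on the remote cluster. */
    write_str("/sys/class/remoteproc/remoteproc0/firmware", "rt_safety_app.elf");
    write_str("/sys/class/remoteproc/remoteproc0/state", "start");

    /* 2. Exchange a message over RPMsg once the remote core announces
     *    its channel (the device node name varies by platform). */
    int ep = open("/dev/rpmsg0", O_RDWR);
    if (ep < 0) {
        perror("open /dev/rpmsg0");
        return 1;
    }
    if (write(ep, "ping", 4) == 4) {
        char buf[64];
        ssize_t n = read(ep, buf, sizeof(buf) - 1);
        if (n > 0) {
            buf[n] = '\0';
            printf("remote core replied: %s\n", buf);
        }
    }
    close(ep);
    return 0;
}
```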

Scot, thank you. Where can our readers learn more about Linux and Mentor’s embedded software?

Our website, www.mentor.com/embedded-software/, provides a broad range of white papers and on-demand webinars on topics such as Linux, mixed criticality, safety, and security to support embedded development.

Scot Morrison is the general manager of the Platform Business Unit, Mentor Graphics Embedded Systems Division, overseeing the Linux, Nucleus, and AUTOSAR product lines; middleware; and professional services. Prior to joining Mentor Graphics in 2012, Morrison served as GM and SVP of Products at Wind River Systems Inc., where he had earlier been VP of Engineering. He joined ISI in 1986 and spent 14 years there in various management positions, last serving in 1999 as a VP and GM of the design automation solutions business unit, responsible for operating systems and associated middleware and tools.
