Safety standards like ISO 26262, DO-178B/DO-178C, IEC 61508, and EN 50128 require identifying functional and non-functional hazards and demonstrating that the software does not violate the relevant safety goals.
Some non-functional safety hazards can be critical for the correct functioning of the system, e.g., violations of timing constraints in real-time software and software crashes due to runtime errors or stack overflows. Depending on the criticality level of the software, the absence of such hazards has to be demonstrated by formal methods or by testing with sufficient coverage.
The document “General Principles of Software Validation” applies a broad definition of validation which encompasses software inspection, analysis, testing, and other verification tasks. It recommends that software validation and verification activities be conducted throughout the entire software life cycle.
Verification and validation are precisely defined in Sec. 3.1.2: “Software verification provides objective evidence that the design outputs of a particular phase of the software development life cycle meet all of the specified requirements for that phase.” Verification looks for “consistency, completeness, and correctness of the software” and provides support for considering the software validated. Verification activities include software testing, static and dynamic analyses, inspections, walkthroughs, and other techniques.
Software validation is considered a “confirmation by examination and provision of objective evidence that software specifications conform to user needs and intended uses, and that the particular requirements implemented through software can be consistently fulfilled.” Hence software verification is one necessary part of software validation. The document emphasizes the aspect of confidence in verification results, which shows the importance of soundness: “Software verification and validation are difficult because a developer cannot test forever, and it is hard to know how much evidence is enough.” It is necessary to develop a “level of confidence” that the device meets all requirements and user expectations. The necessary level of confidence depends on the safety risk imposed by the automated functions of the device (Sec. 3.1.2).
Section 4 gives a list of general principles of software validation.
Section 5 addresses validation activities and tasks throughout the different stages of the software development life cycle, here denoted as quality planning, requirements, design, coding, and testing. Sec. 5.2.2 gives a list of requirements relevant for validation, including timing requirements (bounds on timing, response times), valid ranges and limits for program values, and, in general, safety-related requirements. All requirements should be evaluated for “accuracy, completeness, consistency, testability, correctness, and clarity”. Among other things, that means that “fault tolerance, safety, and security requirements are complete and correct”. Sec. 5.2.3 (design) states that the software design specification should include “development procedures and coding guidelines”. It also lists “analyses of control flow, data flow, complexity, timing, sizing, memory allocation” as important elements of software design evaluations.
In the coding stage the document recommends invoking the compiler with the most rigorous level of error checking to inform the developer about potential residual problems in the code. Furthermore, there should be documentation of the compilation process and its outcome: all compiler warnings or messages should be justified.
The source code should be checked for compliance with the specified coding guidelines. Static analysis by code inspection and walkthroughs is recommended as a very effective means to detect errors and to help focus dynamic testing. Testing should be applied at both the unit and the system level.
One of the goals of the testing stage (Sec. 5.2.5) is to identify “dead” (unreachable) code that is never executed when the program is run. Different coverage metrics can be applied; path coverage is described as the most comprehensive metric but is deemed not generally achievable due to the possibly huge number of execution paths. Still, the amount of path coverage should be established based on the risk or criticality of the software under test.
Tool qualification is addressed in Sec. 6.3. The medical device manufacturer is responsible for validating third-party software tools, ideally supported by appropriate documentation by the tool vendor. A key element of the validation effort is testing, ideally based on validation suites. Tool vendors should be able to provide information about their development and verification processes.
To summarize, the document emphasizes that all requirements affecting safety have to be identified and validated. Non-functional aspects like timing, memory usage, and validity of value ranges are explicitly listed as relevant requirements. Coding guidelines should be specified and checked. The importance of defect prevention is emphasized. The limitations of testing methods are repeatedly described, in particular incompleteness, testing effort, and the lack of safe test end criteria. Verification and validation activities include static analysis and dynamic testing, and they have to be performed continuously throughout the development process. Path coverage is described as a desirable yet unrealistic goal. Sound static analysis tools provide full path coverage and can guarantee to detect all defects from the class of defects under consideration. They are a perfect match for the verification activities recommended by the document.
aiT, StackAnalyzer, and Astrée provide support for meeting the FDA guidelines:
aiT, StackAnalyzer, and Astrée are sound static analysis tools: in contrast to testing they provide full data and control coverage and provably report all potential defects from the class of defects under consideration (timing violations, stack overflows, and runtime errors, respectively). This makes it possible to completely prevent these kinds of defects, so the “defect prevention” principle is fully satisfied.
Automatic static analysis tools improve efficiency and error detection rates in comparison to a purely manual analysis. They provide an independent, automated review of source code, stack consumption and timing behavior. Their analysis scope is global, i.e. they can be applied after code changes and take the effects of the changes on the entire software project into account.
All tools support continuous verification; configuration and report files are available in XML format. Jenkins plugins are available. AbsInt’s Qualification Support Kits and Qualification Software Life Cycle Data reports are well-suited for tool validation as described in Sec. 6.3.
| Section | Topic | aiT | StackAnalyzer | Astrée |
|---------|-------|-----|---------------|--------|
| 3.1.2 | Verification and validation | + | + | + |
| 4.3 | Time and effort | + | + | + |
| 4.7 | Software validation after a change | + | + | + |
| 4.9 | Independence of review | + | + | + |
| 5.2.4 | Construction or coding | + | + | + |
| 5.2.5 | Testing by the software developer | + | + | + |
| 6.3 | Validation of off-the-shelf software | + | + | + |