Safety standards like ISO 26262, DO-178B, DO-178C, IEC-61508, and EN-50128 require identifying functional and non-functional hazards and demonstrating that the software does not violate the relevant safety goals.
Some non-functional safety hazards are critical for the correct functioning of the system, in particular violations of timing constraints in real-time software and software crashes due to runtime errors or stack overflows. Depending on the criticality level of the software, the absence of such safety hazards has to be demonstrated by formal methods or by testing with sufficient coverage.
The ability to handle non-functional program properties is one determining factor for selecting a suitable modelling or programming language. The standard requires support for real-time software and runtime error handling. It also states that “criteria that are not sufficiently addressed by the language itself shall be covered by the corresponding guidelines or by the development environment”.
Table 1 of ISO 26262 suggests the use of language subsets to exclude language constructs which could result in unhandled runtime errors. However, for typical embedded programming languages like C or C++, runtime errors can be caused by any pointer or array access, by arithmetic computations, etc.
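To illustrate how pervasive these hazards are in ordinary code, the following C sketch (function names are purely illustrative) shows innocuous-looking constructs that can trigger exactly the kinds of runtime errors a language subset alone cannot exclude:

```c
#include <stdint.h>

/* Illustrative only: each marked line can produce a runtime error
   or undefined behavior depending on the values it receives. */
int32_t scale(const int32_t *table, int idx, int32_t divisor) {
    int32_t v = table[idx];  /* out-of-bounds read if idx is invalid */
    return v / divisor;      /* division by zero if divisor == 0     */
}

int32_t accumulate(int32_t a, int32_t b) {
    return a + b;            /* signed overflow is undefined behavior */
}
```

Whether any of these operations actually fails depends on the runtime values reaching them, which is precisely why a sound value analysis, rather than a syntactic subset, is needed to prove their absence.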
This means that the absence of runtime errors has to be ensured by appropriate tools as a part of the development environment. In addition, timing and stack behavior is not captured in current programming language semantics and has to be addressed by specific tools.
In general the specification of the software safety requirements considers constraints of the hardware and their impact on the software. Among others, safety requirements apply to functions that enable the system to achieve or maintain a safe state, and to functions related to performance or time-critical operations.
The standard explicitly lists some requirements that are part of the software safety requirements, including the hardware-software interface specification, the relevant requirements of the hardware design specification, and the timing constraints.
Hardware- or configuration-related errors (e.g., stack overflow and runtime errors like erroneous pointer manipulations) can cause globally unpredictable behavior affecting all safety functions. Thus these errors have to be taken into account. Timing constraints include the response time at the system level with derived timing properties like the worst-case execution time.
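As a sketch of why stack usage is hard to bound by inspection alone, consider a recursion whose depth depends on runtime data (the structure and names are illustrative, not from the standard):

```c
#include <stddef.h>

/* Illustrative: the recursion depth, and hence the stack usage, of
   list_length depends on the length of the input list, which is only
   known at runtime -- a classic stack-overflow hazard that a static
   stack analysis must either bound or report as unbounded. */
struct node { struct node *next; };

size_t list_length(const struct node *n) {
    return (n == NULL) ? 0u : 1u + list_length(n->next);
}
```

A static stack analyzer would flag this recursion as having no constant worst-case stack bound unless the maximum list length can be established.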
The following table lists the sections of Part 6 of ISO 26262 referring to non-functional program properties and static verification techniques.
| Section | WCET analysis | Stack analysis | Runtime-error analysis |
The architectural design has to be able to realize the software safety requirements. The requirements listed in ISO 26262 include verifiability, feasibility for the design and implementation of the software units, and the testability of the software architecture during integration testing. In other words, the predictability of the system is one of the basic design criteria since predictability of the software as executed on the selected hardware is a precondition both for verifiability and for testability.
Section 7.4.17 of ISO 26262 explicitly demands that “an upper estimation of required resources for the embedded software shall be made, including the execution time, and the storage space”. Thus, upper bounds on the worst-case execution time and on the stack usage are a fixed part of the architectural safety requirements.
The importance of timing is also reflected by the fact that “appropriate scheduling properties” are highly recommended for all ASIL levels as a principle of software architecture design. All existing schedulability algorithms assume that upper bounds on the worst-case execution time are known, and that interferences on task switches are either precluded or predictable. Thus the availability of safe worst-case execution and response times is among the most basic scheduling properties.
Software architectural design requirements also explicitly address the interaction of software components. “Each software component shall be developed in compliance with the highest ASIL of any requirements allocated to it.” Furthermore, “all of the embedded software shall be treated in accordance with the highest ASIL, unless the software components meet the criteria for coexistence […]” (cf. ISO 26262, Sections 7.4.9–7.4.10). Freedom from interference is an essential criterion for coexistence. Freedom from interference is also addressed by Annex D, which discusses timing properties like the worst-case execution time or scheduling characteristics as well as memory constraints. For memory safety, corruption of content, as well as read or write accesses to memory allocated to other software elements, have to be excluded. Such accesses can be caused by stack overflows or by runtime errors like erroneous pointer manipulations and dereferences. As a technique to show the absence of memory faults, Annex D lists static analysis of memory accessing software.
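A hypothetical C sketch of such interference (the memory layout and names are invented for illustration): an off-by-one write in one software element silently corrupts state owned by another, which is exactly the kind of fault Annex D requires to be excluded.

```c
#include <stdint.h>

/* Hypothetical layout: state of two software elements adjacent in
   memory. Calling a_fill with len > 4 writes past a_buf and corrupts
   b_state -- an interference between elements that static analysis of
   memory accesses can detect. */
struct shared_ram {
    uint8_t a_buf[4];   /* owned by software element A */
    uint8_t b_state;    /* owned by software element B */
};

void a_fill(struct shared_ram *ram, int len) {
    for (int i = 0; i < len; i++)
        ram->a_buf[i] = 0xFF;   /* out of bounds when len > 4 */
}
```

A sound runtime-error analysis reports the potential out-of-bounds access for any call site where `len > 4` cannot be excluded.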
Table 6 of ISO 26262 lists the methods for verification of the software architectural design. Control flow analysis and data flow analysis are recommended for ASIL-A and ASIL-B and highly recommended for ASIL-C and ASIL-D; formal verification is recommended for ASIL-C and ASIL-D. This can be done separately for the modelling and the implementation level. However, with model-based code generation there is a synergy between model-level and implementation-level analysis (cf. Section 2.1.4). Since the semantics of the model is given by the generated implementation, source or binary code analyses can be used to verify the architectural design by propagating analysis results from the implementation level to the modelling level. This is well supported by static analysis techniques.
Chapter 8 of ISO 26262, Part 6, Product development at the software level, defines three goals of software unit design and implementation.
Thus, static verification plays a very prominent role in the design and implementation stage; it should always precede dynamic testing, which should focus on properties not statically verified. Table 9 lists the methods for verification of software unit design and implementation, including formal verification, control flow analysis, data flow analysis, static code analysis, and semantic code analysis. As detailed in Chap. 1, all these techniques can be considered as aspects of general static analysis. Data and control flow analysis is a natural part of semantics-based static analyzers. With sound static analyzers, proofs of data and control flow behavior can be obtained; they can be counted among the formal methods. A sound static analyzer for runtime error analysis like Astrée also provides safe data and control flow analysis without additional effort. The mentioned static analysis techniques are (highly) recommended for all ASIL levels.
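A minimal example of the kind of defect data flow analysis detects (the function is illustrative): a variable that may be read before it is written on some execution path.

```c
/* Illustrative data flow defect: on the path where flag == 0,
   result is read without ever having been assigned -- undefined
   behavior that a sound data flow analysis reports as a potential
   use of an uninitialized variable. */
int pick(int flag) {
    int result;
    if (flag)
        result = 42;
    return result;   /* uninitialized when flag == 0 */
}
```

A compiler may or may not warn about this, depending on optimization settings; a sound analyzer reports it on every build.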
In the structure of ISO 26262 there is a differentiation between implementation, addressed in Chapter 8, and testing, addressed in Chapter 9. The absence of runtime errors (division by zero, control and data flow errors, etc.) is considered a robustness property (Section 8.4.4) which has to be ensured during implementation. Resource usage constraints like timing or stack consumption are addressed in Chapter 9 (Software Unit Testing). This distinction corresponds to the boundary between source code and binary code; apparently the underlying assumption is that source code can be analyzed while binary code has to be tested, which does not account for static analyses working at the binary code level. The requirements become consistent again when static analysis techniques are counted among the testing methods.
During testing it has to be ensured that the properties of the system under test match the properties of the system to be shipped. This is a problem for dynamic testing techniques which rely on code instrumentation, since instrumentation can affect the system behavior in ways which are hard to assess, especially regarding the timing behavior. Static analysis methods can be applied to the final binary code without code instrumentation, hence in a completely non-intrusive way. In fact, static analysis can be seen as an exhaustive testing method providing full coverage — a view shared by related safety standards like DO-178B, DO-178C, IEC-61508, and EN-50128.
The software integration phase has to consider functional dependencies and the dependencies between software integration and hardware-software integration. Again the non-functional software properties have to be addressed: robustness has to be demonstrated, which includes the absence of runtime errors, and it has to be demonstrated that there are sufficient resources to support the functionality, which includes timing and stack usage (Section 10.4.3).
Just like for the unit testing stage, ISO 26262 points out typical limitations of dynamic test and measurement techniques: when dynamic tests are made, the test coverage has to be taken into account, and it has to be shown that code modification and instrumentation do not affect the test results (Sections 9.4.5–9.4.6, 10.4.5–10.4.6). It is important to perform WCET analysis in the unit testing stage to get early feedback on the timing behavior of the software components. However, since the WCET depends on the memory addresses in the program and can be influenced by linking decisions, the analysis also has to be applied in the integration stage on the final executable. The same considerations apply to interactions or interferences between software components.