Software Trustworthiness Best Practices for IIoT


The Industrial Internet Consortium (IIC) recently released its white paper “Software Trustworthiness Best Practices,” which outlines a set of approaches to risk mitigation for software developed or acquired for Industrial Internet of Things (IIoT) systems. The paper covers the complete software lifecycle – of which development is only a subset – including software update and end-of-life strategies.

The use of the term “trustworthiness” helps encapsulate the intertwined requirements for IIoT systems to be reliable, secure, resilient, safe and respectful of privacy. It’s helpful to think in terms of the level of confidence or trust in the software as a whole, since it’s easy to focus on only one or two aspects, such as reliability or safety. The IIC defines trustworthiness as follows:

Trustworthiness is the degree to which the system performs as expected in the face of environmental disturbances, loss of performance quality and accuracy, human errors, system faults and attacks. Assurance of trustworthiness is the degree of confidence one has in this expectation. A system must be assured as being trustworthy for a business or organization to have confidence in it.

In the end, it is the combination of quality factors that provides the confidence that an IIoT system behaves as expected when deployed in the real world, and that its development followed the required rigor with due diligence.

Managing the Software Lifecycle

In many cases developers consider the software lifecycle to run only from inception to delivery of the software they’re currently writing. From this point of view the project is “done” and it’s time to move on to the next one. However, software often has a very long life out in the field, and the entire lifecycle needs to be considered. This lifecycle includes the acquisition of third-party software, transfer and delivery of software, software in execution and software at rest (loaded on devices but not yet executing). The paper provides a good summary of this in a table:


Software lifecycle: The process of managing software throughout its entire lifecycle, from stakeholder requirements, system-level requirements, architecture, design, implementation, testing and assurance, deployment, storage, operational use, modifications and updates, through to retirement. Software lifecycle management can be made more trustworthy by using suitable cryptographic techniques to ensure the authenticity of any component of the software or associated information during the initial creation as well as updates. This gives system integrators the ability to track, trace and validate the source, processes and testing used at each step, especially when updating software already in operation.

Software-as-written: Source code is created by software developers during the software development process to guide the behavior of the system, so confidence in the software quality and correct operation is essential. Source code needs to be managed and is typically stored in a source control system at the organization that authored the software, in an open source repository when part of an open source project, or in a combination of both.

Software-in-delivery: This includes the transmission of software to the system on which it will be deployed, including over-the-air (OTA) updates, as well as installation, configuration and any other actions associated with provisioning the software onto the device on which it will execute. The analysis of trustworthiness of software delivery needs to consider plugins, end-user extensions and configuration by the end user as well as transmission and basic provisioning.

Software-at-rest: This is the software that is ready to execute, either loaded on the device or ready to be loaded on the device. This can be a binary format or source code for interpreted languages (e.g., Python), languages that are compiled into byte code (e.g., Java) or just-in-time compiled languages. Various cryptographic techniques have been designed to ensure that the software-at-rest has not been modified.

Software-in-operation: This is the software as it is executing on the device. It consumes memory, has an execution state and is configured into its environment.

Software-end-of-support: Software that is no longer actively maintained by the software development or maintenance teams.

Software-end-of-life: Software that is no longer used operationally by the organization.

Source: Software Trustworthiness Best Practices - Table 3-1: Software Lifecycle Terms
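The cryptographic integrity check mentioned for software-at-rest can be sketched very simply. The example below is a minimal illustration, assuming a plain SHA-256 digest recorded at build time and delivered out of band; real deployments typically use signed manifests rather than bare hashes, but the verification principle is the same.

```python
# Minimal sketch of an integrity check for software-at-rest.
# Assumes the expected digest was recorded at build time (hypothetical setup).
import hashlib

def sha256_digest(data: bytes) -> str:
    """Return the hex SHA-256 digest of a software artifact's bytes."""
    return hashlib.sha256(data).hexdigest()

def verify_at_rest(artifact: bytes, expected_digest: str) -> bool:
    """Check that the artifact on the device matches the recorded digest."""
    return sha256_digest(artifact) == expected_digest

firmware = b"example firmware image v1.2"
recorded = sha256_digest(firmware)  # captured when the image was built

assert verify_at_rest(firmware, recorded)                # untouched image passes
assert not verify_at_rest(firmware + b"\x00", recorded)  # tampered image fails
```

A single flipped or appended byte changes the digest entirely, which is why even this bare-bones check reliably detects accidental or malicious modification of the stored image.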

I won’t delve into much more detail here, but I encourage readers to refer to the whitepaper for more. For this post, I’m going to concentrate on the software assurance aspect of software trustworthiness and the role of code reviews and static analysis tools.

Software Assurance

The stance of the IIC whitepaper is that software assurance is required for trustworthiness: validating that a system does what is intended, and only what is intended, without unpredictable behavior. In addition, assessment techniques such as testing, reviews and risk analysis need to be performed throughout the lifecycle, as early as possible and well into the maintenance and end-of-life phases of the software. For the sake of brevity, I’m covering two important techniques: code reviews and static analysis.

Code Review and Static Analysis

The Software Trustworthiness Best Practices whitepaper defines code reviews and static analysis as follows:

Code Review: A review of software code against a design and requirements to identify completeness and correctness, proper style and documentation and to identify software flaws and vulnerabilities.

Static Analysis: Analysis of source code to identify flaws and vulnerabilities, often an automated process.


Source: Software Trustworthiness Best Practices - Table 3-4: Software Assurance Assessment Technique

Reviews throughout the development process are critical, and code reviews are no exception. As we covered in an earlier post, static analysis plays an important part in code reviews as well as in other aspects of software assurance such as unit, functional and security testing. Code reviews are enhanced by static analysis tools: human-led inspection complements tool automation nicely, and static analysis tools reduce the time and cost of reviews while improving the outcome of the process. In other words, these two best practices are separate but overlap in practice.
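To give a feel for what automated analysis does at its very simplest, here is a toy static analyzer, a sketch assuming Python’s standard ast module and a single hypothetical rule that flags calls to eval and exec. It is nowhere near what a commercial SAST tool performs, but it shows the basic shape: parse the code without running it, walk the syntax tree, and report findings with line numbers.

```python
# Toy static analyzer: flags calls to risky functions without executing code.
# RISKY_CALLS is a hypothetical rule set chosen for illustration only.
import ast

RISKY_CALLS = {"eval", "exec"}

def find_risky_calls(source: str) -> list:
    """Return (line, name) pairs for each flagged call found in source."""
    findings = []
    for node in ast.walk(ast.parse(source)):
        if (isinstance(node, ast.Call)
                and isinstance(node.func, ast.Name)
                and node.func.id in RISKY_CALLS):
            findings.append((node.lineno, node.func.id))
    return findings

sample = "x = eval(user_input)\nprint(x)\n"
print(find_risky_calls(sample))  # -> [(1, 'eval')]
```

Because the analysis inspects the syntax tree rather than running the program, it can review code that is incomplete, untrusted or targeted at another platform, which is exactly why static analysis fits so naturally into code review.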

We’ve covered the benefits of static analysis in multiple posts, but I’ll summarize the importance of Static Application Security Testing (SAST) tools like GrammaTech CodeSonar in improving the trustworthiness of software. In particular, they provide the following benefits:

  • Continuous source code quality and security assurance
  • Tainted data detection and analysis to uncover complex security vulnerabilities
  • Detection of complex issues that are obfuscated or that span process/file boundaries
  • Third-party code assessment of source and binary code
  • Coding standard enforcement
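The tainted data detection mentioned above can also be sketched in miniature. This example assumes input() as the only taint source and os.system as the only sink, and only catches a source passed directly into a sink call; real SAST tools track taint across assignments, function boundaries and files.

```python
# Minimal tainted-data sketch: flag taint sources flowing directly into sinks.
# SOURCES and SINKS are hypothetical, chosen purely for illustration.
import ast

SOURCES = {"input"}
SINKS = {"os.system"}

def call_name(node: ast.AST) -> str:
    """Best-effort dotted name of a call target ('' if not a simple name)."""
    if isinstance(node, ast.Name):
        return node.id
    if isinstance(node, ast.Attribute):
        base = call_name(node.value)
        return f"{base}.{node.attr}" if base else ""
    return ""

def find_taint_flows(source: str) -> list:
    """Line numbers where a taint source is passed directly to a sink call."""
    flows = []
    for node in ast.walk(ast.parse(source)):
        if isinstance(node, ast.Call) and call_name(node.func) in SINKS:
            for inner in ast.walk(node):
                if (isinstance(inner, ast.Call)
                        and call_name(inner.func) in SOURCES):
                    flows.append(node.lineno)
    return flows

vulnerable = "import os\nos.system(input('cmd: '))\n"
print(find_taint_flows(vulnerable))  # -> [2]
```

Even this crude source-to-sink match illustrates why taint analysis is valuable: the vulnerability is a relationship between two distant pieces of code, which is hard to spot in a manual review but mechanical for a tool.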

The application of static analysis to IoT (and IIoT) is covered in much more detail in our whitepaper “A Four-Step Guide to Security Assurance for IoT Devices.”

Summary

Trustworthiness combines the various quality, safety and security aspects of IIoT into a convenient umbrella term. The new IIC whitepaper outlines best practices for building in trustworthiness across the entire product lifecycle, including the various stages and states of software, well beyond the traditional view of the SDLC developers might use. Software assurance is, of course, a critical part of building trustworthy software, and code reviews and static analysis tools are important best practices for continuous quality assurance throughout.