Software Supply Chain Security Terminology
April 28, 2022
In light of recent high-profile software supply chain security incidents such as the SolarWinds attack and the Log4j open source vulnerability, we found it important to identify and explain some key terminology. We will also give our particular definitions for these terms in the context of GrammaTech products and our approach to improving software supply chain security. This is not an exhaustive list, but it does include some of the most popular phrases to be aware of.
Software Supply Chain Security
The idea of managing supply chains isn’t new in general, although in software development it’s a relatively recent concept that is growing in importance. Supply chain management and the concept of the bill of materials are considered part of W. Edwards Deming’s legacy from his lifetime of work in quality improvement.
Deming outlined 14 principles for transforming companies with what became known as Total Quality Management (TQM). The first of these principles is “develop consistency of purpose in improving the product, service and operations throughout the chain that would help in developing a competitive and efficient supply across the chain.” This speaks to the need for determined effort by an organization to improve quality and then to expect the same from its suppliers. It’s from this legacy of supply chain management in other industries that software supply chain management emerged, with security being one of its key aspects.
Over time, software licensing became an important focus for software supply chain management due to concerns over meeting the obligations of open source licensing terms. Interest was heightened when the first GPL violation lawsuit in the U.S. was filed in 2007 against Monsoon Multimedia by the developers of the open source BusyBox project. The outcome of this case forced Monsoon Multimedia to compensate the developers and release the source of their modified version as required by the GPL. For companies reusing and distributing open source software, this case was a revelation. In its wake, software composition analysis grew, with an emphasis on supply chain transparency and software bills of materials (SBOMs) as catalogs of software components and licenses, with the aim of minimizing legal risk from non-compliance.
Interest in the security aspect of the software supply chain coalesced around 2010 with the rise of commercial software composition analysis tools. Its importance to cybersecurity intensified around 2013 when OWASP included “A9: Using components with known vulnerabilities” in its Top 10 list of vulnerabilities. More recently, the cybersecurity Executive Order issued in 2021 has reemphasized the need for software supply chain security.
Open Source Software Security
Open source software is open in the sense that the source code is fully available to be viewed and, in most cases, to be compiled and run locally by users. Reusing open source software in your own products depends very much on the license the software is published under. Some of the common licenses are the GNU General Public License (GPL), MIT, Apache, and BSD. These licenses usually allow free use of the software and reuse in applications, but with caveats that must be satisfied to meet the terms of the open source license.
Open source software is usually developed “in the open” where code, changes, problem reports and discussions are available to the public. For example, many open source projects are developed on GitHub. Users can access source and builds and developers can use the repository for development.
This free access is what has made open source such a powerful movement in the software industry which now forms the backbone of software we depend on such as web servers, databases, operating systems, tools, and programming languages. This open access and ubiquitous usage also introduces a large potential security issue when you choose to reuse software in your products. You inherit all of the security vulnerabilities of reused software, open source or not.
Access to the source means that vulnerabilities can be easily discovered by potential attackers, or that malware can be purposely added into the code. Some open source projects neither test for security with SAST or DAST tools nor actively design with security in mind. Despite this, high-profile open source projects take security seriously and respond to vulnerability disclosures in a timely manner. In addition, security isn’t just the responsibility of the project developers. The larger issue with open source security is software organizations failing to update their open source dependencies to current or patched versions. It therefore becomes important to understand what open source dependencies exist in your software (or software you buy or use). It’s critical to know which versions are in use and which vulnerabilities exist in those specific versions.
Commercial Off the Shelf (COTS) Software Security
Commercial software (or commercial off the shelf (COTS) software) is often called closed source to make the distinction versus open source software. Commercial software, in most cases, isn’t developed in the open nor is the source code made available. Users of commercial software are also at the mercy of the security issues of the software they buy. Buyers assume, rightly or wrongly, that the company they buy from has taken the necessary precautions to secure their software. Experience has shown that you can’t make assumptions about security so it’s critical to understand the potential impact of COTS risk on your own security.
Although source code isn’t readily available for COTS software, it’s no less susceptible to security vulnerabilities, and relying on “security by obscurity” is a risky practice. Although COTS software appears as a “black box” to attackers, it appears the same way to customers. As with open source software, when you deploy COTS software you inherit its security state.
The black box, closed source nature of COTS software further complicates this problem. Understanding what potential issues reside in your COTS software is critical for security. Binary code analysis has proven invaluable in “cracking the egg” of the COTS black box. Using sophisticated binary matching algorithms, tools like GrammaTech CodeSentry can identify the open source component makeup, build a software bill of materials (SBOM), and automate vulnerability detection for COTS software (more on this later).
Software Composition Analysis (SCA)
Software is increasingly complex and relies more on third-party and open source components, both of which increase security risks. Software components, whether open or closed source, commercial or free, create a tangled web of dependencies. Understanding just what these components are, along with their characteristics, licensing, and security issues, is the goal of software composition analysis (SCA). As with most code analysis, automation is the key to success, so SCA tools are relied upon to create a component list for your software (most often in the form of a software bill of materials, or SBOM). Often there is more information than simply a component list, such as known (N-day) and unknown (zero-day) security vulnerability information, version numbers, and software licenses.
Binary Software Composition Analysis
The most common usage of SCA tools is by software teams vetting their open source and third-party software as part of their security practice throughout the software development lifecycle (SDLC). However, SCA tools such as GrammaTech CodeSentry aren’t exclusively software development tools. Many of the software operations in an enterprise are outside the scope of the software development teams.
It’s important to consider the scope of security analysis beyond current software development. SCA tools using binary analysis can help organizations understand their true software bill of materials (SBOM) or the ingredients of the software, even when source code isn’t available, and the security vulnerabilities associated with each component.
CodeSentry uses deep, scalable binary analysis to identify open source components, create a detailed software bill of materials (SBOM), and detect known vulnerabilities in the identified components, including any dependencies. The SBOM can be shipped along with each application, making audit requests more reliable. The type of analysis used yields high precision and recall, meaning more components identified, fewer missed vulnerabilities, and fewer false positives.
Software Bill of Materials (SBOM)
The Presidential cybersecurity executive order issued in 2021 tasked the National Telecommunications and Information Administration (NTIA) and the Commerce Department with defining the minimum elements of a software bill of materials, or SBOM, considered critical to improving transparency and security in the software supply chain. In this post, I am going to rely on the NTIA definitions and sources since they are the current focal point for the software industry in standardizing what SBOMs contain and how they are to be used and exchanged.
The NTIA definition of a SBOM is as follows:
A Software Bill of Materials (SBOM) is a formal record containing the details and supply chain relationships of various components used in building software. These components, including libraries and modules, can be open source or proprietary, free or paid, and the data can be widely available or access-restricted.
Source: NTIA SBOM FAQ
The important parts of the definition are “formal record,” “supply chain relationships” and “open source or proprietary.” Formal is important because the SBOM needs to be a precise software development artifact, designed to be exchanged with other companies and the government and to be available for security audits. Supply chain relationships are important because we need to understand where software comes from: is it open source? Third-party? Commercial? What versions? Lastly, SBOMs apply to more than just open source software; commercial off the shelf (COTS) software may actually be even more important because of hidden risks to organizations that have deployed it.
The NTIA has been tasked with defining the minimum requirements of an SBOM, and we’ll use that definition here. This work covers not just the data requirements but also automation support (i.e., enabling tool support) and associated practices and processes.
- Author Name: The author of the SBOM, usually the organization supplying the software.
- Supplier Name: The name of the software supplier, including any aliases. Supplier and author might differ if the supplier is making a claim on behalf of the author.
- Component Name: The name of the software component and possible aliases.
- Version String: The format of the version information is free form but should follow common industry usage.
- Component Hash: The best way to identify a component is the use of a cryptographic hash that acts like a unique identifier. The specifics of these are usually defined by the interchange formats agreed upon by the industry.
- Unique Identifier: A unique identifier is needed for each component.
- Relationship: The relationship field defines the relationship between the component and the software package. In most cases, this relation is “includes” as in software package X includes component Y.
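The minimum elements above can be sketched as a simple record. The snippet below is an illustrative structure only: the field names and the `make_component_entry` helper are hypothetical, not part of any real interchange format (SPDX and CycloneDX define the actual schemas).

```python
import hashlib
import json

def make_component_entry(name, version, supplier, author, blob,
                         relationship="includes"):
    """Build one illustrative SBOM component record; a SHA-256 hash of the
    component binary doubles as part of the unique identifier."""
    digest = hashlib.sha256(blob).hexdigest()
    return {
        "author_name": author,          # who produced this SBOM entry
        "supplier_name": supplier,      # who supplies the component
        "component_name": name,
        "version_string": version,
        "component_hash": digest,       # cryptographic hash of the binary
        "unique_identifier": f"{name}@{version}:{digest[:12]}",
        "relationship": relationship,   # package X "includes" component Y
    }

entry = make_component_entry(
    name="zlib", version="1.2.11",
    supplier="zlib project", author="Example Corp",
    blob=b"example binary contents",
)
print(json.dumps(entry, indent=2))
```

In practice a tool emits one such record per discovered component, plus the relationships that tie components to the packages that include them.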
Often more information is needed and tools such as GrammaTech CodeSentry include vulnerability information with each identified component. These include:
- Component Match: This is the degree of confidence from the matching algorithm used by CodeSentry. Since the SBOM is automatically generated from binary code, matching to known components does have some degree of error.
- Security Score: Based upon the identified components and the discovered vulnerabilities ranked by criticality, a security score is generated to highlight the risk of the software application.
- Path: The file path of the component.
- CVE Distribution: The distribution of discovered vulnerabilities by criticality (critical, high, medium, and low).
CodeSentry creates a vulnerability report as part of the SBOM generation which identifies vulnerabilities in the components. These vulnerabilities are uniquely identified and include descriptive information:
- Severity: The vulnerability severity from its CVE entry based upon CVSS scoring.
- CVSS score: The common vulnerability scoring system value, between 0.0 and 10.0, which is used to prioritize vulnerabilities. The higher the score, the more likely the vulnerability is to be exploitable, have a large impact and inflict damage in a large area of the application or product. Critical vulnerabilities are in the 9.0-10.0 range.
- CVSS version: The CVSS has been updated over time so the version is important when looking at vulnerabilities with the same score.
- CVE ID: The unique identifier for a vulnerability’s entry in the National Vulnerability Database (NVD).
- Description: The text description provided by the CVE entry.
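The severity bands mentioned above can be checked with a small helper; the cut-offs below (including the 9.0–10.0 Critical range) follow the CVSS v3.1 qualitative severity rating scale.

```python
def cvss_v3_severity(score: float) -> str:
    """Map a CVSS v3.x base score (0.0-10.0) to its qualitative severity
    rating, per the CVSS v3.1 specification."""
    if not 0.0 <= score <= 10.0:
        raise ValueError("CVSS scores range from 0.0 to 10.0")
    if score == 0.0:
        return "None"
    if score <= 3.9:
        return "Low"
    if score <= 6.9:
        return "Medium"
    if score <= 8.9:
        return "High"
    return "Critical"

print(cvss_v3_severity(9.8))  # a 9.8 falls in the 9.0-10.0 Critical band
```

Because the same numeric score can map to different ratings across CVSS versions, tools report the CVSS version alongside the score, as noted above.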
Software Security Risk Management and Analysis
As with any risk to the enterprise, software security needs to be addressed with the same risk management practices already in place. In some cases these practices are informal (or, in the case of small businesses or start-ups, non-existent), which is a liability. You can only take security seriously when your organization has a formal risk management process in place. It’s at this point that software security becomes part of risk management and subject to the same policies.
Risk management is a large topic in itself, but the basic concept is to assess risks, respond to them as appropriate to the risk level, and monitor them over time. Within this framework is establishing the risk environment, which is unique to each organization. This context describes assumptions, constraints, priorities, and trade-offs. For example, in terms of software security, a data breach is a high-priority risk for companies that handle credit card data but a lower risk for companies that store little to no PII (personally identifiable information).
The biggest security risk to any organization is not taking security seriously enough. Security must be a first-class citizen in all aspects of product management, development, deployment, and support. Despite the obviousness of this risk, many companies still struggle to take security seriously and to manage the risks associated with software. Risk analysis needs to take into account the priority that security has at all levels of the organization, especially senior management.
Software security risk management needs to include security controls which guide and enforce the “rules” that software development and cybersecurity teams must follow. For example, security controls can include the use of SAST tools to detect and prevent security vulnerabilities during code check-in. The severities and types of vulnerabilities covered would also be documented in these controls. The security risk management process within the company must take into account the use of open source and other third-party software. This might translate into security controls that describe the process for software acquisition, the use of SCA tools, and the management of SBOMs.
Software Security Risk Analysis (a.k.a. Risk Assessment)
A key ingredient to corporate software security risk management is an end-to-end security assessment and analysis. Most applications are part of a larger ecosystem, so understanding the potential security issues at a system level is critical. A threat assessment includes taking stock of the various physical, network and virtual connections, potential losses, threats, and the difficulty of the attack. Importantly, addressing these threats needs to be prioritized based on likelihood and potential impact.
Cyberattacks are perpetrated through the various connections an application has with the outside world. Although a network connection is a high-profile interface, designers should not ignore other possible connectivity. Possible attack vectors include the user interface, USB and other I/O ports, and serial and parallel connectors, for example. Since security incidents originate from both internal and external sources, physical connections to devices also need to be considered. Equally important is the context of the device's connectivity. Is it physically available for people to access? If networked, is it behind a firewall and gateway? How many IP addresses, ports, and protocols will be used?
If an attacker gets unauthorized access to your application (or the platform on which it runs), what are the potential impacts of the access? For example, in an industrial control system, would unauthorized access cause damage or unintended functioning of the system? Would access compromise critical data? Would access disable the device or decrease its reliability? In some cases, erroneous input on an external connection can be enough to crash the system, causing an outage or damage.
In an enterprise environment, what would be the impact of an attack on a vulnerable application? Does that application process sensitive information such as financial data, intellectual property, or personally identifiable information (PII)?
It's important to consider every possibility and the impacts of such unauthorized access.
When considering each impact, categorize it into key areas and then assign an impact value to each of these metrics. For example, in most industrial control systems, loss of confidential data is not as severe as loss of integrity (which could include serious malfunctioning of the device). NIST defines three loss metrics as follows:
- Confidentiality - unauthorized theft of sensitive information.
- Integrity - unauthorized alteration or manipulation of data. In embedded devices that control systems in the real world, this can include manipulation of command and control. For enterprise software, it could mean a data breach.
- Availability - loss of access or loss of use of the device and/or software application.
In light of these loss metrics, a relevant impact value is assigned based on the type of application, its context, and its operational usage. For example, industrial control systems that control physical systems with the potential to injure (e.g., a robot) or to impact people and property (e.g., an energy grid, nuclear power plant, or water processing plant), or applications that contain sensitive information, such as e-commerce applications, would place a high impact on loss of integrity and availability.
The impact of a cyberattack on a product or application is a function of both its potential impact (loss metric, as above) and the possibility of the attack, which, in turn, is a function of motivation and intent. If we look at Stuxnet as an example, the attack was sophisticated and relatively difficult to achieve; however, it was accompanied by a high level of motivation and intent. A threat assessment is performed by looking at the motivations and intents of potential attackers, their possible avenues to attack your system, and the probability of them being successful in that attack:
- Attack sources and motivations - threats can be insiders, activists, terrorists, criminals, state-sponsored actors, or competitors. Understanding the sources of attacks and their motivations helps you understand the goals of an attack and whether such a group could achieve it. For example, industrial control systems may not be interesting to criminals if there is no direct monetary gain from an attack.
- Roles and privileges of authorized users - identifying users and their access rights is essential to enforcing the key security principle of least privilege. Limiting the access of operational users to prevent dangerous operation or leakage of important data keeps insiders and attackers from gaining more than their privilege level allows. With least privilege enforced, gaining access to a user-level account may not be dangerous in a properly designed system.
- Identify potential electronic attack vectors - typically, network connections and other peripheral I/O provide intrusion access to attackers. In some cases the attack vector may be internal, via local access to the user interface or local area network. In other cases, access via a wide area network or even the Internet is a possibility. Understanding the scope of connectivity of the device or application is needed to understand the potential vectors.
- Assess attack difficulty - the loss assessment indicates which services and functions would have the most impact when attacked. The relative difficulty of these attacks must be evaluated based on the attackers and their intrusion vector.
- Assign a threat metric - it's not possible to foresee every attack, nor is it efficient to attempt to protect against every possible attack. Attacks from outside the defendable network segment, for example, that have a large impact (loss metric) and a low attack difficulty would have a high threat metric. Scoring each combination of source and motivation, attack vector, and attack difficulty is required.
Software Security Analysis – Prioritizing Defense
The relative priority of security defenses is derived from the loss and threat assessments; a combination of high loss and high threat yields the highest priority. This seems like common sense; however, without the extensive analysis of loss and threats, a team couldn't objectively evaluate the priority of each defense. The results of this priority calculation lead to specific security requirements for the application, plus test and "abuse" cases for evaluation. These are also inputs into the security controls discussed earlier. It's important to note that this is an iterative process: not all threats will be known at design time, and attackers and vectors may change over time. Building threat assessment into the software lifecycle is key to continuous security assurance over the life of a product.
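The priority calculation above can be sketched as a simple scoring exercise. Everything here is hypothetical: the 1-5 scales, the multiplicative combination, and the example threats are illustrative only; formal frameworks such as NIST SP 800-30 define their own scales and methods.

```python
# Hypothetical 1-5 scales for loss impact and threat likelihood; the
# defense priority is their product, so high-loss/high-threat items
# rise to the top of the list. Entries and values are illustrative.
threats = [
    # (description, loss_metric, threat_metric)
    ("unauthenticated network access to control interface", 5, 4),
    ("PII exfiltration via web front end",                  4, 3),
    ("local USB attack requiring physical access",          3, 1),
]

# Rank defenses by combined score, highest priority first.
ranked = sorted(
    ((loss * threat, desc) for desc, loss, threat in threats),
    reverse=True,
)
for priority, desc in ranked:
    print(f"priority {priority:2d}: {desc}")
```

Re-running this scoring as new threats emerge, or as attack difficulty changes, supports the iterative process described above.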
Part one of our terminology blog focused on common software and application security terms. You can read that post here.