What is Static Application Security Testing (SAST)?
Developers and engineering managers often ask us, “What is SAST?”, usually followed by “OK, what exactly do SAST tools do for security?” Many people know the acronym as static application security testing, or more commonly as static code analysis. Developers are usually familiar with static analysis, but often don't know what state-of-the-art tools can do for security. In this blog I describe the classes of security issues that static analysis can find, with a couple of real-world examples of security issues discovered in code using these tools.
What is SAST? Consider the definition from Techopedia:
Static application security testing (SAST) is a type of security testing that relies on inspecting the source code of an application. In general, SAST involves looking at the ways the code is designed to pinpoint possible security flaws. Source: Techopedia
Strictly speaking, any kind of inspection of source code (and binaries) counts as static testing. In practice, however, most applications of SAST involve automated static analysis tools, such as GrammaTech CodeSonar. The crux of the answer to “what is SAST?”, therefore, is: what do static analysis tools do to improve the security of the code they inspect?
Security Warning Classifications
Any problem that impacts a software system's Confidentiality, Integrity, or Availability is typically considered a security problem. These three attributes are often referred to as the CIA Triad (https://en.wikipedia.org/wiki/Information_security). In this blog we will look at some examples of problems that can impact Confidentiality, Integrity, and Availability (CIA), subdivided into the following categories: memory issues, programming errors, dangerous function calls, and tainted data.
Memory Issues
Memory issues are generally dangerous: they can leak potentially sensitive information (Confidentiality) if the problem involves reading memory, and they can be used to subvert the flow of execution if the problem involves writing memory (Integrity). Examples of these problems are buffer overrun/underrun, use-after-free, type overrun/underrun, missing null string termination, not allocating space for string termination, and negative character values.
Even the best programmers violate these rules by accident occasionally. From our own experience, we frequently find violations when we demonstrate CodeSonar on our customers' code. Sometimes we even find these types of problems in extremely well-tested, safety-certified code, an attestation to the fact that these problems are hard to find.
Static analysis tools can find these problems, but there is a lot of complexity involved: for the best results, the tool must perform data-flow analysis and symbolic execution. CodeSonar is particularly good at finding complex static and dynamic memory issues.
Programming Errors
This class of errors is mainly due to incorrect use of the C/C++ language: uninitialized variables, double-freeing of pointers, implicit conversions between signed and unsigned, and so on. These errors may not manifest during testing, since the error conditions may never be triggered. However, they may be exploitable and can impact Confidentiality and Integrity as well as Availability.
Dangerous Function Calls
Certain API functions are considered potentially harmful and insecure. gets is a great example: it can easily overflow the destination buffer, leading to a buffer overrun and hence impacting Integrity. Other functions may have implementation-specific behavior. These types of dangerous function calls are easy for static analysis tools to find, as they can simply search the code for calls to a list of known-dangerous functions.
Cryptographic functions are important to keep data, in motion as well as at rest, confidential. However, few people are experts in their use, and misuse of C library cryptographic functions can lead to problems: for example, the use of weak cryptography such as DES or MD5, or of functions such as crypt(). Other examples are hardcoded keys or hardcoded salt data for hashes. Problems in this category impact Confidentiality as well as Integrity and are easy to find with static analysis tools.
Tainted Data
Data injection vulnerabilities are hard to detect using conventional testing methods; you would have to do adversarial testing or pen-testing to find them. Often the input needs special formatting and extraneous content beyond what is expected from normal user input. For example, SQL injection exploits contain SQL statements embedded in the input that are intended to be interpreted by the application's underlying database. Problems in this area typically impact Integrity, but can also impact Confidentiality.
Static analysis can alert the developer to these weaknesses early in the development cycle. Data that flows into the system through some form of input (from a user, a device, a socket, anything) is traced from its source (where it enters the software) to its sink (its eventual use). Before that data is used in API calls or in any program logic, it needs to be validated. If it is not, it can lead to serious vulnerabilities such as format string injection, LDAP injection, SQL injection, or other data injection exploits. Finding these problems requires a strong static analysis tool that uses data-flow analysis and symbolic execution.
Tainted data can flow through a program in unexpected ways, so an automated tool can also play an important role by helping programmers understand these channels. In CodeSonar, the locations of sources and sinks can be visualized, and the program elements involved in flows can be overlaid on top of a regular code view. This can help developers understand the risk in their code and aid them in deciding how best to change the code to shut down the vulnerability.
A buffer overrun warning generated by CodeSonar, where the underlining shows the effect of tainted data.
In this example, first note the blue underlining on line 80. This indicates that the value of the variable pointed to by the parameter passed into the procedure is tainted by the file system. Although this fact may help a user understand the code, the most interesting parts of this warning are on lines 91 and 92. The underlining on line 91 indicates that the value returned by compute_pkgdatadir() is a pointer to some data that is tainted by the environment.
The call to strcpy() then copies that data into the local buffer named full_file_name (declared on line 84). This, of course, transfers the tainted data into that buffer. Consequently, on line 92, the red underlining shows that the buffer has become tainted by a value from the environment. The explanation for the buffer overrun confirms that the value returned by compute_pkgdatadir() is in fact a value retrieved from a call to getenv(). A user inspecting this code can thus see that there is a risk of a security vulnerability if an attacker can control the value of the environment variable.
SAST is the inspection of source and binary code to detect possible security vulnerabilities. In practice, however, it relies on automated static analysis tools that can uncover a wide range of security issues impacting a system's Confidentiality, Integrity, or Availability. Static analysis is particularly well suited to detecting coding errors, misunderstandings of programming language semantics, and the use of known-insecure library functions or poor cryptography. Advanced tools such as CodeSonar can also detect more sophisticated vulnerabilities through tainted data analysis. SAST tools are an important part of a security improvement plan and of a comprehensive development automation tool chain for improving quality and, in particular, security.
Interested in how this applies to your software? Go to go.grammatech.com and sign up for a free evaluation. One of our engineers will happily assist you in performing a scan of your code and show you the security violations that are present.
Interested in learning more? Read our guide on Integrating Static Application Security Tools (SAST) in DevSecOps