
When static analysis falls short.

Understanding the limitations of SAST and why runtime testing matters.

March 16, 2025 • 6 min read

Static Application Security Testing (SAST) has earned a permanent place in most development workflows. It catches hardcoded secrets, flags dangerous function calls and enforces coding standards before code reaches production. For teams without a dedicated security function, it often serves as the primary security control.


That role is too broad for what the technique can actually do. SAST reads source code. It does not execute it. And the gap between what code says and what a running application does is exactly where most exploitable vulnerabilities live.


What SAST Actually Does

SAST tools parse source code or compiled bytecode into an abstract representation, then run pattern-matching rules against it: dangerous function calls, missing input validation, insecure cryptographic configurations, known bad idioms. The analysis happens without executing the code, which makes it fast and easy to integrate into a CI/CD pipeline.
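The pattern-matching step can be sketched in miniature. The toy "rules" below are hypothetical and operate on raw text; real SAST engines work on an abstract syntax tree, but the principle of matching known bad idioms is the same.

```python
import re

# Two toy SAST-style rules, the kind a pattern-matching engine runs
# against parsed source. Real tools analyze an abstract representation;
# regexes over raw text are enough to illustrate the idea.
RULES = {
    "hardcoded-secret": re.compile(r"(?:password|api_key)\s*=\s*['\"]\w+['\"]", re.I),
    "sql-string-concat": re.compile(r"execute\([^)]*(?:\+|%|f['\"])"),
}

def scan(source: str) -> list[str]:
    """Return the IDs of every rule that matches the source text."""
    return [rule_id for rule_id, pattern in RULES.items()
            if pattern.search(source)]

snippet = '''
API_KEY = "s3cr3t"
cursor.execute("SELECT * FROM users WHERE id = " + user_id)
'''

print(scan(snippet))  # both rules fire on this snippet
```

Because the whole check is a lookup over static text, it runs in milliseconds, which is exactly why it fits so naturally into a CI/CD pipeline.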


The problem is that legibility in source code and exploitability at runtime are not the same thing. Static analysis can only see what is written. It has no model of what the application does when it runs with real users, a live database, deployed configuration and active sessions.


That is not a limitation that better tooling can resolve. It is structural. The technique operates on source code, and runtime behavior is something completely different.


AI-Powered SAST

The limitations described above apply to rule-based SAST tools, which raises a reasonable question: does AI-powered code analysis change the picture? Anthropic's Claude Code Security is a representative example worth examining directly.


Claude Code Security goes meaningfully beyond pattern matching. Rather than matching code against known signatures, it attempts deeper semantic analysis: tracing data flows across files, understanding how components interact, and identifying complex vulnerability patterns that rule-based tools miss. In early testing against open-source projects, the system reportedly surfaced hundreds of previously unreported vulnerabilities, including subtle memory corruption bugs that had survived years of expert review.


That is a genuine capability improvement over conventional SAST tools. But the structural boundary remains: Claude Code Security analyzes source code; it does not analyze the running application. The vulnerability categories that are invisible to static analysis remain outside its scope regardless of how sophisticated the reasoning engine is. The constraint is not intelligence. It is that the input is code, not a running system.


The practical implication is that AI-assisted code analysis and dynamic testing address different parts of the attack surface, and strong coverage requires both.


The Vulnerabilities That Fall Through

The gap becomes concrete when you look at what static analysis cannot cover. The common thread is the same in every case: these are things that exist only when the application is running.


Some vulnerabilities are invisible from source code because they are properties of behavior, not implementation. The code may be written correctly, with no suspicious patterns and no obvious mistakes, and the vulnerability still exists. It only surfaces when the application is interacting with real requests, real sessions, and a live environment.
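Broken object-level authorization is the classic case. The hypothetical handler below would sail through static analysis: parameterized lookup, no secrets, no dangerous calls. The flaw is what is missing, and an absence does not match a pattern.

```python
# Hypothetical request handler: statically clean, yet an IDOR at runtime.
# Nothing here matches a dangerous signature. The vulnerability is the
# absent check that the requesting user actually owns the invoice.

INVOICES = {  # stand-in for a live database
    101: {"owner": "alice", "amount": 1200},
    102: {"owner": "bob", "amount": 450},
}

def get_invoice(requesting_user: str, invoice_id: int) -> dict:
    invoice = INVOICES[invoice_id]
    # Missing: if invoice["owner"] != requesting_user: raise PermissionError
    return invoice

# Any authenticated user can read any invoice by iterating IDs:
print(get_invoice("alice", 102))  # alice reads bob's invoice
```

A scanner can only flag code that is present; whether the ownership check belongs here is a question about intended behavior, which only shows up when the running application serves another user's data.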


Others are invisible because they emerge from complexity. Input passes through enough layers, components interact in enough ways, or sequences of operations combine in enough edge cases that no static view of the codebase can capture what actually happens end to end. The connection between cause and consequence only becomes traceable when something is run through it.


And some are invisible simply because they do not live in the codebase at all. Dependencies, configuration, runtime state, and network behavior are part of the attack surface but not part of the source files a static tool reads.
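Configuration drift is a simple illustration. In this hypothetical settings module, the defaults committed to the repository look locked down; the values that matter are decided at deploy time by the environment, which no scan of the source files can see.

```python
import os

def load_settings() -> dict:
    # Safe-looking defaults in source; the effective values come from
    # the deployment environment, outside any static view of the repo.
    return {
        "debug": os.environ.get("APP_DEBUG", "false") == "true",
        "cors_origins": os.environ.get(
            "CORS_ORIGINS", "https://app.example.com"
        ).split(","),
    }

# Simulate a misconfigured production deployment:
os.environ["APP_DEBUG"] = "true"
os.environ["CORS_ORIGINS"] = "*"

settings = load_settings()
print(settings)  # debug enabled, CORS wide open, source unchanged
```

The same source produces a hardened application in one environment and an exposed one in another, which is exactly why the deployed state is the thing that needs testing.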


Each of these categories shares the same root cause. The vulnerability does not live in the code. It lives in the behavior.


Attackers Don't Read Code

The gap described above is not academic; it maps directly to how real-world breaches happen. This is not a criticism of SAST tools: they were not designed to answer runtime questions, and holding them to that standard is a category error. The problem lies in the assumption that tends to follow their use. Teams that run static analysis and consider their security posture addressed have made a logical leap the tooling does not support. Source code coverage and security coverage are not the same thing, and in most of the categories that matter to attackers, they do not overlap at all.


That distinction matters because of how attackers actually operate. They do not audit source code. They interact with running systems, sending requests, observing responses, probing boundaries, looking for behavior that diverges from what the application is supposed to allow. The surface they attack is the deployed application, and that is the surface that needs to be tested.


Where Dynamic Testing Fits In

Dynamic Application Security Testing (DAST) operates against a deployed application. It sends HTTP requests, observes responses, and draws conclusions from actual behavior rather than inferred behavior. Critically, it requires no source code access and no knowledge of the underlying language or framework, which means it tests what is actually reachable and exploitable from the outside, the same vantage point an attacker has.
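A dynamic probe can be sketched in a few lines. Everything here is hypothetical: `fetch` stands in for an HTTP client call (e.g. `requests.get`), and the stub server simulates a vulnerable target so the sketch is self-contained. The point is the vantage point: the probe knows nothing about the code, only about responses.

```python
# Minimal black-box probe in the DAST style: interact with the deployed
# app over HTTP and judge purely by observed responses.

def probe_object_access(fetch, session_token: str,
                        own_id: int, other_id: int) -> str:
    """Compare responses for a resource we own vs. one we should not see."""
    own = fetch(f"/invoices/{own_id}", token=session_token)
    other = fetch(f"/invoices/{other_id}", token=session_token)
    if other["status"] == 200 and other["body"] != own["body"]:
        return "possible-idor"  # a foreign object was served successfully
    return "access-denied-as-expected"

def vulnerable_server(path, token):
    # Stub target: serves any object to any session, ignoring ownership.
    return {"status": 200, "body": f"data for {path}"}

print(probe_object_access(vulnerable_server, "alice-session", 101, 102))
# reports "possible-idor" against the stub server
```

Note that this finds the exact flaw the static scanner above could not: the missing authorization check surfaces as observable behavior, no source access required.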


The coverage it provides directly closes the gaps described earlier. Vulnerabilities that are invisible to static analysis because they are properties of behavior, not code (authorization enforcement, session integrity, runtime configuration, how the application responds under real conditions) are exactly what dynamic testing is designed to surface. And because results are based on observed behavior rather than code patterns, every finding reflects the deployed state of the application, not a theoretical risk inferred from source files.


This is also the answer to the obvious objection: if SAST is already in the pipeline, why add another tool? Because they are not testing the same thing. SAST catches implementation mistakes early, before they reach a running environment, and that is genuinely useful. But it cannot tell you whether your deployed application can be bypassed, probed, or manipulated by someone interacting with it from the outside. DAST can. The two tools together cover the full surface. Either one alone leaves a significant part of it untested.


Closing the Gap with Roguesight

The case for runtime testing is clear. The barrier has typically been operational: enterprise DAST tools carry significant setup overhead, require dedicated security expertise to configure and interpret, and are built for organizations with security teams to match. For startups and SMBs, that overhead has made runtime testing feel out of reach.


Roguesight is built to remove that barrier. It is a DAST platform that analyzes deployed applications directly, with no source code access required, no agent to install, and no per-target configuration process. Analysis runs on a schedule, on demand, or after a deployment, and findings include enough context to act on without a security background.


The goal is to make runtime testing a routine part of how applications are maintained, not a periodic exercise that requires specialist involvement to run. If your current security program stops at static analysis, Roguesight is the layer that tests what your code does, not just what it says.


In Summary

Static analysis tests code. Dynamic testing tests behavior. They answer different questions, and the questions that matter most to an attacker (whether authorization can be bypassed, whether a payload can be injected, whether an endpoint leaks data) are answered by running the application, not reading it.


Teams that rely on SAST as their primary security control have significant coverage gaps in the categories that account for most real-world breaches. Adding runtime testing closes those gaps, not by replacing static analysis, but by testing the layer it cannot reach.


Test your running application, not just your code.

How secure is your application, really?

Run a comprehensive assessment and find out before attackers do.

Get Started