You just got the latest SAST report. 1,247 issues. Critical: 3. High: 12. Medium: 89. Low: 1,143. Your team spends two sprints addressing them. You push to production feeling secure. Six weeks later, you’re explaining a data breach to the board because an attacker chained two medium-severity flaws the tool barely noticed.
This isn’t a hypothetical. It’s the daily reality for engineering teams relying on out-of-the-box static analysis. The tools aren’t broken. Your configuration and workflow are. You’re scanning for the wrong things.
> “SAST tools are designed to find bugs, not breaches. The gap between those two concepts is where your real risk lives.” – Senior Security Engineer, FinTech Unicorn
Traditional SAST excels at finding well-defined coding errors: buffer overflows in C, SQL injection patterns in string concatenation. Modern web applications, built on frameworks like Spring, Django, or Express, rarely have those classic flaws. The vulnerabilities have moved up the stack. They’re in the business logic, the architecture, and the interaction between components.
Your tool is flagging missing Javadoc comments as a “low-severity” issue while silently passing a misconfigured authentication filter that allows privilege escalation. This guide provides a tactical, step-by-step method to retrain your focus and reconfigure your tools.
## Step 1: Kill the Default Rule Set
Your first action is the most drastic. Open your `sonar-project.properties`, your `.eslintrc.js`, your Checkstyle XML, or your Semgrep configuration. Find the line that says something like `ruleset = "standard"` or `extends: recommended`. Comment it out.
These default sets are designed for breadth, not depth. They aim to please a generic “developer” by catching everything from security bugs to style guide violations. For security scanning, this is poison. The noise drowns out the signal, leading to alert fatigue and critical misses.
Start from zero. Build a rule set specific to your tech stack, framework, and attack surface.
```properties
# BAD - The default approach: scan everything with the generic rule set
# sonar.java.source = 11
# sonar.sources = src
# sonar.java.spotbugs.ruleSet = "basic.xml"

# GOOD - The intentional approach: security-only rules, known noise excluded
sonar.java.source = 11
sonar.sources = src
sonar.java.spotbugs.ruleSets = /config/security-only.xml
sonar.exclusions = **/test/**, **/config/**
```
## Step 2: Map Your Actual Attack Surface, Not Your Codebase
Draw a box around your application. What can an attacker actually touch? List every entry point:
- REST API endpoints (especially public/unauthenticated ones)
- Authentication and authorization filters
- File upload handlers
- Data export functions
- Admin panels or internal APIs exposed via misconfiguration
- Third-party library entry points (e.g., custom plugins)
Now, trace the data flow. For a REST endpoint, follow the HTTP request from the controller, through the service layer, to the database query and back. This is your critical code path. This is where 90% of your scanning effort should focus.
Let’s look at a deceptively simple Spring Boot controller:
```java
import javax.servlet.http.HttpServletRequest;

import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.http.ResponseEntity;
import org.springframework.web.bind.annotation.*;

@RestController
@RequestMapping("/api/user")
public class UserController {

    @Autowired
    private UserService userService;

    @GetMapping("/profile/{userId}")
    public ResponseEntity<UserProfile> getProfile(@PathVariable Long userId, HttpServletRequest request) {
        // Step A: incoming request with userId taken from the path
        Long sessionUserId = (Long) request.getSession().getAttribute("authenticatedUserId");
        // Flaw 1: missing authorization check. Does sessionUserId equal userId?
        // A default SAST rule will not catch this; it's a logic flaw.
        UserProfile profile = userService.getUserProfile(userId); // Step B: call to service
        return ResponseEntity.ok(profile);
    }

    @PostMapping("/update")
    public ResponseEntity<String> updateProfile(@RequestBody ProfileUpdateDTO updateDto, HttpServletRequest request) {
        Long sessionUserId = (Long) request.getSession().getAttribute("authenticatedUserId");
        // Flaw 2: mass assignment risk. The DTO is bound wholesale from the request
        // body; overwriting userId here still leaves every other client-set field trusted.
        updateDto.setUserId(sessionUserId);
        boolean updated = userService.updateUserProfile(updateDto);
        return updated ? ResponseEntity.ok("Updated") : ResponseEntity.status(500).build();
    }
}
```
A standard SAST scan on this might flag nothing, or maybe a trivial “injection” warning if it sees a string in `updateDto`. It misses the two architectural security flaws entirely.
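Neither flaw needs framework machinery to fix; both reduce to a few lines of ordinary, unit-testable logic. A minimal sketch (the `ProfileAuthz` helper is hypothetical, not part of the codebase above):

```java
// Hypothetical helper capturing both fixes as plain, testable logic.
public class ProfileAuthz {

    // Fix for Flaw 1: an authenticated user may only read their own profile.
    public static boolean canAccess(Long sessionUserId, Long requestedUserId) {
        return sessionUserId != null && sessionUserId.equals(requestedUserId);
    }

    // Fix for Flaw 2: ownership comes from the server-side session, never from
    // request-body fields. The client-supplied value is deliberately discarded.
    public static Long resolveOwnerId(Long sessionUserId, Long clientSuppliedUserId) {
        return sessionUserId;
    }
}
```

In `getProfile`, the controller would call `canAccess` before touching the service layer and return `403 Forbidden` on failure; in `updateProfile`, `resolveOwnerId` makes the overwrite of the DTO's `userId` explicit and reviewable.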
## Step 3: Write Custom Rules for Framework-Specific Flaws
This is the core of the shift. You need rules that understand Spring Security contexts, JWT validation flows, or Express middleware chains.
Using a tool like Semgrep, which allows custom rules in a readable YAML format, you can target these patterns. Let’s create a rule for the missing authorization check from Step 2.
```yaml
rules:
  - id: missing-user-id-authorization
    patterns:
      - pattern: |
          $SESSION_ID = (Long) $REQUEST.getSession().getAttribute("authenticatedUserId");
          ...
          $PROFILE = $SERVICE.getUserProfile($PATH_ID);
      # No equality check between the session ID and the path ID in between:
      - pattern-not: |
          ...
          if ($SESSION_ID.equals($PATH_ID)) { ... }
          ...
      - pattern-not: |
          ...
          if ($SESSION_ID == $PATH_ID) { ... }
          ...
    message: "Controller method fetches a user profile by path variable without verifying it matches the authenticated user's ID. This is a Broken Object Level Authorization (BOLA) vulnerability."
    languages: [java]
    severity: ERROR
```
This rule looks for the specific pattern of retrieving a session ID and a path variable ID, calling a service method with the path variable, and not having an equality check between them. This is a BOLA flaw, #1 on the OWASP API Security Top 10.
Another critical custom rule: detecting JWT signing algorithm misconfiguration, which can lead to token forgery.
```yaml
rules:
  - id: jwt-algorithm-none-possible
    patterns:
      - pattern-either:
          - pattern: Jwts.parser().setSigningKey($KEY).parseClaimsJws($JWT)
          - pattern: Jwts.parserBuilder().setSigningKey($KEY).build().parseClaimsJws($JWT)
      - metavariable-regex:
          metavariable: $KEY
          regex: "(?i).*none.*|.*key.*|.*secret.*"
    message: "JWT parser is configured with a key, but the key variable name suggests a hardcoded or weak secret. Also ensure the parser is explicitly set to reject the 'none' algorithm."
    languages: [java]
    severity: WARNING
```
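Custom rules deserve their own tests. Semgrep's rule-test convention (`semgrep --test`) runs a rule against a sibling fixture file, where `// ruleid: <id>` marks a line that must be flagged and `// ok: <id>` marks one that must not. The fixture is scan input, so it does not need to compile; a sketch for the BOLA rule (method and type names are illustrative, and whether the `ok` case passes depends on the exact `pattern-not` form you ship):

```java
// missing-user-id-authorization.java -- fixture for `semgrep --test`
public Object bad(HttpServletRequest request, Long userId) {
    // ruleid: missing-user-id-authorization
    Long sessionUserId = (Long) request.getSession().getAttribute("authenticatedUserId");
    UserProfile profile = userService.getUserProfile(userId);
    return profile;
}

public Object good(HttpServletRequest request, Long userId) {
    Long sessionUserId = (Long) request.getSession().getAttribute("authenticatedUserId");
    if (sessionUserId.equals(userId)) {
        // ok: missing-user-id-authorization
        UserProfile profile = userService.getUserProfile(userId);
        return profile;
    }
    throw new SecurityException("forbidden");
}
```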
## Step 4: Integrate Scanning into the Developer Workflow, Not Just CI/CD
Finding a critical flaw during a nightly build is late. Finding it during a PR review is better. Finding it as the developer writes the code is best.
Configure your custom rule set to run in three places:
- IDE Plugin (Real-time): Plugins for IntelliJ or VS Code run your Semgrep or SonarLint rules locally. The developer sees a squiggly line under the flawed code as they type.
- Pre-commit Hook (Gatekeeping): A Git pre-commit hook runs a fast subset of the most critical rules. It can block commits that introduce clear, high-severity vulnerabilities.
- CI/CD Pipeline (Comprehensive): The full scan runs here, including deeper data-flow analysis that might be too slow for local work.
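For the middle tier, if you use the pre-commit framework, the hook can be declared rather than hand-written. The Semgrep project publishes a pre-commit hook; the repo URL, `rev`, and rule path below are assumptions to verify against the current Semgrep documentation:

```yaml
# .pre-commit-config.yaml -- sketch; pin rev to a real Semgrep release.
repos:
  - repo: https://github.com/returntocorp/semgrep
    rev: v1.50.0
    hooks:
      - id: semgrep
        args: ["--config", "rules/critical.yaml", "--error", "--quiet"]
```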
Your CI pipeline configuration (e.g., GitHub Actions) should reflect this prioritization:
```yaml
name: Security Scan
on: [push, pull_request]

jobs:
  semgrep-sast:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3
      - name: Run Semgrep (Critical Rules Only on PR)
        run: |
          if [ "${{ github.event_name }}" = "pull_request" ]; then
            semgrep scan --config /rules/critical-aws-logic --error  # Fails the build
          else
            semgrep scan --config /rules/full-security-audit --sarif > results.sarif
            # Upload for dashboard, don't fail nightly build automatically
          fi
```
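The nightly branch of that script leaves `results.sarif` on disk; to get it into a dashboard, a follow-up step can publish it to GitHub code scanning. A sketch using GitHub's `upload-sarif` action (verify the action version against GitHub's current docs):

```yaml
      - name: Upload SARIF to code scanning
        if: github.event_name != 'pull_request'
        uses: github/codeql-action/upload-sarif@v3
        with:
          sarif_file: results.sarif
```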
## Step 5: Triage Based on Exploitability, Not Severity
Your tool says “High Severity: Potential SQL Injection.” Your job is to ask three questions:
- Is the source user-controllable? Is the string in question built from a `@RequestParam`, or is it a hardcoded constant like `"SELECT * FROM countries"`?
- Is there a sanitizing sink in the data flow? Does the data pass through a JPA repository method (`findById()`), a parameterized query (`PreparedStatement`), or a strict ORM that makes injection impossible?
- What is the context? Is this in the public login API or an internal admin microservice behind three network firewalls?
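The first two questions can be made concrete without a database. A throwaway sketch (class and table names are invented) showing why "user-controllable source plus string-concatenation sink" is the combination that matters:

```java
// Sketch: attacker input pasted into the SQL *structure* via concatenation.
// Pure string manipulation; no database needed to see the problem.
public class TriageDemo {

    // Vulnerable pattern: the input's quote breaks out of the string literal.
    public static String concatQuery(String userInput) {
        return "SELECT * FROM reports WHERE id = '" + userInput + "'";
    }

    public static void main(String[] args) {
        String attack = "x' OR '1'='1";
        // prints: SELECT * FROM reports WHERE id = 'x' OR '1'='1'
        System.out.println(concatQuery(attack));
        // With a PreparedStatement ("... WHERE id = ?") the same input stays a
        // plain string value; there is no concatenation step for it to escape.
    }
}
```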
Create a simple triage matrix for your team:
| Tool Severity | User-Controllable? | Sanitized/Safe Sink? | Context | Actual Priority |
|---|---|---|---|---|
| CRITICAL | No (Hardcoded) | N/A | Internal Service | Ignore |
| MEDIUM | Yes (URL Param) | No (String concat) | Public API | CRITICAL (Fix Now) |
| HIGH | Yes | Yes (Parameterized Query) | Public API | LOW (Verify & Document) |
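One way to keep the matrix honest is to make it executable: a tiny triage function versioned alongside the rules. The enum names and exact mapping below are illustrative; encode your own team's policy:

```java
// Sketch: the triage matrix as code. Note that tool-reported severity is
// deliberately NOT an input; exploitability drives the outcome.
public class Triage {
    public enum Priority { IGNORE, LOW, HIGH, CRITICAL }

    public static Priority triage(boolean userControllable, boolean safeSink, boolean publicContext) {
        if (!userControllable) return Priority.IGNORE;  // nothing for an attacker to control
        if (safeSink) return Priority.LOW;              // verify and document
        return publicContext ? Priority.CRITICAL : Priority.HIGH;
    }
}
```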
This manual triage step is non-negotiable. It turns your SAST from a noisy liability into a precision-guided tool. Platforms like Codequiry, while primarily focused on provenance and similarity, are built on the same principle: the value is in accurate, context-aware results, not volume.
## The New Pipeline in Action
Let’s walk through the new flow with a developer, Sam, adding a “download report” feature.
1. While Coding: Sam’s IDE highlights a line where they build a file path using a user-provided `reportId` without sanitization. The custom rule for “Path Traversal” fires instantly. Sam fixes it, using a sanitization library.
2. At Commit: The pre-commit hook runs the “critical” rule pack. It passes.
3. In PR: The CI runs the critical rules again. A new, more subtle flaw is caught: Sam’s download method checks if the user is logged in, but not if this user is authorized to download *this specific* report. Our custom BOLA rule from Step 3 flags it. The PR is blocked. Sam adds the authorization check.
4. Nightly Full Scan: The full audit runs. It finds a medium-priority issue in Sam’s code: the log statement prints the full report ID. The triage team assesses it: the report ID is not secret, it’s in the URL already. Context: low risk. They mark it as “Accepted” with a comment, avoiding noise for the dev team.
The result? One critical flaw fixed early. One non-issue intelligently ignored. Zero false positives bothering the team. This is what effective static analysis looks like.
Stop scanning code. Start scanning for exploitable vulnerabilities in your specific application. Toss the generic rule book. Write your own. Your production database will thank you.