
What to Threat Model -- Continuous Threat Modeling

29 August 2022

This article is part of the What to Threat Model series and targets Security Champions/Architects conducting threat modeling exercises.

Author: John Steven

 

As organizations threat model applications and infrastructure in earnest and become more confident with what to threat model, questions about "what to model" don't go away. Instead, those questions evolve into "What should threat modeling output?" or "What does ‘done’ look like?" In many ways these questions are just different ways to tackle "What's in and what's out of our threat model?"

 

Continuous is Key

Throughout this and other content, one concept consistently re-emerges: threat modeling is not a one-off point-in-time activity but instead is a capability, continuously applied.  This, more than any other factor or technique, helps security champions understand "what to output" and "what done looks like". The last article, What to Threat Model in Each Software Release Cycle - For SSG Owners, encouraged modelers to align threat models with the planned functionality of a software lifecycle: model what will be released and what's changing in support of that release.  The implication: by aligning threat modeling with software release -- a continually iterative delivery model -- modeling itself becomes naturally continual and iterative as well.

 

Deliver Threat Models Aligned to Delivery Teams' Sprints

The prior blog entry presented the following table as a means of tracking threat modeling activities against analogous software delivery activities.

Functional Area          | Threat Modeling Activity
------------------------ | -------------------------------------------------------------------
User Stories Elicitation | Identify adversaries, elicit misuse/abuse cases
Diagramming              | Identify attack surfaces, diagram process flows, annotate controls
Architecture/Design      | Threat enumeration, flaw identification, secure design
Defect Discovery         | Flaw-hypothesis testing, security-test planning
 

In a continuous and incremental model, deciding what and when to threat model becomes significantly less stressful because the work is organized into the same lifecycle phases engineering teams already socialize, using familiar terms. Doing so changes the delivery model: threat modeling activities proceed in step with their functional counterparts, at the same cadence.

Just as with feature streams, delivering an involved threat modeling stream is likely to take multiple sprints. Development teams never promise all release features within a single sprint, so security champions shouldn't fret about being unable to threat model a whole system within a sprint either. And they certainly shouldn't suggest holding up a sprint or release to do so. Instead, Security Champions should use the table above to express the alignment between threat modeling and planned feature streams, setting delivery expectations accordingly.

 

Deliverables Should Impact the Critical Path to Delivery

As an example, imagine being tasked to "Threat Model the Banking Application". Maybe the business is planning a big 3.0 release of the new "bank anywhere" platform and a threat model hasn't been done since 2.0 -- so there's a lot of work to do. As Security Champion, you consider the 3.0 roadmap and its two-week sprints, and focus particular attention on the first four sprints -- the horizon beyond which things become significantly more hand-wavy and uncertain.

As with this or any example, a group of sprints is likely to include a mix of activities: design spikes resolving unanswered questions; backlog items driving implementation; and the ever-present raft of Slack discussions, meetings, and other important decision-making that happens outside the ticketed flow. Using the table above, align the threat modeling ask accordingly:

  • Build misuse/abuse cases from the Slack discussions about intended use, and deliver those abuse cases to engineering alongside the user stories on their engineering wiki;
  • Diagram and identify attack surfaces in the functionality undergoing a design spike as a participant, incorporating attack surface and process-flow diagrams into the team's documented designs;
  • Conduct vulnerability-hypothesis testing, delivering code review guidance and augmenting security test plans where sprint planning selects backlog items for implementation.

To experienced practitioners, the above may seem radical: threat modeling aligned with a delivery lifecycle and woven intimately into their artifacts and deliverables. But this is the goal of SecDevOps: stop producing reports the organization doesn't read and act on. Instead, work within the cadence of delivery, and where possible even accelerate it. Make an impact on the posture of each release.

 

What’s Deliverable Changes

Realize that there’s a tradeoff: modeling early enough to be truly proactive means the specificity or availability of key decisions and artifacts may not be sufficient to discover concrete flaws and propose concrete guidance. As we saw in the previous discussion of lifecycle phases and their appropriate threat modeling activities, inputs and outputs evolve. And while misuse/abuse cases and diagrams are important outputs, they’re more work product than deliverable.

Yes, a list of flaws, along with secure designs that mitigate them, is a primary and important deliverable. But the availability/sufficiency tradeoff means it will often be uncertain whether many flaws are credibly discoverable and exploitable, or whether sufficient context and information exists to design their mitigation. That’s OK. Practitioners should not report on uncertain findings -- doing so erodes developer trust. Continuous threat modeling has an answer for this:

IT’S OK FOR THREAT MODELING TO PRODUCE FURTHER QUESTIONS. Where flaws are unverifiable, or where the design isn’t specific enough for a vulnerability hypothesis to be explored, continuous threat modeling removes these flaws from the secure design process (and from any escalated report) and places them on the backlog for appropriate Defect Discovery exercises. Threat modelers then coordinate with engineers to verify or disprove those vulnerability hypotheses by the best available means. This may include:

  • Conducting a static scan of config or source (e.g. there’s a flaw if these services share certificate identity) -- a sketch of such a check follows this list;
  • Adding a checklist item to PR/MR peer review (e.g. make sure folks don’t introduce a different persistent storage mechanism); or
  • Conducting a targeted manual source code review of first-, second-, or third-party code (e.g. the posture depends on how Pivotal Cloud Foundry handles JWT verification).
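
For that first item, one hedged way to make the hypothesis repeatable is a small script run alongside the team's other static checks. The sketch below is a minimal example assuming each service keeps a config.yaml with a tls.cert_fingerprint field -- the directory layout and field name are illustrative assumptions, not a prescription.

```python
# Minimal sketch: flag services whose configuration declares the same
# certificate identity, which would support the "shared certificate identity"
# flaw hypothesis. The layout (services/<name>/config.yaml) and the
# tls.cert_fingerprint field are assumptions for illustration only.
from collections import defaultdict
from pathlib import Path

import yaml  # PyYAML


def find_shared_cert_identities(config_root: str) -> dict:
    """Return a map of certificate fingerprint -> services that declare it, for fingerprints shared by more than one service."""
    cert_to_services = defaultdict(list)
    for config_file in Path(config_root).glob("*/config.yaml"):
        config = yaml.safe_load(config_file.read_text()) or {}
        fingerprint = (config.get("tls") or {}).get("cert_fingerprint")
        if fingerprint:
            cert_to_services[fingerprint].append(config_file.parent.name)
    # Only fingerprints claimed by more than one service constitute a finding.
    return {fp: svcs for fp, svcs in cert_to_services.items() if len(svcs) > 1}


if __name__ == "__main__":
    for fingerprint, services in find_shared_cert_identities("services").items():
        print(f"Shared certificate identity {fingerprint}: {', '.join(services)}")
```

A check like this can run in CI next to existing static analysis, so the hypothesis is re-verified on every change rather than at a single point in time.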

Some hypotheses are best proved with testing. This may include:

  • Validating cloud resource state/configuration (e.g. as long as operators uphold planned namespace and security group compartmentalization);
  • Dynamic security testing (e.g. ...presuming the system remains immune to forced browsing attacks) -- see the sketch after this list;
  • Custom security test planning (e.g. validate system endpoints don’t expose XXX); or
  • Directives for expert penetration testing (internal or external) (e.g. when you test, attempt to craft form or API request data that places objects on the following SNS queue...).
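
As a hedged example of the dynamic-testing route, the forced-browsing hypothesis above could be captured as an automated test that runs with the team’s existing suite. The base URL and endpoint list below are placeholders for whatever the threat model actually identifies.

```python
# Minimal sketch of a forced-browsing check: endpoints the threat model says
# require authentication must reject anonymous requests. The base URL and the
# endpoint list are illustrative placeholders.
import pytest
import requests

BASE_URL = "https://banking-3-0.example.test"
PROTECTED_ENDPOINTS = [
    "/accounts",
    "/transfers",
    "/admin/reports",
]


@pytest.mark.parametrize("path", PROTECTED_ENDPOINTS)
def test_protected_endpoint_rejects_anonymous_access(path):
    """Anonymous requests to protected endpoints should be denied, not served."""
    response = requests.get(BASE_URL + path, allow_redirects=False, timeout=10)
    # An explicit denial (401/403) or a redirect to login (3xx) passes;
    # serving the resource to an unauthenticated caller fails the check.
    assert response.status_code in (301, 302, 303, 307, 308, 401, 403)
```

If the test fails, the hypothesis is confirmed and the flaw moves onto the release as described below; if it consistently passes, the hypothesis is disproved for that release.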

When these follow-on activities occur, they may uncover tactical security defects, which are handled through the normal defect-management process; or they may confirm the existence of a hypothesized flaw. If confirmed, the flaw is added to the release, triaged for impact and likelihood, and then addressed accordingly.

Pushing threat modeling hypotheses to code analysis or security testing is not ‘kicking the can’. It’s continuously threat modeling: acknowledging that work done in security design flows through the secure software lifecycle just as functional pursuits do. Some vulnerability hypotheses may not be verifiable proactively through defect discovery. In these cases, practitioners work with engineers and operators to place logging, monitoring, or other auditing in the operational environment. 
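
One way to picture this flow -- purely illustrative, not prescribed tooling -- is to treat each vulnerability hypothesis as a record with an explicit verification route and state, as in the sketch below.

```python
# Illustrative model of a vulnerability hypothesis in continuous threat
# modeling: raised during design, routed to the best verification means, then
# either confirmed (and triaged for impact/likelihood) or disproved.
# All names here are assumptions for the sake of the example.
from dataclasses import dataclass
from enum import Enum
from typing import Optional


class Route(Enum):
    STATIC_ANALYSIS = "static analysis"
    PEER_REVIEW_CHECKLIST = "PR/MR checklist"
    SECURITY_TESTING = "security testing"
    RUNTIME_MONITORING = "logging/monitoring"


class State(Enum):
    HYPOTHESIZED = "hypothesized"
    IN_VERIFICATION = "in verification"
    CONFIRMED = "confirmed"
    DISPROVED = "disproved"


@dataclass
class VulnerabilityHypothesis:
    summary: str
    route: Route
    state: State = State.HYPOTHESIZED
    impact: Optional[str] = None      # set once confirmed and triaged
    likelihood: Optional[str] = None  # set once confirmed and triaged

    def confirm(self, impact: str, likelihood: str) -> None:
        """A confirmed hypothesis becomes a flaw on the release, triaged for action."""
        self.state = State.CONFIRMED
        self.impact = impact
        self.likelihood = likelihood

    def disprove(self) -> None:
        self.state = State.DISPROVED


# Example: the shared-certificate hypothesis, routed to static analysis.
hypothesis = VulnerabilityHypothesis(
    summary="Services share certificate identity",
    route=Route.STATIC_ANALYSIS,
)
hypothesis.confirm(impact="high", likelihood="medium")
```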

 

Reporting Out Instead of an Output Report

But wait! Executives wanted Banking 3.0 threat modeled fully -- won’t continuous and incremental modeling disappoint them? It shouldn’t. By evolving away from a single deliverable report toward the critical-path lifecycle deliverables described in the previous paragraphs, security practitioners actually improve system security posture. Practitioners who have historically taken the ‘report with findings’ approach to threat modeling will recognize the same content from their report in the output above -- both artifacts and hypotheses. It’s just an ‘exploded view’ of the report.

Executives will still request a ‘roll up’ view of the ‘threat model’. Many still prefer a “read and react” decision-making model as part of a major release milestone or gate, and that’s OK. Roll up the information threat modeling has produced and report out/up on its current state at that milestone. The skeleton of such a report out/up might be:

“As part of the platform 3.0 release, we’re tracking approximately 10 potential risks. Two (2) of those risks are being mitigated within this release. They were introduced when we moved from our datacenter virtualization platform to the Azure-based persistence tier, and we can’t afford to release without mitigating that exposure. We’ll brief you on another one (1) that will dominate the next few sprints of the 3.1 release and may affect the way we handle MFA enrollment for customers -- but we don’t presently see reason to delay or halt 3.0. Another three of the risks are tied to our use of the new development framework, and engineering is tracking ticketed mitigations and compensating controls through pull request reviews and their custom SAST rules. The remainder we’ve assigned to QA, and they’re verifying whether exploitation is possible. If so, we’ll take action in the 3.1 release.”

 

This style of ‘report out’ shows that security is working with engineering, using the collective tools and activities at their disposal to handle each risk as its type and impact dictate. There’s an obvious posture change evident as well: security isn’t saying “No”; it’s stepping in to handle secure design where it can, validate design naturally through defect discovery, or escalate where things are complex. Executives engage constructively. In the example above, they’ll key into the one flaw affecting 3.1: “How much of an impact on the roadmap can we expect from security? Does what we know about the flaw change the way we interact with customers, and does it have operational implications?” Or into the two risks working through the critical path of 3.0: “What does the feature slip look like because we’ve chosen to mitigate those identified risks? How will we avoid this in future releases, and in our other application teams?”

 

Conclusion

Continuous threat modeling is a natural and essential evolution of practice into a DevSecOps model. It increases the value of security activities by aligning them -- and their focus -- with critical path delivery concerns. It produces the same holistic deliverables, but over time and in concert with the SDL. 

