
#FediHelp
I need to talk with someone skilled about #threatModel (digital side), specifically about 'downloads' / archiving / wget (mirroring) and online/offline work for field activities (logistics / investigation) and activist groups (water, mud, and soil investigation with sampling and DIY analysis & data production)

I need to talk, so please don't point me to NGOs (I already know them, and I've been there too).

It's about a holistic security approach to this very specific niche:
downloading things, offline-first access, sharing (see Kiwix, and the Kiwix interview at APC.org);
being up a mountain, down by a river, in a sewer system, or the like;
or around floods in streets / towns / cities / countryside;
radio (SDR) scanning in the field, and emergency data transmission / copying.
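
For the wget (mirroring) piece specifically, a minimal sketch of the classic offline-mirror invocation, wrapped in Python (the URL is a placeholder, and the flags may need tuning per site):

```python
import subprocess

# Classic wget flags for building an offline copy of a site.
# The target URL is a placeholder; point it at a site you may archive.
subprocess.run([
    "wget",
    "--mirror",            # recurse, and honor timestamps on re-runs
    "--convert-links",     # rewrite links so the copy browses offline
    "--adjust-extension",  # save pages with .html extensions
    "--page-requisites",   # fetch the images/CSS/JS pages need to render
    "--no-parent",         # stay below the starting directory
    "--wait=2",            # be polite to the server between requests
    "https://example.org/docs/",
], check=True)
```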

If this isn't a clear or understandable request, I'm sorry; please feel free to poke me with your questions and thoughts.

Very, very important: carbon-mascu-male alpha-stupid-survivalist boyz are not welcome in this discussion, and I'm sure you get the point, my dear fedizens (no techbros / no cryptobros, and the like: stay away)

cc @DigiDefenders @rysiek @onepict
@APC
@iffybooks @hackstub @lacontrevoie

Looking at some #AI generated #threatmodel output, I noticed it listed stealing a user's credentials and using them under the "Spoofing" category. I was uncertain: is that spoofing or elevation of privilege? So I wandered over to a #microsoft page on #stride.

They say it's spoofing, which is fine. It's reasonable. I don't care as long as we all agree.

But in that table, that's literally the only example of spoofing. There are a LOT of other kinds of things that could be called spoofing. If you're gonna have only one example of spoofing, I don't think stealing credentials is the best example.

learn.microsoft.com · Threats - Microsoft Threat Modeling Tool - Azure
Continued thread

Lastly, there's the training data. I work for #AWS (so these are strictly my personal opinions). We are opinionated about the platform. We think that there are things you should do and things you shouldn't. If you have deep knowledge of anything (Microsoft, Google, NodeJS, SAP, whatever) you will have informed opinions.

The threat models that I have seen, that use general purpose models like Claude Sonnet, include advice that I think is stupid because I am opinionated about the platform. There's training data about AWS in the model that was authored by not-AWS. And there's training data in the model that was authored by AWS. The former massively outweighs the latter in a general-purpose, trained-on-the-Internet model.

So internal users (who are expected to do things the AWS way) are getting threats that (a) don't match our way of working, and (b) they can't mitigate anyway. For example, I saw an AI-generated threat of brute-forcing a Cognito token. While the possibility of that happening (much like buying a winning lottery ticket) is non-zero, it is not a threat that a software developer can mitigate. There's nothing you can do in your application stack to prevent, detect, or respond to it. You're accepting that risk, like it or not, and I think we're wasting brain cells and disk sectors thinking about it and writing it down.

The other one I hate is when it tells you to encrypt your data at rest in S3. Try not to: there's no action for you to take. The thing you control is which key does the encrypting and who can use that key.
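
As a sketch of what that choice looks like in practice (bucket name and key ARN below are placeholders, not from any real account):

```python
import boto3

s3 = boto3.client("s3")

# S3 already encrypts new objects by default (SSE-S3). The decision you
# actually own is which key encrypts the data and who may use that key:
# here, a customer-managed KMS key set as the bucket default.
s3.put_bucket_encryption(
    Bucket="my-example-bucket",
    ServerSideEncryptionConfiguration={
        "Rules": [
            {
                "ApplyServerSideEncryptionByDefault": {
                    "SSEAlgorithm": "aws:kms",
                    "KMSMasterKeyID": "arn:aws:kms:eu-west-1:111122223333:key/EXAMPLE",
                },
                "BucketKeyEnabled": True,  # reduces KMS request costs
            }
        ]
    },
)
```

Who can use that key is then governed by the KMS key policy, which is where the real threat-model conversation belongs.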

So if you have an area of expertise, the majority of the training data in any consumer model is worse than your knowledge. It is going to generate threats and risks that will irritate you.

4/fin

Continued thread

Threat models evolve over time, the same as your software does. Nobody is building a save/load feature into their AI-powered threat modeling tool. Getting deterministic output from consumer-grade LLMs is not a given, so even if you DO create save/reload capability, it's imperfect.

All the tools I've seen start every session from a blank sheet of paper. So if you're revisiting an app that you threat modeled before, because you want to update your model, you're going to start from scratch.
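
A minimal sketch of what save/reload could look like, assuming the tool can emit its findings as structured records (the "title" key is a made-up schema, not any real tool's format):

```python
import json
from pathlib import Path

STATE = Path("threat_model.json")

def save_model(threats: list[dict]) -> None:
    # Persist this session's findings so the next run has a baseline.
    STATE.write_text(json.dumps(threats, indent=2))

def load_model() -> list[dict]:
    # Reload the previous baseline; empty list on the first run.
    return json.loads(STATE.read_text()) if STATE.exists() else []

def new_threats(old: list[dict], new: list[dict]) -> list[dict]:
    # Naive diff keyed on the threat title. Real tooling would need
    # stable IDs, because LLM wording drifts between sessions.
    seen = {t["title"] for t in old}
    return [t for t in new if t["title"] not in seen]
```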

3/n

Continued thread

Related to this, nobody seems to account for the fact that LLMs bullshit sometimes. If you pin someone down and ask, "the user of your AI-powered threat modeller: do they know how to do a threat model without AI?", many people will say "yes," because to say "no" is to admit that people will be blindly following LLM output that might be total bullshit.

The goal, however, of many of these systems is to make threat modeling more accessible to people who don't know how to do it. To do that, though, you'd have to be more skeptical about your user, and spend some time educating them. Otherwise, they leave the process no smarter than they began.

Honestly, I think a lot of people think the threat model is going to be done entirely by the AI and they want to build a system where the human just consumes and uses it.

2/n

I have seen a lot of efforts to use an #LLM to create a #ThreatModel. I have some insights.

Attempts at #AI #ThreatModeling tend to do 3 things wrong:

  1. They assume that the user's input is both complete and correct. The LLM (in the implementations I've seen) never questions "are you sure?" and never prompts the user with "you haven't told me X; what about X?" (a prompt sketch addressing this follows the list).
  2. Lots of teams treat a threat model as a deliverable. Like we go build our code, get ready to ship, and then "oh, shit! Security wants a threat model. Quick, go make one." So it's not this thing that informs any development choices during development. It's an afterthought that gets built just prior to #AppSec review.
  3. Lots of people think you can do an adequate threat model with only technical artifacts (code, architecture, data flow, documentation, etc.). There's business context that needs to be part of every decision, and teams are just ignoring it.
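
On point 1, a hypothetical system prompt that forces clarifying questions, purely as a sketch of the idea:

```python
# Hypothetical system prompt addressing point 1: make the model ask
# before it answers, instead of assuming the input is complete.
SYSTEM_PROMPT = """You are assisting with a threat model.
Before listing any threats, check the description for missing basics:
trust boundaries, authentication, data classification, and deployment
environment. If anything is missing, ask about it and stop.
Only produce threats once those questions have been answered."""
```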

1/n

Some of my colleagues at #AWS have created an open-source, serverless, #AI-assisted #threatmodel solution. You upload architecture diagrams to it, and it uses Claude Sonnet via Amazon Bedrock to analyze them.

I'm not too impressed with the threats it comes up with, but I am very impressed with the amount of typing it saves. Given nothing more than a picture and about 2 minutes of computation, it spits out a very good list of what is depicted in the diagram and the flows between the components. To the extent that the diagram is accurate and well labeled, this solution does a very good job of writing out what is depicted.

I deployed this "Threat Designer" app, then took the architecture image from this blog post and dropped it in. Some of what the image analysis produced is in the attached list.

This is a specialized, context-aware kind of OCR. I was impressed by the boundaries, flows, and assets it pulled from a graphic; it could save a lot of typing time. I was not impressed with the threats it identifies. That said, it did identify a handful of things I hadn't thought of before, like EventBridge event injection. But the majority of the threats are low value.
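
I haven't read Threat Designer's source, but a rough sketch of what the underlying diagram-analysis call could look like via the Bedrock Converse API (the model ID and prompt are my assumptions, not the app's actual code):

```python
import boto3

bedrock = boto3.client("bedrock-runtime", region_name="us-east-1")

# Rough sketch, not Threat Designer's implementation: send an
# architecture diagram to Claude via Bedrock and ask for an inventory.
with open("architecture.png", "rb") as f:
    diagram = f.read()

response = bedrock.converse(
    modelId="anthropic.claude-3-5-sonnet-20240620-v1:0",  # assumed model ID
    messages=[{
        "role": "user",
        "content": [
            {"image": {"format": "png", "source": {"bytes": diagram}}},
            {"text": "List the components, trust boundaries, and data "
                     "flows shown in this architecture diagram."},
        ],
    }],
)
print(response["output"]["message"]["content"][0]["text"])
```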

I suspect this app is not cheap to run. So caveat deployor.
#cloud #cloudsecurity #appsec #threatmodeling

> You and your team should incrementally update your threat model as your system changes, integrating threat modeling into each phase of your SDLC to create a Threat and Risk Analysis Informed Lifecycle (TRAIL). Here, we cover how to do that: how to further tailor the threat model we built, how to maintain it, when to update it as development continues, and how to make use of it.

**Continuous TRAIL - The Trail of Bits Blog**

blog.trailofbits.com/2025/03/0


#DuckDuckGo is now offering free, #anonymized access to a number of fast #AI #chatbots that won't train on your data. You currently don't get all the premium models and features of paid services, but you do get access to privacy-promoting, anonymized versions of smaller models like GPT-4o mini from #OpenAI and open-source #MoE (mixture of experts) models like Mixtral 8x7B.

Of course, for truly sensitive or classified data you should never use online services at all. Anything online carries heightened risks of human error; deliberate malfeasance; corporate espionage; legal, illegal, or extra-legal warrants; and network wiretapping. I personally trust DuckDuckGo's no-logging policies and presume their anonymization techniques are sound, but those of us in #cybersecurity know the practical limitations of such measures.

For any situation where those measures are insufficient, you'll need to run your own instance of a suitable model on a local AI engine. However, that's not really the #threatmodel for the average user looking to get basic things done. Great use cases include finding quick answers that traditional search engines aren't good at, or performing common AI tasks like summarizing or improving textual information.
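
For the run-it-locally route, one minimal sketch, assuming a local Ollama server (the model name is just an example; anything you've pulled locally works):

```python
import json
import urllib.request

# One way to keep prompts entirely off the network: query an Ollama
# server running on localhost. Nothing leaves your machine.
req = urllib.request.Request(
    "http://localhost:11434/api/generate",
    data=json.dumps({
        "model": "mistral",  # example model; use whatever you've pulled
        "prompt": "Summarize these meeting notes in three bullets: ...",
        "stream": False,
    }).encode(),
    headers={"Content-Type": "application/json"},
)
with urllib.request.urlopen(req) as resp:
    print(json.loads(resp.read())["response"])
```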

The AI service provides the typical user with essential AI capabilities for free. It also takes steps to prevent for-profit entities with privacy-damaging #TOS from training on your data at whim. DuckDuckGo's approach seems perfectly suited to these basic use cases.

I laud DuckDuckGo for their ongoing commitment to privacy, and for offering this valuable addition to the AI ecosystem.

duckduckgo.com/chat

Replied in thread

@aleidk I replaced “mobile phone account” with “mobile phone provider account” to be clearer about what I meant.

For banks (in the EU), AFAIK there is a strong reason why they never even mention FIDO2: for a transaction, at least, the device where validation is performed must display basic info about the transaction: seller and amount.

Another point: software support depends on the site, the browser (e.g., Firefox desktop != Firefox mobile), the type of key, and the physical communication protocol (USB vs. NFC). I ran a lot of tests with various sites and my USB-A and USB-C keys, sometimes over NFC, other times USB. Some combinations don't work, or worked at some point and not later (or worked with Chrome but not Firefox, etc.). This can be quite stressful, or even dangerous, if it's for an important account and you have no backup plan (⇒ don't do that). And if the backup options are (1) exploitable in your threat model and (2) not very secure, this obviously reduces or nukes the advantage of using a security key in the first place.

A typical backup option which, from my POV, is not insecure if handled well is a set of recovery codes; but for this you need to store them very carefully and safely... and not forget how to access them in x years! Under these conditions, setting up a new account requires “some work”.
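
For illustration only, a hypothetical sketch of how a service might generate such codes, which is what makes them a sane backup: the user keeps the plaintext (printed, stored offline), the service keeps only hashes and marks each code used once:

```python
import hashlib
import secrets

def make_recovery_codes(n: int = 10) -> tuple[list[str], list[str]]:
    # Each code is random and single-use. The user stores the plaintext
    # codes; the service stores only the hashes, so a database leak
    # doesn't expose usable codes.
    codes = [secrets.token_hex(5) for _ in range(n)]  # 10 hex chars each
    hashes = [hashlib.sha256(c.encode()).hexdigest() for c in codes]
    return codes, hashes
```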

And I say all this despite wishing FIDO2 great success, 'cause SIM swapping attacks in particular are quite scary given how much important stuff still depends on codes sent by SMS. 😐

Replied in thread

@ct_Magazin

Threat modeling is extremely relevant here.

Tails has a specific #ThreatModel:
- amnesic
- live
- incognito

There is hardly any process isolation in that, of the kind #Flatpak and #Bubblejail provide and #QubesOS has mastered.

And the claim that it makes you secure on any arbitrary PC is unfortunately also a false promise. #Coreboot is essential because it is minimal; as little code as possible should run at the lowest level. Intel ME should be off. #Heads is also important.

@3mdeb @novacustom @tlaurion