#xfs


"[…] we observed that the df command shows higher space utilization compared to du when many small files are copied. Over time, the outputs of both df and du converge. This happens because #XFS initially reserves additional space for these files.

The feature that causes this behavior is Dynamic Speculative End of File (EOF) Preallocation. This feature allows files to dynamically reserve more space to prevent fragmentation in case the file is grown later on. This blog post explores what this feature is, how it works, and how it can be beneficial for certain use cases. […]"

blogs.oracle.com/linux/post/du

#Linux #Kernel #LinuxKernel
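
A quick way to watch this behaviour yourself, sketched with a hypothetical mount point (allocsize is a documented xfs(5) mount option that caps the preallocation at a fixed size):

    # copy many small files onto an XFS mount (path hypothetical)
    cp -r /usr/share/doc /mnt/xfs/many-small-files
    df -h /mnt/xfs                     # usage includes the speculative EOF reservations
    du -sh /mnt/xfs/many-small-files   # may report less until the reservations are trimmed
    # to cap the preallocation, mount with a fixed allocation size instead:
    mount -o allocsize=64k /dev/sdX1 /mnt/xfs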

A bunch more manual xfs repairs over the past week. In contrast, exactly zero manual ext4 or btrfs fscks have been needed for the same environments. All with some flavor of EL8 or EL9 on two different storage platforms (Ceph, Longhorn). Still no idea what's causing it, but xfs continues to be the outlier.
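
For context, the usual manual repair sequence is roughly the following sketch; the device name is hypothetical, and -n / -L are documented xfs_repair flags:

    umount /dev/mapper/data         # xfs_repair refuses to run on a mounted filesystem
    xfs_repair -n /dev/mapper/data  # no-modify mode: report problems only
    xfs_repair /dev/mapper/data     # actual repair
    # if the log is corrupt, xfs_repair suggests -L, which zeroes the log
    # and can lose the most recent metadata updates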

Replied in thread

@stefano This was also #xfs. I am aware that when it has problems, it *really* has problems lol. But, I've also used XFS on #rhel and #centos 6 and 7 and #Debian without any issues. You are probably correct that it was #suse.

When I ran Tumbleweed and Leap 15 on an older laptop, I purposely went with ext4 and it was unquestionably a smoother experience.

Just curious, which version of openSUSE did you use?

Replied in thread

@janl The purpose is to warn bystanders against investing in technological #complexity that looks very attractive for its advanced features, without acknowledging the risks and effort involved.

Its learning curve doesn't even allow for an easy start.

As with so many awesome tools, this is something for specific experts and not for new/occasional/advanced users.

BTDT and I've had my fair share of bad experiences.

Current pain in my setup: #NixOS. Instead of getting an abstraction layer that keeps certain OS setup & maintenance problems away for good, I ran into so many little & bigger troubles that I now tell people to use it only when they are ready to invest the required learning effort all the way.

From my point of view, this also holds true for "advanced" file systems like #ZFS, #XFS, ... YMMV.

Here’s a thought I’ve just had on file systems for #Linux: if I have no need for advanced features like subvolumes, snapshots, or built-in support for multiple devices, is there really a point in thinking about which file system is right for an Average Jane such as myself? I don’t exactly consider myself lacking in bandwidth, so surely the difference only starts to matter at scale? #btrfs #ext4 #xfs

hey hey #Linux #FileSystem #ZFS #RAID #XFS entities! I'm looking for extremely opinionated discourses on alternatives to ZFS on Linux for slapping together a #JBOD ("Just a Bunch Of Disks", "Just a Buncha Old Disks", "Jesus! Buncha Old Disks!", etc) array.

I like ZFS, but the fact that it's not in-tree in the kernel is an issue for me. What I need most here is reliability and stability (specifically regarding parity); integrity is the need. Reads and writes don't have to be blazingly fast (not that I'd be mad about it).

I also have one #proxmox ZFS array where a raw disk image is stored for a #Qemu #VirtualMachine; in the VM, it's formatted as XFS. That "seems" fine in limited testing thus far (and quite fast, so it does seem like the defaults got the striping correct), but I kind of hate how I have multiple levels of abstraction here.

I don't think there's been any change on the #BTRFS front re: raid-like array stability (I like and use BTRFS for single-disk filesystems), although I would love for that to be different.

I'm open to #LVM, etc., or whatever might help me stay in-tree and up to date. Thank you! Boosts appreciated and welcome.

#techPosting
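
One in-tree combination that approximates ZFS-style integrity plus parity, sketched with hypothetical device names: dm-integrity (in the kernel since 4.12) for per-disk checksums underneath an mdadm parity array.

    # per-disk block checksums via dm-integrity (tools ship with cryptsetup)
    integritysetup format /dev/sdb
    integritysetup open /dev/sdb int-sdb
    # ...repeat for sdc, sdd, sde, then build parity RAID over the integrity devices
    mdadm --create /dev/md0 --level=6 --raid-devices=4 \
        /dev/mapper/int-sdb /dev/mapper/int-sdc \
        /dev/mapper/int-sdd /dev/mapper/int-sde
    mkfs.xfs /dev/md0
    # dm-integrity turns silent corruption into read errors,
    # which md can then reconstruct from parity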

Is there any know-how about creating filesystems on top of RAID1? Both XFS and ext4 top out at 35 MB/s for me even though the block device below them is capable of 135 MB/s (yes, a 4x difference!). This happens even with absurdly large I/O request sizes like 512 KB.

The disks are old-ish SATA drives with 512-byte sectors. XFS seems to use 512-byte sectors while ext4 picked 4k. They're connected via a USB DAS, with dm-integrity on top of each disk; the disks are then combined into an mdadm raid1 and encrypted with LUKS.

I confirmed with fio that the slowdown happens on the filesystem level, not LUKS or anywhere below it.

The only advice I found online is about alignment, but with 512KB requests it shouldn't be an issue because that's way larger than any of the block/sector sizes involved. I must be missing something, but what is it?
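
For anyone wanting to reproduce the comparison, a sketch of fio runs that isolate the filesystem layer; paths and device names are hypothetical, and the device-level run destroys data:

    # through the filesystem (the slow case)
    fio --name=fs-test --filename=/mnt/raid/fio.tmp --size=4G \
        --rw=write --bs=512k --ioengine=libaio --iodepth=4 --direct=1
    # against the LUKS device directly (the fast case)
    # WARNING: writes destroy whatever is on the device
    fio --name=dev-test --filename=/dev/mapper/luks-raid --size=4G \
        --rw=write --bs=512k --ioengine=libaio --iodepth=4 --direct=1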

#mdadm #raid #ext4

#Debian question: my systems are all using the... not-non-user-hostile... defaults of encrypted LVM partitions, so I have ~250MB of /boot with #ext4. My / is #XFS so I can't move /boot. I have closed #nvidia drivers via #dkms, maybe that matters.

I used to be able to juggle two kernels: one installed, one to be installed. That fails now; I am stuck.

Are there any good and modern docs on reducing #linux #kernel footprint in /boot?

I can find old stuff, empty stuff, and whataboutism, but no docs...
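
Not proper docs, but a sketch of the two steps that usually free the most space in a small Debian /boot; the kernel version string is hypothetical, and MODULES=dep is documented in initramfs.conf(5):

    # list installed kernel packages and purge the ones no longer needed
    dpkg -l 'linux-image-*' | grep '^ii'
    apt purge linux-image-6.1.0-10-amd64   # hypothetical old version
    # shrink the initramfs to only the modules this machine needs
    sed -i 's/^MODULES=most/MODULES=dep/' /etc/initramfs-tools/initramfs.conf
    update-initramfs -u
    # caveat: MODULES=dep can break booting if the root device stack changes,
    # so verify a reboot works before removing any fallback kernel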