Need rollback? BastilleBSD supports ZFS snapshots to protect and restore your jails fast. Create, snapshot, and clone—all from the CLI. #FreeBSD #ZFS #automation
Hi,
About a month ago I noticed read errors on one drive in my zfs pool. The pool consists of 4 vdevs, each vdev is a 2 drive mirror.
zpool status reported that the previous scrub (early in May) had corrected some errors (I believe it was a few megabytes worth).
So I ran extended SMART tests on the drives in the mirror, and one had some concerning results and a read failure. The other drive was fine, so I didn’t panic.
But since I don’t have much experience with interpreting SMART errors, I asked ChatGPT for help.
Here’s the chat (with more details about the SMART errors):
ChatGPT - SMART UNC error explanation (shared chat)
As you can see near the bottom of the chat, I have since moved all the drives to a new computer/server, in a new case with better cooling, and I also replaced all SATA cables in the process. After doing all this I ran a new extended SMART test, and the drive now reports no errors! (I realize that making all these changes together makes it harder to pinpoint what the exact cause of the errors was.)
I have also ordered two new drives, which at first I intended to use to replace the failing drive + its mirror buddy.
But, since the “failing” drive now reports that it’s fine, I’m considering just adding the new drives to the pool as a new vdev, to gain a whole lot more free space (which I need).
ChatGPT thinks the old drives should be fine, and its assessment seems solid (to me), but I also don’t want to blindly trust AI… and I’ve never experienced this kind of “transient error” before.
Any input / thoughts on this situation? Do you agree with ChatGPT’s assessment? Should I replace the drives, or keep them a while longer? Can I trust the successful SMART test? The two drives were bought together and added to the pool at the same time, and have been running more or less 24/7 for almost 3 years. (And yes, I’m gonna set up automated regular SMART tests + alerts asap, which is something I’ve neglected for far too long…)
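For the automated tests and alerts, a minimal smartd sketch could look like the following in /etc/smartd.conf (assuming smartmontools; the device names, the -d sat type and the mail address are placeholders):
# Monitor all attributes (-a), run a short self-test daily at 02:00 and a
# long self-test every Saturday at 03:00, and mail on any failure.
/dev/ada0 -a -d sat -s (S/../.././02|L/../../6/03) -m admin@example.com
/dev/ada1 -a -d sat -s (S/../.././02|L/../../6/03) -m admin@example.com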
Thanks in advance for any advice or input
2 posts - 2 participants
As I start to explore the ZFS filesystem in more detail on FreeBSD, this post on snapshot basics is very helpful:
https://klarasystems.com/articles/basics-of-zfs-snapshot-management/
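As a quick reference, the snapshot basics look roughly like this at the command line (pool, dataset and snapshot names below are made up):
# take a read-only, point-in-time snapshot of a dataset
zfs snapshot tank/data@before-upgrade
# list the snapshots of just that dataset
zfs list -t snapshot -d 1 tank/data
# roll the live dataset back to the snapshot (discards later changes)
zfs rollback tank/data@before-upgrade
# or branch a writable clone off the snapshot instead
zfs clone tank/data@before-upgrade tank/data-test
# destroy the snapshot when it is no longer needed (fails while a clone depends on it)
zfs destroy tank/data@before-upgrade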
Latest 𝗩𝗮𝗹𝘂𝗮𝗯𝗹𝗲 𝗡𝗲𝘄𝘀 - 𝟮𝟬𝟮𝟱/𝟬𝟲/𝟮𝟯 (Valuable News - 2025/06/23) available.
https://vermaden.wordpress.com/2025/06/23/valuable-news-2025-06-23/
Past releases: https://vermaden.wordpress.com/news/
For anyone who wonders about the absence of system requirements for FreeBSD:
― the two bug reports linked from https://www.reddit.com/r/freebsd/comments/1lcm1ze/freebsd_system_requirements/ may be of interest.
I reopened the first report following premature rejection.
Rejection of the second report is a concern, but not my concern. In the absence of documentation, everyone loses.
I created a script to compare a file with its original in a zfs snapshot
https://paste.sr.ht/~marcc/14fef6c8c1c4e792cced133f8622ec1cdd61a1bf
Use it with patch to easily restore a file to its original:
zfs-diff-file ./my-file | patch -R ./my-file
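For anyone who doesn’t want to open the paste, the idea can be re-sketched like this (this is not the author’s actual script; it assumes a GNU userland, that the file lives on a ZFS dataset, and that the dataset has at least one snapshot):
#!/bin/sh
# Illustrative sketch: unified diff of a file against its copy in the newest
# snapshot of the dataset that holds it.
f=$(realpath "$1")
# dataset backing the file and its mountpoint (GNU df assumed)
ds=$(df --output=source "$f" | tail -n 1)
mp=$(zfs get -H -o value mountpoint "$ds")
rel=${f#"$mp"/}
# newest snapshot of that dataset only (-d 1), sorted newest-first (-S creation)
snap=$(zfs list -H -t snapshot -d 1 -o name -S creation "$ds" | head -n 1)
diff -u "$mp/.zfs/snapshot/${snap#*@}/$rel" "$f"
Piped through patch -R as above, the output puts the snapshot version back in place.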
I have a long-running Debian server (myserver) that’s had everything but the OS moved into a zpool (output of the command redacted; file sizes won’t add up):
user1@myserver:~$ zfs list
NAME USED AVAIL REFER MOUNTPOINT
tank 3.02T 505G 104K /tank
tank/data 1.50T 505G 6.28M /tank/data
tank/data/home 511G 505G 96K /tank/data/home
tank/data/home/user1 259G 505G 259G /home/user1
tank/data/home/user2 30.8M 505G 10.7M /home/user2
tank/data/vm 321G 505G 200G /tank/data/vm
tank/ephemeral 51.6G 505G 4.40G /tank/ephemeral
tank/reserved 700G 1.18T 96K /tank/reserved
I’m replicating the server to another one, but I’m also doing periodic backups to a USB HDD, wd202501, which has a single zpool called wd202501 and a dataset called bkmyserver. I formatted the drive and backed up my server. No problems.
I also have a laptop (mylaptop) onto which I recently installed ZFSBootMenu and Ubuntu:
user1@mylaptop:~$ zfs list
NAME USED AVAIL REFER MOUNTPOINT
zroot 375G 1.39T 192K none
zroot/ROOT 10.8G 1.39T 192K none
zroot/ROOT/ubuntu 10.8G 1.39T 10.8G /
zroot/home 364G 1.39T 7.62M /home
zroot/home/user1 364G 1.39T 344G /home/user1
zroot/home/user2 952K 1.39T 952K /home/user2
I would like to also back up my laptop to wd202501, in a dataset called bkmylaptop.
I attached the drive to my laptop and ran zpool import:
user1@mylaptop:~$ sudo zpool import -d /dev/disk/by-id/ata-WDC_REDACTED-REDACTEDREDACTEDREDACTED-part1 wd202501 -N
Before I ran zfs mount -l wd202501/bkmyserver, I ran zfs list and saw this:
user1@mylaptop:~$ zfs list
NAME USED AVAIL REFER MOUNTPOINT
wd202501 1.30T 2.21T 104K /mnt/wd202501
wd202501/bkmyserver 1.20T 2.21T 200K /mnt/wd202501/bkmyserver
wd202501/bkmyserver/data 1.20T 2.21T 1.36M /mnt/wd202501/bkmyserver/data
wd202501/bkmyserver/data/home 496G 2.21T 96K /tank/data/home
wd202501/bkmyserver/data/home/user1 248G 2.21T 248G /home/user1
wd202501/bkmyserver/data/home/user2 2.77M 2.21T 2.77M /home/user2
wd202501/bkmyserver/data/vm 111G 2.21T 67.8G /mnt/wd202501/bkmyserver/data/vm
wd202501/reserved 100G 2.31T 96K /mnt/wd202501/reserved
zroot 375G 1.39T 192K none
zroot/ROOT 10.8G 1.39T 192K none
zroot/ROOT/ubuntu 10.8G 1.39T 10.8G /
zroot/home 364G 1.39T 7.62M /home
zroot/home/user1 364G 1.39T 344G /home/user1
zroot/home/user2 952K 1.39T 952K /home/user2
Both zroot/home/user1 and wd202501/bkmyserver/data/home/user1 have a mountpoint set to /home/user1. It does not appear that wd202501/bkmyserver/data/home/user1 overwrote zroot/home/user1, but I had not run zfs mount yet.
[The mountpoint for wd202501/bkmyserver/data/home points to /tank/data/home rather than /home, but I’m not sure if that’s important for this question.]
My questions:
If I run zfs mount, will wd202501/bkmyserver/data/home/user1 clobber zroot/home/user1 on mylaptop?
If I replicate mylaptop’s zroot to wd202501/bkmylaptop, will the two mountpoints conflict with each other? Will they conflict if I replicate mylaptop to my replication server?
Should I just not be doing these mountpoints in the first place?
Where do the mountpoint settings reside? Are they part of the dataset or the zpool?
Is it possible to keep the mountpoints on the source zpool but unset them on the backup zpool? Will they get re-set whenever I replicate a snapshot?
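Two standard ways to keep a backup pool’s mountpoints out of the way, sketched with the names from this post (the snapshot name is made up; the flags are stock OpenZFS). Note that mountpoint and canmount are per-dataset properties stored in the pool itself, which is why they travel along with a replicated stream:
# Import the backup pool with every mountpoint re-rooted under an altroot,
# so nothing can land on top of /home/user1:
sudo zpool import -d /dev/disk/by-id -R /mnt/wd202501 -N wd202501
# When replicating the laptop, receive without mounting (-u); newer OpenZFS can
# also override properties at receive time, e.g. zfs receive -o canmount=off:
sudo zfs create wd202501/bkmylaptop
sudo zfs snapshot -r zroot@bk-2025-06-23
sudo zfs send -R zroot@bk-2025-06-23 | sudo zfs receive -u -d wd202501/bkmylaptop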
1 post - 1 participant
Edit: the original post seemed disjointed, so I redid it.
I’ve been thinking about migrating my root file system, currently on my Debian 12 desktop/server, over to OpenZFS with zfsbootmenu. Snapshots and rollbacks would be incredibly cool and useful. I’ve already been running OpenZFS on my big storage pool for a while, but not on root. Looking back, if I’d been more forward thinking, I would have set this up during the initial Debian 12 install years ago, but alas, I didn’t.
Now, I’m considering how to migrate a root file system (and boot) without completely messing things up. Has anyone here done this and not hated themselves for the hassle? My initial research suggests it’s a pretty risky and heavy lift. I wouldn’t even dream of doing an in place conversion. That’s just crazy talk in my mind. Instead, I’d buy another SSD, create the root and boot pools there, and then attempt to copy everything over.
I’m also wondering if it’s worth the effort. I know that’s subjective. EXT4 isn’t a bad file system, but I can’t create a mirrored pool with it, which is what I’d really like for my root/boot SSDs. My plan is to migrate to a single pool root/boot SSD first, then buy another one to add for mirroring.
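For what it’s worth, the new-SSD route usually looks something like the sketch below; every device name and dataset name here is an assumption, and the ZFSBootMenu documentation has the authoritative, distro-specific steps:
# 1. Build the new root pool on the spare SSD, re-rooted under /mnt:
zpool create -o ashift=12 -O compression=lz4 -O mountpoint=none -R /mnt rpool /dev/sdb2
zfs create -o mountpoint=none rpool/ROOT
zfs create -o mountpoint=/ -o canmount=noauto rpool/ROOT/debian
zfs mount rpool/ROOT/debian
zpool set bootfs=rpool/ROOT/debian rpool
# 2. Copy the running system across (pseudo-filesystems excluded):
rsync -aHAXx --exclude=/proc --exclude=/sys --exclude=/dev --exclude=/run --exclude=/mnt / /mnt/
# 3. chroot into /mnt, fix fstab and the initramfs, and install ZFSBootMenu to the ESP.
# 4. Later, turn the single disk into a mirror by attaching the second SSD:
zpool attach rpool /dev/sdb2 /dev/sdc2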
Thanks for any experiences or insights you might share if you’ve attempted this!
1 post - 1 participant
This is the place to discuss zVault, a community fork of TrueNAS.
1 post - 1 participant
I’m in the process of migrating a Solaris 11 fileserver to FreeBSD. zfs send wasn’t an option given the divergence in the ZFS implementations, and I only copied the latest data because our backup-backup fileserver had died, so I was in a hurry. I’m now looking to backfill at least the yearly snapshots. If I hadn’t already transferred the latest data, the sensible approach would have been to work forwards, first transferring the data in the oldest yearly snapshot and then using rsync to go forwards a year at a time, creating snapshots as I go. I could now do likewise but go backwards. However, I’d then end up with the snapshots in the wrong order.
Is there any creative way to reverse the ordering? I was first thinking it might be possible with zfs promote, but that only transfers dependencies with no ordering inversion. zfs send gives an error if you ask it to create a send stream from a newer to an older snapshot - doesn’t seem very Unixy for it to not simply do as it’s told. Are there any other features I might (ab)use? Scanning the documentation, I find one or two features that I don’t understand well enough to know if they might help, such as zfs recv -o origin=.
Reversing changes might be something the zstream command could be adapted to do. Do you know enough about the form of ZFS streams, or ZFS in general, to say whether that might be impossible, trivial or something in between? Unless it’s trivial, I’d probably be better off doing more transfers and wasting the computer’s time rather than mine.
1 post - 1 participant
I don’t think I know how to add a new disk to an existing pool on TrueNAS…
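At the zpool level the two operations are attaching a disk to an existing vdev versus adding a new vdev, shown below with made-up device names; on TrueNAS, though, it is usually better to do this through the web UI so the middleware’s view of the pool stays in sync:
# add redundancy to an existing vdev (single disk or mirror) by attaching another disk:
zpool attach tank ada1 ada3
# add a brand-new mirror vdev, which grows the pool’s capacity:
zpool add tank mirror ada4 ada5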
Running Ubuntu 24.04. I’m experiencing audio glitching when my filesystem does syncs. I’m running a 9950X with 128GB RAM. When doing somewhat meager things like building a Python venv, suddenly all the CPUs light up and all sound from Firefox glitches. Compression is set to zstd.
I’ve looked through "ZFS doesn't respect Linux kernel CPU isolation mechanisms" (openzfs/zfs issue #8908 on GitHub), and that does not appear to have any working advice. Is there anything else I should consider?
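Not a verified fix for that issue, but two knobs sometimes used to soften sync-time CPU bursts are a smaller dirty-data ceiling and cheaper compression; the dataset name and the 1 GiB value below are arbitrary:
# smaller dirty-data ceiling => smaller, more frequent txg syncs with less to compress at once
echo $((1<<30)) | sudo tee /sys/module/zfs/parameters/zfs_dirty_data_max
# lighter compression on the datasets that take the bursty writes
sudo zfs set compression=lz4 tank/scratch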
2 posts - 2 participants
Any recommendations for decent used external SAS units? I have literally filled my tower with drives and need moar space. Something used I can get on the cheap is great, something new that isn't awful is also fine. I need to ideally reclaim some space in the unit, for that matter... I can get creative, but in doing so I'll be in "these aren't spots where hard drives were meant to go" territory.
4 to 8 is ideal, I'll never complain about larger.
#SASDrives #RAID #ZFS #NAS #techPosting #techRecommendations
I’m still muddling along in building my desired home NAS/desktop.
I started chasing what I thought was a simple recommendation by Mr. Salter: putting /var on a non-NVMe pool. I’ve found the rabbit hole to be surprisingly complex. Since ZFS chooses when to mount each dataset, there seem to be a few ways to do it. Has anyone else done the ZFS PAM configuration described on the ArchWiki’s ZFS page?
This approach also seems to solve how to decrypt all of my encrypted drives. Mount order would allow rootpool/ROOT/ubuntu to mount and then rustpool/var to mount after /, but before the OS tries to do anything.
This was solely to reduce the extensive logging writes to my NVMe pool, but I’m almost ready to pull the ripcord and put /var on a single pool just to ensure I can complete a build.
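One hedged way to pin down the ordering is to give the dataset a legacy mountpoint and let /etc/fstab (via systemd’s fstab generator) decide when /var appears; the pool and dataset names below are assumptions:
zfs create -o mountpoint=legacy -o compression=lz4 rustpool/var
# /etc/fstab entry, ordered after the pool import:
rustpool/var  /var  zfs  defaults,x-systemd.requires=zfs-import.target  0 0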
One of the suggestions will be to incorporate the rust pool into a single overall pool. For now, I want to keep them separate while I’m learning the day to day of ZFS.
All recommendations welcome.
1 post - 1 participant
Hello, I multi-boot. I have a 3-disk raidz1 pool with basically my local data that I have been sharing between some Void installations using
/etc/rc.local
#!/bin/sh
# Default rc.local for void; add your custom commands here.
#
# This is run by runit in stage 2 before the services are executed
# (see /etc/runit/2).
zpool import lagoon &
/etc/rc.shutdown
#!/bin/sh
# Default rc.shutdown for void; add your custom commands here.
#
# This is run by runit in stage 3 after the services are stopped
# (see /etc/runit/3).
zpool export lagoon
How would I do this on systemd distributions - Debian, Ubuntu, Mint, CachyOS, etc.?
I have found that you can start a script with root’s crontab, but what about the export-on-shutdown side?
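One systemd equivalent is a oneshot unit whose ExecStop runs at shutdown; this is a sketch (adjust the zpool path for your distro), and if the stock zfs-import-cache.service already imports the pool you may also want cachefile=none on it so the two don’t fight:
# /etc/systemd/system/zpool-lagoon.service
[Unit]
Description=Import/export the shared pool lagoon
Requires=zfs.target
After=zfs.target

[Service]
Type=oneshot
RemainAfterExit=yes
ExecStart=/usr/sbin/zpool import lagoon
ExecStop=/usr/sbin/zpool export lagoon

[Install]
WantedBy=multi-user.target
Enable it once with systemctl enable zpool-lagoon.service.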
1 post - 1 participant
Latest 𝗩𝗮𝗹𝘂𝗮𝗯𝗹𝗲 𝗡𝗲𝘄𝘀 - 𝟮𝟬𝟮𝟱/𝟬𝟲/𝟭𝟲 (Valuable News - 2025/06/16) available.
https://vermaden.wordpress.com/2025/06/16/valuable-news-2025-06-16/
Past releases: https://vermaden.wordpress.com/news/