[Discuss] File systems that support file cloning

Rich Pieri richard.pieri at gmail.com
Tue Nov 22 18:30:48 EST 2022


On Tue, 22 Nov 2022 17:27:21 -0500
"Dale R. Worley" <worley at alum.mit.edu> wrote:

> Uh, yes.  I've never noticed ext2, ext3, or ext4 as needing periodic
> maintenance.  Of course, I've never used disks in a
> performance-demanding environment.  Or rather, the only demanding
> factor was using a filesystem at very near completely full.

ext2/3/4 do require running fsck on a periodic basis. It's rare for
ext3 to lose data, but ext4 has had a pretty poor track record over
the years. Not as bad as ReiserFS, but still bad.
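
You can see whether a given ext volume is actually set up for
periodic checks with tune2fs. A quick sketch (the device name is a
placeholder for your own):

    # Show the forced-check settings for an ext2/3/4 file system
    tune2fs -l /dev/sda1 | grep -E 'Mount count|Check interval|Last checked'

    # Force a check every 30 mounts or every 90 days, whichever
    # comes first
    tune2fs -c 30 -i 90d /dev/sda1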


> Yes, that's correct.  But on other file systems, if I delete a file
> that's a large fraction of the disk space, I see df increase by about
> the size of the file at that time.  The few tests I ran with btrfs,
> the freed space nowhere near showed up in df, at least, not in the
> short run.

Like I wrote, df does not measure what you think it's measuring,
especially on Btrfs. If you want accurate reports about Btrfs file
systems then you need to use the correct tools: btrfs fi (filesystem),
btrfs su (subvolume), and friends.
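
For example (the mount point is a placeholder), plain df and the
Btrfs tools answer different questions:

    # Generic df only sees what the VFS layer reports
    df -h /mnt/pool

    # Btrfs-aware reporting: allocation split into data, metadata,
    # and system chunks
    btrfs filesystem df /mnt/pool
    btrfs filesystem usage /mnt/pool

    # Subvolumes and snapshots that may still be holding space
    btrfs subvolume list /mnt/pool

Btrfs also reclaims space from deletions asynchronously, which can be
part of why df doesn't move right away.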


> for the past 5 years, and crashes never damaged it either.  If people
> actually experience btrfs failing over a period of a few years, that's
> noticeably worse.

This is just the echo chamber effect. Nobody complains when a file
system works properly. But don't take my word for it: openSUSE, which
uses Btrfs as its default OS file system, has been #11 on DistroWatch
for the past year. If Btrfs were as unreliable as you suggest based on
the complaints you have seen, it wouldn't rate nearly that high.

Then again, dig enough and you'll find reports of bugs in every file
system. I have been involved in the diagnosis and reporting (to Red
Hat) of:

* Two catastrophic (data loss or worse) XFS bugs.
* Three catastrophic ext4 bugs.
* One catastrophic kernel VFS layer bug (we originally thought it was an
  XFS bug but Red Hat traced it up to a memory allocation bug in the VFS
  layer).


> I wasn't exact enough.  I meant to say "has additional management
> overhead *in my brain*".  I now "frame it in the context of
> traditional file systems and volumes" and keeping it that way is the
> least trouble, or at least, the least work.

Welp. I can't help you with that. But in practice? ZFS has *less*
management overhead than traditional storage. Once you let yourself
forget everything you know about traditional file systems and start
using ZFS the way it's designed? You'll never want to go back.
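
A rough sketch of what that looks like in practice (the pool name and
device names are placeholders):

    # One command builds the pool and mounts it; no partitioning,
    # no mkfs, no fstab editing
    zpool create tank mirror /dev/sdb /dev/sdc

    # File systems are cheap; make one per purpose and tune each
    # dataset on its own
    zfs create tank/home
    zfs set compression=lz4 tank/home

    # Snapshots are instant and take no space until data diverges
    zfs snapshot tank/home@before-upgrade

    # Scrubs and status checks replace the periodic fsck ritual
    zpool status tank
    zpool scrub tank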

-- 
\m/ (--) \m/

