Another way to check is to run
strace cp testfile testfile2
and study the sequence in which the messages are printed and the operations are performed.
It’s perhaps a lot to read, but Linux tracing tools are worth learning!
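If the full trace is too much to read, you can filter it down to just the file-related syscalls with something like

strace -e trace=file cp testfile testfile2

which makes the ordering of opens, reads and writes much easier to follow (the file names are the same example ones as above).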
Too bad; I would’ve considered it a viable alternative to mdadm + BTRFS.
Currently I’m using bcachefs on LVM (which can do RAID, but I currently only have one NVMe SSD), though bcachefs does have RAID1/0/10 support itself. Overall I expect it not to make the same silly default choices as btrfs, such as refusing to start the system if a RAID1 component of your root filesystem is missing. And, supposedly, once its RAID5/6 becomes stable, it won’t have the write hole problem.
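If I ever add a second drive, my understanding is that replication is chosen at format time, along the lines of

bcachefs format --replicas=2 /dev/nvme0n1 /dev/nvme1n1

(the device names are just placeholders for whatever drives you have).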
It said the code base was built on something stable, but it didn’t say what; do you happen to know which FS this project is a fork of?
It’s based on bcache :) by the same author, though of course bcache is not really a file system but rather an object storage layer for caching slower block devices and absorbing write load.
Bcachefs might be coming to the mainline kernel soon, which will make it a lot easier to try out. Personally I have lost one bcachefs (that FS was still readable, though, and I have good backups), but I have also lost a btrfs before and seen reiserfs bugs, so I don’t count it too heavily against it; overall I enjoy its stability when using basic functionality. I haven’t dared to try snapshots with it yet…
Depends on how much you change per time unit.
I take full system backups every three hours, but the backups are thinned so that I keep the previous 24 hourly ones, the previous n daily ones, the previous m monthly ones, etc. A similar approach can be used with snapshots.
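With kopia (which comes up elsewhere in this thread) that kind of thinning can be expressed as a retention policy, roughly

kopia policy set --global --keep-hourly 24 --keep-daily 30 --keep-monthly 12

(the exact counts are of course up to you).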
I don’t currently use snapshots (I don’t run btrfs anymore), but when I did, I took a snapshot every hour and kept each for 24 hours. I then backed up the latest snapshot, which gives consistent backups, versus regular backups where files can change while you’re copying them. I’m nowadays using bcachefs, but I don’t quite trust its snapshots yet, so I haven’t started using them ;).
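On btrfs those hourly snapshots were made with the standard tooling, something like

btrfs subvolume snapshot -r / /.snapshots/root-$(date +%H)

run from cron; -r makes the snapshot read-only, which is what you want for a consistent backup source (the snapshot path is just an example).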
I believe you’re completely right here, except that snapd can be configured to point to another store, though it’s not very well documented… I did find that piece of information once :).
But the thing is that the client still only supports one store backend at a time. So if you pick another one, you lose visibility into the other store. I doubt even updates work as they should.
So it’s really about building technology geared towards centralized control, whereas basically anyone can host flatpak packages and hand out refs to them.
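E.g. adding a third-party remote is a single command:

flatpak remote-add --if-not-exists flathub https://flathub.org/repo/flathub.flatpakrepo

and you can have any number of remotes side by side, which is exactly what snap’s client doesn’t let you do.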
You can use the web UI remotely.
Personally I use it from the command line, though, and my only complaint is that it’s too easy to start a backup you didn’t intend to… But if you’re careful about using the kopia snapshot command then it’s fine.
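Concretely, the habit that saves you is listing first and always passing an explicit path:

kopia snapshot list
kopia snapshot create /home/me

since a bare kopia snapshot create will happily re-snapshot all known sources (/home/me is just an example path).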
Kopia has served me great. I back up to my local Ceph S3 storage and then keep a second clone of that on a RAID array.
Kopia has good performance, and multiple hosts can back up to it concurrently while preserving deduplication, unlike borgbackup.
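For reference, pointing kopia at an S3-compatible endpoint such as Ceph’s RADOS gateway looks roughly like

kopia repository create s3 --bucket=backups --endpoint=ceph.example.local --access-key=... --secret-access-key=...

(bucket and endpoint here are made up); after that, each host just does a kopia repository connect s3 with the same parameters and snapshots away.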
You will reconsider calling that strategy a backup should the filesystem get corrupted for whatever reason.
I’ve tested my full system backup restore once with btrfs. Worked out fine.
I do think the idea behind snap isn’t really about pushing the Linux platform as such forward, but specifically about gaining a market advantage for Ubuntu.
Why else is documentation for changing the default store so hard to find? And I don’t think you can even have multiple “repositories” there, quite unlike every other Linux packaging system out there. (Corrections welcome!)