A Solaris (later Linux) NAS with a ZFS filesystem: a /capsule ZFS dataset with a disk quota and rolling snapshots, up to 7 iterations, plus a scheduled zfs scrub, and zfs send delta-copying the snapshots over SSH to a second ZFS NAS and to a third NAS in a remote data center. Time Capsule emulation is done via mDNS/Bonjour with an AFP (later SMB) file share, the same way all commercial NAS appliances support Time Machine.

Here are the macOS APFS sparse bundle internals as seen by the underlying ZFS filesystem. There are many more band / mapped binary files; I trimmed the example for brevity. This is what ZFS snapshots:

M1.sparsebundle/
├── bands
│   ├── 0
│   ├── 1
│   ├── 10
│   ├── fd
│   ├── fe
│   └── ff
├── com.apple.TimeMachine.MachineID.plist
├── com.apple.TimeMachine.Results.plist
├── com.apple.TimeMachine.SnapshotHistory.plist
├── Info.bckup
├── Info.plist
├── lock
├── mapped
│   ├── 0
│   ├── 1
│   ├── 10
│   ├── fe
│   └── ff
└── token

The APFS volume inside the sparse bundle uses APFS snapshots, but my underlying ZFS filesystem has its own snapshots. Unlike APFS, ZFS lets you create datasets, which look like folders, and a snapshot affects only its dataset rather than the entire disk, which is what APFS snapshots do. Datasets are like APFS volumes in a way, but more like folders, since a dataset can sit at any level of a dataset tree hierarchy. You can apply various parameters to a ZFS dataset, such as a disk quota, sharing it over SMB, etc.

ZFS came pretty close to being the filesystem for macOS Server before the licensing between Oracle and Apple fell through. That's when Apple started working on APFS. They didn't implement all the ZFS features, which are mostly suited to servers. ZFS is extremely bulletproof, and it's certainly much better than Btrfs (Synology, etc.). APFS is quite good, but it's focused on workstations rather than servers and works best on SSDs. ZFS works best with as much RAM as you can fit in your server.
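For the SMB-era Time Capsule emulation, the usual approach today (an assumption about a modern equivalent, not necessarily what I ran back then) is Samba's vfs_fruit module, which makes the share advertise and behave as a Time Machine destination; the share name and path are illustrative:

```ini
; smb.conf fragment (Samba with vfs_fruit)
[global]
   vfs objects = fruit streams_xattr
   fruit:aapl = yes
   ; If Samba is built with Avahi support, this registers the
   ; server over mDNS/Bonjour so Macs discover it automatically.
   multicast dns register = yes

[capsule]
   path = /capsule
   fruit:time machine = yes
```

Without Avahi-enabled Samba, the same discovery can be done with a hand-written Avahi service file publishing _smb._tcp and _adisk._tcp records.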
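A minimal sketch of the dataset setup described above. The pool name "tank" and the 1T quota are illustrative stand-ins, not my actual values; ZFS defaults to "echo zfs" so the commands print instead of executing (set ZFS=zfs on a real server):

```shell
#!/bin/sh
# Dry-run by default: ZFS="echo zfs" prints the commands.
ZFS="${ZFS:-echo zfs}"

# Create the /capsule dataset with its own mountpoint.
# ("tank" is a stand-in pool name.)
$ZFS create -o mountpoint=/capsule tank/capsule

# Cap its size so Time Machine can't fill the whole pool.
$ZFS set quota=1T tank/capsule

# Dataset-level SMB sharing is also just a property.
$ZFS set sharesmb=on tank/capsule
```

On Linux, sharesmb delegates to Samba, so many setups instead export the dataset through smb.conf directly; the property route is more of a Solaris/illumos idiom.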
If you have multiple machines pounding the ZFS array, it's recommended to set up a couple of SSDs for read/write cache, with the write cache battery-backed. 10GbE or higher is recommended. In addition, you can run HDD ZFS arrays alongside SSD arrays for specialized high-speed needs like video editing over a network share.

ZFS was configured to create up to 7 snapshots, replacing older snapshots with new ones on rotation. ZFS can send delta copies of snapshots over SSH to another ZFS server, which then restores them, creating a perfect offline mirror of the Time Machine data on the second server. Add a third ZFS server offsite in a remote data center. If anything went wrong, I could either roll back to a previous ZFS snapshot or recover from the onsite backup or the offsite backup.

Yes, this is overkill, but it was backing up many Macs processing mission-critical data. I had a health-check dashboard with statistics, a storage-space pie chart, any sync failures, a list of Macs that were not backing up, etc. Obviously this wasn't cheap, but it was all done with Solaris UNIX and open source, then later with Linux. The number of times something went wrong? Only once or twice, and both times it was human error. There was the usual failed-disk replacement and resilvering of the array, but beyond that, it was solid as a rock.

TrueNAS / TrueNAS Scale as well as QNAP NAS use ZFS and provide a very nice web management console, so you don't need to set up and configure ZFS from the command line. I've heard of people getting ZFS working on Synology, but I don't think Synology supports ZFS, so you'd be on your own support-wise.

I still use a smaller-scale ZFS solution in my home lab to back up work and personal Macs, provide network storage, run containers, etc. I originally built it from scratch on Ubuntu Linux around the time Drobo was popular. I've been eyeballing various solutions, from QNAP and TrueNAS to 45homelab.com's HL8 or HL15.
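The rotation-plus-replication cycle above can be sketched as a cron-driven script. The dataset name, the "auto-" snapshot prefix, the receiver host "backup2", and the previous-snapshot name are all illustrative stand-ins; ZFS and SSH default to echo so the script dry-runs without a real pool:

```shell
#!/bin/sh
# Dry-run by default: set ZFS=zfs and SSH=ssh on a real server.
ZFS="${ZFS:-echo zfs}"
SSH="${SSH:-echo ssh}"
DATASET="tank/capsule"   # stand-in dataset name
KEEP=7
NOW=$(date +%Y%m%d-%H%M%S)

# 1. Take a new snapshot.
$ZFS snapshot "${DATASET}@auto-${NOW}"

# 2. Rotation: list this dataset's snapshots oldest-first and
#    destroy all but the newest $KEEP.
$ZFS list -H -t snapshot -o name -s creation -d 1 "$DATASET" |
  grep "@auto-" |
  awk -v keep="$KEEP" '{l[NR]=$0}
    END {for (i = 1; i <= NR - keep; i++) print l[i]}' |
  while read -r snap; do $ZFS destroy "$snap"; done

# 3. Delta copy: send only the changes since the last snapshot the
#    second NAS already has ("auto-prev" is a stand-in for that name,
#    which you'd look up on the receiver in practice).
$ZFS send -i "@auto-prev" "${DATASET}@auto-${NOW}" |
  $SSH backup2 zfs receive -F tank/capsule
```

Tools like sanoid/syncoid or znapzend automate exactly this snapshot-retention and incremental-send pattern if you'd rather not maintain the script yourself.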