Finding a Native ZFS Replication Partner for SmartOS
An existing client running SmartOS with ZFS in a production environment needed a reliable backup partner, but off-the-shelf solutions didn't fit. Here's how we engineered a backup solution that exceeded their expectations.
- Client environment: SmartOS / Joyent
- Solution: KISS Cloud ZFS Mirror
- Replication type: Native ZFS Send / Receive
- Location: Australia — Tier 4 DC
- Δ only: changed blocks per snapshot cycle
- 100%: dedicated, isolated zones per client
- Full: ZFS dataset admin permissions retained
- 0: bandwidth throttling on restore
The Conversation That Started It
During a regular review with one of our long-standing clients, the discussion turned to data protection. Their production environment was built on SmartOS – a hypervisor-grade operating system originally developed by Joyent, with ZFS baked into its core. It’s a mature, battle-tested stack, but an uncommon one. That uniqueness created a specific problem: finding a backup destination that could speak the same language.
General-purpose cloud backup products weren’t going to cut it. The client didn’t just need somewhere to push files – they needed a genuine ZFS replication partner that could honour the protocol natively.
“They didn’t just need somewhere to push files – they needed a replication partner that could speak ZFS natively.”
Why Native ZFS Replication is Different
Most backup tools – even capable ones like rsync – work by reading all source data, comparing it to the destination, and transferring the differences. This is functional, but it carries an overhead: every cycle involves a full read of the dataset, regardless of how little has actually changed. ZFS native replication works differently. Using zfs send and zfs receive, the system captures a point-in-time snapshot and then, on subsequent runs, transmits only the delta between snapshots. No full reads. No redundant comparisons. Just the changed blocks, compressed and streamed over SSH.
For this to work correctly, there’s a non-negotiable requirement: admin-level permissions on both sides of the transaction. The source system needs to be able to create, send, and manage datasets on the target — operations that typically require root or equivalent access. In most public cloud environments, that level of access simply isn’t available.
How ZFS Send/Receive Works
A zfs snapshot captures the exact state of a dataset at a point in time. When a second snapshot is taken later, ZFS calculates the precise difference between them. The zfs send -i command streams only that incremental delta to the receiving side via zfs receive. The target doesn’t need to re-read its existing data — it simply applies the incoming stream. The result is dramatically lower bandwidth consumption, reduced disk I/O wear on both ends, and faster backup windows.
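That cycle can be sketched as a handful of commands. The sketch below is a dry run — it prints each command rather than executing it, so nothing touches a real pool — and the dataset, target dataset, and host names (zones/data, tank/client1/data, mirror.example.com) are illustrative placeholders, not the client's actual configuration.

```shell
#!/bin/sh
# Dry-run sketch of a ZFS replication cycle: commands are printed, not
# executed. SRC, DST, and HOST are illustrative placeholders.
SRC="zones/data"                     # source dataset on the SmartOS host
DST="tank/client1/data"              # dataset inside the client's zone
HOST="backup@mirror.example.com"     # replication target

# First run: baseline snapshot, full stream over SSH
echo "zfs snapshot ${SRC}@base"
echo "zfs send ${SRC}@base | ssh ${HOST} zfs receive ${DST}"

# Every later run: take a new snapshot, then send only the delta since
# the previous one with -i; the target simply applies the stream
echo "zfs snapshot ${SRC}@daily1"
echo "zfs send -i ${SRC}@base ${SRC}@daily1 | ssh ${HOST} zfs receive ${DST}"
```

Only the second pair of commands recurs on each cycle — the incremental `zfs send -i` stream is what keeps bandwidth and I/O proportional to change, not to dataset size.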
The Architecture Problem — and Our Solution
This is the challenge we had to solve. Giving a client root-level ZFS access on a shared storage system isn’t viable — you can’t hand one customer the keys to infrastructure shared with others.
Our answer was architectural: rather than adapting the client to fit a shared model, we built the service so that each client who needs native ZFS replication gets their own dedicated, isolated zone with its own dedicated dataset. Inside that zone, they have full administrative permissions – create, send, receive, snapshot, clone, destroy – just as they would on their own hardware.
From the source SmartOS system’s perspective, it’s replicating to a device it fully controls on the other end. The handshake works. The protocol is satisfied. No other client’s data is anywhere near theirs.
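Within a dedicated zone, that control can also be scoped using ZFS's own delegation mechanism, `zfs allow`, which grants specific dataset operations to a non-root user. The dry-run sketch below (the user name "backupuser" and dataset "tank/client1" are illustrative) prints how such a delegation would look:

```shell
#!/bin/sh
# Dry-run sketch: commands are printed, not executed. "backupuser" and
# "tank/client1" are illustrative names.
PERMS="create,receive,mount,snapshot,clone,destroy,send"
DATASET="tank/client1"

# Delegate the full set of replication-related operations to a
# non-root user over the client's dataset subtree
echo "zfs allow backupuser ${PERMS} ${DATASET}"

# Inspect the delegation table for the dataset
echo "zfs allow ${DATASET}"
```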
What the Client Can Now Do
- Create and manage child datasets within their zone as their environment grows
- Set and maintain their own snapshot retention policies independent of ours
- Send and receive data on demand – not just on a scheduled cycle
- Transmit only the changed blocks between snapshots, slashing bandwidth and backup windows
- Eliminate the full-read overhead associated with tools like rsync
- Remotely mount and access their data directly in the event of a disaster at their primary site
- Optionally hand management of the environment back to us as a fully managed, turnkey service
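The retention point above can be sketched the same way. This dry-run fragment (the dataset name and the keep-count of 3 are illustrative, and the snapshot list is simulated in place of a real `zfs list -t snapshot` call) prints a destroy command for each snapshot beyond the retention limit, oldest first:

```shell
#!/bin/sh
# Dry-run retention sketch: the snapshot list is simulated and destroy
# commands are printed, not executed. Names and KEEP are illustrative.
DATASET="tank/client1/data"
KEEP=3

# In production this list would come from:
#   zfs list -t snapshot -o name -s creation -H ${DATASET}
SNAPSHOTS="${DATASET}@day1
${DATASET}@day2
${DATASET}@day3
${DATASET}@day4
${DATASET}@day5"

TOTAL=$(printf '%s\n' "$SNAPSHOTS" | wc -l)
EXCESS=$((TOTAL - KEEP))

# Oldest snapshots come first (sorted by creation), so prune from the top
printf '%s\n' "$SNAPSHOTS" | head -n "$EXCESS" | while read -r SNAP; do
    echo "zfs destroy ${SNAP}"
done
```

Because the client holds full admin rights on their dataset, a policy like this is entirely theirs to set — they can keep three snapshots or three hundred, independent of our defaults.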
Why This Matters Beyond SmartOS
While this case involved SmartOS specifically, the same model applies to any ZFS-backed system:
TrueNAS, Ubuntu with ZFS, Solaris, or any other platform that supports native ZFS send/receive. If your production environment already runs on ZFS, you deserve a backup destination that meets it at the protocol level – not one that forces a translation layer in between.
“If your environment already runs on ZFS, your backup destination should meet it at the protocol level.”
Infrastructure You Can Trust
The KISS Cloud ZFS Mirror service runs on ZFS arrays built to a minimum of raidz2 – tolerating two simultaneous drive failures per vdev. The platform is hosted in an Australian Tier 4 data centre, with all data remaining under Australian data sovereignty laws. There is no bandwidth throttling on restore — your full connection speed is available when you need it most.
And unlike hyperscalers where support means a ticket queue, you’re working with a local Australian team who can get on a call, understand your environment, and help you set up and validate your replication jobs from day one.
Running a ZFS Environment?
Talk to us about setting up a native replication partner that actually understands your stack.