A bit more about the design philosophy and how ByteSync may differ from other synchronization tools:
ByteSync is meant to give users control and visibility over what’s happening.
Instead of syncing in the background, it operates through explicit sessions — you decide when and how synchronization occurs, selecting which changes to apply and in which direction before any transfer happens.
This makes it particularly useful for data management, backups, or multi-location setups where you need to understand what’s being moved before it happens.
Each session defines one or more DataNodes (repositories) and DataSources (folders or files).
For example, you might have a local workstation with a DataNode containing project folders, a NAS with backups and archives, and a remote server for off-site replication.
All of them can be part of the same session, compared and synchronized through end-to-end encrypted exchanges.
This model keeps things explicit while still supporting flexible topologies — local or remote, single or multi-participant.
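To make that model concrete, here is a minimal sketch of how a session could be described. The class and field names are illustrative assumptions, not ByteSync's actual API:

    # Hypothetical sketch of the session model; names are illustrative only.
    from dataclasses import dataclass, field

    @dataclass
    class DataSource:
        path: str  # a folder or a single file

    @dataclass
    class DataNode:
        name: str  # one participant/repository, e.g. "workstation"
        sources: list[DataSource] = field(default_factory=list)

    @dataclass
    class Session:
        nodes: list[DataNode] = field(default_factory=list)

    # The three-machine example above, expressed in this sketch:
    session = Session(nodes=[
        DataNode("workstation", [DataSource("/home/alice/projects")]),
        DataNode("nas", [DataSource("/volume1/backups"),
                         DataSource("/volume1/archives")]),
        DataNode("offsite-server", [DataSource("/srv/replica")]),
    ])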
When synchronizing over the internet, transfers go through a temporary encrypted buffer in the cloud, used only as a relay for data exchange.
All content is protected with end-to-end encryption (E2EE) — nothing is stored or accessible on the relay, and data exists there only for the duration of the transfer.
This allows remote synchronization to work seamlessly without VPNs, firewall rules, or manual network setup.
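As a rough illustration of what "end-to-end encrypted via a relay" means in practice, the sender encrypts each chunk locally, the relay only ever holds ciphertext, and the receiver decrypts locally. The key handling below (AES-GCM via the Python cryptography package, with a pre-shared key) is my assumption for the sketch, not ByteSync's actual protocol:

    # E2EE-relay sketch: the relay never sees plaintext or the key.
    import os
    from cryptography.hazmat.primitives.ciphers.aead import AESGCM

    # Shared between participants out of band; never sent to the relay.
    key = AESGCM.generate_key(bit_length=256)

    def seal_chunk(plaintext: bytes) -> bytes:
        """Encrypt locally before upload; nonce is prepended to ciphertext."""
        nonce = os.urandom(12)
        return nonce + AESGCM(key).encrypt(nonce, plaintext, None)

    def open_chunk(sealed: bytes) -> bytes:
        """Decrypt locally after download from the relay."""
        nonce, ciphertext = sealed[:12], sealed[12:]
        return AESGCM(key).decrypt(nonce, ciphertext, None)

    # Only the output of seal_chunk() ever transits the relay buffer.
    assert open_chunk(seal_chunk(b"file block")) == b"file block"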
Under the hood, ByteSync relies on full desktop clients for Windows, Linux, and macOS, along with cloud components that handle orchestration and temporary relaying when needed.
A command-line mode is also planned, and the design work for it is already underway.
Performance-wise, ByteSync uses a two-stage inventory process: an initial indexing phase that collects file sizes and modification timestamps, followed by a comparison phase that computes block-level signatures only for files that show differences.
This avoids redundant network round-trips and dramatically improves performance in remote scenarios where latency would otherwise make full scans impractical.
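A rough sketch of that two-stage idea follows. The block size, hash choice, and function names are my assumptions for illustration, not ByteSync's actual implementation:

    # Stage 1 exchanges only cheap metadata; stage 2 hashes blocks only for
    # files whose metadata differs between participants.
    import hashlib
    import os

    BLOCK_SIZE = 1024 * 1024  # assumed 1 MiB blocks

    def index_tree(root: str) -> dict[str, tuple[int, float]]:
        """Stage 1: relative path -> (size, mtime). No file contents read."""
        index = {}
        for dirpath, _dirs, files in os.walk(root):
            for name in files:
                full = os.path.join(dirpath, name)
                st = os.stat(full)
                index[os.path.relpath(full, root)] = (st.st_size, st.st_mtime)
        return index

    def files_to_compare(local: dict, remote: dict) -> list[str]:
        """Only files whose (size, mtime) differ get the expensive pass."""
        return [p for p in set(local) | set(remote)
                if local.get(p) != remote.get(p)]

    def block_signatures(path: str) -> list[str]:
        """Stage 2: per-block hashes for a file flagged in stage 1."""
        sigs = []
        with open(path, "rb") as f:
            while block := f.read(BLOCK_SIZE):
                sigs.append(hashlib.sha256(block).hexdigest())
        return sigs

Because stage 1 is metadata-only, the expensive hashing and the network round-trips it implies are confined to the (usually small) set of files that actually changed.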
Looks neat; a benchmark page comparing it to other tools would be great. E.g., how does it compare to the speed of rsync? Others?