Just throwing an idea out there, not sure how viable this is. Is it worth running the benchmarks two ways: (1) using default settings and (2) using settings optimized for this workload?
I ask because almost all of these programs have tweaks that can improve performance in certain respects. For example, with Kopia you can use smaller chunks to improve deduplication and thus shrink the backup repository. Kopia uses 4M chunks by default because that is most efficient when the number of files gets large, but in a scenario with fewer files, switching to 1M chunks may result in a smaller repository. I'm sure the other programs have similar settings that can be tweaked.
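For reference, here is a rough sketch of how the Kopia chunk size could be tuned, assuming the current CLI where the splitter is chosen at repository creation time; the repository path is made up and the exact flag values may differ between Kopia versions:

```sh
# Sketch only: pick a smaller dynamic splitter when creating the repository.
# The default is DYNAMIC-4M-BUZHASH (~4M average chunks); option names are
# from memory and may vary by Kopia version. The path below is hypothetical.
kopia repository create filesystem \
    --path /backups/kopia-repo \
    --object-splitter=DYNAMIC-1M-BUZHASH

# Kopia also ships a built-in splitter benchmark that can help pick a value:
kopia benchmark splitter
```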
The idea ain't bad, but some choices have already been made.
I think I'll just optimize the backup arguments until I can squeeze the most out of each program, which I will also document.
Once I'm done with that, I may redo a vanilla benchmark for comparison.