
When using 7-Zip with `7z u -uq0 -ssw -mx9 -stl files.7z files` (update/synchronize with ultra compression), I think I've seen a slight random size difference in the resulting archive from run to run, which makes my backup program classify the archive as new. I recall reading somewhere about an interplay between the number of actual threads and the number of compression blocks that could cause a slight difference in archive size between runs. For now I'm using a single thread with `-mmt1` to get a deterministic result, but it's slower and perhaps not really necessary?
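
The kind of check I've been doing is roughly this (a sketch assuming GNU coreutils; `files` is the directory being archived, and `run1.7z`/`run2.7z` are throwaway names):

```
# Rebuild the same archive twice from identical, untouched input, then
# compare byte sizes and content hashes to see whether multithreading
# introduces any run-to-run variation.
rm -f run1.7z run2.7z
7z a -ssw -mx9 -stl run1.7z files
7z a -ssw -mx9 -stl run2.7z files
stat -c '%s %Y' run1.7z run2.7z   # sizes and modification times
sha256sum run1.7z run2.7z         # content hashes
```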

An archive rebuilt from exactly the same files would still be considered "new" if either its modification time (`-stl` fixes that) or its size (`-mmt1`, hopefully unnecessary?) differs from the cloud copy.
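
For context, this is roughly the comparison I assume the backup program makes (a sketch; `cloud/files.7z` is a hypothetical local mirror of the cloud copy):

```
# Metadata-only comparison: the archive looks "new" if size or mtime differ.
stat -c '%s %Y' files.7z cloud/files.7z

# Byte-for-byte comparison, which would not be fooled by metadata alone.
cmp --silent files.7z cloud/files.7z && echo identical || echo differs
```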

I could keep experimenting on my own, but I hope there is a definitive answer. Thanks.

user612313
  • Any update to the files.7z archive should update its last modification time, and will likely alter the file size, IMO. Hopefully, your cloud backups are relying on file hash comparisons, rather than file modification timestamps and sizes alone. Anyhow, you may need to dig into other -m* compression method options in order to prevent 7z from automatically detecting certain settings between archive runs (e.g., see [What are the best options to use when compressing files using 7 Zip?](https://superuser.com/q/281573/1535708)) – leeharvey1 Jun 19 '22 at 15:02
  • @leeharvey1 No, the archive rebuilt from the same constituents is identical with the `-stl -mmt1` options. I'm asking whether I really need the `-mmt1`. Thanks – user612313 Jun 20 '22 at 09:10

0 Answers