
I have one file of around 10 GB. It changes slightly over time. I need to take snapshots of this file and compress them.

Currently I use 7z and each archive is around 700 MB. Is there any other archiver that could leverage the fact that the new file is very similar to the one already packed, and produce a smaller archive?

frlan
misha nesterenko
  • possible duplicate of [Incremental backup with 7zip](http://superuser.com/questions/544336/incremental-backup-with-7zip) – agtoever Sep 03 '14 at 16:57
  • 3
    If it's a line-based text file, you can use **diff** to generate difference files which can later be passed to **patch** to recreate each version as required. The difference files should be quite small. You would need to keep a reference revision which you update whenever the difference files become too unwieldy. Be aware that **diff** and **patch** will take some time to run on a 10GB file, but zipping such a file will not be fast either. – AFH Sep 03 '14 at 17:57
  • @agtoever, I have seen that post, but it seems that `7z` works on a per-file basis. I cannot use that because I have only one file – misha nesterenko Sep 03 '14 at 18:29
  • @mishanesterenko: ok. Sry. My bad. – agtoever Sep 03 '14 at 18:59
  • 7z alone can't do this. There is incremental backup in 7z, but it works at the file level and detects changes by timestamps (which works great for diffing directories). Misha needs a tool that works at the byte level and detects changes by hashing. – Vlastimil Ovčáčík Sep 04 '17 at 16:28
  • `xdelta3` worked great for me on a 50 GB file, https://stackoverflow.com/questions/1945075/how-do-i-create-binary-patches – Vlastimil Ovčáčík Sep 05 '17 at 08:23
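AFH's diff/patch idea from the comments can be sketched with Python's standard `difflib` module: keep a reference revision, store only the difference, and reconstruct either version on demand. This is a toy in-memory illustration of the concept, not the `diff`/`patch` command-line tools themselves, and it only applies to line-based text files.

```python
import difflib

# Reference revision and a later, slightly changed snapshot (illustrative data).
reference = ["alpha\n", "beta\n", "gamma\n"]
snapshot = ["alpha\n", "BETA\n", "gamma\n", "delta\n"]

# The ndiff output is the "difference file": it is small when the
# snapshots are similar, so it is what you would store and compress.
delta = list(difflib.ndiff(reference, snapshot))

# Either side can be recreated from the stored delta.
restored_snapshot = list(difflib.restore(delta, 2))
restored_reference = list(difflib.restore(delta, 1))

assert restored_snapshot == snapshot
assert restored_reference == reference
```

As AFH notes, when the accumulated difference files grow unwieldy you would promote a recent snapshot to be the new reference revision.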

1 Answer


You want a backup tool that is capable of delta-block / block-level differential backups. Google says Areca is one such tool.

ThatOneDude