
We have a directory of sparse files (du reports 240M for one of these files, while ls reports the full apparent size of 960M).
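To illustrate, the discrepancy looks roughly like this for a stand-in file data.img (output trimmed):

    $ ls -lh data.img        # apparent size
    -rw-r--r-- 1 user user 960M ... data.img
    $ du -h data.img         # blocks actually allocated on disk
    240M    data.img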

When we copy one of these files over to S3, we end up uploading the entire 960M, and on downloading from S3, du now returns the full 960M.

We've tried gzipping it, which reduces the upload/download sizes, but the extracted file still occupies the full 960M on disk (according to du).

Is there any way we can convert these files back into sparse files?

Jedi

1 Answer


You can do this using fallocate --dig-holes, given a recent version of util-linux.
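For example, to re-sparsify a downloaded file in place (file.img is just a stand-in name; -d is the short form of --dig-holes):

    fallocate --dig-holes file.img    # scan for zero-filled ranges and punch holes
    du -h file.img                    # allocated size should drop back down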

Alternatively, cp --sparse=always will create a sparse file as the copy destination (then you can move it over the original).
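A sketch of that approach, again with a stand-in filename:

    cp --sparse=always file.img file.img.sparse   # re-detect runs of zeros while copying
    mv file.img.sparse file.img                   # replace the original with the sparse copy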

Sparse files can be archived using the tar -S or tar --sparse option in GNU tar; e.g. tar -czSf foo.tar.gz foo if you also want compression, or tar -cSf foo.tar foo if you don't.
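So a round trip through S3 could look like this (assuming you use the AWS CLI to move files; the bucket name is a placeholder):

    tar -czSf foo.tar.gz foo                  # record the holes in the archive, then compress
    aws s3 cp foo.tar.gz s3://my-bucket/      # upload only the compressed archive
    aws s3 cp s3://my-bucket/foo.tar.gz .     # download on the other side
    tar -xzf foo.tar.gz                       # GNU tar recreates the recorded holes on extraction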

u1686_grawity