
I am trying to copy one directory with a large number of files to another destination. I did:

cp -r src_dir another_destination/

Then I wanted to confirm that the size of the destination directory is the same as the original one:

du -s src_dir
3782288 src_dir

du -s another_destination/src_dir
3502320 another_destination/src_dir

Then I thought there might be some symbolic links that were not followed by the cp command, so I added the -a flag:

-a Same as -pPR options. Preserves structure and attributes of files but not directory structure.

cp -a src_dir another_destination/

but du -s gave me the same results. Interestingly, both the source and the destination have the same number of files and directories:

tree src_dir | wc -l
    4293

tree another_destination/src_dir | wc -l
    4293

What am I doing wrong that I get different sizes with the du command?

UPDATE

When I try to get sizes of individual directories with the du command I get different results:

du -s src_dir/sub_dir1
1112    src_dir/sub_dir1

du -s another_destination/src_dir/sub_dir1
1168    another_destination/src_dir/sub_dir1

When I view files with ls -la, individual file sizes are the same but totals are different:

ls -la src_dir/sub_dir1
total 1168
drwxr-xr-x     5 hirurg103  staff     160 Jan 30 20:58 .
drwxr-xr-x  1109 hirurg103  staff   35488 Jan 30 21:43 ..
-rw-r--r--     1 hirurg103  staff  431953 Jan 30 20:58 file1.pdf
-rw-r--r--     1 hirurg103  staff  126667 Jan 30 20:54 file2.png
-rw-r--r--     1 hirurg103  staff    7386 Jan 30 20:49 file3.png

ls -la another_destination/src_dir/sub_dir1
total 1112
drwxr-xr-x     5 hirurg103  staff     160 Jan 30 20:58 .
drwxr-xr-x  1109 hirurg103  staff   35488 Jan 30 21:43 ..
-rw-r--r--     1 hirurg103  staff  431953 Jan 30 20:58 file1.pdf
-rw-r--r--     1 hirurg103  staff  126667 Jan 30 20:54 file2.png
-rw-r--r--     1 hirurg103  staff    7386 Jan 30 20:49 file3.png
Boann
Hirurg103
  • Interesting question. Are the source and destination different drives? I wonder if this comes down to the block size of the filesystems. – davidgo Feb 01 '19 at 18:33
  • Hi @davidgo, the source and destination are different directories on the same drive. I updated the question with `ls -la` results. See UPDATE – Hirurg103 Feb 01 '19 at 18:35
  • What filesystem? It may be that the directories themselves are larger (take more space) than they need to be. Compare [this question](https://serverfault.com/q/264124). New directories created by `cp` are exactly as large as they need to be. – Kamil Maciorowski Feb 01 '19 at 20:03
  • Use `ls -ls` to see how much disk space the files are using. – Barmar Feb 01 '19 at 23:20
  • Is it possible that your source directory tree contains hidden files that aren't being copied to the destination? – jamesqf Feb 02 '19 at 03:47
  • ls -l does not give the file size; it gives the maximum file offset. For sparse files, this can be very different from the number of blocks used, which you get from ls -s. Try running ls -als and I think you'll see some of the original files had contiguous zero blocks that were not copied, just offset. – mpez0 Feb 02 '19 at 16:55
  • A recursive md5sum is your friend when you need to verify that all files were actually copied and the contents are the same. rsync is another tool that can both copy and verify whole structures and files; it also speeds up the process if some of the files are already in place. – Sampo Sarrala - codidact.org Feb 03 '19 at 15:35
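The checksum approach suggested in the comments can be sketched as follows (a minimal sketch; the paths are the question's, and md5sum assumes GNU coreutils — on macOS, `md5 -r` produces compatible output):

```shell
# Verify a copy by content rather than by disk usage.
src=src_dir
dst=another_destination/src_dir

# rsync in dry-run, checksum mode lists any file whose contents differ:
rsync -rcni "$src/" "$dst/"

# Or hash every file in each tree and diff the two sorted lists:
(cd "$src" && find . -type f -exec md5sum {} + | sort -k 2) > /tmp/src.md5
(cd "$dst" && find . -type f -exec md5sum {} + | sort -k 2) > /tmp/dst.md5
diff /tmp/src.md5 /tmp/dst.md5 && echo "contents match"
```

Empty diff output means every file exists in both trees with identical contents, regardless of what du reports.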

2 Answers


That is because du, by default, shows not the size of the file(s) but the disk space they are using. You need to use the -b option to get the sum of file sizes instead of the total disk space used. For example:

% printf test123 > a
% ls -l a
-rw-r--r-- 1 mnalis mnalis 7 Feb  1 19:57 a
% du -h a
4,0K    a
% du -hb a
7       a

Even though the file is only 7 bytes long, it occupies a whole 4096 bytes of disk space (in my particular example; it will vary depending on the filesystem used, cluster size, etc.).

Also, some filesystems support so-called sparse files, which do not use any disk space for blocks which are all zeros. For example:

% dd if=/dev/zero of=regular.bin bs=4k count=10
10+0 records in
10+0 records out
40960 bytes (41 kB, 40 KiB) copied, 0,000131003 s, 313 MB/s
% cp --sparse=always regular.bin sparse.bin
% ls -l *.bin
-rw-r--r-- 1 mnalis mnalis 40960 Feb  1 20:04 regular.bin
-rw-r--r-- 1 mnalis mnalis 40960 Feb  1 20:04 sparse.bin
% du -h *.bin
40K     regular.bin
0       sparse.bin
% du -hb *.bin
40960   regular.bin
40960   sparse.bin

In short, to verify that all files were copied, use du -sb instead of du -s.
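Applied to the question's trees, the check might look like this (a sketch assuming GNU du; the BSD du shipped with macOS lacks -b, in which case summing file sizes with find and stat is one workaround — GNU stat syntax shown):

```shell
# Apparent-size totals; equal numbers mean every byte was copied,
# regardless of block allocation, sparseness, or directory slack.
du -sb src_dir another_destination/src_dir

# Rough equivalent where du has no -b (sums regular files only; GNU stat):
find src_dir -type f -exec stat -c %s {} + | awk '{s+=$1} END {print s}'
```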

Boann
Matija Nalis
  • Not only [sparse files](https://en.wikipedia.org/wiki/Sparse_file) but also compressed files and [inline files](https://ext4.wiki.kernel.org/index.php/Ext4_Disk_Layout#Inline_Data)/[resident files](https://en.wikipedia.org/wiki/NTFS#Resident_vs._non-resident_attributes) cause the size on disk to be smaller than the file size – phuclv Feb 02 '19 at 06:58
  • And weird results on btrfs/zfs. – val - disappointed in SE Feb 02 '19 at 11:19
  • @val: BTRFS compression doesn't affect `du` output: that would make compressed files look sparse to programs that use the usual algorithm of length != used blocks. https://btrfs.wiki.kernel.org/index.php/Compression#Why_does_not_du_report_the_compressed_size.3F – Peter Cordes Feb 02 '19 at 19:08
  • @PeterCordes But CoW stuff makes du output pretty senseless. – val - disappointed in SE Feb 02 '19 at 20:09
  • What about duplicate files? Can't modern systems save space by recognizing duplicate content? – FreeSoftwareServers Feb 02 '19 at 21:39
  • @FreeSoftwareServers: yes, ZFS can hash written blocks and opportunistically CoW-map them to the same physical block. This is called Deduplication. BTRFS can do it, too, with batch scans: https://btrfs.wiki.kernel.org/index.php/Deduplication. Patches for "inband" (on the fly) detection are apparently being worked on, but that requires a lot of RAM for hashes of known blocks. – Peter Cordes Feb 02 '19 at 21:47
  • @Matija Nalis actually I cannot use `-b` option on my Mac. `du: illegal option -- b. usage: du [-H | -L | -P] [-a | -s | -d depth] [-c] [-h | -k | -m | -g] [-x] [-I mask] [file ...]` – Hirurg103 Feb 04 '19 at 19:07
  • Odd, on my Linux machine I see this: `du -sb SystemSoftware*` gives `10531849924 SystemSoftware` and `10531751620 SystemSoftware_copy`. I run Ubuntu Xenial with ext4 - but du should be the same across Linux and Mac? – Morten Sep 13 '19 at 09:38

It might be due to the size of the directory "files".

In most filesystems, on disk, a directory is much like a regular file (mostly just a list of names and inode numbers), using more blocks as it grows.

If you add many files, the directory itself grows. But if you remove them afterwards, in many filesystems, the directory will not shrink.

So if one of the directories in your original tree had many files at some point, which were later deleted, the copy of that directory will be "smaller", as it only uses as many blocks as it needs for the current number of files.
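A minimal sketch of that effect (the directory name is hypothetical, and the exact numbers are filesystem-dependent; on ext4, for example, the original directory typically keeps its enlarged size while the copy is rebuilt at the minimum):

```shell
mkdir bloated
# Grow the directory file itself by creating many entries...
for i in $(seq 1 10000); do : > "bloated/f$i"; done
# ...then remove them; on many filesystems the directory keeps its blocks:
rm bloated/f*
cp -a bloated bloated_copy
# Compare the sizes of the directory entries themselves:
ls -ld bloated bloated_copy
```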

In the listings in your update, there are 3 directories you haven't listed. Compare the size of those (or descendants of those) in your ls -al output.

To find where the difference is, you can try an ls -alR on both directories, redirected to a file, and then a diff of the two outputs.
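Spelled out, that might look like this (hypothetical temp-file names; running ls from inside each tree keeps the differing path prefixes out of the diff):

```shell
(cd src_dir && ls -alR) > /tmp/src.listing
(cd another_destination/src_dir && ls -alR) > /tmp/dst.listing
# Any differing "total" lines point at the directory that takes more space:
diff /tmp/src.listing /tmp/dst.listing
```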

Ramhound
jcaron
  • Good catch for another possibility! However, in the case of OP's `cp -a src_dir another_destination/` it is unlikely, as `another_destination` would be newly created and thus optimized, while `src_dir` (which might have had some bigger directories from past creation/additions) could indeed be bigger than needed. However, the results show that `src_dir` is actually smaller (`1112 < 1168`). – Matija Nalis Feb 02 '19 at 06:44
  • @MatijaNalis Only the first example after "Update" shows that (1112 < 1168)... the example below that has the figures reversed, and the first example also shows the source larger (3782288 vs. 3502320). Possibly a typo by OP? – TripeHound Feb 02 '19 at 07:36
  • `> In the listings in your update, there are 3 directories you haven't listed`. Actually they are files, not directories; see the file names. `> if one of the directories in your original tree had many files at some point, which were later deleted`. I copied the source directory from a remote server with the rsync command and didn't delete anything from it – Hirurg103 Feb 02 '19 at 08:42
  • @Hirurg103 the `.` entries show 5 links on the inode. One is the link from the parent directory to this one. Another is `.`. There are 3 more links, which should be `..` links from subdirectories. Unless I'm missing something very weird, there must be 3 subdirectories in there. Are you saying that those listings are the full output? – jcaron Feb 02 '19 at 12:06