
When comparing the disk usage of a mounted hard drive, the reported usage is completely different between the host and the container. I am running Ubuntu 14.04 both on the host and in the LXC container.

The hard drive is mounted on the host, as confirmed by /etc/mtab, which has the following entry: /dev/nvme0n1 /mnt/SSD ext4 rw 0 0. The drive is, however, not mounted via /etc/fstab. The drive is mounted inside the LXC container using fstab settings in /var/lib/lxc/container_name/fstab.
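
For reference, the mtab entry above corresponds to a plain manual mount rather than an /etc/fstab entry, roughly equivalent to:

sudo mount /dev/nvme0n1 /mnt/SSD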

From the host:

# du -hs /mnt/SSD/
20K     /mnt/SSD/

# df -h
Filesystem                 Size  Used Avail Use% Mounted on
udev                        63G  4.0K   63G   1% /dev
tmpfs                       13G  1.4M   13G   1% /run
/dev/mapper/sifr--vg-root  314G  241G   58G  81% /
none                       4.0K     0  4.0K   0% /sys/fs/cgroup
none                       5.0M     0  5.0M   0% /run/lock
none                        63G     0   63G   0% /run/shm
none                       100M     0  100M   0% /run/user
/dev/sda1                  236M  100M  124M  45% /boot
/dev/nvme0n1               1.1T   71M  1.1T   1% /mnt/SSD

From the container:

$ du -hs /mnt/SSD/
16G /mnt/SSD/

$ df -h
Filesystem                 Size  Used Avail Use% Mounted on
/dev/mapper/sifr--vg-root  314G  241G   58G  81% /
none                       4.0K     0  4.0K   0% /sys/fs/cgroup
none                        13G  136K   13G   1% /run
none                       5.0M     0  5.0M   0% /run/lock
none                        63G  4.0K   63G   1% /run/shm
none                       100M     0  100M   0% /run/user
  1. How and why does the same drive show two different usages?
  2. Which is the correct usage?

Update: After unmounting the drive using sudo umount /dev/nvme0n1, I now see 16G of disk usage on the host:

$ du -hs /mnt/SSD/
16G     /mnt/SSD/

I mounted another drive, /dev/sdb, using /etc/fstab and gave the container access to it using the same method (/var/lib/lxc/container_name/fstab). The second drive's usage is consistent, and its contents are available in both the container and the host.

The differences between the two drives are that /dev/nvme0n1 is an NVMe drive which was mounted manually, whereas /dev/sdb is a magnetic drive and was mounted using /etc/fstab.
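
For comparison, the /etc/fstab entry for the second drive looks something like this (the mount point shown here is only a placeholder, not the actual one):

/dev/sdb    /mnt/HDD    ext4    defaults    0    2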

What could be causing the difference in behaviour, and how can I make /dev/nvme0n1 available in the container?

Greg
    Please [edit](http://askubuntu.com/posts/748135/edit) your post and append the output of `du -hs /mnt/SSD/` at the host **after** unmounting `/dev/nvme0n1` – cmks Mar 20 '16 at 14:11
  • @cmks The drive is now unmounted and the disk usage in the container is now consistent with the host. Does this mean that the container did not have access to the mounted drive? What could cause that? – Greg Mar 20 '16 at 23:17
  • Yes, I have posted my research in the answer – cmks Mar 21 '16 at 07:49

1 Answer


The reason is that the host has access to the mounted drive while the container does not, because nothing is mounted at /mnt/SSD inside the container. So the host reads and writes data on the SSD (/dev/nvme0n1) whenever it accesses anything below /mnt/SSD, while the container accesses a plain directory on its root disk (/dev/mapper/sifr--vg-root).
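
You can also see the data hiding underneath the host's mount point without unmounting the SSD, for example by bind-mounting the root filesystem to a temporary location and looking at /mnt/SSD through that view (just an illustration, the paths are arbitrary):

sudo mkdir /tmp/rootview
sudo mount --bind / /tmp/rootview
du -hs /tmp/rootview/mnt/SSD     # the data stored on the root disk under /mnt/SSD
sudo umount /tmp/rootview
sudo rmdir /tmp/rootview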

To have this in the container you need a bind mount, and you can have LXC create the target directory in the container's filesystem automatically. To do so, add the create=dir option in the fstab of the container:

/mnt/SSD      /mount/point/in/the/container    none   bind,create=dir 0 0

The create option can take two values:

  • create=dir (will do a mkdir_p on the path)

  • create=file (will do a mkdir_p on the dirname + a fopen on the path)
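
After adding the entry, restart the container and verify the mount from inside it, for example (container_name and the mount point are the placeholders used above):

sudo lxc-stop -n container_name
sudo lxc-start -n container_name -d
sudo lxc-attach -n container_name -- df -h /mount/point/in/the/container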

cmks
  • The post is about `du -hs` displaying different results for the same drive depending on which location the command is run from. That is, if the command is run from the host, you can see the size is 20K (essentially empty), whereas when run from the container the size is displayed as 16G. – Greg Mar 20 '16 at 13:59