
I am using dd to transfer a large kernel core file (4GB ~ 12GB) in a crash kernel that has a small amount of memory available (~400MB).

The problem is that dd may trigger an OOM panic, since it just dumps a big chunk of the vmcore into the socket, which can cause the system to run out of memory.

My question is: how can I throttle dd's speed based on available memory or limit its buffer size?

Thanks.

feeling_lonely
    Is dd actually crashing? Do you face an actual problem, or is this question just based on an unproven theory that it may crash? – Ramhound Apr 20 '18 at 11:18

2 Answers


You can try the nocache flag, e.g.:

dd oflag=nocache if=infile of=outfile bs=4096 
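
Applied to the scenario in the question, a minimal sketch, assuming GNU dd and that the dump target is an already-mounted filesystem (the paths here are only illustrative):

dd if=/proc/vmcore of=/mnt/dump/vmcore bs=4096 oflag=nocache conv=fdatasync

The nocache flag asks the kernel to drop the page cache for the copied data, so the copy does not build up a large cache footprint; it does not throttle the transfer rate itself.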
Sam

Might I suggest using something like this instead of just calling dd?

#!/bin/sh
# Copy $1 to $2 one block at a time, printing a simple progress indicator.
bsize=1048576
fsize=`stat -c %s "${1}"`
# Number of blocks, rounded up to cover a partial final block.
count=$((${fsize}/${bsize}))
if [ $((${fsize}%${bsize})) -ne 0 ] ; then
    count=$((${count}+1))
fi
echo "About to copy ${fsize} bytes in ${count} chunks."
for i in `seq 0 $((${count}-1))` ; do
    # Copy exactly one block per dd invocation so memory usage stays bounded.
    dd if="${1}" of="${2}" bs=${bsize} conv=sparse,notrunc count=1 seek=${i} skip=${i} status=none
    /bin/echo -e -n "\e[2K\e[0G[$((${i}+1))/${count}]"
done
echo

There's not much you can do to limit a single invocation of dd to some maximal memory usage without causing it to die. You can, however, pretty easily script it to copy the file block by block. The above script copies the first argument to the second, one megabyte at a time, while providing a rudimentary progress indicator (that's what the insane-looking echo call in the for loop does). Using busybox, it will run just fine with only 1.5MB of usable userspace memory. Using regular bash and the GNU coreutils, it should have no issue staying below 4MB of memory usage. You can also reduce the block size (by lowering the bsize value) to cut memory usage even further.
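
For reference, a hypothetical invocation of the script (the script name and destination path are assumptions, not part of the answer):

sh chunked-copy.sh /proc/vmcore /var/crash/vmcore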

Austin Hemmelgarn
  • Is seeking supported when reading from /proc/vmcore? – feeling_lonely Apr 23 '18 at 17:13
  • @hebbo Yes, seeking is supported. The easiest way to think about `/proc/vmcore` in this case is that it's the reverse of a memory-mapped file: it provides a regular file interface for a memory mapping. – Austin Hemmelgarn Apr 23 '18 at 18:06
  • I tried to get the size of /proc/vmcore using stat and blockdev, but neither worked. Do you have any idea how to get its size, or how to change the solution to make it size-agnostic? – feeling_lonely Apr 23 '18 at 20:43
  • @hebbo The nice progress indicator won't work for `/proc/vmcore` (it will show up as 2TB in size on x86_64), but the script does actually work fine (`dd` exits immediately once it starts reading past the actual data). – Austin Hemmelgarn Apr 24 '18 at 12:00
  • I am going to try out your solution. But still at this point I am not sure what condition I need to use to exit the loop. I understand what you meant about dd exiting when EOF is reached, but I do not know how to translate that into a meaningful loop exit condition. Also, I am not sure if dd would return a special code when EOF is reached. – feeling_lonely May 02 '18 at 19:33
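
One possible way to turn "dd stops at EOF" into a loop exit condition is to copy each chunk through a small temporary file and stop once a chunk comes back empty. This is only a size-agnostic sketch under those assumptions (temporary-file handling and the dd options mirror the answer's script, and the paths are illustrative):

#!/bin/sh
# Size-agnostic chunked copy: $1 = source (e.g. /proc/vmcore), $2 = destination.
bsize=1048576
i=0
tmp=`mktemp`
while : ; do
    # Read one block; past EOF dd simply produces an empty chunk.
    dd if="${1}" of="${tmp}" bs=${bsize} count=1 skip=${i} status=none
    if [ `stat -c %s "${tmp}"` -eq 0 ] ; then
        break
    fi
    # Write the chunk at the matching offset in the destination.
    dd if="${tmp}" of="${2}" bs=${bsize} seek=${i} conv=notrunc status=none
    i=$((${i}+1))
done
rm -f "${tmp}"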