8

When writing a program, there are times when a runaway process slurps half of my RAM (usually because of a practically infinite loop that keeps growing a large data structure), making the system so slow that I can't even kill the offending program. So I want to use ulimit to kill my program automatically once it starts using an abnormal amount of memory:

$ ulimit -a
core file size          (blocks, -c) 1000
data seg size           (kbytes, -d) 10000
scheduling priority             (-e) 0
file size               (blocks, -f) 1000
pending signals                 (-i) 6985
max locked memory       (kbytes, -l) 64
max memory size         (kbytes, -m) 10000
open files                      (-n) 1024
pipe size            (512 bytes, -p) 8
POSIX message queues     (bytes, -q) 819200
real-time priority              (-r) 0
stack size              (kbytes, -s) 8192
cpu time               (seconds, -t) unlimited
max user processes              (-u) 6985
virtual memory          (kbytes, -v) 100000
file locks                      (-x) unlimited
$ ./run_program

But why is my program still using more RAM than the limits I set (yes, I'm starting the program in the same bash shell)?

Have I misunderstood something about ulimit?

Lie Ryan
  • As you can see, there are limits on several different kinds of memory. Figuring out which limit a particular allocation counts towards is sometimes tricky. Try to get a “runaway” process and post the contents of `/proc/12345/status` where 12345 is the process ID (just the lines beginning with `Vm` are enough). – Gilles 'SO- stop being evil' Nov 02 '10 at 23:43
  • @Gilles: I've tried putting additional constraints on "max memory size", "virtual memory", "core file size", "data seg size", basically everything I can see in ulimit that is related to memory (I don't use many files). The problem with collecting data from /proc/ is that my computer locks up within 2-3 seconds of the runaway starting, and I have to struggle really hard to kill the offending process (many times I'd just use the power button). I'll try to acquire one, though. – Lie Ryan Nov 03 '10 at 01:28

3 Answers

6

`ulimit -m` no longer works. Use `ulimit -v` instead.

The reason is that `ulimit` calls `setrlimit`, and `man setrlimit` says:

RLIMIT_RSS Specifies the limit (in bytes) of the process's resident set (the number of virtual pages resident in RAM). This limit has effect only in Linux 2.4.x, x < 30, and there affects only calls to madvise(2) specifying MADV_WILLNEED.
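
For example (a sketch; the 500000 kB figure and ./run_program are only placeholders), setting the address-space limit in a subshell keeps it from affecting the rest of your session:

$ ( ulimit -v 500000; ./run_program )

Once the process maps more than 500000 kB of address space, its malloc/mmap calls start failing, which usually brings a runaway program down quickly.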

Yariv
6

Your example should work the way you expect (the program gets killed after consuming too much RAM). I just did a small test on my shell server:

First I restricted my limits to be REALLY low:

ulimit -m 10
ulimit -v 10

That led to just about everything getting killed: `ls`, `date` and other small commands are shot before they even begin.

Which Linux distribution do you use? Does your program use only a single process, or does it spawn lots of child processes? In the latter case, ulimit might not always be effective.
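
If you want to check whether a running process actually inherited the limits (12345 below is a placeholder PID), /proc shows the values that apply to it:

$ grep -E 'Max (address space|resident set)' /proc/12345/limits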

Janne Pikkarainen
  • I'm using Gentoo and my program doesn't spawn any subprocesses, but it uses SDL and OpenGL; could that be the cause? Setting ulimit to an extremely low value does cause `ls`, etc. and my program to get killed. The program requires between 4000 and 8000 in normal use (at 4000 the SDL library doesn't get loaded, and at 8000 the program doesn't get killed even though `top` showed that it ate more than half a gigabyte of RAM). – Lie Ryan Nov 02 '10 at 14:53
1

This only works in a single bash session unless you put it into your .bash_profile, and it won't apply to already running processes.

What I find strange is that the:

 max memory size         (kbytes, -m) unlimited

is not present in /etc/security/limits.conf, even though it only limits memory consumption per process, not overall for one user account. Instead of adding cgroups, they should have just modified the existing Unix commands to accommodate those new features.
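
For completeness, a sketch of two ways to make a per-process cap stick beyond one shell session (the user name and values below are only placeholders): pam_limits does accept an `as` (address space, in kB) item in /etc/security/limits.conf, and on a recent systemd system with cgroups v2 a memory cap can be attached per command:

lieryan    hard    as    1000000

$ systemd-run --user --scope -p MemoryMax=1G ./run_program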

snoemq
  • I knew that ulimit only works within the same bash shell, for programs started after the limit is set (see how I started the program from my sample shell session); my question was that the limit still didn't work even after taking that into account. – Lie Ryan Aug 22 '14 at 11:10