
I execute this command:

nohup bash /home/opc/TEST.sh >/dev/null 2>&1 $

and TEST.sh has nohup bash /home/opc/TEST.sh >/dev/null 2>&1 $ at the end of it, so it repeats itself indefinitely.

After a while RAM starts to fill up, and ps -ef | grep TEST.sh gives output full of "tails", leftovers from each cycle of nohup bash …:

root      4312  4294  0 02:15 ?        00:00:00 bash /home/user1/TEST.sh $
root      4432  4312  0 02:50 ?        00:00:00 bash /home/user1/TEST.sh $
root      4594  4432  0 03:26 ?        00:00:00 bash /home/user1/TEST.sh $
root      4722  4594  0 04:01 ?        00:00:00 bash /home/user1/TEST.sh $
root      4796  4722  0 04:37 ?        00:00:00 bash /home/user1/TEST.sh $
root      4962  4830  0 05:05 pts/2    00:00:00 grep --color=auto TEST

How can I automatically clean up the RAM and those "tails" of already executed nohup scripts? Maybe there is some parameter for nohup that cleans up after each execution.

This is the full script named TEST.sh:

#!/bin/bash
cd "$(dirname "$0")"
ffmpeg -re -i /home/opc/"123.mp4" -c copy -f flv rtmp://"output link"
nohup bash /home/opc/TEST.sh >/dev/null 2>&1 $

The purpose of it is to create a looped ffmpeg stream which repeats itself indefinitely until the whole thing is killed on demand.

asked by Kostia
  • Kill all `TEST.sh` processes. And reconsider how you want to do the things – Romeo Ninov Sep 07 '22 at 05:21
  • Do you want to take care of the current mess? Or do you want to fix your script so the problem doesn't happen in the future? If the latter, then possibly `exec nohup …` instead of `nohup …` at the end. Invoke `help exec` in Bash to see what it does. It may or may not be what you want; without seeing the whole script I cannot be sure. Therefore not an answer. – Kamil Maciorowski Sep 07 '22 at 05:24
  • About the trailing `$` character in each of your commands: is it a genuine command line parameter you want `bash` to get? (it seems it does get it). Or did you want to use `&` (run the command asynchronously)? – Kamil Maciorowski Sep 07 '22 at 05:30
  • @KamilMaciorowski The whole script is just `ffmpeg.... ` and then `nohup ... ` at the end to loop it; the purpose is an endless ffmpeg stream. And it works, but the problem is that it overloads RAM with the mess above, so I sometimes need to restart the server and restart `nohup...`. Thanks for your response. So I just need to replace `nohup......$` with `exec nohup.... &`? – Kostia Sep 07 '22 at 07:01
  • @KamilMaciorowski I tested your advice about `exec nohup.... &` and it worked, thank you very much! About `$` and `&` (at the end of `nohup...`): I didn't know the difference between them before. It looks like after `&` you can continue working in the terminal, while after `$` you need to restart the terminal to continue working – Kostia Sep 07 '22 at 07:48
  • "after `$` you need […]" – No, no, no. Your sole `$` is just another argument to `nohup`, an arbitrary argument, which becomes an argument to `bash` (you can see it in the output of `ps -ef`, ultimately it's the first argument for your script. If your script does not use `$1`, `$@` or `$*`, then this argument is totally irrelevant ([cargo cult programming](https://en.wikipedia.org/wiki/Cargo_cult_programming)?). OTOH `&` is a terminator that makes the shell run the command asynchronously. `exec … &` at the end makes little sense because presumably either `exec …` or `… &` would work alone. – Kamil Maciorowski Sep 07 '22 at 09:39
  • I could write an answer and explain this, but not until I see the original script (in the question, [edit] it). Waving hands in comments does not count, because there may be subtleties and I don't want to mislead you by assuming too much (or too little) about the script. For now my impression is you may hardly know what you're doing in shell scripting (it's not an accusation, I'm here to help) and therefore I feel I should be careful and not build on top of code templates you use without understanding them maybe. If you think you do understand, then forgive me my wrong impression. – Kamil Maciorowski Sep 07 '22 at 10:00
  • @KamilMaciorowski No, no, you are totally right. I was using this script after finding it on the internet, without a full picture of how everything works. I have added the original script to the question (just without the full output URL). Any help from you is much appreciated! After your previous comment I modified its last line to `exec nohup bash /home/opc/TEST.sh >/dev/null 2>&1 &` and it looks like my problem with the "mess" in the processes is solved – Kostia Sep 07 '22 at 13:42

1 Answer


Your script calls itself by a hardcoded pathname. It's a poor way to create a loop; the problem described in the question is one reason why it's poor (but there may be more).


&

The problem occurs because the shell interpreting the script waits until this final nohup bash … exits. In general a shell waits for a command, unless the command is terminated with &. & as a command terminator/separator makes the shell execute the command asynchronously. In other words if the last line in your script was:

nohup bash … &

then the shell interpreting this very instance of the script wouldn't wait for the new bash to exit. It would continue; and because there is nothing more to do in the script, it would exit.
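
A minimal sketch of the difference, with sleep standing in for a long-running command (not your script, just an illustration):

#!/bin/sh
sleep 10        # the shell waits here for the full 10 seconds
sleep 10 &      # the shell starts this sleep and moves on immediately
echo "reached right away"

The first sleep blocks the interpreter; the second one, terminated with &, runs asynchronously, so the echo is reached at once and the script exits while the background sleep is still running.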

The trailing $ in your commands looks weird. $ is special in a shell in syntax like $var, $(…) and a few others; a sole $ being one word is not special. Your $ is just another argument to nohup (it doesn't matter that the argument comes after the redirections), an arbitrary argument, which becomes an argument to bash; ultimately it's the first argument to your script. Your script does not use $1, $@ or $*, so this argument is totally irrelevant.

Maybe you (or whoever you got this $ from) wanted to use &, but $ appeared instead because of some mistake. (I think nohup … & is more common than nohup … without &, so if your code is a copy from some resource, it's plausible the original idea was to use &.) Note that & as a command terminator/separator is not an argument to the command. You can see $ in the output of ps -ef, but you won't see & if you use it.
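
You can check this yourself with a tiny throwaway script (args.sh is a hypothetical name here):

#!/bin/sh
printf 'first argument: <%s>\n' "$1"

bash args.sh $ prints first argument: <$>, because the lone $ really is passed as $1. bash args.sh & prints first argument: <> and gives your prompt back immediately, because & is shell syntax, not an argument, so bash never sees it.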

Adding & is enough to "fix" your script, though it's not the best way. It's true this method prevents bash processes from accumulating: each time, after a new bash is spawned, the old bash dies. Nevertheless this rotation is unnecessary. Creating a new process is costly. With modern computers one can usually afford to waste some resources; still, IMO, if one can write more optimized and more elegant code without hassle, then one should.


exec

In your script nohup bash … is the last command. For the shell interpreting the script there is nothing more to do. In such circumstances you can avoid creating a new process by using exec. In Bash (and in any POSIX-like shell) exec something makes the shell replace itself with something. The last line of the script can be:

exec nohup bash …

and the current interpreter of the script will replace itself with nohup instead of creating a new process and then exiting.

Note nohup does something similar when it runs a new bash (or whatever else). PIDs in the output of ps -ef you posted reveal that each bash is a child of the previous one, despite the fact that there was a nohup in between. What happens is: your nohup process, after doing its job of setting things up, replaces itself with bash, and from then on the parent bash sees bash (not nohup) as its child. Using exec nohup bash … in the script will result in bash and nohup replacing one another again and again, still under one PID. In your case this is better than a cycle of processes being created anew and dying.
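
If you want to observe this, a minimal sketch (show_exec.sh is a hypothetical name):

#!/bin/sh
echo "my PID: $$"
exec sleep 60

While the sleep runs, ps -o pid,comm -p <the printed PID> should show that the very same PID now belongs to sleep: the shell replaced itself instead of creating a child.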

Also note exec … & makes no sense. The current shell cannot replace itself with something and at the same time continue without waiting. If it manages to exec to something then it will be no more, a new executable will take its place as the process. In my tests, in exec … & the & wins, i.e. the command behaves as if exec wasn't there (kinda, there may be nuances, I haven't tested thoroughly).

AFAIK in some circumstances bash (at least some versions of it) can implicitly exec the last command, exactly to avoid creating a new process. I cannot tell why bash didn't do this in your case (nor if we should expect it to do this in the first place). It doesn't matter, you should not rely on such optimizations anyway. If you want bash to exec then use exec explicitly.


nohup and >/dev/null 2>&1

You can learn what nohup does here: Difference between nohup, disown and &.

nohup sets a few things up and it doesn't need to stay; its job is done. It can replace itself (as mentioned above) with whatever executable you want it to run. Things set up by nohup survive. Until something deliberately changes these things again, the effect of the no-longer-running nohup impacts the executable and its descendants.

Similarly if you run the script with redirections, you don't need to re-apply them.

This means you don't really need nohup and >/dev/null 2>&1 in the last line of the script. If you initially run the script (e.g. from an interactive shell) with nohup and >/dev/null 2>&1 as you did, it should be enough. If I were you, I would remove nohup and >/dev/null 2>&1 from the script. Normally I would start the script with nohup and >/dev/null 2>&1, but if I ever chose to start it without then no code in the script would override my choice.
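
A minimal sketch showing that redirections applied once, at invocation, are inherited by everything the script starts (demo.sh is a hypothetical name):

#!/bin/sh
echo "from the script itself"
sh -c 'echo "from a child process"'

Running it as sh demo.sh >out.txt 2>&1 should put both lines into out.txt; the child inherits the redirected stdout without any extra > inside the script.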


#!/bin/bash and bash

Your script contains a shebang and it's #!/bin/bash. Still every time you run it, you use bash /path/to/the_script. This method explicitly runs bash that opens /path/to/the_script and interprets it. When bash interprets the script, the shebang is just a comment.

If you make the script executable (chmod +x /path/to/the_script) then you will be able to run it "directly" as /path/to/the_script. The kernel will read the shebang and execute /bin/bash /path/to/the_script for you. In this method the shebang is important (see what happens without a shebang). There are nuances and you may want to (or even have to) stick to bash /path/to/the_script. But since you did use the shebang, you probably want to take advantage of it. Make the script executable and call it without the leading bash word.
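
Applied to your current file that would be, for example:

chmod +x /home/opc/TEST.sh
nohup /home/opc/TEST.sh >/dev/null 2>&1 &

i.e. the same invocation as before, just without the leading bash.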

Imagine some day you ported your script to Python (the actual script is simple and there is no reason to port it, but in general you may want to port a script for whatever reason). Any code that uses bash /path/to/the_script would have to be patched to use python instead of bash. By using the shebang properly (and changing it from #!/bin/bash to #!/usr/bin/python or so, when appropriate) you allow anything or anyone to keep invoking /path/to/the_script. The interpreter belongs to the implementation; invokers shouldn't care what the interpreter is. The mechanics of the shebang allow them not to care.

Additionally, since there's nothing in your code that uses features beyond the POSIX shell, the shebang may be #!/bin/sh. sh should perform better because it doesn't load functionalities specific to bash. This is true even if in your OS sh is symlinked to bash (Bash detects when it's called as sh and skips steps specific to bash).
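
If you are curious what sh is on your system, you can check, e.g.:

command -v sh
readlink -f /bin/sh

On many Linux systems the result is a symlink to bash or dash; as said, the point above holds either way.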


TEST.sh

The name TEST.sh is misleading, as everything else indicates you want to run the script with bash, not with sh. In Linux very few tools care about "extensions"; the OS as a whole does not. In fact there is no concept of an extension: the .sh you see is just a substring of TEST.sh, while TEST.sh in its entirety is the filename. (For comparison: in the world of DOS/Windows, extensions started as separate entities alongside filenames; they are still important at the OS level.)

Again, imagine you ported the script to Python. Will you change the name? If you don't then it will be misleading (at least to humans). If you do then every piece of code that uses TEST.sh will need to be patched to TEST.py.

The interpreter belongs to the implementation, invokers shouldn't care. Therefore name your executables after what they do, not after what they are under the hood. Name the script TEST, take advantage of the shebang (elaborated above) and don't care about the interpreter while invoking.

In your OS some executables are scripts and you may be unaware of it, because each such script is named not foo.sh or foo.py but simply foo. If it ever gets ported to another interpreter or reimplemented as a binary executable, you (and the rest of your OS) won't notice. Adopt this good practice when naming your scripts.
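
In your case that would be a one-time rename, for example:

mv /home/opc/TEST.sh /home/opc/TEST

plus updating anything that still refers to the old name.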


cd

It seems nothing in your script uses relative paths. If you didn't redirect nohup to /dev/null, it would create nohup.out in the current working directory; but you did redirect, so even here the current working directory does not matter.

cd "$(dirname "$0")" is most likely not needed.


Looping in a shell

Considering all the above, you can make your script (named TEST and made executable) as simple as:

#!/bin/sh
ffmpeg -re -i /home/opc/"123.mp4" -c copy -f flv rtmp://"output link"
exec /home/opc/TEST

and invoke it with:

nohup /home/opc/TEST >/dev/null 2>&1

(with a terminating & if you want). It's still not the best way to create a loop in a shell. A better way is to implement an explicit loop using dedicated syntax, e.g. while:

#!/bin/sh
while :; do
   ffmpeg -re -i /home/opc/"123.mp4" -c copy -f flv rtmp://"output link"
done

: is a no-op that always succeeds, so the loop will never end by itself. Now there is no reason to call (or exec to) the script again and again, one and the same shell loops and calls ffmpeg again and again.
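
One optional variation (just a suggestion, nothing in the question requires it): if ffmpeg ever fails instantly, e.g. because the RTMP server is unreachable, the loop above would restart it as fast as possible. A short pause between iterations keeps such a failure from turning into a busy loop:

#!/bin/sh
while :; do
   ffmpeg -re -i /home/opc/"123.mp4" -c copy -f flv rtmp://"output link"
   sleep 5
done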


Looping in ffmpeg

I don't really know ffmpeg and I haven't tested, but according to this answer -stream_loop -1 is all you need to make ffmpeg loop. The script may be:

#!/bin/sh
exec ffmpeg -stream_loop -1 -re -i /home/opc/"123.mp4" -c copy -f flv rtmp://"output link"

(some additional flags may be needed, like here).

Now we don't need a loop in the shell, ffmpeg itself loops the input. I used exec, so the shell replaces itself with ffmpeg without creating a new process. In fact the shell interpreting the script has nothing to do except replacing itself. You can as well run the ffmpeg command directly under nohup:

nohup ffmpeg -stream_loop -1 -re -i /home/opc/"123.mp4" -c copy -f flv rtmp://"output link" >/dev/null 2>&1

(with a terminating & if you want). Keeping the ffmpeg command in a script may still be a good idea because the command is complex, not obvious, and not something you'd want to retype.
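
As for killing the whole thing on demand: pkill -f matches against the full command line, so you can match something unique to this ffmpeg invocation, for example the input file (assuming 123.mp4 appears in no other command line):

pkill -f 123.mp4

If you use the shell-loop variant, remember to kill the looping shell too (e.g. by the script's path), otherwise it will just start another ffmpeg.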

answered by Kamil Maciorowski
  • Thank you very much for such a detailed description of how things work and how to improve the script for my task. I will keep testing two variants: the one with `-stream_loop` and the improved variant of the initial script with the `while` loop. Generally speaking, I found the initial script with `$` on some forum; maybe the author made a mistake, and that was my problem (after replacing it with `&` all is OK), but now I understand the script can be improved according to your description – Kostia Sep 08 '22 at 04:01
  • The problem with `while :; do` is that it started to create endless ffmpeg processes at the same time (not waiting for the first ffmpeg process to finish, not one by one, so after a few minutes the machine stops working). That's what I got in my test. `-stream_loop` works as well as the initial script (with the `&` correction), but for my task it's better to have the name of the script in the process name (for reference), so I can stop it with `pkill -f "name of script"` (with `-stream_loop -1` that would only be possible by the name of the mp4 file). So I will stay with the initial one (replacing `$` with `&` and deleting the `cd` line). – Kostia Sep 08 '22 at 04:17
  • The problem with replacing `nohup bash` with `nohup sh` is the same as in my previous comment: it puts no *.sh filename in the process name, so I can't kill it with `pkill -f "name of script"`. Also, I have many *.sh files and don't want to confuse myself about which ones I made executable and which not. For me it's better to use `bash`, as I see it – Kostia Sep 08 '22 at 04:43
  • @Kostia I don't see how my `while` loop can create many `ffmpeg` processes and not wait. Did you test it with a stray `&` that shouldn't be there? Hint: in Bash `exec` provides `-a` that allows you to "rename" a process. E.g. `exec -a my_special_ffmpeg ffmpeg …` should allow you to `pkill -f my_special_ffmpeg` later. – Kamil Maciorowski Sep 08 '22 at 05:12
  • Yes, you are right, I mistakenly added `&` at the end; I will test this later. Does this method use fewer machine resources than `nohup` at the end of the script, or the same? (Because if it's the same, I can stay with `nohup bash....&`, which is currently working well) – Kostia Sep 08 '22 at 07:56
  • I have settled on my initial script with the `$` corrected to `&` in the script's last line. But I also noticed an interesting detail: if I invoke the script through a chain of a Windows batch file and PuTTY's saved profile with -m reading from a text file: `sudo su - -c "nohup bash /***.sh >/dev/null 2>&1 &"`, then sometimes the script doesn't get invoked. But when I don't use `&` at the end (the end of the invoking command sent through PuTTY; the script on the machine still has `&`), it always gets invoked (I think because without `&` it hangs on the terminal until I kill it with TASKKILL after some delay). Maybe I need to create a separate question for it – Kostia Sep 09 '22 at 07:19