
Suppose I would like to run multiple nohup jobs, but at any given time I would like to run at most 4 of them.

Is there a way to:

  • keep track of the status of the 4 running nohup jobs
  • once one of them finishes, trigger the 5th nohup job?

Thanks!

Sheng Yang
  • Is `nohup` mandatory? There is Task Spooler with its `-S` option. See the very end of [this answer](https://superuser.com/a/1587834/432690). – Kamil Maciorowski Jul 26 '23 at 17:47
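Regarding the Task Spooler suggestion in the comment above: if nohup is not a hard requirement, Task Spooler can cap concurrency directly. A minimal sketch, assuming the binary is installed as ts or tsp depending on the distribution, and with ./myJob.sh as a placeholder for your own command:

tsp -S 4            # allow at most 4 jobs to run simultaneously
tsp ./myJob.sh      # enqueue a job; it starts as soon as a slot is free
tsp ./myJob.sh
tsp -l              # list queued, running and finished jobs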

1 Answer


Welcome to SuperUser, Sheng Yang.

In order to run multiple nohup jobs at once, while also controlling how many run, you can use nohup with a script that runs additional nohup commands. Save the script below with a .sh file extension. I chose fourJobs.sh.

To test this script, I created a small test script that performs a random-length sleep. You need to replace these calls to "./sleepTest.sh" with your own commands. Each command will be launched in order with nohup, and only 4 commands will run at any one time, as set by MAX_JOBS=4.

Be sure to run this script itself with nohup as well, so it isn't terminated prematurely when you log out or close the terminal.
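For example, making both scripts executable and launching the wrapper in the background (the log file name here is just illustrative):

chmod +x fourJobs.sh sleepTest.sh
nohup ./fourJobs.sh > fourJobs.log 2>&1 &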

fourJobs.sh

#!/bin/bash

# Manages concurrent execution of nohup jobs with a maximum limit.

# Number of maximum concurrent jobs
MAX_JOBS=4

# List of commands you want to run with nohup
declare -a commands=(
    "./sleepTest.sh"
    "./sleepTest.sh"
    "./sleepTest.sh"
    "./sleepTest.sh"
    "./sleepTest.sh"
    "./sleepTest.sh"
    "./sleepTest.sh"
    # ... add more commands as needed
)

# Function to get the current number of background jobs
num_jobs() {
    jobs -p | wc -l
}

# Loop through each command and execute them
for cmd in "${commands[@]}"; do
    while true; do
        # Check if the number of current jobs is less than the maximum allowed
        if [[ $(num_jobs) -lt $MAX_JOBS ]]; then
            echo "Executing: nohup $cmd & $(($(num_jobs) + 1)) now running"
            nohup $cmd &> /dev/null &
            sleep 1  # give a little time before checking again
            break
        fi

        # Wait a bit before rechecking
        sleep 5
    done
done

# Wait for all jobs to finish
wait
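As an aside, and not part of the original answer: on bash 4.3 or newer the sleep/poll loop can be replaced with the wait -n builtin, which blocks until any one background job exits. A minimal sketch of the same throttling idea, reusing the commands array and MAX_JOBS from the script above:

# Launch each command, but block with `wait -n` once MAX_JOBS are running.
for cmd in "${commands[@]}"; do
    while (( $(jobs -pr | wc -l) >= MAX_JOBS )); do
        wait -n    # returns as soon as any background job finishes
    done
    nohup $cmd &> /dev/null &
done
wait  # wait for the remaining jobs to finish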

sleepTest.sh is the command script I used for testing. The output from its echo commands is discarded by the &> /dev/null redirection in the nohup command above.

sleepTest.sh

#!/bin/bash

# Simulates job duration by sleeping for a random period.

sleep_time=$((1 + RANDOM % 10))
echo "Script $1 sleeping for $sleep_time seconds"
sleep $sleep_time
echo "Script $1 done"

Running these scripts on my computer produces the following output. The output is only there to show the script operating as expected and could easily be removed.

./fourJobs.sh
Executing: nohup ./sleepTest.sh & 1 now running
Executing: nohup ./sleepTest.sh & 2 now running
Executing: nohup ./sleepTest.sh & 2 now running
Executing: nohup ./sleepTest.sh & 3 now running
Executing: nohup ./sleepTest.sh & 4 now running
Executing: nohup ./sleepTest.sh & 4 now running
Executing: nohup ./sleepTest.sh & 3 now running
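If you would rather keep each job's output instead of discarding it, the /dev/null redirection in fourJobs.sh can point at a per-job log file. A small variation, where the counter variable and log file names are just illustrative:

# Inside the for-loop of fourJobs.sh, in place of the /dev/null redirection:
i=$(( i + 1 ))
nohup $cmd > "job_${i}.log" 2>&1 &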
Jim Diroff II
  • Thanks for the prompt reply! Here is a follow-up question: `jobs -p` only lists jobs submitted in the current shell script, is that correct? So if I submit two such shell scripts, my effective max jobs would become 8? (Just making sure I understand the code; this feature is actually what I am looking for.) – Sheng Yang Jul 26 '23 at 18:24
  • Yes, you're correct. The `jobs -p` command will list only the background jobs submitted in the current shell script session. If you run two instances of the `fourJobs.sh` script, each instance will manage its own set of jobs. So, you'd indeed end up with a maximum of 8 concurrent jobs, with each script handling up to 4 of those jobs. Each job will have its own Process ID and should run without issue presuming you aren't performing an invalid operation like using the same (or too many) resources. – Jim Diroff II Jul 27 '23 at 02:58
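If a single, global limit across several invocations were needed instead, one possible approach (not from the answer above) is to count the matching processes system-wide rather than per-shell, for example with pgrep; sleepTest.sh here stands in for whatever command the jobs actually run:

# Hypothetical replacement for num_jobs(): counts every running process
# owned by the current user whose command line matches sleepTest.sh,
# regardless of which shell started it.
num_jobs() {
    pgrep -u "$USER" -c -f "sleepTest.sh"
}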