I want to run multiple tasks in parallel from one bash file, like the example code below:

for i in 1 2 3 4 5 6 7 8; do
        setsid python /tmp/t.py ${i} 1>>/tmp/1.log 2>&1 &
done
wait  # first wait

echo "next wait"
for i in 9 10 11 12 13 14 15 16; do
        setsid python /tmp/t.py ${i} 1>>/tmp/1.log 2>&1 &
done
wait  # second wait

As you can see, I am using wait. Is it possible to do this? I want to run the first 8 tasks, wait for all of them to finish, and then spawn the next 8 tasks. Because RAM is limited, I cannot run all 16 tasks in one round.

GoingMyWay
  • Is it possible? The answer is "yes, but...". Is this short answer helpful? I doubt it. What do *your* tests indicate? Does *your* code wait? Do you run the script or `source` it? Have you seen [this answer](https://stackoverflow.com/a/9685973/10765659)? Does `t.py` fork? I guess your problem is the code doesn't wait when you need it to, right? (If it did, you wouldn't ask if it's possible). Simple examples do wait, your code may not. Please [edit] and provide [MCVE](https://meta.stackoverflow.com/a/367019/10765659). Have you considered/tried GNU `parallel` with `-j`? – Kamil Maciorowski Jan 20 '20 at 07:27
  • Just to clarify: The wait command will wait for the _last_ process to finish. It does not care about the others. So, for the first loop, if #8 finishes before numbers 1-7, the script will continue to the second loop. Which is not what the OP wants. I like the answer @meuh gave by using -w on setsid. Simple & elegant. – Scottie H Jan 05 '21 at 00:20
  • @ScottieH After setting -w with `setsid`, should I also `wait`? – GoingMyWay Jan 05 '21 at 04:24
  • It won't hurt anything to do it. It will ensure that all the background jobs have stopped before proceeding. – Scottie H Jan 06 '21 at 01:40
  • @ScottieH Thank you. – GoingMyWay Jan 06 '21 at 02:34

3 Answers


Use the -w or --wait option of setsid so that the setsid command waits until the python process ends before exiting itself. This means the shell's wait command now has child processes to wait for.
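
For example, a minimal sketch of the first batch with this change, adapting the loop from the question (the second batch is analogous):

for i in 1 2 3 4 5 6 7 8; do
        setsid --wait python /tmp/t.py ${i} 1>>/tmp/1.log 2>&1 &  # setsid now stays alive until python exits
done
wait  # the shell has eight setsid children to wait for, so this blocks until all of them finish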

meuh

Something like this:

pids=()  # array holding the pids of the children
for i in {1..8}
do
  python /tmp/t.py ${i} 1>>/tmp/1.log 2>&1 &
  pids+=("$!")  # Remember pid of child
done

# Wait for each child. This loop exits when all children are finished
for pid in "${pids[@]}"
do
  wait "$pid"
  printf "Sensed end of %d\n" "$pid"
done

### Continue processing here
xenoid

With GNU Parallel it looks like this:

parallel -j8 setsid python /tmp/t.py {} ::: {1..16} > log
Ole Tange