Warnings
The code in this answer can be more destructive than rm -rf. I try to write good code but bugs happen. Forget my reputation, I'm just a random guy on the Internet. Assume the code will delete all files you can delete. A few ways to mitigate the risk:
- test the code in a virtual machine;
- create a new user account that can only hurt itself;
- back up your important files to an external HDD, verify the backup, then physically disconnect the HDD;
- understand the code and only then decide if you want to use it (tricky).
In the question you used rm -rf ${dir}. In virtually any shell you should double-quote it: rm -rf "${dir}". By leaving ${dir} unquoted you may remove more than you intend.
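For example (a hypothetical value of dir):

dir='important stuff'
rm -rf ${dir}        # unquoted: word splitting gives rm two arguments, important and stuff
rm -rf "${dir}"      # quoted: one argument, the single directory named "important stuff"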
Building a nuke
(Hello NSA. I knew you would come.)
chmod -R prior to rm is quite simple. You can simplify it further by hiding any complexity behind a shell function or a script. Develop it once and then just call its name whenever needed. The script can run chmod before rm unconditionally; or it can run rm first, hoping it succeeds, and fall back to chmod + rm if the first rm fails.
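A minimal sketch of the second idea (the name rmf is mine, nothing standard): try rm first, fall back to chmod -R + rm only when the first attempt fails. Keep in mind the concern discussed next: chmod -R touches every file, not only directories.

rmf() {
    for p do
        rm -rf -- "$p" 2>/dev/null && continue   # first attempt, errors silenced
        chmod -R u+w -- "$p"                     # unlock everything under "$p"
        rm -rf -- "$p"                           # second attempt, errors visible
    done
}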
I understand you don't want chmod to affect all files because some of them may be linked also outside of the directory you want to remove. In other words: hardlinks may exist. Mode bits (permissions) of a file are stored in its inode, not in the directory entry (entries) pointing to the inode; therefore multiple entries (pathnames) leading to the same inode cannot be chmoded independently. If you don't want to chmod files that are going to survive in other directories then you shouldn't chmod -R mindlessly. Your concern is justified.
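A quick illustration of why this matters (hypothetical names file and link):

touch file
ln file link         # a second directory entry for the same inode
chmod a-w file       # changes the mode bits stored in the inode
ls -l file link      # both names now show the same, write-less mode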
When your problem occurs, it's always a directory you need to chmod. Linux does not allow hardlinks to directories, so chmodding the troublesome directory should not affect anything more (and the directory is going to be removed anyway). If you could chmod -R directories only then the "chmod + rm" solution would be good.
See this question: Generalized chmod function differentiating between directories and files. In my answer there you will find a shell function named chmodx designed to chmod files of certain types. Let's use it to solve your problem:
nuke() {
    [ "$#" = 0 ] && return 0       # no arguments: do nothing
    chmodx d -R u+w -- "$@"        # give the owner write permission on every directory
    rm -rf -- "$@"
}
Usage: nuke foo ./bar /baz/qux.
Notes:
- -- after rm makes it stop parsing options, but -- after chmodx will not work this way. If you run nuke -whatever then find (inside chmodx) will misbehave. Therefore you should run nuke ./-whatever instead. Down below I will show how to solve this problem.
- rm -rf -- without other arguments does nothing. chmodx d -R u+w -- without other arguments runs find without any starting point. Some implementations of find use . in such a case. The line with [ "$#" = 0 ] makes nuke invoked without arguments a no-op, even if chmodx d -R u+w -- wouldn't be a no-op.
- chmodx runs chmod for every directory. For nuke you can modify its code and insert -perm (see man 1 find) in the right place to only chmod directories that need to be chmodded. Or you can abuse chmodx to inject ! -perm -u+w like this:
nuke() {
    [ "$#" = 0 ] && return 0
    chmodx d -R u+w -- -u+w -perm ! "$@"
    rm -rf -- "$@"
}
This variant checks and sets the user's permissions only. Such an approach seems right for your usage case.
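If you would rather not go through chmodx, a rough plain-find sketch with the same intent (chmod only the directories that lack the user-write bit) could replace the chmodx line inside nuke. It's a sketch, not tested against every find, and it shares the leading-dash limitation mentioned in the notes above:

find "$@" -type d ! -perm -u+w -exec chmod u+w {} +
# if your find rejects the symbolic mode here, ! -perm -200 is the octal equivalent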
Processing a directory tree (or trees) with chmodx and then processing the same tree(s) with rm seems suboptimal. If your find supports -delete (it may not) then find foo -delete shouldn't be significantly slower than rm -rf foo, except maybe in cases where e.g. dir1/dir2/…/dirN/fileZ cannot be deleted and yet find still tries to delete dirN through dir1. Note (some implementations of) rm may know there is no point in trying to remove these directories.
But find foo -delete can be improved to do something when -delete fails. With -execdir (which your find may or may not support) a basic solution to your problem can be:
nuke() {
    # ! -delete: if the first -delete succeeds, the rest of the expression is skipped;
    # if it fails, chmod u+w runs on the containing directory and -delete is retried.
    find "$@" ! -delete -execdir chmod u+w . \; -delete
}
The main advantage is the directory trees are not processed twice. Additional processes (chmod) will be spawned only where they are needed. There are disadvantages:
- If the first -delete fails then a message will be printed to stderr. You can suppress it with 2>/dev/null but other (potentially useful) messages will also be suppressed. We could use rm -f … 2>/dev/null instead of -delete but this requires a shell. The shell and rm are additional processes we want to avoid.
- nuke -whatever will confuse find. Use nuke ./-whatever.
- The example with dir1/dir2/…/dirN/fileZ (above) still applies.
- (Advantage or disadvantage) nuke /foo/bar will eventually try to delete bar. If the attempt fails then chmod u+w will work on foo, which is not meant to be removed. You may or may not want to change the mode bits of foo. If you don't mind changing them then chmod u+w is way better than chmod +w (see the illustration after this list). Note that in the same situation the variant with chmodx will leave foo alone, therefore it may not delete bar.
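About chmod u+w versus chmod +w: a bare +w has no "who" part, so it is filtered through the umask and, with a permissive umask, it can open the surviving directory to the group or to everyone. A quick demonstration (a hypothetical scratch directory d, run in a subshell so your umask is untouched):

(
    umask 000                   # deliberately permissive, only for the demonstration
    mkdir d && chmod a-w d
    chmod +w d  && ls -ld d     # drwxrwxrwx - write added for user, group and others
    chmod a-w d
    chmod u+w d && ls -ld d     # drwxr-xr-x - write added for the user only
    rmdir d
)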
I think you cannot easily build one find command that will chmod a directory only if it needs to be chmodded and -delete files in it later. This is because -delete implies -depth and -depth means the content of a directory will be processed before the directory itself.
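You can see the -depth ordering with a throw-away tree (hypothetical names a, b, f):

mkdir -p a/b && touch a/b/f
find a -depth
# prints:
# a/b/f
# a/b
# a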
OK, so let's build our improved nuke:
nuke() (
    [ "$#" = 0 ] && exit 0
    err=0
    for p do
        case "$p" in
        -* )
            # a leading - would be taken for an option by find
            p="./$p"
            ;;
        esac
        # stderr from find (and from chmod) is silenced; survivors are reported below instead
        2>/dev/null find "$p" ! -delete -execdir chmod u+w . \; -delete
        if [ -e "$p" ]; then
            >&2 printf '%s survived\n' "$p"
            err=1
        fi
    done
    exit "$err"
)
The function nukes its arguments one by one. It reports survivors to stderr. Paths starting with - are supported. Note nuke /foo/bar will chmod … /foo if needed.
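A quick way to convince yourself it works (hypothetical sandbox paths; run it as a regular user, since root bypasses the permission checks):

mkdir -p sandbox/ro/sub
touch sandbox/ro/sub/file
chmod u-w sandbox/ro           # removing sub will fail until ro becomes writable again
nuke sandbox && echo removed   # nuke chmods sandbox/ro on the fly and removes everything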
The function uses a subshell for two reasons:
- to make all variables local;
- to allow exit instead of return, so you can paste the body of the function verbatim into a file and make a standalone script if you want to (the only thing to add is a shebang).
The shell code is portable (it should work in any sh). The non-portable things are find's -delete and -execdir.