I would like to whittle down a large database from the command line to N files, very similar to what this question does. The only difference is that most of my files are in sub-directories, so I was wondering if there is a quick fix for my problem or whether it would require more in-depth action. Currently, my command looks like this (with (N+1) replaced by the appropriate number):
find . -type f | sort -R | tail -n +(N+1) | xargs rm
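For concreteness, if N were 100 (just an example value), the command would read:

find . -type f | sort -R | tail -n +101 | xargs rm

i.e. tail -n +101 prints from the 101st shuffled line onward, so 100 randomly chosen files should survive and everything else gets handed to rm.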
I originally thought this would work because find is recursive by nature. I then tried adding the -r (recursive) flag to the rm, since the output indicates that files are being randomly selected but cannot be found for deletion. Any ideas?
EDIT: My new command looks like this:
find . -type f -print0 | sort -R | tail -n +(N+1) | xargs -0 rm
and now I get the error rm: missing operand. Also, I am on a CentOS machine, so the -z flag is unavailable to me.
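If it helps, I suspect the problem is that the -print0 output contains no newlines at all, so sort -R and tail treat it as a single line and tail ends up printing nothing. A quick check:

find . -type f -print0 | wc -l

should print 0 (assuming none of my file names contain a newline), since wc -l only counts newline characters.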
EDIT #2: This command runs:
find . -type f -print0 | sort -R | tail -n +(N+1) | xargs -0 -r rm
but when I run find . -type f | wc -l to get the number of files under the directory, the count (which should be N if the command worked correctly) has not changed from the starting number of files.
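In case it helps with debugging, swapping rm for echo (a dry run, everything else unchanged) should show exactly what xargs receives:

find . -type f -print0 | sort -R | tail -n +(N+1) | xargs -0 -r echo

I suspect it prints nothing here, which would also explain the unchanged file count: with -r (--no-run-if-empty), xargs simply never runs rm when it gets empty input.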