I would like to whittle down a large database from the command line to N files, much like this question does. The only difference is that most of my files are in sub-directories, so I was wondering whether there is a quick fix for my problem or whether it requires more in-depth action. Currently, my command looks like this (with the (N+1) replaced with the appropriate number):
find . -type f | sort -R | tail -n +(N+1) | xargs rm
I originally thought this would work because find is recursive by nature. I then tried adding the -r (recursive) flag to the rm, since the output indicates that it is randomly selecting files but then cannot find them to delete. Any ideas?
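For reference, this is the shape of the pipeline with N held in a shell variable instead of substituted by hand (N=10 is just a placeholder); note that plain xargs splits on any whitespace and interprets quotes, so this only behaves for filenames without spaces, quotes, or newlines:

N=10   # placeholder: number of files to keep
# Randomly order all files, skip the first N, delete the rest.
find . -type f | sort -R | tail -n +"$((N+1))" | xargs rm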
EDIT: My new command looks like this:
find . -type f -print0 | sort -R | tail -n +(N+1) | xargs -0 rm
and now I get the error rm: missing operand. Also, I am on a CentOS machine, so the -z flag is unavailable to me.
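For what it's worth, this looks consistent with sort and tail operating line-wise: the -print0 output contains no newlines, so the whole file list becomes a single "line", tail -n +(N+1) with any N ≥ 1 then emits nothing, and xargs -0 ends up calling rm with no operands. A quick check along those lines (a diagnostic sketch only, not a fix):

# The NUL-separated list has no newlines, so sort/tail see one line;
# skipping past line 1 leaves nothing on the pipe (wc -c prints 0).
find . -type f -print0 | sort -R | tail -n +2 | wc -c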
EDIT #2: This command runs:
find . -type f -print0 | sort -R | tail -n +(N+1) | xargs -0 -r rm
but when I run find . -type f | wc -l to count the files in the directory (which should be N if the command worked correctly), the count has not changed from the starting number of files.
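Presumably the added -r (GNU xargs' no-run-if-empty flag) simply suppresses the rm call when the input is empty, which would explain why the command exits cleanly yet the count never changes. If it is safe to assume the filenames contain no newlines, one possible workaround (a sketch assuming a GNU xargs new enough to support -d) is to keep newline-delimited output and tell xargs to split only on newlines:

N=10   # placeholder: number of files to keep
# Split only on newlines, so spaces in names survive; still unsafe for
# filenames that themselves contain newlines.
find . -type f | sort -R | tail -n +"$((N+1))" | xargs -d '\n' -r rm --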
Neither sort nor plain tail works with null-terminated strings the way you want. – Kamil Maciorowski Jul 11 '18 at 19:58

… -print0, you should clearly state it because (in my opinion) this is the only thing that makes your question not a duplicate of the linked one. Then you should also explicitly mention the inability to use -z (with sort? tail? both?). – Kamil Maciorowski Jul 11 '18 at 20:19

… -print0, but many of the related solutions I have seen have used it. In the question I originally linked, I saw solutions with and without it and so I have tried it both ways. – Alerra Jul 12 '18 at 11:18
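If filenames may contain newlines and neither sort -z nor tail -z is available, one NUL-safe possibility is to move the shuffle-and-trim step out of the pipeline and into the shell. A sketch along those lines, assuming bash 4+ (for mapfile) and GNU shuf, with N=10 as a placeholder:

#!/usr/bin/env bash
# NUL-safe sketch: collect files with -print0, shuffle their indices with
# shuf, keep the first N, and delete the rest.
N=10
files=()
while IFS= read -r -d '' f; do
  files+=("$f")
done < <(find . -type f -print0)

(( ${#files[@]} > N )) || exit 0        # no more than N files: nothing to delete

# Random permutation of the indices, one per line.
mapfile -t order < <(shuf -i 0-$(( ${#files[@]} - 1 )))

# Everything past the first N shuffled indices gets removed.
for idx in "${order[@]:N}"; do
  rm -- "${files[idx]}"
done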