222

I want to test if a directory doesn't contain any files. If so, I will skip some processing.

I tried the following:

if [ ./* == "./*" ]; then
    echo "No new file"
    exit 1
fi

That gives the following error:

line 1: [: too many arguments

Is there a solution/alternative?

Anthony Kong
  • 5,028

26 Answers

296
if [ -z "$(ls -A /path/to/dir)" ]; then
   echo "Empty"
else
   echo "Not Empty"
fi

Also, it would be cool to check first whether the directory exists.

ls -A means list all entries except . and ..
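A sketch of the suggested pre-check, combining the existence test with the emptiness test (the mktemp scratch directory is just a stand-in for your real path):

```shell
# scratch directory purely for demonstration; substitute your own path
dir=$(mktemp -d)

if [ ! -d "$dir" ]; then
    echo "No such directory"
elif [ -z "$(ls -A "$dir")" ]; then
    echo "Empty"
else
    echo "Not Empty"
fi
```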

  • 12
I had trouble getting this method to work when the /path/to/dir contained spaces and needed to be quoted. I used [ $(ls -A "$path" | wc -l) -ne 0 ], inspired by @ztank1013's answer. – pix May 19 '15 at 03:23
  • 2
    For those who are looking for a one liner : [ "$(ls -A ./path/to/dir)" ] && echo 'NOT EMPTY' || echo 'EMPTY' – tdhulster Jun 03 '16 at 16:29
  • 3
Just in case someone is looking for a "correct/stable" one liner: [ -z "$(ls -A /path/to/dir)" ] && { echo "Empty" ; YourCommandA ; true ; } || { echo "Not Empty" ; YourCommandB ; }. – Victor Yarema Oct 12 '17 at 23:24
  • 5
    This checks whether the directory exists, and deals with spaces in the path (notice the nested quotes in the subshell):

    if [ -d "/path/to/dir" ] && [ -n "$(ls -A "/path/to/dir")" ]; then echo "Non-empty folder"; else echo "Empty or not a folder"; fi

    – Jonathan H Jan 10 '19 at 00:40
  • 2
    Honestly, at this point, if it's my own system I just write a quick 10 line Python script, symlink it to /usr/local/bin or something, and then call it like if isNotEmpty "$directory"; then ... fi – bjd2385 Aug 27 '19 at 19:02
  • 2
    doesn't work for me, bash 5.0, ls from coreutils 8.30, it returns directory is not empty even though its empty – mauek unak Mar 08 '20 at 18:46
  • 2
    If you want to check if the directory is NOT empty (i.e. the inverse to the original question) change -z to -n. – Finncent Price Jun 21 '21 at 16:46
  • 2
    Can you explain why -z is used? I saw in this tutorial - https://www.cyberciti.biz/faq/linux-unix-shell-check-if-directory-empty/ - practically the same approach but does not use -z – Manuel Jordan Dec 10 '21 at 23:07
  • 2
    /sdcard/WhatsApp/.trash/ on adb is empty but -z does not detect the empty state; removing -z makes it work: [ -z "$(ls -A /sdcard/WhatsApp/.trash/)" ] && echo "there" prints nothing, but [ "$(ls -A /sdcard/WhatsApp/.trash/)" ] && echo "there" prints there. Maybe you didn't think of hidden dirs. Update answer accordingly – user1874594 Apr 25 '22 at 02:04
  • 2
    @user1874594 sorry, this is very adb specific. I'd like to keep the answer more generic – Andrey Atapin Apr 27 '22 at 06:50
  • 2
    @ManuelJordan Check the edit history. -z was added later just to make it more explicit. – M Imam Pratama May 04 '22 at 10:14
  • macOS Big Sur, using zsh. Doesn't work, even though man ls states -A has this intent. A simple fix was to regex out the . and .. via -z $(ls -A $tgt | egrep -v '^\.{1,2}$') – JL Peyret Oct 10 '23 at 22:01
42

No need for counting anything or shell globs. You can also use read in combination with find. If find's output is empty, read fails and the test returns false:

if find /some/dir -mindepth 1 -maxdepth 1 | read; then
   echo "dir not empty"
else
   echo "dir empty"
fi

This should be portable.

slhck
  • 228,104
  • Nice solution, but I think your echo calls reflect the wrong result : in my test (under Cygwin) find . -mindepth 1 | read had a 141 error code in a non-empty dir, and 0 in an empty dir – Lucas Cimon Dec 20 '17 at 09:17
  • @LucasCimon Not here (macOS and GNU/Linux). For an non-empty directory, read returns 0, and for an empty one, 1. – slhck Dec 20 '17 at 11:28
  • 10
    PSA does not work with set -o pipefail – Colonel Thirty Two Sep 10 '19 at 02:07
  • For my case empty or not exist is the same. Can add 2>/dev/null before the pipe to avoid showing error when folder does not exist No such file or directory. – Martin P. Mar 14 '24 at 16:55
36
if [ -n "$(find "$DIR_TO_CHECK" -maxdepth 0 -type d -empty 2>/dev/null)" ]; then
    echo "Empty directory"
else
    echo "Not empty or NOT a directory"
fi
uzsolt
  • 1,245
  • Correct and fast. Nice! – l0b0 Feb 24 '12 at 11:00
  • 5
    It needs quotes (2x) and the test -n to be correct and safe (test with directory with spaces in the name, test it with non-empty directory with name '0 = 1'). ... [ -n "$(find "$DIR_TO_CHECK" -maxdepth 0 -type d -empty 2>/dev/null)" ]; ... – Zrin Mar 07 '17 at 23:51
  • 2
    @ivan_pozdeev That's not true, at least for GNU find. You may be thinking of grep. https://serverfault.com/questions/225798/can-i-make-find-return-non-0-when-no-matching-files-are-found – Vladimir Panteleev Jun 19 '18 at 10:55
  • 4
    It might be simpler to write find "$DIR_TO_CHECK" -maxdepth 0 -type d -empty | grep ., and rely on the exit status from grep. Whichever way you do it, this is very much the right answer to this question. – Tom Anderson Jul 06 '18 at 13:56
  • 2
    I like that this doesn't require listing/reading/traversing all files in the directory. Unlike "ls" based solutions this should work much faster when the directory has lots of files. – oᴉɹǝɥɔ Oct 03 '20 at 18:24
  • 2
    Found this question from a search, discarded poor answers until i found this one, tried it, found it didn't work with GNU find, went to add a comment to that effect ... and found i'd already done that two years ago. Although it should be grep -q .. Thanks again, Mr Udvari! – Tom Anderson Oct 14 '20 at 20:11
  • grep is slower than -n – Alberto Salvia Novella Dec 19 '22 at 18:56
22
#!/bin/bash
if [ -d /path/to/dir ]; then
    # the directory exists
    [ "$(ls -A /path/to/dir)" ] && echo "Not Empty" || echo "Empty"
else
    # You could check here if /path/to/dir is a file with [ -f /path/to/dir ]
    :  # an else branch needs at least one command; ':' is a no-op
fi
Renaud
  • 454
21

With FIND(1) (under Linux and FreeBSD) you can look non-recursively at a directory entry via "-maxdepth 0" and test if it is empty with "-empty". Applied to the question this gives:

if test -n "$(find ./ -maxdepth 0 -empty)" ; then
    echo "No new file"
    exit 1
fi
TimJ
  • 319
  • 2
    It may not be 100% portable, but it's elegant. – Craig Ringer Nov 28 '18 at 02:59
  • 2
    This also finishes early in large directories, and works with pipefail: set -o pipefail; { find "$DIR" -mindepth 1 || true ; } | head -n1 | read && echo NOTEMPTY || echo EMPTY – macieksk Nov 24 '19 at 12:29
  • Was hoping to find a flag that made find return non-zero when not finding results, but this is probably the closest we're getting to that. Would at least be nice with an equivalent to ! -empty -exit 1 for GNU though. – Steen Schütt Jun 01 '21 at 10:44
  • 1
    Finally a correct answer. No attempts at parsing ls output, not doing anything extra (like listing all files, then counting them), etc. Excellent answer. – lmat - Reinstate Monica Dec 06 '21 at 16:17
13

What about testing whether the directory exists and is not empty in one if statement?

if [[ -d path/to/dir && -n "$(ls -A path/to/dir)" ]]; then 
  echo "directory exists and is not empty"
else
  echo "directory doesn't exist or is empty"
fi
aleb
  • 103
stanieviv
  • 139
10

Use the following:

count="$( find /path -mindepth 1 -maxdepth 1 | wc -l )"
if [ "$count" -eq 0 ] ; then
   echo "No new file"
   exit 1
fi

This way, you're independent of the output format of ls. -mindepth skips the directory itself, and -maxdepth prevents recursively descending into subdirectories, to speed things up.

Daniel Beck
  • 110,419
9

A hacky, but bash-only, PID-free way:

is_empty() {
    test -e "$1/"* 2>/dev/null
    case $? in
        1)   return 0 ;;
        *)   return 1 ;;
    esac
}

This takes advantage of the fact that the test builtin exits with 2 if given more than one argument after -e: first, the "$1/"* glob is expanded by bash, resulting in one argument per file. So

  • If there are no files, the asterisk in test -e "$1/"* does not expand, so the shell falls back to the literal file name *, and the test returns 1.

  • ...except if there actually is a file named exactly *; then the asterisk expands to, well, an asterisk, which ends up as the same call as above, i.e. test -e "dir/*", but this time returns 0. (Thanks @TrueY for pointing this out.)

  • If there is one file, test -e "dir/file" is run, which returns 0.

  • But if there is more than one file, test -e "dir/file1" "dir/file2" is run, which test reports as a usage error, i.e. exit status 2.

case wraps the whole logic so that only the first case, exit status 1, is reported as success.

Possible problems I haven't checked:

  • There are more files than number of allowed arguments--I guess this could behave similar to case with 2+ files.

  • Or there is actually file with an empty name--I'm not sure it's possible on any sane OS/FS.

Alois Mahdal
  • 2,284
  • 2
    Minor correction: if there is no file in dir/, then test -e dir/* is called. If the only file is '*' in dir then test will return 0. If there are more files, then it returns 2. So it works as described. – TrueY Sep 07 '18 at 11:43
  • You're right, @TrueY, I've incorporated it in the answer. Thanks! – Alois Mahdal Sep 11 '18 at 14:45
  • 2
    This won't work if the directory contains a single file named *. A pathological case, but I've seen worse. – kkm -still wary of SE promises Aug 28 '21 at 03:02
  • @kkm it did work for me: http://hastebin.com/hevumumoma.txt -- Bash version 5.1.0(1)-release – Alois Mahdal Sep 10 '21 at 11:44
  • OTOH, if it does not work on older Bash (let me know) one could address that case by adding test -e "$1/*" && return 1 as the first line. – Alois Mahdal Sep 10 '21 at 11:47
  • @AloisMahdal, thank you, good to know, thanks! I usually test features like that with docker run --rm -it bash:4.3, for some arbitrary value of 4.3 :) These are tiny Alpine images, fast to pull and not eating much disk space. 'fraid I have no time to test them.

    It has always baffled me how such a common problem does not have a simple canonical solution in shells.

    – kkm -still wary of SE promises Sep 19 '21 at 14:56
  • @kkm what's your point? have you seen the function fail or not? – Alois Mahdal Sep 20 '21 at 08:56
7

This will do the job in the current working directory (.):

[ `ls -1A . | wc -l` -eq 0 ] && echo "Current dir is empty." || echo "Current dir has files (or hidden files) in it."

or the same command split on three lines just to be more readable:

[ `ls -1A . | wc -l` -eq 0 ] && \
echo "Current dir is empty." || \
echo "Current dir has files (or hidden files) in it."

Just replace ls -1A . | wc -l with ls -1A <target-directory> | wc -l if you need to run it on a different target folder.

Edit: I replaced -1a with -1A (see @Daniel comment)

ztank1013
  • 501
  • 3
    use ls -A instead. Some file systems don't have . and .. symbolic links according to the docs. – Daniel Beck Oct 31 '11 at 10:12
  • 2
    Thanks @Daniel, I edited my answer after your suggestion. I know the "1" might be removed too. – ztank1013 Oct 31 '11 at 10:21
  • 4
    It doesn't hurt, but it's implied if output is not to a terminal. Since you pipe it to another program, it's redundant. – Daniel Beck Oct 31 '11 at 10:24
  • 2
    -1 is definitely redundant. Even if ls will not print one item per line when it will be piped then it doesn't affect the idea of checking if it produced zero or more lines. – Victor Yarema Oct 13 '17 at 09:31
  • Try that on an NFS mount in a directory with 1-2 million files. You'll be pleasantly surprised that a simple empty-or-not check takes 30 min. Sticking head into the pipeline, like (( $( ls -1A | head -n1 | wc -l ) == 0 )) may be a fix, if only partial. – kkm -still wary of SE promises Aug 28 '21 at 03:09
5

Using an array:

files=( * .* )
if (( ${#files[@]} == 2 )); then
    # contents of files array is (. ..)
    echo dir is empty
fi
glenn jackman
  • 26,306
  • 6
    Very nice solution, but note that it requires shopt -s nullglob – xebeche Jan 16 '17 at 11:24
  • 3
    The ${#files[@]} == 2 assumption doesn't stand for the root dir (you will probably not test if it's empty but some code that doesn't know about that limitation might). – ivan_pozdeev Jan 21 '18 at 04:29
  • 2
    @ivan_pozdeev: What do you mean? When I do cd / && files=(* .*), I get an enumeration of all the files and directories in the root directory, which includes . and ... So the ${#files[@]} == 2 test is valid. – Scott - Слава Україні Feb 03 '20 at 04:08
  • Yes, this is ambiguous when you don't know whether nullglob is set (returns 3 array entries in an empty dir when unset). And as long as you're setting nullglob, you may as well set dotglob as well, as noted in the BashFAQ for this issue, then you can just use files=(*) and test against 0. – SpinUp __ A Davis Jun 14 '22 at 20:00
2

Another find solution, (which doesn't rely on other tools), and should be pretty fast:

if (( "$(find /directory/to/check/ -mindepth 1 -printf '1\n' -quit)" )); then
  echo The directory is not empty
else
  echo The directory is empty
fi
smac89
  • 398
2

There are lots of good answers here for simple cases, but this QA ranks very highly in a web search, and many of the answers have subtle failures that may be important for some use cases:

  • permissions problems (i.e. reporting empty dir when it's only unsearchable)
  • ambiguity about symlinks (test dereferences, find doesn't by default, ls may depend on whether the path ends with /)
  • answers using shell globbing may rely on options like nullglob that need to be tested or explicitly set

Here is a function that is efficient for a directory with lots of files, doesn't rely on shell options for globbing, and explicitly tests for permission problems:

function mtdir {
    # test for empty directory
    if [[ -d "$1" && -r "$1" && -x "$1" ]]; then
        if find -L -- "$1" -maxdepth 0 -type d -empty | grep -q .; then
            # empty directory
            return 0
        else
            # non-empty directory
            return 1
        fi
    else
        # not a directory, not readable, or not searchable
        # if this is unexpected, might want to print error and exit
        return 2
    fi
}

This function dereferences (follows) valid symlinks, testing the targets rather than the links themselves (both at the -d test, and using find -L). If symbolic links should be tested as link files rather than their target for your application, an explicit test should be added before -d (e.g. ! -L "$1" && ...), and find -P should be used instead of find -L.
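For the case just described, a hypothetical variant that treats symlinks as link files rather than following them might look like this (mtdir_nofollow is an illustrative name, not part of the function above):

```shell
# hypothetical variant: reject symlinks outright, then require a
# readable, searchable directory before testing emptiness
mtdir_nofollow() {
    if [[ ! -L "$1" && -d "$1" && -r "$1" && -x "$1" ]]; then
        # -P (the default) makes find operate on the link itself, not its target
        if find -P -- "$1" -maxdepth 0 -type d -empty | grep -q .; then
            return 0    # empty directory
        else
            return 1    # non-empty directory
        fi
    else
        return 2    # symlink, not a directory, or not accessible
    fi
}
```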

The function can be used with:

if mtdir some/dir; then
    echo "directory is empty"
    # further processing
fi

This has been tested and works under Linux and macOS, with edge cases of non-directory files, symlinks, and directories with locked down permissions (i.e. chmod 0000 testdir).

You can also test for a non-empty directory explicitly, or an error, using the return code:

mtdir some/dir
ec=$?
if [[ $ec -eq 1 ]]; then
    echo "non-empty directory"
    # further processing
elif [[ $ec -eq 2 ]]; then
    echo "unexpected error: directory not readable or not searchable"
    # further processing
fi
2

I think the best solution is:

files=$(shopt -s nullglob; shopt -s dotglob; echo /MYPATH/*)
[[ "$files" ]] || echo "dir empty" 

thanks to https://stackoverflow.com/a/91558/520567

This is an anonymous edit of my answer that might or might not be helpful to somebody: A slight alteration gives the number of files:

files=$(shopt -s nullglob dotglob; s=(MYPATH/*); echo ${#s[*]})
echo "MYPATH contains $files files"

This will work correctly even if filenames contains spaces.

akostadinov
  • 1,412
  • Using an array instead of invoking a subshell will make this even better. This answer does that. – codeforester Mar 06 '20 at 21:17
  • 2
    @codeforester, the solution needs to know what are current shell opts. So one would need code to set opts and revert if needed (or figure out opts and count according to them). This will bloat the code and make it less readable. If this part of the code is on a performance critical path, then it might be worth it (or not, needs testing). For a normal use case where you check one directory, I think this answer is safe, short and self contained. If you don't need portability between scripts (which possibly use different shell opts), then the other answer is good. – akostadinov Mar 08 '20 at 10:10
1
if find "${DIR}" -prune ! -empty -exit 1; then
    echo Empty
else
    echo Not Empty
fi

EDIT: I think that this solution works fine with gnu find, after a quick look at the implementation. But this may not work with, for example, netbsd's find. Indeed, that one uses stat(2)'s st_size field. The manual describes it as:

st_size            The size of the file in bytes.  The meaning of the size
                   reported for a directory is file system dependent.
                   Some file systems (e.g. FFS) return the total size used
                   for the directory metadata, possibly including free
                   slots; others (notably ZFS) return the number of
                   entries in the directory.  Some may also return other
                   things or always report zero.

A better solution, also simpler, is:

if find "${DIR}" -mindepth 1 -exit 1; then
    echo Empty
else
    echo Not Empty
fi

Also, the -prune in the 1st solution is useless.

EDIT: there is no -exit for GNU find; the solutions above are good for NetBSD's find. For GNU find, this should work:

if [ -z "`find \"${DIR}\" -mindepth 1 -exec echo notempty \; -quit`" ]; then
    echo Empty
else
    echo Not Empty
fi
yarl
  • 208
1

This solution is using only shell built-ins:

function is_empty() {
  typeset dir="${1:?Directory required as argument}"
  set -- "${dir}"/*
  [ "${1}" == "${dir}/*" ]
}

is_empty /tmp/empty && echo "empty" || echo "not empty"
pez
  • 21
1
[ $(ls -A "$path" 2> /dev/null | wc -l) -eq 0 ] && echo "Is empty or not exists." || echo "Not is empty."
  • While this code may answer the question, providing additional context regarding how and/or why it solves the problem would improve the answer's long-term value. – Donald Duck Jun 19 '20 at 17:44
1

Without calling utils like ls, find, etc.:

Inside dir:

[ "$(echo *)x" != '*x' ] || [ "$(echo .[^.]*)x" != ".[^.]*x" ] || echo "empty dir"

The idea:

  • echo * lists non-dot files
  • echo .[^.]* lists dot files except "." and ".."
  • if echo finds no match, it returns the search expression itself, i.e. here * or .[^.]*; these are glob patterns, not real strings, so each is concatenated with e.g. a letter to coerce a string comparison
  • || chains the possibilities in a short circuit: there is at least one non-dot file or dir OR at least one dot file or dir OR the directory is empty. On the execution level: "if the first possibility fails, try the next one; if that fails, try the next one". Here Bash ends up executing echo "empty dir"; put your action for empty dirs there (e.g. exit).

Checked with symlinks, yet to check with more exotic possible file types.

hh skladby
  • 11
  • 2
  • this is a nice one, basically uses the globs suggested in the BashFAQ here -- perhaps wrap it in a function to make it more digestable? Also not sure whether the shoptions listed in the FAQ are needed – SpinUp __ A Davis Jun 14 '22 at 17:56
  • This test misses files named with 2 leading dots, for example ..a – db-inf Apr 15 '23 at 11:08
1

I could be mistaken, but I think this check should be sufficient:

[[ -s /path/to/dir ]] && echo "Dir not empty" || echo "Dir empty"
Lou
  • 647
  • 1
    -1: Bash 5.0 and 5.1.4 (Debian) return true for rm -r test 2>/dev/null; mkdir test; [[ -s test ]] && echo "true" – xebeche Jul 26 '22 at 12:49
0

For any directory other than the current one, you can check if it's empty by trying to rmdir it, because rmdir is guaranteed to fail for non-empty directories. If rmdir succeeds, and you actually wanted the empty directory to survive the test, just mkdir it again.

Don't use this hack if there are other processes that might become discombobulated by a directory they know about briefly ceasing to exist.
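A minimal sketch of the rmdir trick described above, using a mktemp scratch directory as a stand-in; as noted, only use it when nothing else depends on the directory existing at every instant:

```shell
dir=$(mktemp -d)    # stand-in for the directory to test

# rmdir is guaranteed to refuse to remove a non-empty directory
if rmdir "$dir" 2>/dev/null; then
    mkdir "$dir"        # it was empty: recreate it so it survives the test
    echo "Empty"        # prints "Empty" for the fresh scratch directory
else
    echo "Not Empty"
fi
```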

If rmdir won't work for you, and you might be testing directories that could potentially contain large numbers of files, any solution relying on shell globbing could get slow and/or run into command line length limits. Probably better to use find in that case. Fastest find solution I can think of goes like

is_empty() {
    test -z "$(find "$1" -mindepth 1 -printf X -quit)"
}

This works for the GNU and BSD versions of find but not for the Solaris one, which is missing every single one of those find operators. Love your work, Oracle.
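Where -printf and -quit are unavailable, a sketch of a more portable fallback using the classic -prune idiom (is_empty_posix is an illustrative name, and this is an assumption rather than a tested Solaris solution):

```shell
is_empty_posix() {
    # "$1/." ! -name . -prune prints the directory's entries without
    # recursing into them; head stops the pipeline after the first entry
    test -z "$(find "$1/." ! -name . -prune -print | head -n 1)"
}
```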

0

This works for me, to check & process files in directory ../IN, assuming the script is in the ../Script directory:

FileTotalCount=0

for file in ../IN/*; do
    FileTotalCount=`expr $FileTotalCount + 1`
done

if test "$file" = "../IN/*"
then
    echo "EXITING: NO files available for processing in ../IN directory."
    exit
else
    echo "Starting Process: Found $FileTotalCount files in ../IN directory for processing."
    # Rest of the Code
fi
kenorb
  • 25,417
Arijit
  • 1
0

I made this approach:

CHECKEMPTYFOLDER=$(test -z "$(ls -A /path/to/dir)"; echo $?)
if [ $CHECKEMPTYFOLDER -eq 0 ]
then
  echo "Empty"
elif [ $CHECKEMPTYFOLDER -eq 1 ]
then
  echo "Not Empty"
else
  echo "Error"
fi
0

The Question was:

if [ ./* == "./*" ]; then
    echo "No new file"
    exit 1
fi

Answer is:

if ls -1qA . | grep -q .
    then exit 1 # dir is not empty
    else : # dir is empty
fi
HarriL
  • 9
0

I might have missed an equivalent to this, which works on Unix

cd directory-concerned
ls * > /dev/null 2> /dev/null

The return code (test the value of $?) will be 2 if nothing was found, or 0 if something was found.

Note this ignores any '.' files, and will probably return 2 if only hidden files exist without any other 'normal' filenames.
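If that dotfile blind spot matters, a bash-only sketch that counts hidden files too, by enabling nullglob and dotglob inside a subshell so the options don't leak out (the mktemp directory is a stand-in):

```shell
# stand-in for the directory concerned
dir=$(mktemp -d)

# nullglob: an unmatched glob expands to nothing instead of itself
# dotglob:  * also matches names starting with .
if ( shopt -s nullglob dotglob; cd "$dir" || exit 2; set -- *; [ "$#" -eq 0 ] ); then
    echo "Empty"
else
    echo "Not Empty"
fi
```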

Anthony Kong
  • 5,028
Tim
  • 1
0

More solutions with find

# Tests that a directory is empty.
# Prints an error message to stderr and sets the return
# value to non-zero (i.e. evaluates as false) if not empty.
#
function is_empty() {
    find "$1" -mindepth 1 -exec false {} + -fprintf /dev/stderr "%H is not empty\n" -quit
    # -fprintf /dev/stderr "%H is not empty\n"
    # prints an error to stderr when the dir is not empty
    #
    # -exec false {} +
    # sets the return value (i.e. $?) to indicate error
    #
    # -quit
    # terminates after the first match
}

examples

#!/bin/bash
set -eE # stop execution upon error

function is_empty() { find "$1" -mindepth 1 -exec false {} + -fprintf /dev/stderr "%H is not empty\n" -quit; }

trap 'echo FAILED' ERR #trap "echo DONE" EXIT

# create a sandbox to play in
d=$(mktemp -d)
f=$d/blah # this will be a potential file

set -v # turn on debugging

# dir should be empty
is_empty $d

# create a file in the dir
touch $f
! is_empty $d

# this will cause the script to fail because the dir is not empty
is_empty $d

# this line will not execute
echo "we should not get here"

output

[root@sysresccd ~/sandbox]# ./test

# dir should be empty
is_empty $d

# create a file in the dir
touch $f
! is_empty $d
/tmp/tmp.aORTHb3Trv is not empty

# this will cause the script to fail because the dir is not empty
is_empty $d
/tmp/tmp.aORTHb3Trv is not empty
echo FAILED
FAILED

0

Although there are many reasonable solutions here, I am personally not a big fan of most of these answers, as many produce a lot of output when listing directory contents. I was looking for a solution that handles directories with large numbers of files well, while also being easy to understand.

So, this is what I ended up with, and thought I would share:

This appears to work OK for me on RedHat:

dir="/tmp/my_empty_dir"
[[ -d "${dir}" && -z "$(find "${dir}" -not -path "${dir}" -print -quit)" ]] && echo "${dir} is empty"

In this example:

First ensure the dir exists with -d "$dir", otherwise the whole expression returns empty (and an error is sent to stderr).

However, you will likely need to test for existence separately, since a missing directory also produces empty output from find and would otherwise be reported as "empty" (which is not correct).

AND

Find: (find $dir -not -path $dir -print -quit):

  • Find everything in $dir
  • Exclude the directory $dir from the resulting output
  • Print the first result (something else within $dir)
  • Quit immediately (only return the first result).

BEWARE: the -path parameter takes a "pattern", so if you are expecting special characters (e.g. *, [, ]) these would need to be escaped, e.g.:

dir='/tmp/test[dir]'
dirpath='/tmp/test\[dir\]'
find "${dir}" -not -path "${dirpath}" -print -quit

During my test, this also successfully found hidden files. ($dir/.hidden)

Find returns 0 regardless of whether anything is found, and I don't currently see a simpler way to test this, so as per other examples I wrapped it in:

Empty: [[ -z "$result" ]] to test if the result is blank.

NOT Empty: [[ ! -z "$result" ]] to test if the result is not blank.
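Putting those pieces together, a sketch that keeps the three outcomes separate (a mktemp scratch directory stands in for /tmp/my_empty_dir):

```shell
dir=$(mktemp -d)    # stand-in for /tmp/my_empty_dir

if [[ ! -d "$dir" ]]; then
    echo "not a directory"
elif [[ -z "$(find "$dir" -not -path "$dir" -print -quit)" ]]; then
    echo "empty"          # exists, and find produced no first entry
else
    echo "contains files"
fi
```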

Yes, the braces around ${dir} are not really required, but I thought it best to help handle this use case

dir="/tmp/"
[[ -d "${dir}subdir" ...
0

This is all great stuff - I just made it into a script so I can check for empty directories below the current one. The below should be put into a file called 'findempty', placed somewhere in the path so bash can find it, and then chmod 755 to make it executable. It can easily be amended to your specific needs, I guess.

#!/bin/bash
if [ "$#" == "0" ]; then
    find . -maxdepth 1 -type d -exec findempty "{}" \;
    exit
fi

COUNT=`ls -1A "$*" | wc -l`
if [ "$COUNT" == "0" ]; then
    echo "$* : $COUNT"
fi