First, I suggest you enclose the $FILES and $DESTINATION variables in the unzip command in double quotes: paths containing spaces will otherwise break the unzip command. Also note that your script will break on filenames containing newlines, because ls writes the raw filenames, newlines included, to the pipe; use bash's globbing feature instead (for FILES in /path/to/archives/* [...]).
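To see why parsing ls is fragile while globbing is not, here is a minimal sketch; it creates its own temporary directory with made-up filenames (plain.zip and "with space.zip" are just examples for illustration):

```shell
#!/bin/sh
# Create a scratch directory with two files, one containing a space.
dir=$(mktemp -d)
touch "$dir/plain.zip" "$dir/with space.zip"

# ls-based loop: the shell word-splits the output, so
# "with space.zip" is seen as two separate names.
count_ls=0
for f in $(ls "$dir"); do
    count_ls=$((count_ls + 1))
done

# glob-based loop: each pathname stays a single word,
# spaces (and newlines) in names are preserved.
count_glob=0
for f in "$dir"/*; do
    count_glob=$((count_glob + 1))
done

echo "ls saw $count_ls names, glob saw $count_glob"
rm -rf "$dir"
```

The ls loop reports three names for two files, while the glob loop counts them correctly.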
Second, when you don't run unzip with the -q option (which suppresses the normal output), the output is written to stdout (standard output), a file typically connected to the terminal screen. Shells such as bash can redirect a command's stdout to another file: in bash this is done with the > and >> operators, which respectively 1. create the file if it doesn't exist, or truncate it if it does, then write the output to it, and 2. create the file if it doesn't exist, then append the output to it.
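The difference between the two operators can be shown with a small sketch using a temporary file of our own (the "first"/"second"/"third" strings are arbitrary):

```shell
#!/bin/sh
log=$(mktemp)
echo "first"  > "$log"    # creates (or truncates) the file, writes "first"
echo "second" > "$log"    # truncates again: "first" is gone
echo "third" >> "$log"    # appends: "second" is kept
content=$(cat "$log")     # now holds "second" and "third"
lines=$(wc -l < "$log")
rm -f "$log"
```

After these three commands the file holds only "second" and "third": each > wiped the previous contents, while >> preserved them.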
So in this case, since you're processing multiple archives, you don't want to truncate the log file each time a new file is processed; rather, you want to append the new output to the existing log. Including my corrections:
for FILES in /path/to/archives/*
do
unzip -v "$FILES" -d "$DESTINATION" >> /path/to/logfile
done
ls, because in a previous iteration of the script I was sorting the files from smallest to biggest before unzipping, in a directory where the files always have short, reliable names (I have a different script that populates this dir). I have since abandoned that idea, so /path/to/archives/* would work just perfectly. And I thought that >> would work, but it seemed like I was missing something. Thanks for clearing up the stdout concept for me. – dnbpanda Jun 19 '15 at 15:48