While @Patrick's answer is perfectly fine, if you have to do a similar task across a whole directory tree's worth of files, you'll need to change tactics slightly. One method for handling this is to use find plus a while loop.
find & while
$ depth=2
$ find . -maxdepth "$depth" -type f -print0 | sed -z 's|^\./||' | \
while IFS= read -r -d '' file; do \
f=$(basename "$file"); printf "%s: %s\n" "$file" "${#f}"; \
done | column -s : -t
dir2/more files3.txt       15
some long spacey file.txt  25
dir1/more files1.txt       15
dir1/more files2.txt       15
file 1.txt                 10
file 2.txt                 10
The above generates a list of files separated by \0 (i.e. NULs). The variable $depth controls how deep find will look. This list is then scrubbed by sed so that the leading ./ prefix that find puts on each path is removed; GNU sed's -z flag makes it NUL-aware, so the ^ anchor applies to each NUL-terminated record rather than to the stream as a whole.
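If your find is GNU find, a variant worth knowing (this assumes GNU find's -printf is available) skips the sed step entirely: the %P format prints each path with the starting point already stripped, and \0 NUL-terminates each one:

$ find . -maxdepth "$depth" -type f -printf '%P\0' | \
while IFS= read -r -d '' file; do \
f=$(basename "$file"); printf "%s: %s\n" "$file" "${#f}"; \
done | column -s : -t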
In either case, we then loop through this NUL-separated list and use printf to print each file's name along with its length, using Bash's built-in facility for counting the length of a string, ${#var}. Note that printf prints the file together with its path, but the length counted is that of the filename alone (the basename).
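For instance, taking one of the filenames from the output above, you can see the difference between the length of the full path and the length of the basename:

$ file="dir2/more files3.txt"
$ f=$(basename "$file")
$ echo "${#file} ${#f}"
20 15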
The column -s : -t is just there to pretty-print the result: it splits the output on the colon, :, and then aligns the fields into neat columns.
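To see what column is doing in isolation, you can feed it a couple of colon-separated lines by hand (the exact padding is up to your column implementation, so the spacing may differ slightly):

$ printf '%s\n' 'file 1.txt: 10' 'some long spacey file.txt: 25' | column -s : -t
file 1.txt                  10
some long spacey file.txt   25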
echo -n tip. I had noticed the wc was counting one extra character but had not yet started dwelling on that one. – Fabricio Jan 14 '14 at 01:00