How to get the number of files in a directory
There are several ways that I can think of, depending on how correct you need the answer to be, particularly in exotic situations, and exactly what you want to count.
If you know that you don't have any exotic file names in the directory, then a relatively trivial
ls -A | wc -l will probably do fine. (It's usually a bad idea to parse the output of
ls, but in a case like this, it might do.)
ls -A lists all files (including directories) including dotfiles but excluding the
.. directory entries, and
wc -l counts the number of lines in the output. This should work in most situations as long as you don't have files with names that contain newlines, and you are fine with counting directories (but not their contents) along with files.
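As a quick sanity check, here is a sketch against a throwaway directory (created with mktemp, so the directory and file names below are arbitrary):

```shell
# Create a scratch directory with two files (one hidden) and one subdirectory.
dir=$(mktemp -d)
touch "$dir/plain" "$dir/.hidden"
mkdir "$dir/subdir"

# ls -A includes the dotfile and the subdirectory, but not . or ..
ls -A "$dir" | wc -l   # 3
```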
Although when run from a terminal, the output of
ls -A is column-oriented rather than line-oriented, it's common for *nix tools to resort to line-oriented output when run in such a way that standard output is not attached to a terminal. In the specific case of
ls, you can force that behavior with
-1 if you want to, but when in a pipe, it's not necessary to do so. So you could, but need not, spell it out as ls -1A | wc -l.
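For example (a sketch using a throwaway directory; the file names are arbitrary), both spellings give the same count when piped:

```shell
dir=$(mktemp -d)
touch "$dir/one" "$dir/two" "$dir/.three"

# When standard output is a pipe, ls already prints one name per
# line, so -1 is redundant but harmless:
ls -A "$dir" | wc -l    # 3
ls -1A "$dir" | wc -l   # also 3
```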
If you want to exclude directories, a more complex approach would be something like
find . -mindepth 1 -maxdepth 1 ! -type d | wc -l which uses
find to print the names of all non-directories (which need not be files; if you are only interested in files proper, use
-type f instead of
! -type d) within the current directory and pass the names of those to
wc -l for counting the number of lines. (Strictly, it counts the number of newline characters.)
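A sketch of the difference between the two tests (the file names below are arbitrary; the symlink is there to show that "non-directory" is broader than "regular file"):

```shell
dir=$(mktemp -d)
touch "$dir/file1" "$dir/file2"
mkdir "$dir/dir1"
ln -s file1 "$dir/link1"   # a symlink is a non-directory, but not a regular file

# Everything except directories: file1, file2, link1
find "$dir" -mindepth 1 -maxdepth 1 ! -type d | wc -l   # 3

# Regular files only: file1, file2
find "$dir" -mindepth 1 -maxdepth 1 -type f | wc -l     # 2
```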
If the directory in question might contain files with names with exotic (non-printable) characters in them, then you can pass -b or -q to
ls to quote them; the difference between the two is exactly how those characters are represented in the output. (Using
-b retains file name uniqueness in the output.) Another, more complex option would be something like
find . -mindepth 1 -maxdepth 1 -printf '%i\n' | wc -l. The latter uses
find to print the inode number of each file, and then
wc to count the number of lines in the output. Since each file's inode number is printed on a line of its own, this returns the number of (inode) entries within the current directory.
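Note that -printf is a GNU find extension, so this variant assumes GNU findutils. A sketch of why counting inode numbers is robust where counting names is not (the file names are made up for the demonstration, and the ls behavior shown is that of GNU coreutils):

```shell
dir=$(mktemp -d)
touch "$dir/normal"
# A file whose name contains an embedded newline:
touch "$dir/$(printf 'bad\nname')"

# Piped ls prints the name raw, so the embedded newline
# becomes an extra line and the count is off by one:
ls -A "$dir" | wc -l                                         # 3 (wrong)

# One inode number per line, regardless of the file's name:
find "$dir" -mindepth 1 -maxdepth 1 -printf '%i\n' | wc -l   # 2 (right)
```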
If you want to count hardlinks (regardless of the number of links) to the same data as a single entry, matching the allocation on the underlying file system, then you can simply add a uniqueness criterion to the
find invocation above;
sort --unique can do this. Something like
find . -mindepth 1 -maxdepth 1 -printf '%i\n' | sort -u | wc -l will count the number of unique inode numbers used within the current directory.
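A sketch with one hard-linked pair (the names are arbitrary): two directory entries share an inode, so the unique count comes out one lower than the raw count.

```shell
dir=$(mktemp -d)
touch "$dir/data" "$dir/other"
ln "$dir/data" "$dir/alias"   # hard link: a second name for the same inode

# Three names...
find "$dir" -mindepth 1 -maxdepth 1 -printf '%i\n' | wc -l             # 3

# ...but only two distinct inodes:
find "$dir" -mindepth 1 -maxdepth 1 -printf '%i\n' | sort -u | wc -l   # 2
```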
Anything that relies on counting inode numbers relies on the facts that inode numbers are unique per file system, and that a single directory can only exist on a single file system at a time. These assumptions don't always hold; the latter in particular falls apart in the case of overlay filesystems. If we're talking about overlay filesystems, though, then a lot of other assumptions are also suddenly called into question; for example, does a file that exists in a "lower" filesystem but which has been deleted in an "upper" one exist or not for the purpose of counting the number of files? For most purposes, it's probably safe to ignore the possibility of overlays and instead consider only the user-visible current state of a directory.
In all of the above commands, the
. directory specifier can be replaced with an explicit directory name.