How to get the number of files in a directory

+7
−0

How do you find out the number of files in a directory from the command line?


4 answers

+12
−0

There are several ways that I can think of, depending on how correct you need the answer to be, particularly in exotic situations, and exactly what you want to count.

If you know that you don't have any exotic file names in the directory, then a relatively trivial ls -A | wc -l will probably do fine. (It's usually a bad idea to parse the output of ls, but in a case like this, it might do.) ls -A lists all entries, including directories and dotfiles but excluding the . and .. directory entries, and wc -l counts the number of lines in the output. This should work in most situations, as long as you don't have files with names that contain newlines and you are fine with counting directories (but not their contents) along with files.
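
For instance, in a hypothetical directory containing foo, bar and a dotfile .baz, that looks like:

$ ls -A | wc -l    # three entries: .baz, bar, foo
3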

Although when run from a terminal, the output of ls -A is column-oriented rather than line-oriented, it's common for *nix tools to resort to line-oriented output when run in such a way that standard output is not attached to a terminal. In the specific case of ls, you can force that behavior with -1 if you want to, but when in a pipe, it's not necessary to do so. So you could, but need not, use ls -A1.
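
You can see the difference by making standard output something other than a terminal, for example by piping through cat (again a hypothetical directory, here holding bar, baz and foo):

$ ls -A            # at a terminal: column-oriented
bar  baz  foo
$ ls -A | cat      # stdout is a pipe: one name per line
bar
baz
foo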

If you want to exclude directories, a more complex approach would be something like find . -mindepth 1 -maxdepth 1 ! -type d | wc -l, which uses find to print the names of all non-directories within the current directory (these need not be regular files; if you are interested only in files proper, use -type f instead of ! -type d) and passes those names to wc -l to count the number of lines. (Strictly speaking, it counts the number of newline characters.)
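
A hypothetical run in a directory holding two regular files and one subdirectory:

$ find . -mindepth 1 -maxdepth 1 ! -type d | wc -l
2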

If the directory in question might contain files with names with exotic (non-printable) characters in them, then you can pass -b or -q to ls to quote them; the difference between the two is exactly how those characters are represented in the output. (Using -b retains file name uniqueness in the output.) Another, more complex option would be something like find . -mindepth 1 -maxdepth 1 -printf '%i\n' | wc -l. The latter uses find to print the inode number of each file, and then wc to count the number of lines in the output. Since each file's inode number is printed on a line of its own, this returns the number of (inode) entries within the current directory.
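
A hypothetical run (the inode numbers are made up; the point is that each entry prints as exactly one line):

$ find . -mindepth 1 -maxdepth 1 -printf '%i\n'
5246999
5247001
5247004
$ find . -mindepth 1 -maxdepth 1 -printf '%i\n' | wc -l
3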

If you want to count multiple hardlinks to the same data as a single entry, matching the allocation on the underlying file system, then you can simply add a uniqueness criterion to the find invocation above; sort --unique can do this. Something like find . -mindepth 1 -maxdepth 1 -printf '%i\n' | sort -u | wc -l will count the number of unique inode numbers used within the current directory.
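
As a quick illustration of the difference, in an empty scratch directory:

$ touch a
$ ln a b    # b is a second name for the same inode as a
$ find . -mindepth 1 -maxdepth 1 -printf '%i\n' | wc -l
2
$ find . -mindepth 1 -maxdepth 1 -printf '%i\n' | sort -u | wc -l
1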

Anything that relies on counting inode numbers relies on the facts that inode numbers are unique per file system, and that a single directory can only exist on a single file system at a time. These assumptions aren't quite always true; the latter assumption in particular falls apart in the case of overlay filesystems. If we're talking about overlaid filesystems, though, then a lot of other assumptions are also suddenly called into question; for example, does a file that exists in a "lower" filesystem but which has been deleted in an "upper" one exist or not for the purpose of counting the number of files? For most purposes, it's probably safe to ignore the possibility of overlays and instead consider only the user-visible current state of a directory.

In all cases of find, the . directory specifier can be replaced with an explicit directory name, such as /etc or $HOME.
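
For example, to count the non-directory entries directly within /etc:

$ find /etc -mindepth 1 -maxdepth 1 ! -type d | wc -l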

+5
−0

Populate an array with the file names and then print how many entries are in the array:

$ files=( * )
$ echo "${#files[@]}"
124

That will work correctly even if your file names contain newlines, unlike anything piped to wc -l.
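
Two caveats apply, though: a plain * does not match dotfiles, and in an empty directory the unmatched glob is left as the literal string *, giving a count of 1 instead of 0. Assuming bash, both can be fixed with shell options before populating the array (the count shown is again hypothetical):

$ shopt -s dotglob nullglob    # dotglob: * matches dotfiles; nullglob: unmatched globs expand to nothing
$ files=( * )
$ echo "${#files[@]}"
126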

+3
−0

A solution I often use (and which is ultimately a variation of the find-based approach in the answer by Canina) also uses find, but only prints a single . per file:

find . -maxdepth 1 -type f -printf '.' | wc -m

It then uses the -m flag of wc to print the number of characters rather than lines: the output contains no newlines for wc -l to count, while each file contributes exactly one character.

This approach also avoids problems that may occur if the actual filenames contain "exotic" characters (such as the newline); it may in addition have a slight speed advantage over printing the entire filename for cases where there are many files and, possibly, long filenames (but YMMV and I haven't conducted studies).

Note that, as with any find-based approach, this will also count hidden files (those whose name begin with .) unless you filter them out, so the result can differ from what you might expect based on the output of ls.
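
If you do want to exclude the hidden files, so that the count matches what plain ls shows, one option (assuming GNU find, which the -printf usage above already implies) is to filter on the name:

find . -maxdepth 1 -type f ! -name '.*' -printf '.' | wc -m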


+1
−0

The obvious way to do it is:

  1. Find some way to get a list of the files
  2. Pipe it into wc -l to count

Classically, this would be find /path/to/dir | wc -l (which, note, counts recursively and includes /path/to/dir itself). However, fd does the same thing with better usability. By default, fd will skip "hidden" files and directories (like .foo) and will include both files and directories.

Both behaviors can be changed by finding the appropriate CLI arguments in man fd. However, if you are in a hurry, there's no need to overcomplicate it:

fd | grep -v '/$' | wc -l

/$ is a regex meaning "ends with /" and -v is short for --invert-match.
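
For instance, the two default behaviors mentioned above map to fd's --hidden and --type flags (note that fd also skips .gitignore'd files by default; --no-ignore turns that off). With --type f, directories never appear in the output, so the grep stage becomes unnecessary:

fd --hidden --type f | wc -l    # count files only, including hidden ones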

