Answer
There are several ways that I can think of, depending on how correct you need the answer to be, particularly in exotic situations, and on exactly what you want to count.
If you know that you don't have any exotic file names in the directory, then a relatively trivial `ls -A | wc -l` will probably do fine. (It's usually a bad idea to parse the output of `ls`, but in a case like this, it might do.) `ls -A` lists all entries, including directories and dotfiles but excluding the `.` and `..` directory entries, and `wc -l` counts the number of lines in the output. This should work in most situations, as long as no file name contains a newline and you are fine with counting directories (but not their contents) along with files.
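As a quick illustration (the scratch directory and file names here are hypothetical):

```sh
mkdir /tmp/demo && cd /tmp/demo   # hypothetical scratch directory
touch a b .hidden                 # two ordinary files and one dotfile
mkdir subdir                      # a directory, counted like any other entry
ls -A | wc -l                     # prints 4: a, b, .hidden, subdir
```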
Although the output of `ls -A` is column-oriented when run from a terminal, it's common for \*nix tools to fall back to line-oriented output when standard output is not attached to a terminal. In the specific case of `ls`, you can force that behavior with `-1` if you want to, but in a pipe it isn't necessary. So you could, but need not, use `ls -A1`.
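You can see the difference by forcing the pipe case; `cat` here merely stands in for any downstream consumer:

```sh
ls -A          # to a terminal: multi-column layout
ls -A | cat    # stdout is a pipe: one name per line
ls -A1         # -1 forces one name per line even on a terminal
```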
If you want to exclude directories, a more complex approach would be something like `find . -mindepth 1 -maxdepth 1 ! -type d | wc -l`, which uses `find` to print the names of all *non-directories* (which need not be *files*; if you are only interested in regular files, use `-type f` instead of `! -type d`) within the current directory, and passes those names to `wc -l` to count the number of lines. (Strictly, it counts the number of *newline characters*.)
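Because `wc -l` really counts newline characters, a name that contains a newline would inflate the count. Assuming GNU `find`, one sketch of a newline-proof variant is to print a single fixed character per match and count characters instead of lines:

```sh
# GNU find: emit one literal 'x' per non-directory entry, then count bytes.
# Immune to newlines in file names, since the names are never printed.
find . -mindepth 1 -maxdepth 1 ! -type d -printf x | wc -c
```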
If the directory in question might contain files whose names include exotic (non-printable) characters, then you can pass `-b` or `-q` to `ls` to quote them; the difference between the two is exactly how those characters are represented in the output. (Using `-b` retains file name uniqueness in the output.) Another, more complex option would be something like `find . -mindepth 1 -maxdepth 1 -printf '%i\n' | wc -l`. The latter uses `find` to print the inode number of each file, and then `wc` to count the number of lines in the output. Since each file's inode number is printed on a line of its own, this returns the number of (inode) entries within the current directory.
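Sample output for a hypothetical three-entry directory (the inode numbers are made up) might look like this:

```sh
find . -mindepth 1 -maxdepth 1 -printf '%i\n'
# 1835013
# 1835014
# 1835015
find . -mindepth 1 -maxdepth 1 -printf '%i\n' | wc -l
# 3
```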
If you want to count hardlinks (regardless of the number of links) to the same data as a *single* entry, matching the allocation on the underlying file system, then you can simply add a uniqueness criterion to the `find` invocation above; `sort --unique` can do this. Something like `find . -mindepth 1 -maxdepth 1 -printf '%i\n' | sort -u | wc -l` will count the number of *unique* inode numbers used within the current directory.
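For example, in an otherwise empty (hypothetical) directory holding one file and one hard link to it, the two names share an inode, so the deduplicated count is 1:

```sh
touch original
ln original hardlink                                             # second name, same inode
find . -mindepth 1 -maxdepth 1 -printf '%i\n' | wc -l            # prints 2
find . -mindepth 1 -maxdepth 1 -printf '%i\n' | sort -u | wc -l  # prints 1
```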
Anything that counts inode numbers relies on two facts: inode numbers are unique per file system, and a single directory can only exist on a single file system at a time. These assumptions aren't *quite* always true; the latter in particular falls apart in the case of [overlay filesystems](https://www.kernel.org/doc/html/latest/filesystems/overlayfs.html). If we're talking about overlaid filesystems, though, then a lot of other assumptions are also suddenly called into question; for example, does a file that exists in a "lower" filesystem but has been deleted in an "upper" one exist or not for the purpose of counting the number of files? For most purposes, it's probably safe to ignore the possibility of overlays and instead consider only the user-visible current state of a directory.
In all of the `find` examples, the `.` directory specifier can be replaced with an explicit directory name, such as `/etc` or `$HOME`.
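For instance, to count the non-directory entries directly under `/etc`:

```sh
find /etc -mindepth 1 -maxdepth 1 ! -type d | wc -l
```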