How to find big files and directories?

+3
−1

You've just run df -h and are shocked at how little disk space you have left. You'd like to find out where all the space has gone.

How do you find the big(est) files and directories on your system? (So that you can try to free up space.)


3 answers

+1
−0

I agree with the other answers:

  • The normal TUI way is ncdu
  • The normal CLI way is du
  • The normal GUI way is Baobab, aka the "GNOME Disk Usage Analyzer", and the like

But just for fun, you can build a pipeline such as the one below, which finds all files, uses stat to query their sizes (parallelized with parallel), and finally uses sort and tail to print the ten largest.

$ fd | pv | parallel 'echo $(stat -c %s {}) {}' | sort -h | tail -n 10

The performance of this is iffy, so I added pv for some progress info. pv doesn't know how many files there are until fd has finished enumerating them, so it won't show percentage progress, only how much has been processed so far. You can convert the byte sizes to human-readable sizes with cut and numfmt.
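
For example, something along these lines should work with GNU coreutils' numfmt (an untested sketch: --to=iec converts to IEC suffixes like K/M/G, and --field=1 limits the conversion to the size column):

$ fd | parallel 'echo $(stat -c %s {}) {}' | sort -n | tail -n 10 | numfmt --to=iec --field=1

Since numfmt only runs after sort, the sizes are still sorted as plain byte counts and only converted for display.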

This does give you more control over the specifics, such as looking at only a certain subset of files. You can do that with du as well, but this way you get to use more familiar generic tools rather than du's specialized syntax.
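
For instance, to look only at one file type, you can use fd's -e (extension) flag; the iso extension here is just an arbitrary example:

$ fd -e iso | parallel 'echo $(stat -c %s {}) {}' | sort -h | tail -n 10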


+3
−0

If you don't mind installing something, I recommend ncdu. It sorts directories and files by size, lets you exclude other mounts, navigate the tree directly, and even delete files and folders as you go.

ncdu 1.19 ~ Use the arrow keys to navigate, press ? for help
--- / --------------------------------------------------------------------------
   14.4 GiB [###########] /home
    8.3 GiB [######     ] /var
    5.9 GiB [####       ] /usr
    1.6 GiB [#          ] /opt
  180.6 MiB [           ] /boot
   77.8 MiB [           ] /root
   12.2 MiB [           ] /etc
    6.6 MiB [           ] /tmp
    3.6 MiB [           ]  core
    4.0 KiB [           ] /snap
    0.0   B [           ] /media
    0.0   B [           ] /mnt
@   0.0   B [           ]  initrd.img.old
@   0.0   B [           ]  initrd.img
@   0.0   B [           ]  vmlinuz.old
*Total disk usage:  30.5 GiB   Apparent size:  29.2 GiB   Items: 667060

There should be a package for it in the default repositories of virtually every distribution.
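
For example, on a Debian- or Ubuntu-style system (the install command will differ elsewhere), installing it and scanning the root filesystem looks like this; the -x flag tells ncdu not to cross filesystem boundaries, which is how you exclude other mounts:

$ sudo apt install ncdu
$ sudo ncdu -x /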


+2
−0

If you want to go with command-line tools, the first step might be to run a

du -h --max-depth=1 | sort -h -k 1 -r

in your root directory. This will give you a list of the sub-directories sorted by total usage. You can then apply the same command inside the sub-directory identified as the "heaviest", and repeat until you find the files that eat up all your storage space.
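
For example, if /var turned out to be the heaviest sub-directory (an arbitrary pick for illustration), the drill-down step would be:

$ cd /var
$ du -h --max-depth=1 | sort -h -k 1 -r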

Note that for a "complete picture" you may need to run this with sudo, to access directories that your regular user cannot enter. Also keep in mind that there are limits to the accuracy of the disk usage reported by du, as explained e.g. in the Wiki page.

If you want to go for a graphical tool, there is the QDirStat project, which represents the individual files as tiles (with sizes proportional to their ... size).

