Why does the file command fail to recognize non-text files as such?

+3
−0

POSIX defines

  • Text file as

    A file that contains characters organized into zero or more lines. The lines do not contain NUL characters and none can exceed {LINE_MAX} bytes in length, including the <newline> character.

  • Line as

    A sequence of zero or more non- <newline> characters plus a terminating <newline> character.

  • Character as

    A sequence of one or more bytes representing a single graphic symbol or control code.

Consider then six files, each of two bytes, created with these printf commands (bytes written as octal escapes):

printf "\101\012" > file1 #A<newline>
printf "\010\012" > file2 #<backspace><newline>
printf "\101\101" > file3 #AA
printf "\200\012" > file4
printf "\200\200" > file5
printf "\000\012" > file6 #<null><newline>

Now, in the UTF-8 encoding, octal 012 (0x0A) is the newline character, 101 (0x41) is the graphic symbol A, and 010 (0x08) is the backspace control character; 200 (0x80) is a continuation byte that never occurs as the first byte of a multi-byte sequence, so on its own it does not form a valid character.
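
You can confirm the resulting bytes with od (a quick check; column spacing varies slightly between od implementations):

$ od -c file1
0000000   A  \n
0000002
$ od -c file4
0000000 200  \n
0000002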

Hence, I would regard files 1 and 2 as text files and the rest as non-text files: file 3 is not newline-terminated, files 4 and 5 contain an invalid character, and file 6 contains a NUL byte.
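
These criteria can be checked mechanically. A rough sketch of such a test (assuming GNU od, grep, tail, and iconv, hardcoding UTF-8 as the locale encoding, and ignoring the {LINE_MAX} limit):

#!/bin/sh
# Approximate the POSIX "text file" definition for the file named in $1.
f=$1
# 1. No NUL bytes: od -to1 prints each byte as three octal digits.
if od -An -to1 "$f" | grep -qw 000; then
    echo "$f: contains NUL - not a text file"; exit 1
fi
# 2. A non-empty file must end with a newline (octal 012).
if [ -s "$f" ] && [ "$(tail -c 1 "$f" | od -An -to1 | tr -d ' ')" != 012 ]; then
    echo "$f: not newline-terminated - not a text file"; exit 1
fi
# 3. Every byte sequence must form a valid character; iconv exits
#    nonzero on encountering an invalid UTF-8 sequence.
if ! iconv -f UTF-8 -t UTF-8 "$f" >/dev/null 2>&1; then
    echo "$f: invalid character - not a text file"; exit 1
fi
echo "$f: text file (by the POSIX definition, ignoring {LINE_MAX})"

Run over the six files, this accepts only file1 and file2, matching the classification above.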

However, the file command does not seem to fully agree with me; it lists files 3, 4, and 5 as text files:

$ file --mime-type file*
file1: text/plain
file2: text/plain
file3: text/plain
file4: text/plain
file5: text/plain
file6: application/octet-stream

Why does the file command fail to identify files 3, 4, and 5 as non-text files (I'm assuming it can't possibly be a bug), even though my locale is en_US.UTF-8? Or what have I misunderstood?

Answer
+4
−0

You might be enlightened by reading the man page for file(1).

A brief quotation:

This manual page documents version 5.35 of the file command. file tests each argument in an attempt to classify it. There are three sets of tests, performed in this order: filesystem tests, magic tests, and language tests. The first test that succeeds causes the file type to be printed. The type printed will usually contain one of the words text (the file contains only printing characters and a few common control characters and is probably safe to read on an ASCII terminal), executable (the file contains the result of compiling a program in a form understandable to some UNIX kernel or another), or data meaning anything else (data is usually “binary” or non-printable).

Then we skip a bit:

If a file does not match any of the entries in the magic file, it is examined to see if it seems to be a text file. ASCII, ISO-8859-x, non-ISO 8-bit extended-ASCII character sets (such as those used on Macintosh and IBM PC systems), UTF-8-encoded Unicode, UTF-16-encoded Unicode, and EBCDIC character sets can be distinguished by the different ranges and sequences of bytes that constitute printable text in each set. If a file passes any of these tests, its character set is reported. ASCII, ISO-8859-x, UTF-8, and extended-ASCII files are identified as “text” because they will be mostly readable on nearly any terminal; UTF-16 and EBCDIC are only “character data” because, while they contain text, it is text that will require translation before it can be read. In addition, file will attempt to determine other characteristics of text-type files. If the lines of a file are terminated by CR, CRLF, or NEL, instead of the Unix-standard LF, this will be reported. Files that contain embedded escape sequences or overstriking will also be identified.

So: you have very small UTF-8 files. file(1) does as specified by its man page, and announces that pretty much all of them are plausibly text.
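
As a quick check of that heuristic, running file without --mime-type shows the categories it actually assigned (a sketch; the exact wording varies between file versions, but the categories match what Quasímodo reports in the comments below):

$ file file*
file1: ASCII text
file2: ASCII text
file3: ASCII text, with no line terminators
file4: Non-ISO extended-ASCII text
file5: Non-ISO extended-ASCII text, with no line terminators
file6: data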


General comments (4 comments)
Quasímodo wrote over 3 years ago

Indeed, I had only read man 1p file. To be honest, I don't see how the information you bring explains the matter. Note that none of the files are reported as UTF-8: the first three are reported as "ASCII", the next two as "non-ISO extended-ASCII", and the last simply as "data". An important question: do we agree that only files 1 and 2 are text files?

dsr wrote over 3 years ago

I don't think I can agree with you. According to the filesystem, file classes are "directory", "fifo", "symbolic link", "hard link", etc. Interpreting the significance of a file's contents is extremely specific to the task the user is trying to accomplish.

I think you are asking for a Platonic category of text file, about which reasonable people can and will disagree. Asking for help with the distinction that is actually significant to you might be a better move here.

Quasímodo wrote over 3 years ago

I'm not really asking for a Platonic category of file, but for the POSIX category. Most text-processing utilities (sed, grep, awk, ...) assume text files as specified by POSIX. To keep my applications portable, I try to conform to POSIX. But many users and editors, for example, don't newline-terminate the last "line" of a file. May the utilities then break? That surely depends on whether such files are classified by POSIX as text files or not. Thus my interest in this question.
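
For instance, a plain while-read loop quietly drops an unterminated final "line", since read returns a nonzero status when it hits end-of-file before a newline (a minimal illustration):

$ printf 'first\nAA' | while IFS= read -r line; do echo "read: $line"; done
read: first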

celtschk wrote over 3 years ago

@Quasimodo: Note that file doesn't give a definitive answer anyway. For a start, it only looks at the beginning of a file, so if a file starts with text but then continues with arbitrary binary data, file may still classify it as text. If you want to know whether the last character of a file is a newline, maybe you should test that directly: tail -c 1 gives you the last byte of the file.
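
A sketch of that last suggestion: command substitution strips trailing newlines, so $(tail -c 1 file) is empty exactly when the file's last byte is a newline (or the file is empty):

for f in file1 file2 file3 file4 file5 file6; do
    # tail -c 1 prints the last byte; the quoted substitution is
    # empty if and only if that byte is a newline
    if [ -z "$(tail -c 1 "$f")" ]; then
        echo "$f ends with a newline"
    else
        echo "$f does not end with a newline"
    fi
done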