
Why does the file command fail to recognize non-text files as such?


POSIX defines

  • Text file as

    A file that contains characters organized into zero or more lines. The lines do not contain NUL characters and none can exceed {LINE_MAX} bytes in length, including the <newline> character.

  • Line as

    A sequence of zero or more non- <newline> characters plus a terminating <newline> character.

  • Character as

    A sequence of one or more bytes representing a single graphic symbol or control code.

Consider then six files, each with two bytes, created with these printf commands (bytes given in octal):

printf "\101\012" > file1 #A<newline>
printf "\010\012" > file2 #<backspace><newline>
printf "\101\101" > file3 #AA (no terminating <newline>)
printf "\200\012" > file4 #<invalid byte><newline>
printf "\200\200" > file5 #<invalid byte><invalid byte>
printf "\000\012" > file6 #<null><newline>

Now, in the UTF-8 encoding, the octal 012 (0x0A) is the newline character, 101 (0x41) is the graphic symbol A, 010 (0x08) is the backspace control character and 200 (0x80) is a continuation byte that never occurs as the first byte of a multi-byte sequence, so it does not form a valid character.
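As a side check (not part of the original question), iconv can confirm that a lone 0x80 byte is not valid UTF-8: it exits with an error when its input is invalid in the source encoding.

```shell
# A lone 0x80 continuation byte is rejected as UTF-8;
# a plain "A" plus newline converts cleanly.
printf '\200\012' | iconv -f UTF-8 -t UTF-8 >/dev/null 2>&1 \
    && echo 'valid UTF-8' || echo 'invalid UTF-8'
printf '\101\012' | iconv -f UTF-8 -t UTF-8 >/dev/null 2>&1 \
    && echo 'valid UTF-8' || echo 'invalid UTF-8'
```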

Hence, I would regard files 1 and 2 as text files, but the remaining as non-text files, because file 3 is not newline terminated, files 4 and 5 have an invalid character and file 6 contains a null byte.

However, the file command does not fully agree with me; it lists files 3, 4 and 5 as text files:

$ file --mime-type file*
file1: text/plain
file2: text/plain
file3: text/plain
file4: text/plain
file5: text/plain
file6: application/octet-stream

Why does the file command fail to identify files 3, 4 and 5 as non-text files, even though my locale is en_US.UTF-8 (I assume this is not a bug)? Or what have I misunderstood?


6 comments

Why don't you consider 3, 4, and 5 as text files? 3 fits the definitions given. I'm not quite sure about 4 and 5, but my first guess would be that they just didn't put that much error checking into it (0x80 is a valid continuation byte, so it can appear in valid text files) Moshi‭ 22 days ago

@Moshi True, I said 0x80 was straightforwardly invalid, but it is not. Still, it cannot be the first byte of a valid character. It necessarily follows that files 4 and 5 are either not newline terminated or contain an invalid character. File 3 is also not newline terminated (even in ASCII encoding). Quasímodo‭ 22 days ago

They don't have to be newline terminated. A newline termination defines a line, yes, but a text file can have zero lines. Moshi‭ 22 days ago

Note that file is not a POSIX utility; the question it answers is not “does this conform to POSIX's idea of what is a text file” but “is this file likely to contain human-readable text”. celtschk‭ 22 days ago

@Moshi But then any kind of file would be a text file, since you could say it contained zero lines. Even a file with a NUL would be a text file. Instead, I interpret that if the file contains non-lines, then it is not a text file. In that sense, an empty text file would be the only case for which "zero lines" applies. Quasímodo‭ 22 days ago


1 answer


You might be enlightened by reading the man page for file(1).

A brief quotation:

This manual page documents version 5.35 of the file command. file tests each argument in an attempt to classify it. There are three sets of tests, performed in this order: filesystem tests, magic tests, and language tests. The first test that succeeds causes the file type to be printed. The type printed will usually contain one of the words text (the file contains only printing characters and a few common control characters and is probably safe to read on an ASCII terminal), executable (the file contains the result of compiling a program in a form understandable to some UNIX kernel or another), or data meaning anything else (data is usually “binary” or non-printable).

Then we skip a bit:

If a file does not match any of the entries in the magic file, it is examined to see if it seems to be a text file. ASCII, ISO-8859-x, non-ISO 8-bit extended-ASCII character sets (such as those used on Macintosh and IBM PC systems), UTF-8-encoded Unicode, UTF-16-encoded Unicode, and EBCDIC character sets can be distinguished by the different ranges and sequences of bytes that constitute printable text in each set. If a file passes any of these tests, its character set is reported. ASCII, ISO-8859-x, UTF-8, and extended-ASCII files are identified as “text” because they will be mostly readable on nearly any terminal; UTF-16 and EBCDIC are only “character data” because, while they contain text, it is text that will require translation before it can be read. In addition, file will attempt to determine other characteristics of text-type files. If the lines of a file are terminated by CR, CRLF, or NEL, instead of the Unix-standard LF, this will be reported. Files that contain embedded escape sequences or overstriking will also be identified.

So: you have very small UTF-8 files. file(1) does as specified by its man page, and announces that pretty much all of them are plausibly text.
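If the goal is to test POSIX text-file conformance directly rather than rely on file's heuristics, a rough sketch might combine a NUL check with a trailing-newline check. This is a side note, not the answer's method: it does not validate the locale's character encoding or the {LINE_MAX} limit, so file 4 from the question still passes (its invalid byte goes undetected), although file 5 is rejected for lacking a trailing newline.

```shell
#!/bin/sh
# Rough POSIX-style text check (a sketch, not a complete test):
# rejects files that contain NUL bytes or whose last byte is not
# a newline. Character-encoding validity and {LINE_MAX} are not
# checked here.
is_posix_text() {
    f=$1
    # An empty file has zero lines and is a valid text file.
    [ -s "$f" ] || return 0
    # Reject embedded NUL bytes: deleting them must not change the size.
    [ "$(tr -d '\0' < "$f" | wc -c)" -eq "$(wc -c < "$f")" ] || return 1
    # The last byte must be a newline.
    [ "$(tail -c 1 "$f" | wc -l)" -eq 1 ]
}
```

Applied to the question's six files, this sketch accepts files 1, 2 and 4, and rejects 3 and 5 (not newline terminated) and 6 (embedded NUL).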


4 comments

Indeed, I had only read man 1p file. To be honest, I don't see how this information explains the matter. Note that none of the files are reported as UTF-8; instead the first three are reported as "ASCII", the next two as "non-ISO extended-ASCII" and the last as simply "data". An important question: Do we agree that only files 1 and 2 are text files? Quasímodo‭ 22 days ago

I don't think I can agree with you. According to the filesystem, file classes are "directory", "fifo", "symbolic link", "hard link", etc. Interpreting the significance of the contents of a file is extremely specific to the task that the user is trying to accomplish. I think you are currently asking for a Platonic category of text file, about which reasonable people can and will disagree. Asking for help in doing the differentiation that is actually significant to you might be a good move here. dsr‭ 19 days ago

I don't really ask for a Platonic category of file, but for the POSIX category. Most text-processing utilities (sed, grep, awk, ...) assume text files in the POSIX specification. To keep my applications portable, I try to conform to POSIX. But then there are many users/editors that, for example, don't newline terminate the last "line" of a file. May the utilities break? That surely depends on whether the files are classified by POSIX as text files or not. Hence my interest in this question. Quasímodo‭ 18 days ago

@Quasimodo: Note that file doesn't give a definitive answer anyway. For a start, it only looks at the beginning of a file, so if a file starts with text but then continues with arbitrary binary data, file may still classify it as text. If you want to know whether the last character of a file is a newline character, maybe you should test that directly. tail -c 1 gives you the last byte of the file. celtschk‭ 4 days ago
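The tail -c 1 test mentioned in the last comment can be sketched as follows (a side note, not from the comment thread): piping the last byte into wc -l yields 1 exactly when the file ends with a newline.

```shell
printf 'abc\n' > ends_nl   # newline terminated
printf 'abc'   > no_nl     # not newline terminated
[ "$(tail -c 1 ends_nl | wc -l)" -eq 1 ] && echo 'ends_nl: newline terminated'
[ "$(tail -c 1 no_nl | wc -l)" -eq 0 ] && echo 'no_nl: missing final newline'
```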

