The BOM is, simply, the Unicode codepoint U+FEFF, encoded in the current encoding. A text file beginning with the bytes FE FF suggests that the file is encoded in big-endian UTF-16. The name ZWNBSP (zero-width no-break space) should be used if the BOM appears in the middle of a data stream; Unicode says it should be interpreted as a normal codepoint (namely a word joiner), not as a BOM. Since Unicode 3.2, this usage has been deprecated in favor of U+2060 WORD JOINER. The Unicode 1.0 name for this codepoint is also BYTE ORDER MARK.
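As a concrete illustration (Python shown; not part of the text above), encoding U+FEFF under different encodings yields exactly the byte sequences discussed in the sections below:

```python
# The BOM is U+FEFF encoded in whatever encoding the stream uses.
# Encoding that single codepoint shows the resulting byte sequences.
bom = "\ufeff"

print(bom.encode("utf-8").hex(" "))      # ef bb bf
print(bom.encode("utf-16-be").hex(" "))  # fe ff
print(bom.encode("utf-16-le").hex(" "))  # ff fe
```

Note that Python's plain `"utf-16"` codec (without an explicit endianness suffix) prepends a BOM in the platform's native byte order automatically.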
== UTF-8 ==
The UTF-8 representation of the BOM is the (hexadecimal) byte sequence EF BB BF. The Unicode Standard permits the BOM in
UTF-8, but does not require or recommend its use. UTF-8 always has the same byte order, so its only use in UTF-8 is to signal at the start that the text stream is encoded in UTF-8, or that it was converted to UTF-8 from a stream that contained an optional BOM. The standard also does not recommend removing a BOM when it is there, so that round-tripping between encodings does not lose information, and so that code that relies on it continues to work. The IETF recommends that if a protocol either (a) always uses UTF-8, or (b) has some other way to indicate what encoding is being used, then it "SHOULD forbid use of U+FEFF as a signature." An example of not following this recommendation is the IETF
Syslog protocol, which requires text to be in UTF-8 and also requires the BOM. Not using a BOM allows text to be backwards-compatible with software designed for
extended ASCII. For instance many programming languages permit non-
ASCII bytes in
string literals but not at the start of the file. A BOM is not necessary for detecting UTF-8 encoding. UTF-8 is a sparse encoding: a large fraction of possible byte combinations do not result in valid UTF-8 text. Binary data and text in any other encoding are likely to contain byte sequences that are invalid as UTF-8, so existence of such invalid sequences indicates the file is not UTF-8, while lack of invalid sequences is a very strong indication the text
is UTF-8. Practically the only exception is text containing only ASCII-range bytes, as this may be a non-ASCII 7-bit encoding, but this is unlikely in any modern data and even then the difference from ASCII is minor (such as changing '\' to '¥').
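The validity heuristic described above can be sketched with a hypothetical helper (Python shown for illustration): attempting a strict UTF-8 decode succeeds almost only on genuine UTF-8.

```python
# Heuristic from the text: because a large fraction of byte sequences are
# invalid UTF-8, bytes that decode cleanly are almost certainly UTF-8.
def looks_like_utf8(data: bytes) -> bool:
    try:
        data.decode("utf-8")  # strict decoding: raises on any invalid sequence
        return True
    except UnicodeDecodeError:
        return False

print(looks_like_utf8("naïve café".encode("utf-8")))  # True
print(looks_like_utf8(b"\xff\xfe\x00\x41"))           # False: 0xFF is never valid UTF-8
```

As the text notes, pure ASCII input also passes this check, which is harmless because ASCII is a subset of UTF-8.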
Microsoft compilers and interpreters, and many pieces of software on
Microsoft Windows such as
Notepad (prior to Windows 10 Build 1903) treat the BOM as a required
magic number rather than use heuristics. These tools add a BOM when saving text as UTF-8, and cannot interpret UTF-8 unless the BOM is present or the file contains only ASCII.
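Python's standard `"utf-8-sig"` codec mirrors this Windows convention (shown here as an illustration, not something the tools above use): it prepends the EF BB BF signature on encoding and strips it on decoding.

```python
# "utf-8-sig" mimics the Windows convention described above: it writes the
# EF BB BF signature when encoding and removes it when decoding.
data = "hello".encode("utf-8-sig")
print(data[:3].hex(" "))                 # ef bb bf  (the BOM signature)

# Decoding with utf-8-sig drops the BOM; plain utf-8 keeps it as U+FEFF.
print(data.decode("utf-8-sig"))          # hello
print(repr(data.decode("utf-8")))        # '\ufeffhello'
```

The second decode shows why an unstripped BOM can confuse software: the invisible U+FEFF remains at the start of the string.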
Windows PowerShell (up to 5.1) will add a BOM when it saves UTF-8 XML documents. However, PowerShell Core 6 added a utf8NoBOM value for the -Encoding parameter of some cmdlets, so that documents can be saved without a BOM.
Google Docs also adds a BOM when converting a document to a
plain text file for download.
== UTF-16 ==
In UTF-16, a BOM (U+FEFF) may be placed as the first bytes of a file or character stream to indicate the endianness (byte order) of all the 16-bit code units of the file or stream. If an attempt is made to read this stream with the wrong endianness, the bytes will be swapped, thus delivering the codepoint U+FFFE, which is defined by Unicode as a "noncharacter" that should never appear in text.

• If the 16-bit units are represented in big-endian byte order ("UTF-16BE"), the BOM is the (hexadecimal) byte sequence FE FF.
• If the 16-bit units use little-endian order ("UTF-16LE"), the BOM is the (hexadecimal) byte sequence FF FE.

For the
IANA registered charsets UTF-16BE and UTF-16LE, a byte-order mark should not be used because the names of these character sets already determine the byte order. Clause D98 of conformance (section 3.10) of the Unicode standard states, "The UTF-16 encoding scheme may or may not begin with a BOM. However, when there is no BOM, and in the absence of a higher-level protocol, the byte order of the UTF-16 encoding scheme is big-endian." Whether or not a higher-level protocol is in force is open to interpretation. Files local to a computer for which the native byte ordering is little-endian, for example, might be argued to be encoded as UTF-16LE implicitly. Therefore, the presumption of big-endian is widely ignored. The
W3C/WHATWG encoding standard used in HTML5 specifies that content labelled either "utf-16" or "utf-16le" is to be interpreted as little-endian "to deal with deployed content". However, if a byte-order mark is present, then that BOM is to be treated as "more authoritative than anything else". Without a BOM, it is still fairly reliable to detect whether text is UTF-16, and in which byte order, provided the text is sufficiently long. The characters U+0001 through U+00FF, such as line endings and spaces, which are much used even in non-Latin scripts, have a NUL high byte. If NUL bytes occur much more often at even offsets in the file, it is likely big-endian UTF-16; if mostly at odd offsets, little-endian.
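The two detection rules above can be combined in a short sketch (a hypothetical helper in Python, not part of any standard): trust a BOM when one is present, and otherwise count at which offsets the NUL bytes fall.

```python
# Sketch of UTF-16 byte-order detection as described above: a BOM is
# "more authoritative than anything else"; without one, codepoints below
# U+0100 put their NUL high byte at even offsets (big-endian) or odd
# offsets (little-endian).
def detect_utf16(data: bytes) -> str:
    if data.startswith(b"\xfe\xff"):
        return "utf-16-be"   # BOM says big-endian
    if data.startswith(b"\xff\xfe"):
        return "utf-16-le"   # BOM says little-endian
    even_nuls = sum(1 for i in range(0, len(data), 2) if data[i] == 0)
    odd_nuls = sum(1 for i in range(1, len(data), 2) if data[i] == 0)
    return "utf-16-be" if even_nuls > odd_nuls else "utf-16-le"

print(detect_utf16(b"\xfe\xff\x00H\x00i"))            # utf-16-be (from BOM)
print(detect_utf16("Hello\n".encode("utf-16-le")))    # utf-16-le (heuristic)
print(detect_utf16("Hello\n".encode("utf-16-be")))    # utf-16-be (heuristic)
```

Note the FF FE check must come after FE FF only by convention here; in a detector that also handles UTF-32, FF FE 00 00 would additionally have to be tested before FF FE, for the reason given in the UTF-32 section below.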
== UTF-32 ==
Although a BOM could be used with UTF-32, this encoding is rarely used for transmission. Otherwise the same rules as for UTF-16 are applicable. The BOM for little-endian UTF-32 is the same byte pattern as a little-endian UTF-16 BOM followed by a UTF-16 NUL character, an unusual example of the BOM being the same pattern in two different encodings. Programmers using the BOM to identify the encoding will have to decide whether UTF-32 or UTF-16 with a leading NUL character is more likely. UTF-32 is easily detected without a BOM, because every fourth byte of valid UTF-32 text is NUL.

== Byte-order marks by encoding ==