Each Unicode code point is encoded either as one or two 16-bit code units. Code points less than 2^16 ("in the BMP") are encoded with a single 16-bit code unit equal to the numerical value of the code point, as in the older UCS-2. Code points greater than or equal to 2^16 ("above the BMP") are encoded using two 16-bit code units. These two 16-bit code units are chosen from the UTF-16 surrogate range 0xD800–0xDFFF, which had not previously been assigned to characters. Values in this range are not used as characters, and UTF-16 provides no legal way to code them as individual code points. A UTF-16 stream therefore consists of single 16-bit code units outside the surrogate range, and pairs of 16-bit code units that are within the surrogate range.
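As a minimal sketch (in Python; utf16_length is an illustrative helper, not a standard function), the number of code units follows directly from the code point's value, and can be cross-checked against Python's built-in utf-16-le codec (2 bytes per code unit):

    def utf16_length(cp: int) -> int:
        """Number of 16-bit code units UTF-16 needs for a code point."""
        return 1 if cp < 0x10000 else 2

    # Cross-check against Python's built-in codec (2 bytes per code unit).
    for ch in ("A", "€", "𐐷", "😀"):
        assert utf16_length(ord(ch)) == len(ch.encode("utf-16-le")) // 2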
=== U+0000 to U+D7FF and U+E000 to U+FFFF ===
Both UTF-16 and UCS-2 encode code points in this range as single 16-bit code units that are numerically equal to the corresponding code points. These code points in the Basic Multilingual Plane (BMP) are the only code points that can be represented in UCS-2. As of Unicode 9.0, some modern non-Latin Asian, Middle-Eastern, and African scripts fall outside this range, as do most emoji characters.
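A quick check with Python's built-in utf-16-le codec confirms that the single code unit is numerically equal to the code point (the codec is little-endian, hence the byte order in the conversion):

    # A BMP code point's single UTF-16 code unit equals its scalar value.
    unit = int.from_bytes("€".encode("utf-16-le"), "little")
    assert unit == ord("€") == 0x20AC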
=== Code points from U+010000 to U+10FFFF ===
Code points from the other planes are encoded as two 16-bit code units called a surrogate pair. The first code unit is a high surrogate and the second is a low surrogate. (These are also known as "leading" and "trailing" surrogates, respectively, analogous to the leading and trailing bytes of UTF-8.) The two code units are computed as follows:
• 0x10000 is subtracted from the code point (U), leaving a 20-bit number (U') in the hex number range 0x00000–0xFFFFF.
• The high ten bits (in the range 0x000–0x3FF) are added to 0xD800 to give the first 16-bit code unit or high surrogate (W1), which will be in the range 0xD800–0xDBFF.
• The low ten bits (also in the range 0x000–0x3FF) are added to 0xDC00 to give the second 16-bit code unit or low surrogate (W2), which will be in the range 0xDC00–0xDFFF.
Illustrated visually, the distribution of U' between W1 and W2 looks like:

    U' = yyyyyyyyyyxxxxxxxxxx   // U - 0x10000
    W1 = 110110yyyyyyyyyy       // 0xD800 + yyyyyyyyyy
    W2 = 110111xxxxxxxxxx       // 0xDC00 + xxxxxxxxxx
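The arithmetic above can be sketched in a few lines of Python (the helper names are illustrative, not from any standard library):

    def encode_surrogate_pair(cp: int) -> tuple[int, int]:
        """Split a supplementary-plane code point into (W1, W2)."""
        assert 0x10000 <= cp <= 0x10FFFF
        u = cp - 0x10000               # 20-bit U'
        w1 = 0xD800 + (u >> 10)        # high ten bits -> 0xD800..0xDBFF
        w2 = 0xDC00 + (u & 0x3FF)      # low ten bits  -> 0xDC00..0xDFFF
        return w1, w2

    def decode_surrogate_pair(w1: int, w2: int) -> int:
        """Recombine a surrogate pair into the original code point."""
        assert 0xD800 <= w1 <= 0xDBFF and 0xDC00 <= w2 <= 0xDFFF
        return 0x10000 + ((w1 - 0xD800) << 10) + (w2 - 0xDC00)

    # U+10437 (𐐷), the worked example from the "Examples" section below:
    assert encode_surrogate_pair(0x10437) == (0xD801, 0xDC37)
    assert decode_surrogate_pair(0xD801, 0xDC37) == 0x10437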
Since the ranges for the high surrogates (0xD800–0xDBFF), low surrogates (0xDC00–0xDFFF), and valid BMP characters (0x0000–0xD7FF, 0xE000–0xFFFF) are
disjoint, it is not possible for a surrogate to match a BMP character, or for two adjacent
code units to look like a legal
surrogate pair. This simplifies searches a great deal. It also means that UTF-16 is
self-synchronizing on 16-bit words: whether a code unit starts a character can be determined without examining earlier code units (i.e. the type of
code unit can be determined by the ranges of values in which it falls). UTF-8 shares these advantages, but many earlier multi-byte encoding schemes (such as
Shift JIS and other Asian multi-byte encodings) did not allow unambiguous searching and could only be synchronized by re-parsing from the start of the string. UTF-16 is not self-synchronizing if one byte is lost or if traversal starts at a random byte. Because the most commonly used characters are all in the BMP, handling of surrogate pairs is often not thoroughly tested. This leads to persistent bugs and potential security holes, even in popular and well-reviewed application software.
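The range test underlying this self-synchronization can be sketched as follows (a hypothetical classifier, assuming an unsigned 16-bit input):

    def unit_type(u: int) -> str:
        """Classify a 16-bit code unit by its value alone, with no context."""
        if 0xD800 <= u <= 0xDBFF:
            return "high surrogate"    # must begin a pair
        if 0xDC00 <= u <= 0xDFFF:
            return "low surrogate"     # must end a pair
        return "single code unit"      # a complete BMP code point

    assert unit_type(0x0041) == "single code unit"   # 'A'
    assert unit_type(0xD801) == "high surrogate"
    assert unit_type(0xDC37) == "low surrogate"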
=== U+D800 to U+DFFF (surrogates) ===
The official Unicode standard says that no UTF forms, including UTF-16, can encode the surrogate code points. Since these will never be assigned a character, there should be no reason to encode them. However, Windows allows unpaired surrogates in filenames and other places, which generally means they have to be supported by software in spite of their exclusion from the Unicode standard. UCS-2, UTF-8, and
UTF-32 can encode these code points in trivial and obvious ways, and a large amount of software does so, even though the standard states that such arrangements should be treated as encoding errors. It is possible to unambiguously encode an
unpaired surrogate (a high surrogate code point not followed by a low one, or a low one not preceded by a high one) in the format of UTF-16 by using a code unit equal to the code point. The result is not valid UTF-16, but the majority of UTF-16 encoder and decoder implementations do this when translating between encodings.
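As a concrete illustration, Python's utf-16 codecs reject a lone surrogate in strict mode, while the standard 'surrogatepass' error handler emits a code unit equal to the code point, as described above:

    lone = "\ud800"  # an unpaired high surrogate

    try:
        lone.encode("utf-16-le")       # strict mode: rejected
    except UnicodeEncodeError:
        pass

    # 'surrogatepass' writes the code unit equal to the code point anyway.
    data = lone.encode("utf-16-le", "surrogatepass")
    assert data == b"\x00\xd8"         # 0xD800, little-endian
    assert data.decode("utf-16-le", "surrogatepass") == lone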
=== Examples ===
To encode U+10437 (𐐷) to UTF-16:
• Subtract 0x10000 from the code point, leaving 0x0437.
• For the high surrogate, shift right by 10 (divide by 0x400), then add 0xD800, resulting in 0x0001 + 0xD800 = 0xD801.
• For the low surrogate, take the low 10 bits (remainder of dividing by 0x400), then add 0xDC00, resulting in 0x0037 + 0xDC00 = 0xDC37.
To decode U+10437 (𐐷) from UTF-16:
• Take the high surrogate (0xD801) and subtract 0xD800, then multiply by 0x400, resulting in 0x0001 × 0x400 = 0x0400.
• Take the low surrogate (0xDC37) and subtract 0xDC00, resulting in 0x37.
• Add these two results together (0x0437), and finally add 0x10000 to get the final code point, 0x10437.

== Byte-order encoding schemes ==