There are numerous difficulties in writing software that works in both ASCII and EBCDIC.
• The gaps between letters in EBCDIC make simple code that works in ASCII fail. For example, a loop such as for (c = 'A'; c <= 'Z'; c++) would print the letters from A to Z under ASCII, but would print 41 characters (including a number of unassigned ones) under EBCDIC.
• Sorting EBCDIC puts lowercase letters before uppercase letters and letters before numbers, exactly the opposite of ASCII.
• Many programming languages, file formats, and network protocols designed for ASCII use punctuation marks (such as the caret ^, the tilde ~, the square brackets [ ], and the curly braces { }) that did not exist in EBCDIC, making translation to EBCDIC systems difficult. Workarounds such as trigraphs were used. Conversely, EBCDIC had some characters, such as the logical not ¬ and the cent sign ¢ (US cent), that were used on IBM systems and could not be translated to ASCII. The logical not character is used in the
PL/I programming language and some other IBM languages.
• The EBCDIC character NL (next line) is best treated as the ASCII LF, but because EBCDIC also contains a character called LF, this is not always done consistently.
• If seven-bit ASCII is used, there is an "unused" high bit in 8-bit bytes, and many pieces of software stored other information there. Software would also pack the seven bits and discard the eighth, such as packing five seven-bit ASCII characters in a
36-bit word. On the PDP-11, bytes with the high bit set were treated as negative numbers, behavior that was copied to C, causing unexpected problems if the high bit was set.
These all made it difficult to switch from ASCII to 8-bit EBCDIC (and also made it difficult to switch to 8-bit extended ASCII encodings).

== Code page layout ==