Null-terminated strings were produced by the .ASCIZ directive of the
PDP-11 assembly languages and the ASCIZ directive of the
MACRO-10 macro assembly language for the
PDP-10. These predate the development of the C programming language, but other forms of strings were often used. At the time C (and the languages that it was derived from) was developed, memory was extremely limited, so using only one byte of overhead to store the length of a string was attractive. The only popular alternative at that time, usually called a "Pascal string" (a more modern term is "
length-prefixed"), used a leading
byte to store the length of the string. This allowed the string to contain NUL and meant that finding its length required only one memory access (O(1), constant time), but it limited string length to 255 characters. C designer
Dennis Ritchie chose to follow the convention of null-termination to avoid the limitation on the length of a string and because maintaining the count seemed, in his experience, less convenient than using a terminator. This had some influence on CPU
instruction set design. Some CPUs in the 1970s and 1980s, such as the
Zilog Z80 and the
DEC VAX, had dedicated instructions for handling length-prefixed strings. However, as the null-terminated string gained traction, CPU designers began to take it into account, as seen for example in IBM's decision to add the "Logical String Assist" instructions to the
ES/9000 520 in 1992 and the vector string instructions to the
IBM z13 in 2015.
FreeBSD developer
Poul-Henning Kamp, writing in
ACM Queue, referred to the victory of null-terminated strings over a 2-byte (not one-byte) length as "the most expensive one-byte mistake" ever.

== Limitations ==