The history of user interfaces can be divided into the following phases according to the dominant type of user interface:

=== 1945–1968: Batch interface ===
In the batch era, computing power was extremely scarce and expensive. User interfaces were rudimentary. Users had to accommodate computers rather than the other way around; user interfaces were considered overhead, and software was designed to keep the processor at maximum utilization with as little overhead as possible. The input side of the user interfaces for batch machines was mainly
punched cards or equivalent media like
paper tape. The output side added
line printers to these media. With the limited exception of the system
operator's console, human beings did not interact with batch machines in real time at all. Submitting a job to a batch machine involved first preparing a deck of punched cards that described a program and its dataset. The program cards were not punched on the computer itself but on
keypunches, specialized, typewriter-like machines that were notoriously bulky, unforgiving, and prone to mechanical failure. The software interface was similarly unforgiving, with very strict syntaxes designed to be parsed by the smallest possible compilers and interpreters. Once the cards were punched, one would drop them in a job queue and wait. Eventually, operators would feed the deck to the computer, perhaps mounting
magnetic tapes to supply another dataset or helper software. The job would generate a printout, containing final results or an abort notice with an attached error log. Successful runs might also write a result on magnetic tape or generate some data cards to be used in a later computation. The
turnaround time for a single job often spanned entire days. If one was very lucky, it might be hours; there was no real-time response. But there were worse fates than the card queue; some computers required an even more tedious and error-prone process of toggling in programs in binary code using console switches. The very earliest machines had to be partly rewired to incorporate program logic into themselves, using devices known as
plugboards. Early batch systems gave the currently running job the entire computer; program decks and tapes had to include what we would now think of as
operating system code to talk to I/O devices and do whatever other housekeeping was needed. Midway through the batch period, after 1957, various groups began to experiment with so-called "
load-and-go" systems. These used a
monitor program which was always resident on the computer. Programs could call the monitor for services. Another function of the monitor was to do better error checking on submitted jobs, catching errors earlier and more intelligently and generating more useful feedback to the users. Thus, monitors represented the first step towards both operating systems and explicitly designed user interfaces.
=== 1969–present: Command-line user interface ===
Command-line interfaces (
CLIs) evolved from batch monitors connected to the system console. Their interaction model was a series of request-response transactions, with requests expressed as textual commands in a specialized vocabulary. Latency was far lower than for batch systems, dropping from days or hours to seconds. Accordingly, command-line systems allowed the user to change their mind about later stages of the transaction in response to real-time or near-real-time feedback on earlier results. Software could be exploratory and interactive in ways not possible before. But these interfaces still placed a relatively heavy
mnemonic load on the user, requiring a serious investment of effort and learning time to master. The earliest command-line systems combined
teleprinters with computers, adapting a mature technology that had proven effective for mediating the transfer of information over wires between human beings. Teleprinters had originally been invented as devices for automatic telegraph transmission and reception; they had a history going back to 1902 and had already become well-established in newsrooms and elsewhere by 1920. In reusing them, economy was certainly a consideration, but psychology and the
rule of least surprise mattered as well; teleprinters provided a point of interface with the system that was familiar to many engineers and users. The widespread adoption of video-display terminals (VDTs) in the mid-1970s ushered in the second phase of command-line systems. These cut latency further, because characters could be thrown on the phosphor dots of a screen more quickly than a printer head or carriage can move. They helped quell conservative resistance to interactive programming by cutting ink and paper consumables out of the cost picture, and were to the first TV generation of the late 1950s and 60s even more iconic and comfortable than teleprinters had been to the computer pioneers of the 1940s. Just as importantly, the existence of an accessible screen—a two-dimensional display of text that could be rapidly and reversibly modified—made it economical for software designers to deploy interfaces that could be described as visual rather than textual. The pioneering applications of this kind were computer games and text editors; close descendants of some of the earliest specimens, such as
rogue(6) and
vi(1), are still a live part of
Unix tradition.
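The request-response transaction model described above can be sketched as a simple read-eval loop. This is an illustrative toy, not any historical system: the command vocabulary (`echo`, `upper`) is invented here purely to show how each textual request yields immediate textual feedback.

```python
# Minimal sketch of a CLI's request-response transaction loop.
# The command vocabulary below is hypothetical, for illustration only.

def run_command(request: str) -> str:
    """Parse one textual command and return a textual response."""
    parts = request.split()
    if not parts:
        return ""
    command, args = parts[0], parts[1:]
    if command == "echo":    # repeat the arguments back verbatim
        return " ".join(args)
    if command == "upper":   # transform text, like a tiny utility program
        return " ".join(args).upper()
    return f"{command}: command not found"

def repl(lines):
    """One request-response transaction per input line."""
    return [run_command(line) for line in lines]

print(repl(["echo hello", "upper hello", "frobnicate"]))
```

Because each response arrives in seconds rather than days, the user can adjust the next command in light of the previous result, which is exactly the exploratory, interactive style the text describes.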
=== 1985: SAA user interface or text-based user interface ===
In 1985, with the beginning of Microsoft Windows and other graphical user interfaces, IBM created what is called the Systems Application Architecture (SAA) standard, which includes the Common User Access (CUA) derivative. CUA successfully created what we know and use today in Windows, and most of the more recent MS-DOS or Windows console applications use that standard as well. It defined that a pulldown menu system should be at the top of the screen, a status bar at the bottom, and that shortcut keys should stay the same for all common functionality (F2 to Open, for example, would work in all applications that followed the SAA standard). This greatly helped the speed at which users could learn an application, so it caught on quickly and became an industry standard.
=== 1968–present: Graphical user interface (GUI) ===
• 1968 – Douglas Engelbart demonstrated NLS, a system which uses a mouse, pointers, hypertext, and multiple windows.
• 1970 – Researchers at Xerox Palo Alto Research Center (many from SRI) develop the WIMP paradigm (Windows, Icons, Menus, Pointers).
• 1981 – Xerox Star: focus on WYSIWYG. Commercial failure (25K sold) due to cost ($16K each), performance (minutes to save a file, a couple of hours to recover from a crash), and poor marketing.
• 1982 – Rob Pike and others at Bell Labs designed Blit, which was released in 1984 by AT&T and Teletype as the DMD 5620 terminal.
• 1984 – Apple Macintosh popularizes the GUI. Its Super Bowl commercial, shown twice, was the most expensive commercial ever made at that time.
• 1984 – MIT's X Window System: hardware-independent platform and networking protocol for developing GUIs on UNIX-like systems.
• 1985 – Windows 1.0 – provided a GUI interface to MS-DOS. No overlapping windows (tiled instead).
• 1985 – Microsoft and IBM start work on OS/2, meant to eventually replace MS-DOS and Windows.
• 1986 – Apple threatens to sue Digital Research because their GUI desktop looked too much like Apple's Mac.
• 1987 – Windows 2.0 – Overlapping and resizable windows, keyboard and mouse enhancements.
• 1987 – Macintosh II: first full-color Mac.
• 1988 – OS/2 1.10 Standard Edition (SE) has a GUI written by Microsoft that looks a lot like Windows 2.
• Mid-2000s – UIs that used 3D-rendered elements became popular and were used on several operating systems, including Mac OS X and Windows XP/Vista.

== Interface design ==