In an 1886 letter,
Charles Sanders Peirce described how logical operations could be carried out by electrical switching circuits. During 1880-81 he showed that
NOR gates alone (or
NAND gates alone) can be used to reproduce the functions of all the other
logic gates, but this work remained unpublished until 1933. The first published proof was by
Henry M. Sheffer in 1913, so the NAND logical operation is sometimes called
the Sheffer stroke; the
logical NOR is sometimes called ''Peirce's arrow''. Consequently, these gates are sometimes called ''universal logic gates''. Eventually,
vacuum tubes replaced relays for logic operations.
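To illustrate this universality, the following is a minimal sketch in Python (an illustrative reconstruction, not drawn from Peirce's or Sheffer's own work) showing that NOT, AND, and OR can each be built from NAND alone:

<syntaxhighlight lang="python">
# Illustrative sketch: deriving the basic gates from NAND alone.
def nand(a: bool, b: bool) -> bool:
    return not (a and b)

def not_(a: bool) -> bool:
    return nand(a, a)               # NOT x = x NAND x

def and_(a: bool, b: bool) -> bool:
    return not_(nand(a, b))         # x AND y = NOT (x NAND y)

def or_(a: bool, b: bool) -> bool:
    return nand(not_(a), not_(b))   # x OR y = (NOT x) NAND (NOT y), by De Morgan

# Exhaustively checking all inputs confirms the identities.
for a in (False, True):
    for b in (False, True):
        assert not_(a) == (not a)
        assert and_(a, b) == (a and b)
        assert or_(a, b) == (a or b)
</syntaxhighlight>

A symmetric construction works for NOR, which is why either gate suffices on its own.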
Lee de Forest's modification, in 1907, of the
Fleming valve could be used as a logic gate.
Ludwig Wittgenstein introduced a version of the 16-row
truth table as proposition 5.101 of
Tractatus Logico-Philosophicus (1921).
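As a brief aside on the table's size: a truth function of two propositions assigns an output to each of the 2^2 = 4 input combinations, so there are 2^4 = 16 such functions, one per row of Wittgenstein's table. A minimal Python enumeration (illustrative only, not from the Tractatus):

<syntaxhighlight lang="python">
from itertools import product

# The 4 possible input combinations for two propositions: TT, TF, FT, FF.
inputs = list(product((True, False), repeat=2))

# Each 4-tuple of outputs defines one of the 2**4 = 16 two-place truth functions.
functions = list(product((True, False), repeat=len(inputs)))
assert len(functions) == 16

for outputs in functions:
    print("".join("T" if v else "F" for v in outputs))
    # e.g. "TTTT" is the tautology, "FFFF" the contradiction
</syntaxhighlight>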
Walther Bothe, inventor of the
coincidence circuit, received part of the 1954
Nobel Prize in Physics for creating the first modern electronic AND gate in 1924.
Konrad Zuse designed and built electromechanical logic gates for his computer
Z1 (from 1935 to 1938). The first recorded idea of using
digital electronics for computing was the 1931 paper "The Use of Thyratrons for High Speed Automatic Counting of Physical Phenomena" by
C. E. Wynn-Williams. From 1934 to 1936,
NEC engineer
Akira Nakashima,
Claude Shannon, and
Victor Shestakov published papers introducing
switching circuit theory, using digital electronics for
Boolean algebraic operations. In 1936
Alan Turing published his seminal paper
"On Computable Numbers, with an Application to the Entscheidungsproblem", in which he modeled computation in terms of a one-dimensional storage tape, leading to the idea of the
Universal Turing machine and
Turing-complete systems. The first digital electronic computer was developed between April 1936 and June 1939 at the IBM Patent Department in Endicott, New York, by Arthur Halsey Dickinson. With this computer, IBM introduced a calculating device with a keyboard, a processor, and electronic output (display). Its competitor was the digital electronic computer NCR3566, developed at NCR in Dayton, Ohio, by Joseph Desch and Robert Mumma between April and August 1939. The IBM and NCR machines were decimal, executing addition and subtraction in binary position code. In December 1939
John Atanasoff and
Clifford Berry completed their experimental model to prove the concept of the
Atanasoff–Berry computer (ABC), which had begun development in 1937. This experimental model was binary, executed addition and subtraction in octal binary code, and was the first binary digital
electronic computing device. The Atanasoff–Berry computer was intended to solve systems of linear equations, though it was not programmable. The computer was never truly completed due to Atanasoff's departure from
Iowa State University in 1942 to work for the United States Navy. Many people credit the ABC with ideas that were used in later developments during the age of early electronic computing. The
Z3 computer, built by
German inventor
Konrad Zuse in 1941, was the first programmable, fully automatic computing machine, but it was not electronic. During World War II, ballistics computing was done by women, who were hired as "computers." The term "computer" continued to refer mostly to these women (a role now called "operator") until 1945, after which it took on the modern meaning of machinery that it holds today. The
ENIAC (Electronic Numerical Integrator And Computer) was the first electronic general-purpose computer, announced to the public in 1946. It was Turing-complete, digital, and capable of being reprogrammed to solve a full range of computing problems. Women implemented the programming for machines like the ENIAC, and men created the hardware.
William Shockley,
John Bardeen and
Walter Brattain at
Bell Labs invented the first working
transistor, the
point-contact transistor, in 1947, followed by the
bipolar junction transistor in 1948. At the
University of Manchester in 1953, a team under the leadership of
Tom Kilburn designed and built the first
transistorized computer, called the
Transistor Computer, a machine using the newly developed transistors instead of valves. The first stored-program transistor computer was the ETL Mark III, developed by Japan's Electrotechnical Laboratory from 1954 to 1956. In 1954, 95% of computers in service were being used for engineering and scientific purposes. {{Blockquote|Computers other than the E101 are divided into four classes:
desk calculators; general-purpose punched-card computers such as the
International Business Machines Corporation CPC or
Remington Rand 409;
magnetic-drum computers; and
electronic-memory computers.}}
==Personal computers==
The
metal–oxide–silicon field-effect transistor (MOSFET), also known as the MOS transistor, was invented at Bell Labs between 1955 and 1960. It was the first truly compact transistor that could be
miniaturised and
mass-produced for a wide range of uses. The MOSFET is the most widely used transistor in computers, and is the fundamental building block of
digital electronics. The
silicon-gate MOS integrated circuit was developed by
Federico Faggin at
Fairchild Semiconductor in 1968. This led to the development of the first single-chip
microprocessor, the
Intel 4004. The 4004 was developed from 1969 to 1970 in a project led by Intel's Federico Faggin,
Marcian Hoff, and
Stanley Mazor, and Busicom's Masatoshi Shima. The chip was mainly designed and realized by Faggin, with his silicon-gate MOS technology. Texas Instruments later used its own TMS9900 microprocessor in the TI-99/4 and
TI-99/4A computers. The 1980s brought about significant advances with microprocessors that greatly impacted the fields of engineering and other sciences. The
Motorola 68000 microprocessor had a processing speed far superior to the other microprocessors in use at the time, allowing the newer
microcomputers built around it to accomplish considerably more computing. This was evident in the 1983 release of the
Apple Lisa. The Lisa was one of the first personal computers with a
graphical user interface (GUI) to be sold commercially. It ran on the Motorola 68000 CPU and used dual floppy disk drives and a 5 MB hard drive for storage. The machine also had 1 MB of
RAM, which allowed software to run from disk without constantly rereading it. After the Lisa's commercial failure, Apple released its
first Macintosh computer, still running on the Motorola 68000 microprocessor but, to lower the price, equipped with only 128 KB of RAM, one floppy drive, and no hard drive. In the late 1980s and early 1990s, computers became more useful for personal and work purposes, such as
word processing. In 1989, Apple released the
Macintosh Portable; it was heavy and extremely expensive, costing US$7,300. At launch it was one of the most powerful laptops available, but due to the price and weight it was not met with great success and was discontinued only two years later. That same year Intel introduced the Touchstone Delta
supercomputer, which had 512 microprocessors. This technological advancement was very significant, serving as a model for some of the fastest multi-processor systems in the world. Caltech researchers used it as a prototype for projects such as real-time processing of satellite images and simulation of molecular models for various fields of research.
==Supercomputers==
In terms of supercomputing, the first widely acknowledged supercomputer was the
Control Data Corporation (CDC)
6600 built in 1964 by
Seymour Cray. Its maximum speed was 40 MHz or 3 million floating point operations per second (
FLOPS). The CDC 6600 was replaced by the
CDC 7600 in 1969; although its normal clock speed was no faster than the 6600's, the 7600 was still faster due to its peak processing speed, which was approximately 30 times that of the 6600. Although CDC was a leader in supercomputers, its relationship with Seymour Cray (which had already been deteriorating) completely collapsed. In 1972, Cray left CDC and began his own company,
Cray Research Inc. With support from Wall Street investors, an industry fueled by the Cold War, and without the restrictions he had faced within CDC, he created the
Cray-1 supercomputer. With a clock speed of 80 MHz and a performance of 136 megaFLOPS, Cray made a name for himself in the computing world. By 1982, Cray Research produced the
Cray X-MP equipped with multiprocessing and in 1985 released the
Cray-2, which continued the trend of multiprocessing and reached 1.9 gigaFLOPS. Cray Research developed the
Cray Y-MP in 1988 but afterward struggled to continue producing supercomputers. This was largely because the Cold War had ended: demand for cutting-edge computing by colleges and the government declined drastically, while demand for microprocessing units increased. In 1998,
David Bader developed the first
Linux supercomputer using commodity parts. While at the University of New Mexico, Bader sought to build a supercomputer running Linux using consumer off-the-shelf parts and a high-speed low-latency interconnection network. The prototype utilized an Alta Technologies "AltaCluster" of eight dual-processor 333 MHz Intel Pentium II computers running a modified Linux kernel. Bader ported a significant amount of software to provide Linux support for necessary components, as well as code from members of the National Computational Science Alliance (NCSA) to ensure interoperability, as none of it had been run on Linux previously. Using the successful prototype design, he led the development of "RoadRunner," the first Linux supercomputer for open use by the national science and engineering community via the National Science Foundation's National Technology Grid. RoadRunner was put into production use in April 1999. At the time of its deployment, it was considered one of the 100 fastest supercomputers in the world. Though Linux-based clusters using consumer-grade parts, such as
Beowulf, existed before the development of Bader's prototype and RoadRunner, they lacked the scalability, bandwidth, and
parallel computing capabilities to be considered "true" supercomputers. Today, supercomputers are still used by governments and educational institutions for computations such as simulating natural disasters and searching for disease-related genetic variants within a population. As of 2024, the fastest supercomputer is
El Capitan.

==Navigation and astronomy==