The earliest preliminary IDS concept was delineated in 1980 by James Anderson at the
National Security Agency and consisted of a set of tools intended to help administrators review audit trails. User access logs, file access logs, and system event logs are examples of audit trails.
Fred Cohen noted in 1987 that it is impossible to detect an intrusion in every case, and that the resources needed to detect intrusions grow with the amount of usage.
Dorothy E. Denning, assisted by Peter G. Neumann, published a model of an IDS in 1986 that formed the basis for many systems today. Her model used statistics for anomaly detection, and resulted in an early IDS at SRI International named the Intrusion Detection Expert System (IDES), which ran on Sun workstations and could consider both user- and network-level data. IDES had a dual approach: a rule-based Expert System to detect known types of intrusions, plus a statistical anomaly detection component based on profiles of users, host systems, and target systems. The author of "IDES: An Intelligent System for Detecting Intruders", Teresa F. Lunt, proposed adding an artificial neural network as a third component. She said all three components could then report to a resolver. SRI followed IDES in 1993 with the Next-generation Intrusion Detection Expert System (NIDES).
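The profile-based statistical component of Denning's model can be illustrated with a minimal sketch. This is not IDES code; the metric, the threshold, and the sample values below are hypothetical. The idea is that each subject's normal behaviour is summarised as a statistical baseline, and new audit records are flagged when they deviate too far from it.

<syntaxhighlight lang="python">
# Hypothetical sketch of Denning-style statistical anomaly detection:
# build a per-user profile (mean/std of a metric such as logins per hour)
# and flag audit records that deviate beyond a fixed threshold.
from statistics import mean, stdev

def build_profile(history):
    """history: list of observed metric values for one user."""
    return {"mean": mean(history), "std": stdev(history)}

def is_anomalous(profile, observation, threshold=3.0):
    """Flag observations more than `threshold` standard deviations
    from the user's historical mean (a simple z-score test)."""
    if profile["std"] == 0:
        return observation != profile["mean"]
    z = abs(observation - profile["mean"]) / profile["std"]
    return z > threshold

# Example: a user who normally logs in 4-6 times per hour
profile = build_profile([4, 5, 6, 5, 4, 5])
print(is_anomalous(profile, 5))   # False - within the profile
print(is_anomalous(profile, 40))  # True  - flagged as anomalous
</syntaxhighlight>

A real system would maintain many such metrics per subject and update the profiles continuously, but the flagging principle is the same.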
The Multics intrusion detection and alerting system (MIDAS), an expert system using P-BEST and
Lisp, was developed in 1988 based on the work of Denning and Neumann. Haystack was also developed in that year using statistics to reduce audit trails. In 1986 the
National Security Agency started an IDS research transfer program under
Rebecca Bace. Bace later published the seminal text on the subject,
Intrusion Detection, in 2000. Wisdom & Sense (W&S) was a statistics-based anomaly detector developed in 1989 at the
Los Alamos National Laboratory. W&S created rules based on statistical analysis, and then used those rules for anomaly detection. In 1990, the Time-based Inductive Machine (TIM) did anomaly detection using inductive learning of sequential user patterns in
Common Lisp on a
VAX 3500 computer. The Network Security Monitor (NSM) performed masking on access matrices for anomaly detection on a Sun-3/50 workstation. The Information Security Officer's Assistant (ISOA) was a 1990 prototype that considered a variety of strategies including statistics, a profile checker, and an expert system. ComputerWatch at
AT&T Bell Labs used statistics and rules for audit data reduction and intrusion detection. Then, in 1991, researchers at the
University of California, Davis created a prototype Distributed Intrusion Detection System (DIDS), which was also an expert system. The Network Anomaly Detection and Intrusion Reporter (NADIR), also in 1991, was a prototype IDS developed at the Los Alamos National Laboratory's Integrated Computing Network (ICN), and was heavily influenced by the work of Denning and Lunt. NADIR used a statistics-based anomaly detector and an expert system. The
Lawrence Berkeley National Laboratory announced
Bro in 1998, which used its own rule language for packet analysis from
libpcap data. Network Flight Recorder (NFR) in 1999 also used libpcap. APE was developed as a packet sniffer, also using libpcap, in November 1998, and was renamed Snort one month later. Snort has since become the most widely used IDS/IPS system in the world, with over 300,000 active users. It can monitor both local systems and remote capture points using the TZSP protocol. The Audit Data Analysis and Mining (ADAM) IDS in 2001 used tcpdump to build profiles of rules for classifications.
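The libpcap-based approach shared by Bro, NFR, and APE/Snort can be sketched roughly as follows. This example uses the Python scapy library as a stand-in for the C libpcap API, and the port-based "rule" is purely illustrative rather than taken from any of those systems.

<syntaxhighlight lang="python">
# Hypothetical sketch of rule-style inspection over live packet capture,
# in the spirit of libpcap-based tools such as Bro and Snort (scapy is used
# here instead of the C libpcap API; the "rule" below is illustrative only).
from scapy.all import sniff, IP, TCP

SUSPICIOUS_PORTS = {23, 2323}  # e.g. telnet - an illustrative rule

def inspect(pkt):
    """Apply a trivial rule to each captured packet and report matches."""
    if pkt.haslayer(IP) and pkt.haslayer(TCP):
        if pkt[TCP].dport in SUSPICIOUS_PORTS:
            print(f"ALERT: {pkt[IP].src} -> {pkt[IP].dst}:{pkt[TCP].dport}")

# Capture 100 TCP packets from the default interface (requires privileges).
sniff(filter="tcp", prn=inspect, count=100)
</syntaxhighlight>

Production systems compile much richer rule languages over the same capture stream rather than hard-coding checks like this.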
In 2003, Yongguang Zhang and Wenke Lee argued for the importance of IDS in networks with mobile nodes. In 2015, Viegas and his colleagues proposed an anomaly-based intrusion detection engine aimed at System-on-Chip (SoC) platforms, for instance for applications in the Internet of Things (IoT). The proposal applies machine learning for anomaly detection and provides an energy-efficient implementation of Decision Tree, Naive Bayes, and k-Nearest Neighbors classifiers on an Atom CPU, along with a hardware-friendly implementation on an FPGA. It was the first work in the literature to implement each classifier equivalently in software and hardware and to measure the energy consumption of both. It was also the first to measure the energy consumption of extracting each feature used for network packet classification, in both software and hardware.

== Regulatory requirements ==