The names Reno and Tahoe are the names of releases of the BSD UNIX operating system, and were used to refer to the congestion control algorithms (CCAs) at least as early as a 1996 paper by Kevin Fall and Sally Floyd.

One possible classification is according to the following properties:
• the type and amount of feedback received from the network
• incremental deployability on the current Internet
• the aspect of performance it aims to improve: high bandwidth-delay product networks (B); lossy links (L); fairness (F); advantage to short flows (S); variable-rate links (V); speed of convergence (C)
• the fairness criterion it uses

Some well-known congestion avoidance mechanisms are classified by this scheme as follows:
=== TCP Tahoe and Reno ===
The TCP Tahoe and Reno algorithms were retrospectively named after the versions or flavors of the 4.3BSD operating system in which each first appeared (which were themselves named after Lake Tahoe and the nearby city of Reno, Nevada). The Tahoe algorithm first appeared in 4.3BSD-Tahoe (which was made to support the CCI Power 6/32 "Tahoe" minicomputer), and was later made available to non-AT&T licensees as part of the 4.3BSD Networking Release 1; this ensured its wide distribution and implementation. Improvements were made in 4.3BSD-Reno and subsequently released to the public as Networking Release 2 and later 4.4BSD-Lite.

While both consider retransmission timeout (RTO) and duplicate ACKs as packet loss events, the behavior of Tahoe and Reno differs primarily in how they react to duplicate ACKs:
• Tahoe: if three duplicate ACKs are received (i.e., four ACKs acknowledging the same packet, which are not piggybacked on data and do not change the receiver's advertised window), Tahoe performs a fast retransmit, sets the slow start threshold to half of the current congestion window, reduces the congestion window to 1 MSS, and resets to the slow start state.
• Reno: if three duplicate ACKs are received, Reno performs a fast retransmit and skips the slow start phase by instead halving the congestion window (rather than setting it to 1 MSS like Tahoe), setting ssthresh equal to the new congestion window, and entering a phase called fast recovery (see the sketch below).

In both Tahoe and Reno, if an ACK times out (RTO timeout), slow start is used, and both algorithms reduce the congestion window to 1 MSS.
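The contrast between the two reactions can be summarized in code. The following is a minimal sketch in C, assuming a simplified connection state with cwnd and ssthresh counted in units of MSS; the structure and names are illustrative and not taken from any particular implementation.

<syntaxhighlight lang="c">
/* Sketch: how Tahoe and Reno react to a third duplicate ACK, and how
 * both react to an RTO timeout. cwnd and ssthresh are in units of MSS;
 * names are illustrative, not taken from any particular kernel. */
#include <stdio.h>

struct tcp_cc { double cwnd; double ssthresh; };

static void on_third_dupack_tahoe(struct tcp_cc *cc)
{
    cc->ssthresh = cc->cwnd / 2;   /* remember half the current window */
    cc->cwnd = 1;                  /* collapse to 1 MSS */
    /* fast retransmit the missing segment, then re-enter slow start */
}

static void on_third_dupack_reno(struct tcp_cc *cc)
{
    cc->ssthresh = cc->cwnd / 2;   /* halve the window instead of collapsing it */
    cc->cwnd = cc->ssthresh;       /* continue from the halved window */
    /* fast retransmit, then enter fast recovery (no slow start) */
}

static void on_rto_timeout(struct tcp_cc *cc)
{
    cc->ssthresh = cc->cwnd / 2;
    cc->cwnd = 1;                  /* both variants fall back to slow start */
}

int main(void)
{
    struct tcp_cc tahoe = {16, 64}, reno = {16, 64};
    on_third_dupack_tahoe(&tahoe);
    on_third_dupack_reno(&reno);
    printf("Tahoe: cwnd=%.0f ssthresh=%.0f\n", tahoe.cwnd, tahoe.ssthresh);
    printf("Reno:  cwnd=%.0f ssthresh=%.0f\n", reno.cwnd, reno.ssthresh);
    on_rto_timeout(&reno);         /* the RTO path is the same for both */
    printf("After RTO: cwnd=%.0f\n", reno.cwnd);
    return 0;
}
</syntaxhighlight>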
=== TCP New Reno ===
TCP New Reno, defined in an updated specification that obsoletes the earlier definitions, improves retransmission during the fast-recovery phase of TCP Reno. During fast recovery, to keep the transmit window full, a new unsent packet from the end of the congestion window is sent for every duplicate ACK that is returned. The difference from Reno is that New Reno does not halve ssthresh immediately, since doing so may reduce the window too much if multiple packet losses occur; it does not exit fast recovery and reset ssthresh until all of the outstanding data has been acknowledged. After a retransmission, newly acknowledged data fall into two cases (see the sketch below):
• Full acknowledgments: the ACK acknowledges all the intermediate segments sent; ssthresh is not changed, and cwnd can be set to ssthresh.
• Partial acknowledgments: the ACK does not acknowledge all outstanding data, which means another loss may have occurred; the first unacknowledged segment is retransmitted if permitted.
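The full-versus-partial acknowledgment logic can be sketched as follows; this is a hedged illustration in C that assumes sequence numbers never wrap, and the identifiers (nr_state, recover, and the commented retransmit helper) are illustrative rather than names from any real implementation.

<syntaxhighlight lang="c">
/* Sketch of New Reno's reaction to ACKs while in fast recovery.
 * 'recover' holds the highest sequence number outstanding when loss was
 * detected; identifiers are illustrative, not kernel names. */
#include <stdbool.h>
#include <stdint.h>

struct nr_state {
    uint32_t cwnd, ssthresh;   /* in MSS */
    uint32_t recover;          /* highest sequence number outstanding at loss detection */
    bool in_fast_recovery;
};

/* Called for each new (non-duplicate) ACK while in fast recovery. */
static void newreno_on_ack(struct nr_state *s, uint32_t acked_seq)
{
    if (!s->in_fast_recovery)
        return;

    if (acked_seq >= s->recover) {
        /* Full acknowledgment: everything outstanding at loss detection is
         * now acknowledged, so set cwnd to ssthresh and leave fast recovery. */
        s->cwnd = s->ssthresh;
        s->in_fast_recovery = false;
    } else {
        /* Partial acknowledgment: another segment from the same window was
         * probably lost; retransmit the first unacknowledged segment (if
         * permitted) and stay in fast recovery. */
        /* retransmit_segment(acked_seq);  -- hypothetical helper */
    }
}

int main(void)
{
    struct nr_state s = { .cwnd = 8, .ssthresh = 8,
                          .recover = 5000, .in_fast_recovery = true };
    newreno_on_ack(&s, 3000);    /* partial ACK: stay in fast recovery */
    newreno_on_ack(&s, 5000);    /* full ACK: leave fast recovery */
    return s.in_fast_recovery;   /* 0 once recovery is complete */
}
</syntaxhighlight>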
New Reno uses a variable called recover to record how much data needs to be recovered. After a retransmit timeout, it records the highest sequence number transmitted in the recover variable and exits the fast recovery procedure. If this sequence number is acknowledged, TCP returns to the congestion avoidance state.

A problem occurs with New Reno when there are no packet losses but packets are instead reordered by more than 3 packet sequence numbers. In this case, New Reno mistakenly enters fast recovery. When the reordered packet is delivered, duplicate and needless retransmissions are immediately sent. New Reno performs as well as SACK at low packet error rates and substantially outperforms Reno at high error rates.
=== TCP Vegas ===
Until the mid-1990s, all of TCP's timeouts were set and its round-trip delays measured based only upon the last transmitted packet in the transmit buffer. University of Arizona researchers Larry Peterson and Lawrence Brakmo introduced TCP Vegas, in which timeouts were set and round-trip delays were measured for every packet in the transmit buffer. In addition, TCP Vegas uses additive increases in the congestion window. In a 2012 comparison study of various TCP CCAs, TCP Vegas appeared to be the smoothest, followed by TCP CUBIC. TCP Vegas was not widely deployed outside Peterson's laboratory but was selected as the default congestion control method for DD-WRT firmware v24 SP2.
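The per-RTT decision Vegas derives from its round-trip measurements can be sketched as follows. The alpha and beta thresholds follow the published algorithm; the variable names and example values are illustrative.

<syntaxhighlight lang="c">
/* Sketch of the Vegas congestion-avoidance decision, made once per RTT.
 * cwnd is in segments; base_rtt is the smallest RTT seen on the path. */
#include <stdio.h>

static double vegas_update(double cwnd, double base_rtt, double rtt)
{
    const double alpha = 1.0, beta = 3.0;              /* segments of "extra" data */
    double expected = cwnd / base_rtt;                 /* rate if nothing is queued */
    double actual   = cwnd / rtt;                      /* measured rate */
    double diff     = (expected - actual) * base_rtt;  /* segments sitting in queues */

    if (diff < alpha)
        return cwnd + 1;   /* path underused: additive increase */
    if (diff > beta)
        return cwnd - 1;   /* queue building: additive decrease */
    return cwnd;           /* operating point is acceptable */
}

int main(void)
{
    /* diff is about 0.95 segments here, so the window grows to 21. */
    printf("%.1f\n", vegas_update(20.0, 0.100, 0.105));
    return 0;
}
</syntaxhighlight>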
=== TCP Hybla ===
TCP Hybla aims to eliminate penalties to TCP connections that use high-latency terrestrial or satellite radio links. Hybla's improvements are based on an analytical evaluation of the congestion window dynamics.
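The central idea of the published Hybla proposal is to normalize the connection's RTT against a reference RTT so that long-RTT paths open the window as quickly as short-RTT ones. The sketch below assumes the commonly cited 25 ms reference RTT; names and structure are illustrative.

<syntaxhighlight lang="c">
/* Sketch of Hybla-style per-ACK window increments. rho = RTT/RTT0, clamped
 * to at least 1 so that Hybla is never slower than standard TCP. */
#include <math.h>
#include <stdio.h>

static double hybla_increment(double cwnd, double rtt, int in_slow_start)
{
    const double rtt0 = 0.025;                 /* reference RTT, seconds (assumed) */
    double rho = fmax(rtt / rtt0, 1.0);

    if (in_slow_start)
        return cwnd + (pow(2.0, rho) - 1.0);   /* vs. +1 per ACK in standard slow start */
    return cwnd + (rho * rho) / cwnd;          /* vs. +1/cwnd per ACK in standard TCP */
}

int main(void)
{
    /* A 600 ms satellite RTT gives rho = 24, so congestion avoidance adds
     * 24*24/cwnd per ACK instead of 1/cwnd. */
    printf("%.3f\n", hybla_increment(100.0, 0.600, 0));
    return 0;
}
</syntaxhighlight>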
=== TCP BIC ===
Binary Increase Congestion control (BIC) is a TCP implementation with an optimized CCA for high-speed networks with high latency, known as long fat networks (LFNs). BIC is used by default in Linux kernels 2.6.8 through 2.6.18.
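The binary-search growth rule that gives BIC its name can be sketched as follows; the constants and the handling of the region above the previous maximum are illustrative simplifications of the published algorithm.

<syntaxhighlight lang="c">
/* Sketch of BIC's window growth between the current window and the window
 * at the last loss event (w_max). Windows are in segments. */
#include <stdio.h>

static double bic_next_cwnd(double cwnd, double w_max)
{
    const double s_max = 32.0;   /* maximum growth per RTT */
    const double s_min = 0.01;   /* minimum growth per RTT */

    if (cwnd < w_max) {
        double step = (w_max - cwnd) / 2.0;   /* binary search toward w_max */
        if (step > s_max) step = s_max;       /* behaves additively when far away */
        if (step < s_min) step = s_min;
        return cwnd + step;
    }
    /* Above w_max: probe slowly for a new maximum (details omitted here). */
    return cwnd + s_min;
}

int main(void)
{
    double cwnd = 50.0, w_max = 100.0;
    for (int rtt = 1; rtt <= 5; rtt++) {
        cwnd = bic_next_cwnd(cwnd, w_max);
        printf("RTT %d: cwnd = %.2f\n", rtt, cwnd);
    }
    return 0;
}
</syntaxhighlight>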
=== TCP CUBIC ===
CUBIC is a less aggressive and more systematic derivative of BIC, in which the window is a cubic function of time since the last congestion event, with the inflection point set to the window prior to the event. CUBIC is used by default in Linux kernels since version 2.6.19.
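The cubic window function itself is compact enough to show directly. The sketch below uses the notation and commonly cited default constants of the published specification (RFC 8312): W(t) = C(t - K)^3 + Wmax, with K chosen so that W(0) equals the reduced window beta*Wmax. Names are illustrative.

<syntaxhighlight lang="c">
/* Sketch of the CUBIC window function. t is the time in seconds since the
 * last congestion event; w_max is the window (in segments) at that event. */
#include <math.h>
#include <stdio.h>

static double cubic_window(double t, double w_max)
{
    const double C = 0.4;       /* scaling constant */
    const double beta = 0.7;    /* multiplicative decrease factor */
    double K = cbrt(w_max * (1.0 - beta) / C);   /* time to climb back to w_max */
    return C * pow(t - K, 3.0) + w_max;
}

int main(void)
{
    double w_max = 100.0;
    /* The window grows quickly at first, flattens near w_max (the inflection
     * point), then probes beyond it. */
    for (double t = 0.0; t <= 6.0; t += 1.0)
        printf("t=%.0fs  W=%.1f\n", t, cubic_window(t, w_max));
    return 0;
}
</syntaxhighlight>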
=== Agile-SD TCP ===
Agile-SD is a Linux-based CCA that is designed for the real Linux kernel. It is a receiver-side algorithm that employs a loss-based approach using a novel mechanism called the agility factor (AF) to increase the bandwidth utilization over high-speed and short-distance networks (low bandwidth-delay product networks) such as local area networks or fiber-optic networks, especially when the applied buffer size is small.

=== TCP Proportional Rate Reduction ===
TCP Proportional Rate Reduction (PRR) is an algorithm designed to improve the accuracy of data sent during recovery. The algorithm ensures that the window size after recovery is as close as possible to the slow start threshold. In tests performed by Google, PRR resulted in a 3–10% reduction in average latency, and recovery timeouts were reduced by 5%. PRR is available in Linux kernels since version 3.2.
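The core proportional rule can be sketched as follows, following the structure of the published algorithm (RFC 6937); the field names and the byte-based example are illustrative.

<syntaxhighlight lang="c">
/* Sketch of the proportional-rate-reduction rule applied on each ACK
 * received during loss recovery. All quantities are in bytes. */
#include <math.h>
#include <stdio.h>

struct prr {
    double ssthresh;       /* target window after recovery */
    double recover_fs;     /* flight size when recovery started */
    double prr_delivered;  /* bytes delivered to the receiver during recovery */
    double prr_out;        /* bytes sent by the sender during recovery */
};

/* Returns how many bytes may be sent in response to this ACK. */
static double prr_on_ack(struct prr *p, double delivered, double pipe, double mss)
{
    double sndcnt;
    p->prr_delivered += delivered;

    if (pipe > p->ssthresh) {
        /* Proportional reduction: shrink toward ssthresh at the same rate
         * that data is leaving the network. */
        sndcnt = ceil(p->prr_delivered * p->ssthresh / p->recover_fs) - p->prr_out;
    } else {
        /* Below ssthresh: grow back toward ssthresh, slow-start style. */
        double limit = fmax(p->prr_delivered - p->prr_out, delivered) + mss;
        sndcnt = fmin(p->ssthresh - pipe, limit);
    }
    if (sndcnt < 0)
        sndcnt = 0;
    p->prr_out += sndcnt;
    return sndcnt;
}

int main(void)
{
    struct prr p = { .ssthresh = 50000, .recover_fs = 100000 };
    /* 3000 bytes newly delivered, 90000 bytes still in flight: PRR allows
     * roughly half as much new data out as was delivered. */
    printf("%.0f bytes may be sent\n", prr_on_ack(&p, 3000, 90000, 1500));
    return 0;
}
</syntaxhighlight>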
=== TCP BBR ===
Bottleneck Bandwidth and Round-trip propagation time (BBR) is a CCA developed at Google in 2016. While most CCAs are loss-based, in that they rely on packet loss to detect congestion and lower the rate of transmission, BBR, like TCP Vegas, is model-based. The algorithm uses the maximum bandwidth and round-trip time at which the network delivered the most recent flight of outbound data packets to build a model of the network. Each cumulative or selective acknowledgment of packet delivery produces a rate sample that records the amount of data delivered over the time interval between the transmission of a data packet and the acknowledgment of that packet.
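The rate-sampling idea can be sketched as follows; the time-windowing of the maximum-bandwidth and minimum-RTT estimates is omitted, and the names are illustrative rather than those of the actual implementation.

<syntaxhighlight lang="c">
/* Sketch of per-ACK rate sampling and the two-parameter path model:
 * each ACK yields a delivery-rate sample, and the model keeps the largest
 * observed bandwidth and the smallest observed RTT. Their product is the
 * estimated bandwidth-delay product used to size the sending rate. */
#include <stdio.h>

struct bbr_model {
    double max_bw;    /* bytes per second, largest delivery-rate sample seen */
    double min_rtt;   /* seconds, smallest RTT sample seen */
};

static void bbr_on_ack(struct bbr_model *m, double bytes_delivered,
                       double interval_s, double rtt_s)
{
    double sample_bw = bytes_delivered / interval_s;   /* delivery-rate sample */
    if (sample_bw > m->max_bw)
        m->max_bw = sample_bw;
    if (m->min_rtt == 0.0 || rtt_s < m->min_rtt)
        m->min_rtt = rtt_s;
}

int main(void)
{
    struct bbr_model m = {0.0, 0.0};
    bbr_on_ack(&m, 30000, 0.010, 0.040);   /* 30 kB delivered over 10 ms, RTT 40 ms */
    double bdp = m.max_bw * m.min_rtt;     /* estimated bandwidth-delay product */
    printf("bw=%.0f B/s  min_rtt=%.3f s  bdp=%.0f B\n", m.max_bw, m.min_rtt, bdp);
    return 0;
}
</syntaxhighlight>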
When implemented at YouTube, BBRv1 yielded an average of 4% higher network throughput, and up to 14% in some countries. BBR has been available for Linux TCP since Linux 4.9. It is also available for QUIC.

BBR version 1 (BBRv1) fairness to non-BBR streams is disputed. While Google's presentation shows BBRv1 co-existing well with CUBIC, Hock et al. found "some severe inherent issues such as increased queuing delays, unfairness, and massive packet loss" in the BBR implementation of Linux 4.9. Soheil Abbasloo et al. (authors of C2TCP) showed that BBRv1 does not perform well in dynamic environments such as cellular networks. They also showed that C2TCP outperforms the delay and delay-variation performance of various state-of-the-art TCP schemes; for instance, compared to BBR, CUBIC, and Westwood, C2TCP decreases the average delay of packets by about 250%, 900%, and 700% respectively, on average, across various cellular network environments.

In BBRv2 the model used by BBRv1 is augmented to include information about packet loss and information from Explicit Congestion Notification (ECN). While BBRv2 may at times have lower throughput than BBRv1, it is generally considered to have better goodput. Windows 11, version 24H2 and Windows Server 2025 have support for BBRv2, but may not enable it by default.

Version 3 (BBRv3) fixes two bugs in BBRv2 (premature end of bandwidth probing, bandwidth convergence) and performs some performance tuning. There is also a variant, termed BBR.Swift, optimized for datacenter-internal links: it uses network_RTT (excluding receiver delay) as the main congestion control signal.
=== Other TCP congestion avoidance algorithms ===
• FAST TCP
• Generalized FAST TCP
• H-TCP
• Data Center TCP
• High Speed TCP
• HSTCP-LP
• TCP-Illinois
• TCP-LP
• Westwood
• XCP
• YeAH-TCP
• TCP-FIT
• Congestion Avoidance with Normalized Interval of Time (CANIT)
• Non-linear neural network congestion control based on genetic algorithm for TCP/IP networks
• D-TCP
• NexGen D-TCP
• Copa
TCP New Reno was the most commonly implemented algorithm; SACK support is very common and is an extension to Reno/New Reno. Most others are competing proposals that still need evaluation. Starting with 2.6.8, the Linux kernel switched the default implementation from New Reno to BIC. The default implementation was again changed to CUBIC in the 2.6.19 version. FreeBSD from version 14.X onwards also uses CUBIC as the default algorithm; previous versions used New Reno. However, FreeBSD supports a number of other choices.

When the per-flow product of bandwidth and latency increases, regardless of the queuing scheme, TCP becomes inefficient and prone to instability. This becomes increasingly important as the Internet evolves to incorporate very high-bandwidth optical links.

TCP Interactive (iTCP) allows applications to subscribe to TCP events and respond accordingly, enabling various functional extensions to TCP from outside the TCP layer. Most TCP congestion schemes work internally; iTCP additionally enables advanced applications to directly participate in congestion control, such as controlling the source generation rate.
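As a purely hypothetical illustration of this idea (this is not the actual iTCP interface), an application might register a callback for congestion events and lower its own sending rate when one fires:

<syntaxhighlight lang="c">
/* Hypothetical sketch of an application subscribing to a transport-layer
 * congestion event; none of these names come from the real iTCP API. */
#include <stdio.h>

enum { TCP_EVENT_CWND_REDUCED = 1 };

typedef void (*tcp_event_cb)(int event, void *ctx);

/* Hypothetical registration hook a transport layer might expose. */
static tcp_event_cb g_cb;
static void *g_ctx;
static void tcp_subscribe(tcp_event_cb cb, void *ctx) { g_cb = cb; g_ctx = ctx; }

/* Application-side handler: e.g. a video encoder lowering its bitrate. */
static void on_congestion(int event, void *ctx)
{
    double *bitrate = ctx;
    if (event == TCP_EVENT_CWND_REDUCED)
        *bitrate *= 0.8;   /* reduce the source generation rate */
}

int main(void)
{
    double bitrate = 5000.0;   /* kbit/s */
    tcp_subscribe(on_congestion, &bitrate);
    g_cb(TCP_EVENT_CWND_REDUCED, g_ctx);   /* simulate the transport firing an event */
    printf("bitrate now %.0f kbit/s\n", bitrate);
    return 0;
}
</syntaxhighlight>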
Zeta-TCP detects congestion from both latency and loss-rate measures. To maximize goodput, Zeta-TCP applies different congestion window backoff strategies based on the likelihood of congestion. It also includes other improvements to accurately detect packet losses, avoid retransmission timeouts, and accelerate and control inbound (download) traffic.

== Classification by network awareness ==