
PCI Express

PCI Express, officially abbreviated as PCIe, is a high-speed serial expansion bus standard used to connect hardware components inside a computer. It is designed to replace older expansion bus standards such as PCI, PCI-X and AGP. Developed and maintained by the PCI-SIG, PCIe is commonly used to connect graphics cards, sound cards, Wi-Fi and Ethernet adapters, and storage devices such as solid-state drives and hard disk drives.

Architecture
Conceptually, the PCI Express bus is a high-speed serial replacement of the older PCI/PCI-X bus.
Form factors
PCI Express add-in card
A PCI Express add-in card fits into a slot of its physical size or larger (with ×16 as the largest used), but may not fit into a smaller PCI Express slot; for example, a ×16 card may not fit into a ×4 or ×8 slot. Some slots use open-ended sockets to permit physically longer cards and negotiate the best available electrical and logical connection. The number of lanes actually connected to a slot may also be fewer than the number supported by the physical slot size. An example is a ×16 slot that runs at ×4, which accepts any ×1, ×2, ×4, ×8 or ×16 card, but provides only four lanes. Its specification may read as "×16 (×4 mode)" or "×16 (×4 signal)", while "mechanical @ electrical" notation (e.g. "×16 @ ×4") is also common. The advantage is that such slots can accommodate a larger range of PCI Express cards without requiring motherboard hardware to support the full transfer rate. Standard mechanical sizes are ×1, ×4, ×8, and ×16. Cards using a number of lanes other than the standard mechanical sizes need to physically fit the next larger mechanical size (e.g. a ×2 card uses the ×4 size, and a ×12 card uses the ×16 size).
The cards themselves are designed and manufactured in various sizes. For example, solid-state drives (SSDs) that come in the form of PCI Express cards often use HHHL (half height, half length) and FHHL (full height, half length) to describe the physical dimensions of the card. The concept of "full" and "half" heights and lengths is inherited from Conventional PCI. In practice, even the methodology of how to measure the cards varies between vendors, with some including the metal bracket size in the dimensions and others not. For instance, a Sapphire Radeon RX 5700 XT card released in 2020 measures 135 mm in height (excluding the metal bracket), which exceeds the PCIe standard height by 28 mm.

Slot power
All PCI Express cards may draw a limited amount of power from the slot's +3.3 V rail. The amount of +12 V and total power they may consume depends on the form factor and the role of the card:
• ×1 cards are limited to 0.5 A at +12 V (6 W) and 10 W combined.
• ×4 and wider cards are limited to 2.1 A at +12 V (25 W) and 25 W combined.
• A full-sized ×1 card may draw up to the 25 W limits after initialization and software configuration as a high-power device.
• A full-sized ×16 graphics card may draw up to 5.5 A at +12 V (66 W) and 75 W combined after initialization and software configuration as a high-power device.
Cards that need more power use additional power connectors. The connector contacts are rated for 15 A continuous current, and the 48VHPWR connector can carry 720 watts. It was later removed and an incompatible 48V 1×2 connector was introduced, in which Sense0 and Sense1 are located farthest from each other.

Power excursion
Power excursion refers to short peaks of power draw exceeding the rated maximum (sustained) power level. Since an add-on Engineering Change Notice (ECN) to PCIe CEM 5.0, the additional power connectors need to be able to handle a 100-microsecond power draw at 3× the maximum sustained power, reducing to 1× at the 1-second window following a logarithmic line. Since PCIe CEM 5.1, slot power has a similar excursion allowance of 2.5× over 100 μs. In CEM 5.1, the added excursion limit is only provided after software configuration, specifically the Set_Slot_Power_Limit message. The ECN is part of ATX 3.0 and PCIe CEM 5.1 is part of ATX 3.1.
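To make the mechanical, electrical and power rules above concrete, the following is a small illustrative sketch in Python. It is not specification text: the function names, the dictionary layout, and the +12 V figure assumed for a high-power full-sized ×1 card are inventions for this example.

```python
# Illustrative sketch only: models the slot-fit, link-width, and slot-power
# rules summarized above. Names and layout are invented for this example.

STANDARD_MECHANICAL_SIZES = (1, 4, 8, 16)

def mechanical_size(card_lanes: int) -> int:
    """Cards with a non-standard lane count use the next larger mechanical
    size (e.g. a x2 card uses the x4 size, a x12 card uses the x16 size)."""
    return next(size for size in STANDARD_MECHANICAL_SIZES if size >= card_lanes)

def card_fits(card_lanes: int, slot_mechanical_size: int, open_ended: bool = False) -> bool:
    """A card fits a slot of its mechanical size or larger; open-ended
    sockets additionally accept physically longer cards."""
    return open_ended or mechanical_size(card_lanes) <= slot_mechanical_size

def negotiated_width(card_lanes: int, slot_electrical_lanes: int) -> int:
    """The link trains to the widest width both sides support, so a x16 card
    in a 'x16 (x4 mode)' slot runs with four lanes."""
    return min(card_lanes, slot_electrical_lanes)

# Baseline and high-power slot limits quoted above: (amps at +12 V, total watts).
SLOT_POWER_LIMITS = {
    "x1":                      (0.5, 10),
    "x4_and_wider":            (2.1, 25),
    "x1_full_size_high_power": (2.1, 25),   # assumed amps; the text gives only the 25 W limit
    "x16_graphics_high_power": (5.5, 75),
}

if __name__ == "__main__":
    assert card_fits(16, 16) and not card_fits(16, 4)
    assert card_fits(16, 4, open_ended=True)     # open-ended x4 socket accepts a longer card
    assert negotiated_width(16, 4) == 4          # "x16 @ x4" slot provides four lanes
    print(mechanical_size(12))                   # a x12 card needs the x16 mechanical size -> 16
```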
PCI Express Mini Card
PCI Express Mini Card (also known as Mini PCI Express, Mini PCIe, Mini PCI-E, mPCIe, and PEM), based on PCI Express, is a replacement for the Mini PCI form factor. It was developed by the PCI-SIG. The host device supports both PCI Express and USB 2.0 connectivity, and each card may use either standard. Most laptop computers built between 2005 and 2013 use Mini PCI Express for expansion cards; however, many vendors have since moved toward using the newer M.2 form factor for this purpose. Due to different dimensions, PCI Express Mini Cards are not physically compatible with standard full-size PCI Express slots; however, passive adapters exist that let them be used in full-size slots.

Electrical interface
PCI Express Mini Card edge connectors provide multiple connections and buses:
• PCI Express ×1 (with SMBus)
• USB 2.0
• Wires to diagnostics LEDs for wireless network (i.e., Wi-Fi) status on the computer's chassis
• SIM card for GSM and WCDMA applications (UIM signals in the specification)
• Future extension for another PCIe lane
• 1.5 V and 3.3 V power

Mini-SATA (mSATA) variant
Despite sharing the Mini PCI Express form factor, an mSATA slot is not necessarily electrically compatible with Mini PCI Express. For this reason, only certain notebooks are compatible with mSATA drives. Most compatible systems are based on Intel's Sandy Bridge processor architecture, using the Huron River platform. Notebooks such as Lenovo's ThinkPad T, W and X series, released in March–April 2011, have support for an mSATA SSD card in their WWAN card slot. The ThinkPad Edge E220s/E420s and the Lenovo IdeaPad Y460/Y560/Y570/Y580 also support mSATA.

Derivative forms
Numerous other form factors use, or are able to use, PCIe. These include:
• Low-height card
• ExpressCard: Successor to the PC Card form factor (with ×1 PCIe and USB 2.0; hot-pluggable)
• PCI Express ExpressModule: A hot-pluggable modular form factor defined for servers and workstations
• XQD card: A PCI Express-based flash card standard by the CompactFlash Association with ×2 PCIe
• CFexpress card: A PCI Express-based flash card by the CompactFlash Association in three form factors supporting 1 to 4 PCIe lanes
• SD card: The SD Express bus, introduced in version 7.0 of the SD specification, uses a ×1 PCIe link
• XMC: Similar to the CMC/PMC form factor (VITA 42.3)
• AdvancedTCA: A complement to CompactPCI for larger applications; supports serial-based backplane topologies
• AMC: A complement to the AdvancedTCA specification; supports processor and I/O modules on ATCA boards (×1, ×2, ×4 or ×8 PCIe)
• FeaturePak: A tiny expansion card format (43 mm × 65 mm) for embedded and small-form-factor applications, which implements two ×1 PCIe links on a high-density connector along with USB, I2C, and up to 100 points of I/O
• Universal IO: A variant from Super Micro Computer Inc designed for use in low-profile rack-mounted chassis. It has the connector bracket reversed so it cannot fit in a normal PCI Express socket, but it is pin-compatible and may be inserted if the bracket is removed.
• M.2 (formerly known as NGFF)
• M-PCIe brings PCIe 3.0 to mobile devices (such as tablets and smartphones), over the M-PHY physical layer.
• Serial Attached SCSI-related ports:
  • SATA Express, U.2 (formerly known as SFF-8639) and U.3 use the same port
  • SlimSAS (SFF-8654)
  • SFF-TA-1016 (M-XIO connector)
  • SFF-TA-1026, SFF-TA-1033
The PCIe slot connector can also carry protocols other than PCIe.
Some 9xx series Intel chipsets support Serial Digital Video Out, a proprietary technology that uses a slot to transmit video signals from the host CPU's integrated graphics instead of PCIe, using a supported add-in card.
The PCIe transaction-layer protocol can also be used over some other interconnects, which are not electrically PCIe:
• Thunderbolt: A royalty-free (as of Thunderbolt 3) interconnect standard by Intel that combines DisplayPort and PCIe protocols in a form factor compatible with Mini DisplayPort. Thunderbolt 3.0 also combines USB 3.1 and uses the USB-C form factor as opposed to Mini DisplayPort.
• USB4 is an extension of Thunderbolt 3.0. Thunderbolt 4 and Thunderbolt 5 are profiles of USB4 specifying higher levels of mandatory features.
History and revisions
While in early development, PCIe was initially referred to as HSI (for High Speed Interconnect), and underwent a name change to 3GIO (for 3rd Generation I/O) before finally settling on its PCI-SIG name PCI Express. A technical working group named the Arapaho Work Group (AWG) drew up the standard. For initial drafts, the AWG consisted only of Intel engineers; subsequently, the AWG expanded to include industry partners. Since then, PCIe has undergone several major and minor revisions, improving performance and adding other features.

PCI Express 1.0a
In 2003, PCI-SIG introduced PCIe 1.0a, with a per-lane data rate of 0.25 gigabytes per second (GB/s) and a transfer rate of 2.5 gigatransfers per second (GT/s). Transfer rate is expressed in transfers per second instead of bits per second because the number of transfers includes the overhead bits, which do not provide additional throughput; PCIe 1.x uses an 8b/10b encoding scheme, so 20% of the raw channel bandwidth is consumed by encoding overhead.

PCI Express 4.0
PCI-SIG officially announced the release of the final PCI Express 4.0 specification on 8 June 2017. NETINT Technologies introduced the first NVMe SSD based on PCIe 4.0 on 17 July 2018, ahead of Flash Memory Summit 2018.

PCI Express 5.0
On 9 September 2021, IBM announced the Power E1080 Enterprise server with a planned availability date of 17 September. It can have up to 16 Power10 SCMs with a maximum of 32 slots per system, which can act as PCIe 5.0 ×8 or PCIe 4.0 ×16. Alternatively, they can be used as PCIe 5.0 ×16 slots for optional optical CXP converter adapters connecting to external PCIe expansion drawers. On 27 October 2021, Intel announced the 12th Gen Intel Core CPU family, the world's first consumer x86-64 processors with PCIe 5.0 (up to 16 lanes) connectivity. On 22 March 2022, Nvidia announced the Nvidia Hopper GH100 GPU, the world's first PCIe 5.0 GPU. On 23 May 2022, AMD announced its Zen 4 architecture with support for up to 24 lanes of PCIe 5.0 connectivity on consumer platforms and 128 lanes on server platforms.

PCI Express 6.0
On 18 June 2019, PCI-SIG announced the development of the PCI Express 6.0 specification. Bandwidth was expected to increase to 64 GT/s, yielding 128 GB/s in each direction in a 16-lane configuration, with a target release date of 2021. On 11 January 2022, PCI-SIG officially announced the release of the final PCI Express 6.0 specification. PCI Express 6.0 retains backward compatibility with previous versions of the PCI Express specification. PAM-4 coding results in a vastly higher bit error rate (BER) of 10⁻⁶ (vs. 10⁻¹² previously), so in place of 128b/130b encoding, a 3-way interleaved forward error correction (FEC) is used in addition to a cyclic redundancy check (CRC). A fixed 256-byte Flow Control Unit (FLIT) block carries 242 bytes of data, which includes variable-sized transaction layer packets (TLPs) and data link layer packet (DLLP) payload; the remaining 14 bytes are reserved for an 8-byte CRC and a 6-byte FEC. Gray coding is used in PAM-4/FLIT mode to reduce the error rate; the interface does not switch back to NRZ and 128b/130b encoding even when retraining to lower data rates. PCIe 6.0 hardware was not launched until August 2025, roughly three years after the release of the final specifications and shortly after the publication of the PCIe 7.0 specifications. The delay was described as unprecedented, with PCWorld noting that for many years PCIe 6.0 existed "solely on paper".

PCI Express 7.0
On 21 June 2022, PCI-SIG announced the development of the PCI Express 7.0 specification.
It will deliver a 128 GT/s raw bit rate and up to 242 GB/s per direction in a ×16 configuration, using the same PAM-4 signaling as version 6.0. Doubling of the data rate will be achieved by fine-tuning channel parameters to decrease signal losses and improve power efficiency, but signal integrity is expected to be a challenge. The specification is expected to be finalized in 2025. On 3 April 2024, the PCI Express 7.0 revision 0.5 specification (a "first draft") was released. At its release, PCI-SIG commented that it did not see PCIe 7.0 coming to the PC market for some time; instead, the interface is initially targeted at cloud computing, 800-gigabit Ethernet, and artificial intelligence applications.
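As a rough, illustrative cross-check of how transfer rate and encoding relate to usable bandwidth, the following Python sketch uses the figures quoted above together with the well-known line rates of the intermediate generations; it ignores packet-level overhead and is not specification text.

```python
# Rough illustration of per-lane and x16 throughput across PCIe generations,
# derived from transfer rates and encoding overheads. Line-rate estimates only.

GENERATIONS = {
    # name: (transfer rate in GT/s, usable fraction after encoding)
    "1.x": (2.5,   8 / 10),     # 8b/10b encoding: 20% overhead
    "2.0": (5.0,   8 / 10),     # 8b/10b encoding
    "3.0": (8.0,   128 / 130),  # 128b/130b encoding
    "4.0": (16.0,  128 / 130),
    "5.0": (32.0,  128 / 130),
    "6.0": (64.0,  242 / 256),  # FLIT mode: 242 data bytes per 256-byte FLIT
    "7.0": (128.0, 242 / 256),
}

for gen, (gtps, usable_fraction) in GENERATIONS.items():
    per_lane_gb_s = gtps * usable_fraction / 8          # GB/s per lane, per direction
    print(f"PCIe {gen}: ~{per_lane_gb_s:.2f} GB/s per lane, "
          f"~{16 * per_lane_gb_s:.0f} GB/s per direction at x16")

# PCIe 1.x works out to ~0.25 GB/s per lane and PCIe 7.0 to ~242 GB/s at x16,
# matching the figures above; the 128 GB/s quoted for 6.0 at x16 is the raw
# rate before FLIT overhead.
```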
Extensions and future directions
Some vendors offer PCIe over fiber products, but these generally find use only in specific cases where transparent PCIe bridging is preferable to using a more mainstream standard (such as InfiniBand or Ethernet) that may require additional software to support it. Thunderbolt was co-developed by Intel and Apple as a general-purpose high-speed interface combining a logical PCIe link with DisplayPort, and was originally intended as an all-fiber interface, but due to early difficulties in creating a consumer-friendly fiber interconnect, nearly all implementations are copper systems. A notable exception, the Sony VAIO Z VPC-Z2, uses a nonstandard USB port with an optical component to connect to an outboard PCIe display adapter. Apple was the primary driver of Thunderbolt adoption through 2011, though several other vendors have since announced new products and systems featuring Thunderbolt. Thunderbolt 3 forms the basis of the USB4 standard.
Mobile PCIe specification (abbreviated to M-PCIe) allows the PCI Express architecture to operate over the MIPI Alliance's M-PHY physical layer technology. Building on top of the already widespread adoption of M-PHY and its low-power design, Mobile PCIe lets mobile devices use PCI Express. The iPhone is one example, using integrated NVMe storage over M-PCIe.

Draft process
There are five primary releases/checkpoints in a PCI-SIG specification:
• Draft 0.3 (Concept): this release may have few details, but outlines the general approach and goals.
• Draft 0.5 (First draft): this release has a complete set of architectural requirements and must fully address the goals set out in the 0.3 draft.
• Draft 0.7 (Complete draft): this release must have a complete set of functional requirements and methods defined, and no new functionality may be added to the specification after this release. Before the release of this draft, electrical specifications must have been validated via test silicon.
• Draft 0.9 (Final draft): this release allows PCI-SIG member companies to perform an internal review for intellectual property, and no functional changes are permitted after this draft.
• 1.0 (Final release): this is the final and definitive specification, and any changes or enhancements are made through Errata documentation and Engineering Change Notices (ECNs) respectively.
Historically, the earliest adopters of a new PCIe specification generally begin designing with the Draft 0.5, as they can confidently build up their application logic around the new bandwidth definition and often even start developing for any new protocol features. At the Draft 0.5 stage, however, there is still a strong likelihood of changes in the actual PCIe protocol layer implementation, so designers responsible for developing these blocks internally may be more hesitant to begin work than those using interface IP from external sources.
Hardware protocol summary
The PCIe link is built around dedicated unidirectional pairs of serial (1-bit), point-to-point connections known as lanes. This is in sharp contrast to the earlier PCI connection, which is a bus-based system where all the devices share the same bidirectional, 32-bit or 64-bit parallel bus. PCI Express is a layered protocol, consisting of a transaction layer, a data link layer, and a physical layer. The data link layer is subdivided to include a media access control (MAC) sublayer. The physical layer is subdivided into logical and electrical sublayers. The physical logical sublayer contains a physical coding sublayer (PCS). The terms are borrowed from the IEEE 802 networking protocol model.

Physical layer
The PCIe Physical Layer (PHY, PCIEPHY, PCI Express PHY, or PCIe PHY) specification is divided into two sub-layers, corresponding to electrical and logical specifications. The logical sublayer is sometimes further divided into a MAC sublayer and a PCS, although this division is not formally part of the PCIe specification. A specification published by Intel, the PHY Interface for PCI Express (PIPE), defines the MAC/PCS functional partitioning and the interface between these two sublayers. A link is composed of one or more lanes (×1 up to ×16), and the link width is negotiated between the two connected devices during initialization. This allows for very good compatibility in two ways:
• A PCIe card physically fits (and works correctly) in any slot that is at least as large as it is (e.g., a ×1 sized card works in any sized slot);
• A slot of a large physical size (e.g., ×16) can be wired electrically with fewer lanes (e.g., ×1, ×4, ×8, or ×12) as long as it provides the ground connections required by the larger physical slot size.
In both cases, PCIe negotiates the highest mutually supported number of lanes. Many graphics cards, motherboards and BIOS versions are verified to support ×1, ×4, ×8 and ×16 connectivity on the same connection.
The width of a PCIe connector is 8.8 mm, while the height is 11.25 mm, and the length is variable. The fixed section of the connector is 11.65 mm in length and contains two rows of 11 pins each (22 pins total), while the length of the other section is variable depending on the number of lanes. The pins are spaced at 1 mm intervals, and the thickness of the card going into the connector is 1.6 mm.

Data link layer
The data link layer performs three vital services for the PCIe link:
• sequence the transaction layer packets (TLPs) that are generated by the transaction layer,
• ensure reliable delivery of TLPs between two endpoints via an acknowledgement protocol (ACK and NAK signaling) that explicitly requires replay of unacknowledged/bad TLPs,
• initialize and manage flow control credits.
On the transmit side, the data link layer generates an incrementing sequence number for each outgoing TLP. It serves as a unique identification tag for each transmitted TLP, and is inserted into the header of the outgoing TLP. A 32-bit cyclic redundancy check code (known in this context as Link CRC or LCRC) is also appended to the end of each outgoing TLP. On the receive side, the received TLP's LCRC and sequence number are both validated in the link layer. If either the LCRC check fails (indicating a data error), or the sequence number is out of range (non-consecutive from the last valid received TLP), then the bad TLP, as well as any TLPs received after the bad TLP, are considered invalid and discarded. The receiver sends a negative acknowledgement message (NAK) with the sequence number of the invalid TLP, requesting re-transmission of all TLPs forward of that sequence number. If the received TLP passes the LCRC check and has the correct sequence number, it is treated as valid.
The link receiver increments the sequence number (which tracks the last received good TLP), and forwards the valid TLP to the receiver's transaction layer. An ACK message is sent to the remote transmitter, indicating the TLP was successfully received (and, by extension, all TLPs with earlier sequence numbers). If the transmitter receives a NAK message, or no acknowledgement (ACK or NAK) is received before a timeout period expires, the transmitter must retransmit all TLPs that lack a positive acknowledgement (ACK). Barring a persistent malfunction of the device or transmission medium, the link layer presents a reliable connection to the transaction layer, since the transmission protocol ensures delivery of TLPs over an unreliable medium.
In addition to sending and receiving TLPs generated by the transaction layer, the data link layer also generates and consumes data link layer packets (DLLPs). ACK and NAK signals are communicated via DLLPs, as are some power management messages and flow control credit information (on behalf of the transaction layer).
In practice, the number of in-flight, unacknowledged TLPs on the link is limited by two factors: the size of the transmitter's replay buffer (which must store a copy of all transmitted TLPs until the remote receiver ACKs them), and the flow control credits issued by the receiver to a transmitter. PCI Express requires all receivers to issue a minimum number of credits, to guarantee a link allows sending PCIConfig TLPs and message TLPs.

Transaction layer
PCI Express implements split transactions (transactions with request and response separated by time), allowing the link to carry other traffic while the target device gathers data for the response. PCI Express uses credit-based flow control. In this scheme, a device advertises an initial amount of credit for each receive buffer in its transaction layer. The device at the opposite end of the link, when sending transactions to this device, counts the number of credits each TLP consumes from its account. The sending device may only transmit a TLP when doing so does not make its consumed credit count exceed its credit limit. When the receiving device finishes processing the TLP from its buffer, it signals a return of credits to the sending device, which increases the credit limit by the restored amount. The credit counters are modular counters, and the comparison of consumed credits to credit limit requires modular arithmetic. The advantage of this scheme (compared to other methods such as wait states or handshake-based transfer protocols) is that the latency of credit return does not affect performance, provided that the credit limit is not encountered. This assumption is generally met if each device is designed with adequate buffer sizes.
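The following is a minimal Python sketch of the credit bookkeeping described above. It is illustrative only: it assumes a single credit type and an invented 8-bit counter width, whereas the actual protocol tracks separate header and data credits for several TLP classes.

```python
# Minimal sketch of PCIe-style credit-based flow control (simplified model).

CREDIT_MODULUS = 256  # assumed 8-bit counter width, for illustration only

class TransmitterCredits:
    """Transmitter-side bookkeeping for one receive buffer at the link partner."""

    def __init__(self, advertised_initial_credits: int):
        # Credit limit advertised by the receiver during initialization.
        self.credit_limit = advertised_initial_credits % CREDIT_MODULUS
        # Credits this transmitter has consumed so far.
        self.credits_consumed = 0

    def available(self) -> int:
        # Counters wrap, so the difference is taken modulo the counter range.
        return (self.credit_limit - self.credits_consumed) % CREDIT_MODULUS

    def try_send(self, tlp_credit_cost: int) -> bool:
        """Send a TLP only if it would not exceed the advertised credit limit."""
        if tlp_credit_cost > self.available():
            return False               # must wait for credits to be returned
        self.credits_consumed = (self.credits_consumed + tlp_credit_cost) % CREDIT_MODULUS
        return True

    def credits_returned(self, returned: int) -> None:
        """Receiver freed buffer space and returned credits via a flow-control DLLP."""
        self.credit_limit = (self.credit_limit + returned) % CREDIT_MODULUS


# Example: receiver advertises 32 credits; sending resumes once credits return.
tx = TransmitterCredits(advertised_initial_credits=32)
assert tx.try_send(8) and tx.try_send(24)   # consumes the full allowance
assert not tx.try_send(1)                   # blocked: no credits left
tx.credits_returned(8)                      # receiver drained part of its buffer
assert tx.try_send(1)                       # credit-return latency is hidden if the limit is not hit
```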
PCIe 1.x is often quoted to support a data rate of 250 MB/s in each direction, per lane. This figure is a calculation from the physical signaling rate (2.5 gigabaud) divided by the encoding overhead (10 bits per byte). This means a sixteen-lane (×16) PCIe card would then be theoretically capable of 16 × 250 MB/s = 4 GB/s in each direction. While this is correct in terms of data bytes, more meaningful calculations are based on the usable data payload rate, which depends on the profile of the traffic, which is a function of the high-level (software) application and intermediate protocol levels. Like other high-data-rate serial interconnect systems, PCIe has a protocol and processing overhead due to the additional transfer robustness (CRC and acknowledgements).
Long continuous unidirectional transfers (such as those typical in high-performance storage controllers) can approach >95% of PCIe's raw (lane) data rate. These transfers also benefit the most from an increased number of lanes (×2, ×4, etc.). But in more typical applications (such as a USB or Ethernet controller), the traffic profile is characterized as short data packets with frequent enforced acknowledgements. A PCIe 1.x lane, for example, offers a data rate on top of the physical layer of 250 MB/s (simplex). This is not the payload bandwidth but the physical layer bandwidth; a PCIe lane has to carry additional information for full functionality. On top of the payload, each transaction carries framing, a sequence number, a TLP header, and the LCRC. The Gen2 overhead is then 20, 24, or 28 bytes per transaction; the Gen3 overhead is 22, 26 or 30 bytes per transaction. The resulting payload efficiency for a 128-byte payload is 86%, and 98% for a 1024-byte payload. For small accesses like register settings (4 bytes), the efficiency drops as low as 16%. That said, most PCIe config registers reside in a DMA region mapped to the CPU's control registers and require no bus access.
The maximum payload size (MPS) is set on all devices based on the smallest maximum supported by any device in the chain. If one device has an MPS of 128 bytes, all devices of the tree must set their MPS to 128 bytes. In this case the bus will have a maximum efficiency of 86% for writes.
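As a simple arithmetic illustration of these efficiency figures (a sketch using the Gen2-style 20-byte overhead quoted above, not specification text):

```python
# Illustrative payload-efficiency calculation based on the figures above.

def payload_efficiency(payload_bytes: int, overhead_bytes: int = 20) -> float:
    """Fraction of transferred bytes that are payload for one transaction."""
    return payload_bytes / (payload_bytes + overhead_bytes)

print(f"{payload_efficiency(128):.0%}")    # ~86% for a 128-byte payload
print(f"{payload_efficiency(1024):.0%}")   # ~98% for a 1024-byte payload
print(f"{payload_efficiency(4):.0%}")      # roughly 16-17% for a 4-byte register access

# Raw per-lane data rate of PCIe 1.x: 2.5 GT/s with 8b/10b encoding
# (10 bits on the wire per data byte) gives 250 MB/s per direction.
print(2.5e9 / 10 / 1e6, "MB/s")            # -> 250.0 MB/s
```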
Applications
PCI Express operates in consumer, server, and industrial applications, as a motherboard-level interconnect (to link motherboard-mounted peripherals), a passive backplane interconnect and as an expansion card interface for add-in boards. In virtually all modern PCs, from consumer laptops and desktops to enterprise servers, the PCIe bus serves as the primary motherboard-level interconnect, connecting the host system processor with both integrated peripherals (surface-mounted ICs) and add-on peripherals (expansion cards). In some of these systems, the PCIe bus co-exists with one or more legacy PCI buses, for backward compatibility with the large body of legacy PCI peripherals.
PCI Express has replaced AGP as the default interface for graphics cards on new systems. Almost all models of graphics cards released since 2010 by AMD (ATI) and Nvidia use PCI Express. AMD, Nvidia, and Intel have released motherboard chipsets that support as many as four PCIe ×16 slots, allowing tri-GPU and quad-GPU card configurations.

External GPUs
Theoretically, external PCIe could give a notebook the graphics power of a desktop, by connecting a notebook with any PCIe desktop video card (enclosed in its own external housing, with a power supply and cooling); this is possible with an ExpressCard or Thunderbolt interface. An ExpressCard interface provides bit rates of 5 Gbit/s (0.5 GB/s throughput), whereas a Thunderbolt interface provides bit rates of up to 40 Gbit/s (5 GB/s throughput). In 2006, Nvidia developed the Quadro Plex external PCIe family of GPUs that can be used for advanced graphics applications for the professional market. These video cards require a PCI Express ×8 or ×16 slot for the host-side card, which connects to the Plex via a VHDCI cable carrying eight PCIe lanes. In 2008, AMD announced the ATI XGP technology, based on a proprietary cabling system that is compatible with PCIe ×8 signal transmissions. This connector is available on the Fujitsu Amilo and the Acer Ferrari One notebooks. Fujitsu launched their AMILO GraphicBooster enclosure for XGP soon thereafter. Around 2010 Acer launched the Dynavivid graphics dock for XGP.
In 2010, external card hubs were introduced that can connect to a laptop or desktop through a PCI ExpressCard slot. These hubs can accept full-sized graphics cards. Examples include the MSI GUS, Village Instrument's ViDock, the Asus XG Station, the Bplus PE4H V3.2 adapter, as well as more improvised DIY devices. However, such solutions are limited by the size (often only ×1) and version of the available PCIe slot on a laptop. The Intel Thunderbolt interface has provided a new option to connect with a PCIe card externally. Magma has released the ExpressBox 3T, which can hold up to three PCIe cards (two at ×8 and one at ×4). MSI also released the Thunderbolt GUS II, a PCIe chassis dedicated for video cards. Other products such as Sonnet's Echo Express and mLogic's mLink are Thunderbolt PCIe chassis in a smaller form factor. In 2017, more fully featured external card hubs were introduced, such as the Razer Core, which has a full-length PCIe ×16 interface.

Storage devices
The PCI Express protocol can be used as a data interface to flash memory devices, such as memory cards and solid-state drives (SSDs).
The XQD card is a memory card format utilizing PCI Express, developed by the CompactFlash Association, with transfer rates of up to 1 GB/s. Many high-performance, enterprise-class SSDs are designed as PCI Express RAID controller cards. Before NVMe was standardized, many of these cards utilized proprietary interfaces and custom drivers to communicate with the operating system; they had much higher transfer rates (over 1 GB/s) and IOPS (over one million I/O operations per second) when compared to Serial ATA or SAS drives. For example, in 2011 OCZ and Marvell co-developed a native PCI Express solid-state drive controller for a PCI Express 3.0 ×16 slot with a maximum capacity of 12 TB and performance of up to 7.2 GB/s in sequential transfers and up to 2.52 million IOPS in random transfers. SATA Express was an interface for connecting SSDs through SATA-compatible ports, optionally providing multiple PCI Express lanes as a pure PCI Express connection to the attached storage device. M.2 is a specification for internally mounted computer expansion cards and associated connectors, which can use up to four PCI Express lanes. PCI Express storage devices can implement both the AHCI logical interface for backward compatibility and the NVM Express logical interface, which provides much faster I/O operations by utilizing the internal parallelism offered by such devices. Enterprise-class SSDs can also implement SCSI over PCI Express.

Cluster interconnect
Certain data-center applications (such as large computer clusters) require the use of fiber-optic interconnects due to the distance limitations inherent in copper cabling. Typically, a network-oriented standard such as Ethernet or Fibre Channel suffices for these applications, but in some cases the overhead introduced by routable protocols is undesirable and a lower-level interconnect, such as InfiniBand, RapidIO, or NUMAlink, is needed. Local-bus standards such as PCIe and HyperTransport can in principle be used for this purpose, but solutions are only available from niche vendors such as Dolphin ICS and TTTech Auto.
Competing protocols
PCIe 1.0 initially competed with PCI-X 2.0, with both specifications being ratified in 2003 and offering roughly the same maximum bandwidth (~4 GB/s). By 2005, however, PCIe emerged as the dominant technology. Other communications standards based on high-bandwidth serial architectures include InfiniBand, RapidIO, HyperTransport, Intel QuickPath Interconnect, the Mobile Industry Processor Interface (MIPI), and NVLink. Differences are based on the trade-offs between flexibility and extensibility vs. latency and overhead. For example, making the system hot-pluggable, as with InfiniBand but not PCI Express, requires that software track network topology changes. Another example is making the packets shorter to decrease latency (as is required if a bus must operate as a memory interface). Smaller packets mean packet headers consume a higher percentage of the packet, thus decreasing the effective bandwidth. Examples of bus protocols designed for this purpose are RapidIO and HyperTransport. PCI Express is targeted by design as a system interconnect (local bus) rather than a device interconnect or routed network protocol. Additionally, its design goal of software transparency constrains the protocol and raises its latency somewhat. Delays in PCIe 4.0 implementations led to the Gen-Z consortium, the CCIX effort and an open Coherent Accelerator Processor Interface (CAPI) all being announced by the end of 2016. On 11 March 2019, Intel presented Compute Express Link (CXL), a new interconnect bus, based on the PCI Express 5.0 physical layer infrastructure. The initial promoters of the CXL specification included Alibaba, Cisco, Dell EMC, Facebook, Google, HPE, Huawei, Intel and Microsoft.
Integrators list
The PCI-SIG Integrators List lists products made by PCI-SIG member companies that have passed compliance testing. The list includes switches, bridges, NICs, SSDs, and other devices.