For communication to occur, protocols have to be selected. The rules can be expressed by algorithms and data structures. Hardware and operating system independence is enhanced by expressing the algorithms in a portable programming language. Source independence of the specification provides wider interoperability. Protocol standards are commonly created by obtaining the approval or support of a
standards organization, which initiates the standardization process. The members of the standards organization agree to adhere to the work result on a voluntary basis. Often, the members are in control of large market shares relevant to the protocol, and in many cases, standards are enforced by law or the government because they are thought to serve an important public interest, so getting approval can be very important for the protocol.
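The idea that protocol rules can be expressed as algorithms operating on data structures can be sketched in a few lines of Python. The frame format below is hypothetical, invented purely for illustration; the point is that sender and receiver interoperate only because both implement the same agreed-upon rules.

```python
import struct

# Hypothetical frame format, for illustration only (not any real standard):
# 1-byte version, 1-byte message type, 2-byte big-endian payload length,
# followed by the payload itself.
HEADER = struct.Struct("!BBH")

def encode(version: int, msg_type: int, payload: bytes) -> bytes:
    """The sender's rules, expressed as an algorithm over the data structure."""
    return HEADER.pack(version, msg_type, len(payload)) + payload

def decode(frame: bytes) -> tuple:
    """The receiver applies the same rules; any disagreement about the
    format (field widths, byte order) would break interoperability."""
    version, msg_type, length = HEADER.unpack_from(frame)
    payload = frame[HEADER.size:HEADER.size + length]
    if len(payload) != length:
        raise ValueError("truncated frame")
    return version, msg_type, payload

frame = encode(1, 2, b"hello")
```

Because the rules are written in a portable language against explicit data structures, the same specification can be implemented on any hardware or operating system.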
== The need for protocol standards ==
The need for protocol standards can be shown by looking at what happened to the
Binary Synchronous Communications (BSC) protocol invented by
IBM. BSC is an early link-level protocol used to connect two separate nodes. It was originally not intended to be used in a multinode network, but doing so revealed several deficiencies of the protocol. In the absence of standardization, manufacturers and organizations felt free to enhance the protocol, creating incompatible versions on their networks. In some cases, this was deliberately done to discourage users from using equipment from other manufacturers. There are more than 50 variants of the original bi-sync protocol. One can assume that a standard would have prevented at least some of this from happening. In some cases, protocols gain market dominance without going through a standardization process. Such protocols are referred to as
de facto standards. De facto standards are common in emerging markets, niche markets, or markets that are
monopolized (or
oligopolized). They can hold a market in a tight grip, especially when used to shut out competition. From a historical perspective, standardization should be seen as a measure to counteract the ill effects of de facto standards. Positive exceptions exist; a de facto standard operating system like Linux does not have this grip on its market because its sources are published and maintained in an open way, inviting competition.
== Standards organizations ==
Some of the
standards organizations of relevance for communication protocols are the
International Organization for Standardization (ISO), the
International Telecommunication Union (ITU), the
Institute of Electrical and Electronics Engineers (IEEE), and the
Internet Engineering Task Force (IETF). The IETF maintains the protocols in use on the Internet. The IEEE controls many software and hardware protocols in the electronics industry for commercial and consumer devices. The ITU is an umbrella organization of telecommunication engineers designing the
public switched telephone network (PSTN), as well as many
radio communication systems. For
marine electronics, the
NMEA standards are used. The
World Wide Web Consortium (W3C) produces protocols and standards for Web technologies. International standards organizations are supposed to be more impartial than local organizations with a national or commercial self-interest to consider. Standards organizations also do research and development for standards of the future. In practice, the standards organizations mentioned cooperate closely with each other. Multiple standards bodies may be involved in the development of a protocol. If they are uncoordinated, then the result may be multiple, incompatible definitions of a protocol, or multiple, incompatible interpretations of messages; important invariants in one definition (e.g., that
time-to-live values are
monotone decreasing to prevent stable
routing loops) may not be respected in another.
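The time-to-live invariant mentioned above can be illustrated with a minimal sketch (not any particular router implementation): each forwarding node decrements the TTL and discards the packet once it expires, so even a packet caught in a routing loop is dropped after a bounded number of hops. An implementation that failed to decrement the TTL would violate the invariant and allow packets to circulate indefinitely.

```python
from typing import Optional

def forward(packet: dict) -> Optional[dict]:
    """Decrement the TTL; drop the packet once it is exhausted.
    As long as every node respects this monotone decrease, a packet
    trapped in a routing loop survives at most its initial TTL in hops."""
    if packet["ttl"] <= 1:
        return None  # TTL exhausted: drop instead of looping forever
    return {**packet, "ttl": packet["ttl"] - 1}

# A packet bouncing in a loop is discarded after a bounded number of hops.
packet = {"dst": "10.0.0.1", "ttl": 3}
hops = 0
while packet is not None:
    packet = forward(packet)
    hops += 1
```

The loop terminates because every hop strictly decreases the TTL; with an initial TTL of 3, the packet is dropped on the third forwarding attempt.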
== The standardization process ==
In the ISO, the standardization process starts with the commissioning of a sub-committee workgroup. The workgroup issues working drafts and discussion documents to interested parties (including other standards bodies) to provoke discussion and comments. This generates many questions, much discussion, and usually some disagreement. These comments are taken into account, and a
draft proposal is produced by the working group. After feedback, modification, and compromise, the proposal reaches the status of a
draft international standard, and ultimately an
international standard. International standards are reissued periodically to address deficiencies and reflect changing views on the subject.
== OSI standardization ==
A lesson learned from
ARPANET, the predecessor of the Internet, was that protocols need a framework to operate. It is therefore important to develop a general-purpose, future-proof framework suitable for
structured protocols (such as layered protocols) and their standardization. This would prevent protocol standards with overlapping functionality and would allow a clear definition of the responsibilities of a protocol at the different levels (layers). This gave rise to the
Open Systems Interconnection model (OSI model), which is used as a framework for the design of standard protocols and services conforming to the various layer specifications. In the OSI model, communicating systems are assumed to be connected by an underlying physical medium providing a basic transmission mechanism. The layers above it are numbered. Each layer provides service to the layer above it using the services of the layer immediately below it. The top layer provides services to the application process. The layers communicate with each other by means of an interface, called a
service access point. Corresponding layers at each system are called
peer entities. To communicate, two peer entities at a given layer use a protocol specific to that layer, which is implemented by using services of the layer below. For each layer, there are two types of standards: protocol standards defining how peer entities at a given layer communicate, and service standards defining how a given layer communicates with the layer above it. In the OSI model, the layers and their functionality are (from highest to lowest):

• The application layer may provide the following services to the application processes: identification of the intended communication partners, establishment of the necessary authority to communicate, determination of availability and authentication of the partners, agreement on privacy mechanisms for the communication, agreement on responsibility for error recovery and procedures for ensuring data integrity, synchronization between cooperating application processes, identification of any constraints on syntax (e.g., character sets and data structures), determination of cost and acceptable quality of service, and selection of the dialogue discipline, including required logon and logoff procedures.
• The presentation layer may provide the following services to the application layer: a request for the establishment of a session, data transfer, negotiation of the syntax to be used between the application layers, any necessary syntax transformations, and formatting and special-purpose transformations (e.g., data compression and data encryption).
• The session layer may provide the following services to the presentation layer: establishment and release of session connections, normal and expedited data exchange, a quarantine service that allows the sending presentation entity to instruct the receiving session entity not to release data to its presentation entity without permission, interaction management so presentation entities can control whose turn it is to perform certain control functions, resynchronization of a session connection, and reporting of unrecoverable exceptions to the presentation entity.
• The transport layer provides reliable and transparent data transfer in a cost-effective way as required by the selected quality of service. It may multiplex several transport connections onto one network connection, or split one transport connection into several network connections.
• The network layer does the setup, maintenance and release of network paths between transport peer entities. When relays are needed, routing and relay functions are provided by this layer. The quality of service is negotiated between network and transport entities at the time the connection is set up. This layer is also responsible for network congestion control.
• The data link layer does the setup, maintenance and release of data link connections. Errors occurring in the physical layer are detected and may be corrected. Errors are reported to the network layer. The exchange of data link units (including flow control) is defined by this layer.
• The physical layer describes details like the electrical characteristics of the physical connection, the transmission techniques used, and the setup, maintenance and clearing of physical connections.

In contrast to the
TCP/IP layering scheme, which assumes a connectionless network, RM/OSI assumed a connection-oriented network. Connection-oriented networks are more suitable for wide area networks, and connectionless networks are more suitable for local area networks. Connection-oriented communication requires some form of session and (virtual) circuits, hence the session layer, which is absent in the TCP/IP model. The constituent members of ISO were mostly concerned with wide area networks, so the development of RM/OSI concentrated on connection-oriented networks; connectionless networks were first mentioned in an addendum to RM/OSI and later incorporated into an update to RM/OSI.

At the time, the IETF had to cope with this and with the fact that the Internet needed protocols that simply did not exist. As a result, the IETF developed its own standardization process based on "rough consensus and running code". The standardization process is described by . Nowadays, the IETF has become a standards organization for the protocols in use on the Internet. RM/OSI has extended its model to include connectionless services, and because of this, both TCP and IP could be developed into international standards.

== Wire image ==