
Web server

A web server is computer software that accepts requests via HTTP or its secure variant HTTPS. A user agent, commonly a web browser or web crawler, initiates communication by making a request for a web page or other resource using HTTP, and the server responds with the content of that resource or an error message. A web server can also accept and store resources sent from the user agent if configured to do so.
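As a minimal, hedged illustration of this request/response exchange (a toy sketch in Python, not a description of how any particular web server is implemented; the port number and the single in-memory resource are assumptions made for the example), a handler can reply either with the content of a requested resource or with an error message:

    from http.server import BaseHTTPRequestHandler, HTTPServer

    # Illustrative in-memory "website" with a single resource.
    RESOURCES = {"/": b"<html><body>Hello from a tiny web server</body></html>"}

    class Handler(BaseHTTPRequestHandler):
        def do_GET(self):
            body = RESOURCES.get(self.path)
            if body is None:
                self.send_error(404, "Not Found")        # error message response
                return
            self.send_response(200)                      # success: send the resource content
            self.send_header("Content-Type", "text/html")
            self.send_header("Content-Length", str(len(body)))
            self.end_headers()
            self.wfile.write(body)

    if __name__ == "__main__":
        HTTPServer(("0.0.0.0", 8080), Handler).serve_forever()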

History
This is a very brief history of web server programs, so some information necessarily overlaps with the histories of web browsers, the World Wide Web and the Internet; therefore, for the sake of clarity and understandability, some key historical information reported below may be similar to that found in one or more of the above-mentioned history articles.

Initial WWW project (1989–1991)

In March 1989, Sir Tim Berners-Lee proposed a new project to his employer CERN, with the goal of easing the exchange of information between scientists by using a hypertext system. The proposal, titled "HyperText and CERN", asked for comments and was read by several people. In October 1990 the proposal was reformulated and enriched (with Robert Cailliau as co-author) and finally approved.

Between late 1990 and early 1991 the project resulted in Berners-Lee and his developers writing and testing several software libraries along with three programs, which initially ran on the NeXTSTEP OS installed on NeXT workstations. Soon after, those programs, along with their source code, were made available to people interested in their usage. This freed web server developers from any possible legal issue about the development of derivative work based on that source code (a threat that in practice never existed).

At the beginning of 1994, the most notable among new web servers was NCSA httpd, which ran on a variety of Unix-based OSs and could serve dynamically generated content by implementing the POST HTTP method and the CGI to communicate with external programs. These capabilities, along with the multimedia features of NCSA's Mosaic browser (also able to manage HTML forms in order to send data to a web server), highlighted the potential of web technology for publishing and for distributed computing applications.

In the second half of 1994, the development of NCSA httpd stalled to the point that a group of external software developers, webmasters and other professionals interested in that server started to write and collect patches, which was possible because the NCSA httpd source code was publicly available. At the beginning of 1995 those patches were all applied to the last release of the NCSA source code and, after several tests, the Apache HTTP Server project was started.

At the end of 1994, a new commercial web server, named Netsite, was released with specific features. It was the first of many similar products that were developed first by Netscape, then by Sun Microsystems, and finally by Oracle Corporation. In mid-1995, the first version of IIS was released by Microsoft for the Windows NT OS. This marked the entry, into the field of World Wide Web technologies, of a commercial developer and vendor that has played, and still plays, a key role on both sides (client and server) of the web.

In the second half of 1995, usage of the CERN and NCSA web servers started to decline (in global percentage usage) because of the widespread adoption of new web servers which had a much faster development cycle along with more features, more fixes applied and better performance than the previous ones.

Explosive growth and competition (1996–2014)

At the end of 1996, there were already over fifty known, different web server software programs available to everybody who wanted to own an Internet domain name or to host websites.
Many of them were short-lived and were replaced by other web servers.

The publication of the RFCs about protocol versions HTTP/1.0 (1996) and HTTP/1.1 (1997, 1999) forced most web servers to comply (not always completely) with those standards. The use of TCP/IP persistent connections (HTTP/1.1) required web servers both to increase the maximum number of concurrent connections allowed and to improve their level of scalability.

Between 1996 and 1999, Netscape Enterprise Server and Microsoft's IIS emerged among the leading commercial options, whereas among the freely available and open-source programs the Apache HTTP Server held the lead as the preferred server (because of its reliability and its many features). In those years there was also another commercial web server, called Zeus (now discontinued), that was known as one of the fastest and most scalable web servers available on the market, at least until the first decade of the 2000s, despite its low percentage of usage.

Apache was the most used web server from mid-1996 to the end of 2015 when, after a few years of decline, it was surpassed initially by IIS and then by Nginx; afterward IIS dropped to much lower percentages of usage than Apache (see also market share). From 2005–2006, Apache started to improve its speed and its scalability by introducing new performance features (e.g., the event MPM and a new content cache). As those new performance improvements were initially marked as experimental, they were not enabled by its users for a long time, and so Apache suffered even more from the competition of commercial servers and, above all, of other open-source servers which had already achieved far superior performance (mostly when serving static content) since the beginning of their development and which, at the time of the Apache decline, were also able to offer a long enough list of well-tested advanced features.

In the early 2000s, new commercial and highly competitive web servers (e.g., LiteSpeed) emerged, along with many other open-source programs such as Hiawatha, Cherokee HTTP server, Lighttpd and Nginx, and other derived and related products also available with commercial support.

Around 2007–2008, most popular web browsers increased their previous default limit of 2 persistent connections per host (domain), a limit recommended by RFC 2616, to 4, 6 or 8 persistent connections per host, in order to speed up the retrieval of heavy web pages with lots of images, and to mitigate the shortage of persistent connections dedicated to dynamic objects used for bi-directional notifications of events in web pages. Within a year, these changes, on average, nearly tripled the maximum number of persistent connections that web servers had to manage. This trend (of increasing the number of persistent connections) definitely gave a strong impetus to the adoption of reverse proxies in front of slower web servers, and it also gave one more chance to the emerging new web servers that could show all their speed and their capability to handle very high numbers of concurrent connections without requiring too many hardware resources (expensive computers with lots of CPUs, RAM and fast disks).

New challenges (2015 and later years)

In 2015, the RFC for the new protocol version HTTP/2 was published, and as the implementation of the new specification was not trivial at all, a dilemma arose among developers of less popular web servers (e.g., those with a percentage of usage lower than 1%–2%)
about whether or not to add support for that new protocol version. In fact, supporting HTTP/2 often required radical changes to their internal implementation due to many factors (practically always required encrypted connections, the capability to distinguish between HTTP/1.x and HTTP/2 connections on the same TCP port, binary representation of HTTP messages, message priority, compression of HTTP headers, use of streams, also known as TCP/IP sub-connections, and related flow control, etc.), and so a few developers of those web servers opted not to support the new HTTP/2 version, at least in the near future.

In 2020–2021 the dynamics of HTTP/2 implementation (by top web servers and popular web browsers) were partly replicated after the publication of advanced drafts of the future RFC about the HTTP/3 protocol.
Technical overview
The following technical overview should be considered only as an attempt to give a few very limited examples of features that may be implemented in a web server and of tasks that it may perform, in order to sketch a sufficiently broad picture of the topic.

A web server program plays the role of a server in a client–server model by implementing one or more versions of the HTTP protocol, often including the HTTPS secure variant and other features and extensions that are considered useful for its planned usage. The complexity and the efficiency of a web server program may vary a lot depending on its design, on the features it implements and on its planned usage. Whatever its design, for every client request a web server first has:
• to read the HTTP request message;
• to interpret it;
• to verify its syntax;
• to identify known HTTP headers and to extract their values from them.
Once an HTTP request message has been decoded and verified, its values can be used to determine whether that request can be satisfied or not. This requires many other steps, including security checks.

URL normalization

Web server programs usually perform some type of URL normalization (of the URL found in most HTTP request messages) in order to:
• make the resource path always a clean, uniform path from the root directory of the website;
• lower security risks (e.g., by intercepting more easily attempts to access static resources outside the root directory of the website, or to access portions of paths below the website root directory that are forbidden or which require authorization);
• make the paths of web resources more recognizable by human beings and by web log analysis programs (also known as log analyzers or statistical applications).
The term URL normalization refers to the process of modifying and standardizing a URL in a consistent manner. There are several types of normalization that may be performed, including the conversion of the scheme and host to lowercase. Among the most important normalizations are the removal of "." and ".." path segments and the addition of trailing slashes to a non-empty path component (a minimal illustrative sketch of some of these steps is given after the URL mapping discussion below).

URL mapping

URL mapping is the process by which a web server or application framework determines how an incoming URL request is routed to the appropriate resource, handler, or action. Modern URL mapping mechanisms analyse the structure of the requested URL and use routing rules or configuration patterns to deliver static resources, invoke dynamic handlers, or perform rewrites and redirects without directly relying on file system paths. This approach allows clean, human-readable URLs and flexible application architectures.

In practice, web server programs that implement advanced features beyond simple static content serving (e.g., a URL rewrite engine, dynamic content serving) usually have to figure out how that URL has to be handled as:
• a URL redirection, a redirection to another URL;
• a static request of file content;
• a dynamic request of:
  • a directory listing of files or other sub-directories contained in that directory;
  • other types of dynamic request, in order to identify the program or module processor able to handle that kind of URL path and to pass to it other URL parts (i.e., usually path-info and query string variables).
One or more configuration files of the web server may specify the mapping of parts of the URL path (e.g., initial parts of the file path, the filename extension and other path components) to a specific URL handler (file, directory, external program or internal module).
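As referenced above, the following is a minimal Python sketch of the URL normalization steps (lowercasing scheme and host, removing "." and ".." path segments); it is illustrative only and, among other simplifications, does not handle trailing slashes or percent-encoding:

    from urllib.parse import urlsplit, urlunsplit

    def normalize_url(url):
        """Tiny illustrative URL normalizer (not a complete implementation)."""
        parts = urlsplit(url)
        scheme = parts.scheme.lower()      # scheme and host are case-insensitive
        netloc = parts.netloc.lower()
        segments = []
        for segment in parts.path.split("/"):
            if segment in ("", "."):
                continue                   # drop empty and "." segments
            if segment == "..":
                if segments:
                    segments.pop()         # ".." removes the previous segment...
                continue                   # ...but can never climb above the root
            segments.append(segment)
        path = "/" + "/".join(segments)
        return urlunsplit((scheme, netloc, path, parts.query, parts.fragment))

    # Both normalize to clean paths under the website root:
    print(normalize_url("HTTP://WWW.Example.COM/a/./b/../c"))     # http://www.example.com/a/c
    print(normalize_url("http://www.example.com/../etc/passwd"))  # http://www.example.com/etc/passwd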
When a web server implements one or more of the above-mentioned advanced features, the path part of a valid URL may not always match an existing file system path under the website directory tree (a file or a directory in the file system), because it can refer to a virtual name of an internal or external module processor for dynamic requests.

URL path translation to file system

Web server programs are able to translate a URL path (all of it, or part of it) that refers to a physical file system path into an absolute path under the target website's root directory. Example of a dynamic request using a program file to generate output:

    http://www.example.com/cgi-bin/forum.php?action=view&orderby=thread&date=2021-10-15

The client's user agent connects to www.example.com and then sends the following HTTP/1.1 request:

    GET /cgi-bin/forum.php?action=view&orderby=thread&date=2021-10-15 HTTP/1.1
    Host: www.example.com
    Connection: keep-alive

The result is the local file path of the program (in this example, a PHP program):

    /home/www/www.example.com/cgi-bin/forum.php

The web server executes that program, passing in the path-info and the query string action=view&orderby=thread&date=2021-10-15 so that the program has the info it needs to run. (In this case, it will return an HTML document containing a view of forum entries ordered by thread from October 15, 2021.) In addition to this, the web server reads the data sent back by the external program and resends that data to the client that made the request.

Manage request message

Once a request has been read, interpreted, and verified, it has to be managed depending on its method, its URL, and its parameters, which may include values of HTTP headers. In practice, the web server has to handle the request by using one of the available response paths (e.g., sending an error message, redirecting the request, serving static content or serving dynamic content).

Serve dynamic content

If a web server program is capable of serving dynamic content and has been configured to do so, then it is able to communicate with the proper internal module or external program (associated with the requested URL path) in order to pass to it the parameters of the client request. After that, the web server program reads the data response from it (data that it has generated, often on the fly) and then resends it to the client program that made the request.

Note: when serving static and dynamic content, a web server program usually also has to support the POST HTTP method, in order to be able to safely receive data from clients and thus to host websites with interactive forms that may send large data sets (e.g., lots of data entry or file uploads) to the web server, to external programs or to modules.

In order to be able to communicate with its internal modules or external programs, a web server program must implement one or more of the many available gateway interfaces (see also Web Server Gateway Interfaces used for dynamic content). The three standard and historical gateway interfaces are the following:
• CGI: an external CGI program is run by the web server program for each dynamic request; the web server program then reads the generated data response from it and resends it to the client.
• SCGI: an external SCGI program (usually a process) is started once by the web server program or by some other program or process and then waits for network connections; every time there is a new request for it, the web server program makes a new network connection to it in order to send the request parameters and to read its data response, after which the network connection is closed.
• FastCGI: an external FastCGI program (usually a process) is started once by the web server program or by some other program or process and then waits for a network connection which is established permanently by the web server; the request parameters are sent and the data responses are read through that connection.
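To make the CGI flow described above more concrete, here is a hedged Python sketch of how a server process might run an external program once per request and relay its output; the environment variable names follow the CGI convention, while the program path and the usage shown are purely illustrative (a real server would, for example, invoke the PHP interpreter for a .php file rather than executing it directly):

    import subprocess

    def run_cgi(program_path, query_string, path_info=""):
        """Run an external program once for this request (CGI style) and return its output."""
        env = {
            "GATEWAY_INTERFACE": "CGI/1.1",
            "REQUEST_METHOD": "GET",
            "QUERY_STRING": query_string,    # e.g. "action=view&orderby=thread&date=2021-10-15"
            "PATH_INFO": path_info,
        }
        # The web server waits for the program to finish, then forwards its stdout
        # (the headers and body produced by the program) back to the client.
        completed = subprocess.run([program_path], env=env, capture_output=True, check=True)
        return completed.stdout

    # Hypothetical usage, mirroring the forum example above:
    # body = run_cgi("/home/www/www.example.com/cgi-bin/forum.php",
    #                "action=view&orderby=thread&date=2021-10-15")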
Directory listings

A web server program may be capable of managing the dynamic generation (on the fly) of a directory index listing files and sub-directories. If a web server program is configured to do so, a requested URL path matches an existing directory, access to it is allowed, and no static index file is found under that directory, then a web page (usually in HTML format) containing the list of files or sub-directories of the above-mentioned directory is dynamically generated (on the fly). If it cannot be generated, an error is returned.

Some web server programs allow the customization of directory listings by allowing the usage of a web page template (an HTML document containing placeholders, e.g., $(FILE_NAME), $(FILE_SIZE), etc., that are replaced with the field values of each file entry found in the directory by the web server, e.g., index.tpl), or the usage of HTML with embedded source code that is interpreted and executed (e.g., index.asp), or by supporting the usage of dynamic index programs such as CGIs, SCGIs or FCGIs (e.g., index.cgi, index.php, index.fcgi).

The usage of dynamically generated directory listings is usually avoided or limited to a few selected directories of a website, because that generation takes much more OS resources than sending a static index page. The main usage of directory listings is to allow the download of files (usually when their names, sizes, modification date-times or file attributes may change randomly and frequently) as they are, without requiring the requesting user to provide further information.

Program or module processing

An external program or an internal module (processing unit) can execute some sort of application function that may be used to get data from, or to store data to, one or more data repositories, e.g.:
• files (file system);
• databases (DBs);
• other sources located on the local computer or on other computers.
A processing unit can return any kind of web content, possibly by using data retrieved from a data repository, e.g.:
• a document (e.g., HTML, XML, etc.);
• an image;
• a video;
• structured data (e.g., data that may be used to update one or more values displayed by a dynamic page (DHTML) of a web interface and that maybe was requested by an XMLHttpRequest API) (see also: dynamic page).
In practice, whenever there is content that may vary depending on one or more parameters contained in the client request or in configuration settings, it is usually generated dynamically.

Send response message

Web server programs are able to send response messages as replies to client request messages, including error responses such as HTTP server errors due to internal server errors. When an error response or message is received by a client browser, then, if it is related to the main user request (e.g., the URL of a web resource such as a web page), that error message is usually shown in some browser window or message box.

URL authorization

A web server program may be able to verify whether the requested URL path:
• can be freely accessed by everybody;
• requires user authentication (a request for user credentials such as user name and password);
• is forbidden to some or all kinds of users.
If the authorization or access-rights feature has been implemented and enabled and access to the web resource is not granted, then, depending on the required access rights, a web server program:
• can deny access by sending a specific error message (e.g., access forbidden);
• may deny access by sending a specific error message (e.g., access unauthorized) that usually forces the client browser to ask the human user to provide the required user credentials; if authentication credentials are provided, then the web server program verifies them and accepts or rejects them.
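A hedged Python sketch of that access decision follows; the paths, realm and credentials are invented for the example, and a real server would consult its configuration and an authentication back end instead:

    import base64

    FORBIDDEN = ("/internal/",)                        # access forbidden to everybody
    PROTECTED = {"/private/": ("alice", "secret")}     # hypothetical credentials

    def authorize(path, authorization_header=None):
        """Return an HTTP status code and an optional extra response header."""
        if path.startswith(FORBIDDEN):
            return 403, None                           # forbidden: no credentials will help
        for prefix, (user, password) in PROTECTED.items():
            if path.startswith(prefix):
                expected = "Basic " + base64.b64encode(f"{user}:{password}".encode()).decode()
                if authorization_header == expected:
                    return 200, None                   # credentials verified and accepted
                # Ask the browser to prompt the human user for credentials.
                return 401, ("WWW-Authenticate", 'Basic realm="example"')
        return 200, None                               # freely accessible by everybody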
URL redirection

A web server program may have the capability of performing URL redirections to new URLs (new locations), which consists in replying to a client request message with a response message containing a new URL suited to access a valid or existing web resource (the client should then repeat the request with the new URL). URL redirection of location is used for various purposes. If web resource data is sent back to the client, then it can be static content or dynamic content, depending on how it has been retrieved (from a file or from the output of some program or module).

Content cache

In order to speed up web server responses by lowering average HTTP response times and the hardware resources used, many popular web servers implement one or more content caches, each one specialized in a content category. Content is usually cached by its origin:
• static content:
  • file cache;
• dynamic content:
  • dynamic cache (module or program output).

File cache

Historically, static content found in files which had to be accessed frequently, randomly and quickly has been stored mostly on electro-mechanical disks since the mid-to-late 1960s and 1970s; reads from and writes to those kinds of devices have always been considered very slow operations when compared to RAM speed, and so, since early OSs, first disk caches and then OS file cache sub-systems were developed to speed up I/O operations on frequently accessed data.

Even with the aid of an OS file cache, the relative or occasional slowness of I/O operations involving directories and files stored on disks soon became a bottleneck to the increase in performance expected from top-level web servers, especially since the mid-to-late 1990s, when web Internet traffic started to grow exponentially along with the constant increase in the speed of Internet and network lines.

The problem of how to further and efficiently speed up the serving of static files, thus increasing the maximum number of requests/responses per second (RPS), started to be studied and researched in the mid-1990s, with the aim of proposing useful cache models that could be implemented in web server programs. In practice, nowadays many web server programs include their own userland file cache, tailored for web server usage and using their own specific implementation and parameters. The widespread adoption of RAID and of fast solid-state drives (storage hardware with very high I/O speed) has slightly reduced, but of course not eliminated, the advantage of having a file cache incorporated in a web server.
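A minimal Python sketch of such a userland file cache follows, with all policy details (memory bounds, eviction, negative caching) left out; content is kept in RAM and re-read only when the file's modification time changes:

    import os

    _file_cache = {}   # path -> (mtime, content)

    def cached_read(path):
        """Return file content, reading from disk only when the file has changed."""
        mtime = os.stat(path).st_mtime
        entry = _file_cache.get(path)
        if entry is not None and entry[0] == mtime:
            return entry[1]                  # cache hit: no disk read of the content
        with open(path, "rb") as f:
            content = f.read()               # cache miss (or stale): read from disk once
        _file_cache[path] = (mtime, content)
        return content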
Dynamic cache

Dynamic content, output by an internal module or an external program, may not always change very frequently (given a unique URL with keys or parameters), and so, maybe for a while (e.g., from one second to several hours or more), the resulting output can be cached in RAM or even on a fast disk (a minimal sketch of such a time-based cache is given at the end of this section).

The typical usage of a dynamic cache is when a website has dynamic web pages about news, weather, images, maps, etc. that do not change frequently (e.g., every n minutes) and that are accessed by a huge number of clients per minute or per hour; in those cases it is useful to return cached content too (without calling the internal module or the external program), because clients often do not have an updated copy of the requested content in their browser caches.

In most cases, however, those kinds of caches are implemented by external servers (e.g., a reverse proxy) or by storing dynamic data output on separate computers managed by specific applications (e.g., memcached), in order not to compete for hardware resources (CPU, RAM, disks) with the web servers themselves.

Kernel-mode and user-mode web servers

Web server software can either be incorporated into the OS and executed in kernel space, or it can be executed in user space (like other regular applications). Web servers that run in kernel mode (usually called kernel-space web servers) can have direct access to kernel resources and so they can be, in theory, faster than those running in user mode; however, there are disadvantages in running a web server in kernel mode (e.g., difficulties in developing and debugging the software), and run-time critical errors may lead to serious problems in the OS kernel.

Web servers that run in user mode have to ask the system for permission to use more memory or more CPU resources. Not only do these requests to the kernel take time, but they might not always be satisfied because the system reserves resources for its own usage and has the responsibility to share hardware resources with all the other running applications. Executing in user mode can also mean using more buffer or data copies (between user space and kernel space), which can lead to a decrease in the performance of a user-mode web server.

Nowadays almost all web server software is executed in user mode (because many of the aforementioned small disadvantages have been overcome by faster hardware, new OS versions, much faster OS system calls and new optimized web server software). See also the comparison of web server software to discover which of them run in kernel mode and which run in user mode (also referred to as kernel space or user space).
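As referenced in the dynamic cache discussion above, a time-based (TTL) cache for dynamic output can be sketched as follows; the cache key, the TTL value and the generator function are assumptions made for the example, and real deployments often keep this data in an external store such as memcached:

    import time

    _dynamic_cache = {}   # cache key (URL + query string) -> (expires_at, body)
    TTL_SECONDS = 60      # assumed acceptable staleness for this kind of content

    def get_or_generate(key, generate):
        """Return cached output while it is fresh; call the generator only on a miss."""
        now = time.time()
        entry = _dynamic_cache.get(key)
        if entry is not None and entry[0] > now:
            return entry[1]                              # still fresh: skip regeneration
        body = generate()                                # call the module/program only now
        _dynamic_cache[key] = (now + TTL_SECONDS, body)
        return body

    # Hypothetical usage:
    # body = get_or_generate("/weather?city=rome", lambda: render_weather_page("rome"))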
Performance
To improve the user experience (on the client or browser side), a web server should reply quickly (as soon as possible) to client requests; unless content response is throttled (by configuration) for some types of files (e.g., big or huge files), returned data content should also be sent as fast as possible (high transfer speed). In other words, a web server should always be very responsive, even under a high load of web traffic, in order to keep the total user wait (the sum of browser time + network time + web server response time) for a response as low as possible.

Performance metrics

For web server software, the main key performance metrics (measured under varying operating conditions) usually are at least the following:
• number of requests per second (RPS, similar to queries per second, depending on HTTP version and configuration, type of HTTP requests and other operating conditions);
• number of connections per second (CPS), i.e., the number of new connections per second accepted by the web server (useful when using HTTP/1.0 or HTTP/1.1 with a very low limit of requests or responses per connection, i.e., 1 .. 20);
• latency + response time for each new client request; usually the benchmark tool shows how many requests have been satisfied within a given time span (e.g., within 1 ms, 3 ms, 5 ms, 10 ms, 20 ms, 30 ms, 40 ms) or reports the shortest, the average and the longest response time;
• network throughput of responses, in bytes per second.
Among the operating conditions, the number (1 .. n) of concurrent client connections used during a test is an important parameter, because it allows one to correlate the concurrency level supported by the web server with the results of the tested performance metrics.

Software efficiency

The specific web server software design and model adopted:
• single process or multi-process;
• single thread (no threads) or multiple threads for each process;
• usage of coroutines or not;
... and other programming techniques, such as:
• minimization of possible CPU cache misses;
• minimization of possible CPU branch mispredictions in critical paths for speed;
• minimization of the number of system calls used to perform a certain function or task;
• other tricks;
... used to implement a web server program can strongly affect the performance, and in particular the scalability level, that can be achieved under heavy load or when using high-end hardware (many CPUs, disks and lots of RAM). In practice, some web server software models may require more OS resources (especially more CPU and more RAM) than others in order to work well and achieve target performance.
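As a hedged illustration of the process/thread model choice listed above, Python's standard library offers both a single-threaded and a thread-per-connection HTTP server; the snippet only shows how the model is selected and says nothing about any particular production web server:

    from http.server import HTTPServer, ThreadingHTTPServer, SimpleHTTPRequestHandler

    # Single-threaded model: requests are handled strictly one at a time.
    # server = HTTPServer(("0.0.0.0", 8080), SimpleHTTPRequestHandler)

    # Thread-per-connection model: each accepted connection is served in its own thread,
    # which usually improves responsiveness under concurrent load at the cost of more RAM.
    server = ThreadingHTTPServer(("0.0.0.0", 8080), SimpleHTTPRequestHandler)
    server.serve_forever()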
Operating conditions

There are many operating conditions that can affect the performance of a web server; performance values may vary depending on:
• the settings of the web server (including whether the log file is enabled or not, etc.);
• the HTTP version used by client requests;
• the average HTTP request type (method, length of HTTP headers and optional body);
• whether the requested content is static or dynamic;
• whether the content is cached or not cached (by the server or by the client);
• whether the content is compressed on the fly (when transferred), pre-compressed (i.e., when a file resource is stored on disk already compressed, so that the web server can send that file directly to the network with only an indication that its content is compressed) or not compressed at all;
• whether the connections are or are not encrypted;
• the average network speed between the web server and its clients;
• the number of active TCP connections;
• the number of active processes managed by the web server (including external CGI, SCGI and FCGI programs);
• the hardware and software limitations or settings of the OS of the computer(s) on which the web server runs;
• other minor conditions.

Benchmarking

The performance of a web server is typically benchmarked by using one or more of the available automated load testing tools.
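A very small, hedged sketch of what such a load test measures follows; it issues serial requests over a single connection stream, whereas real benchmarking tools use many concurrent connections and far more requests, and the target URL is purely illustrative:

    import time
    import urllib.request

    URL = "http://localhost:8080/"   # illustrative target
    N_REQUESTS = 200

    start = time.time()
    for _ in range(N_REQUESTS):
        with urllib.request.urlopen(URL) as response:
            response.read()                      # fully read each response body
    elapsed = time.time() - start

    # Requests per second and average response time for this serial test.
    print(f"{N_REQUESTS / elapsed:.1f} requests/second, "
          f"average response time {1000 * elapsed / N_REQUESTS:.1f} ms")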
Load limits
A web server (program installation) usually has pre-defined load limits for each combination of operating conditions, also because it is limited by OS resources and because it can handle only a limited number of concurrent client connections (usually between 2 and several tens of thousands for each active web server process; see also the C10k problem and the C10M problem). When a web server is near to, or over, its load limits, it gets overloaded and may thus become unresponsive.

Causes of overload

At any time web servers can be overloaded due to one or more of the following causes:
• Excess legitimate web traffic: thousands or even millions of clients connecting to the website in a short amount of time (e.g., the Slashdot effect).
• Distributed denial-of-service attacks: a denial-of-service attack (DoS attack) or distributed denial-of-service attack (DDoS attack) is an attempt to make a computer or network resource unavailable to its intended users.
• Computer worms that sometimes cause abnormal traffic because of millions of infected computers (not coordinated among them).
• XSS worms that can cause high traffic because of millions of infected browsers or web servers.
• Internet bots: traffic not filtered or limited on large websites with very few network resources (e.g., bandwidth) or hardware resources (CPUs, RAM, disks).
• Internet (network) slowdowns (e.g., due to packet losses), so that client requests are served more slowly and the number of connections increases so much that server limits are reached.
• Web servers serving dynamic content and waiting for slow responses coming from back-end computers (e.g., databases), maybe because of too many queries mixed with too many inserts or updates of DB data; in these cases web servers have to wait for back-end data responses before replying to HTTP clients, but during these waits too many new client connections or requests arrive and so they become overloaded.
• Partial unavailability of web servers (computers). This can happen because of required or urgent maintenance or upgrades, or because of hardware or software failures such as back-end (e.g., database) failures; in these cases the remaining web servers may get too much traffic and become overloaded.

Symptoms of overload

The symptoms of an overloaded web server are usually the following:
• Requests are served with (possibly long) delays (from one second to a few hundred seconds).
• The web server returns an HTTP error code, such as 500, 502, 503, 504, 408, or even an intermittent 404.
• The web server refuses or resets (interrupts) TCP connections before it returns any content.
• In very rare cases, the web server returns only a part of the requested content. This behavior can be considered a bug, even if it usually arises as a symptom of overload.

Anti-overload techniques

To partially overcome above-average load limits and to prevent overload, most popular websites use common techniques like the following:
• Tuning OS parameters for hardware capabilities and usage.
• Tuning web server parameters to improve their security and performance.
• Deploying caching techniques (not only for static content but, whenever possible, for dynamic content too).
• Managing network traffic, by using:
  • firewalls to block unwanted traffic coming from bad IP sources or having bad patterns;
  • HTTP traffic managers to drop, redirect or rewrite requests having bad HTTP patterns;
  • bandwidth management and traffic shaping, in order to smooth down peaks in network usage.
• Using different domain names, IP addresses and computers to serve different kinds of content (static and dynamic); the aim is to separate big or huge files (download.*, a domain that might also be replaced by a CDN) from small and medium-sized files (static.*) and from the main dynamic site (www.*, where some content may be stored in a back-end database); the idea is to be able to efficiently serve big or huge (over 10–1000 MB) files (maybe throttling downloads) and to fully cache small and medium-sized files, without affecting the performance of the dynamic site under heavy load, by using different settings for each (group of) web server computer(s), e.g.:
  • https://download.example.com
  • https://static.example.com
  • https://www.example.com
• Using many web servers (computers) grouped together behind a load balancer so that they act, or are seen, as one big web server (see the sketch after this list).
• Adding more hardware resources (i.e., RAM, fast disks) to each computer.
• Using more efficient computer programs for web servers (see also: software efficiency).
• Using the most efficient gateway interface to process dynamic requests (spawning one or more external programs every time a dynamic page is retrieved kills performance).
• Using other programming techniques and workarounds, especially if dynamic content is involved, to speed up the HTTP responses (i.e., by avoiding dynamic calls to retrieve objects, such as style sheets, images and scripts, that never change or change very rarely, by copying that content to static files once and then keeping them synchronized with the dynamic content).
• Using the latest efficient versions of HTTP (e.g., beyond the common HTTP/1.1, also enabling HTTP/2 and maybe HTTP/3 too, whenever the available web server software has reliable support for the latter two protocols) in order to greatly reduce the number of TCP/IP connections started by each client and the size of the data exchanged (because of the more compact HTTP header representation and maybe data compression). This may not prevent overloads of RAM and CPU caused by the need for encryption, and it may also not address overloads caused by excessively large files uploaded at high speed, because these protocol versions are optimized for concurrency rather than for such cases.
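As a hedged sketch of the load-balancing idea referenced in the list above (the back-end addresses are invented for the example, and real load balancers also perform health checks, connection pooling, session affinity, etc.), a reverse proxy can pick one back-end web server per request in round-robin order:

    import itertools
    import urllib.request

    # Illustrative pool of back-end web servers hidden behind one public name.
    BACKENDS = ["http://10.0.0.11:8080", "http://10.0.0.12:8080", "http://10.0.0.13:8080"]
    _next_backend = itertools.cycle(BACKENDS)

    def forward(path):
        """Forward one client request to the next back-end in round-robin order."""
        backend = next(_next_backend)
        with urllib.request.urlopen(backend + path) as response:
            return response.read()

    # Hypothetical usage: successive calls are spread across the back-end servers.
    # body = forward("/index.html")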
Market share
Below are the most recent available statistics of the market share of all sites for the top web servers on the Internet, as surveyed by Netcraft. Note: (*) percentages are rounded to integer numbers because their decimal values are not publicly reported by the source page (only rounded values are reported in its graph).