Instead of creating a new process for each request, FastCGI uses persistent processes to handle a series of requests. These processes are owned by the FastCGI server, not the web server.

To service an incoming request, the web server sends environment variable information and the page request to a FastCGI process over either a Unix domain socket, a named pipe, or a Transmission Control Protocol (TCP) connection. Responses are returned from the process to the web server over the same connection, and the web server then delivers that response to the end user. The connection may be closed at the end of a response, but both the web server and the FastCGI service processes persist. Each individual FastCGI process can handle many requests over its lifetime, thereby avoiding the overhead of per-request process creation and termination. Requests can be processed concurrently in several ways: over a single connection with internal multiplexing (i.e., multiple requests interleaved over one connection), over multiple connections, or by a mix of these methods. Multiple FastCGI servers can be configured, increasing stability and scalability.

Website administrators and programmers may find that separating web applications from the web server, as FastCGI does, has many advantages over embedded interpreters (mod_perl, mod_php, etc.). This separation allows server and application processes to be restarted independently – an important consideration for busy websites. It also enables per-application security policies for hosting services, an important requirement for ISPs and web hosting companies. Finally, different types of incoming requests can be routed to specific FastCGI servers equipped to handle them efficiently.

==Web servers that implement FastCGI==
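The environment variable information described above is carried in the protocol's FCGI_PARAMS records as length-prefixed name-value pairs: per the FastCGI 1.0 specification, each length is one byte when below 128, and otherwise four bytes with the high bit of the first byte set. A minimal Python sketch of that encoding (the function names `encode_pairs` and `decode_pairs` are illustrative, not part of any library):

```python
def _encode_len(n: int) -> bytes:
    # Lengths < 128 fit in one byte; larger lengths use four bytes
    # with the high bit of the first byte set (FastCGI 1.0 spec).
    if n < 128:
        return bytes([n])
    return bytes([(n >> 24) | 0x80, (n >> 16) & 0xFF, (n >> 8) & 0xFF, n & 0xFF])

def encode_pairs(params: dict) -> bytes:
    """Encode a mapping as FastCGI name-value pairs (an FCGI_PARAMS body)."""
    out = bytearray()
    for name, value in params.items():
        n, v = name.encode(), value.encode()
        out += _encode_len(len(n)) + _encode_len(len(v)) + n + v
    return bytes(out)

def decode_pairs(data: bytes) -> dict:
    """Decode FastCGI name-value pairs back into a mapping."""
    params, i = {}, 0

    def read_len() -> int:
        nonlocal i
        if data[i] < 128:          # one-byte length
            n = data[i]
            i += 1
        else:                      # four-byte length, high bit masked off
            n = ((data[i] & 0x7F) << 24) | (data[i + 1] << 16) \
                | (data[i + 2] << 8) | data[i + 3]
            i += 4
        return n

    while i < len(data):
        nlen, vlen = read_len(), read_len()
        name = data[i:i + nlen].decode()
        i += nlen
        params[name] = data[i:i + vlen].decode()
        i += vlen
    return params
```

In a real deployment the web server performs this encoding and writes the result, framed in FastCGI records, to the socket or pipe described above; the sketch covers only the name-value layer, not record framing or connection management.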