Data centers house critical computing resources in a controlled environment and must generally operate with very
high availability. Key design elements include providing power for the equipment, temperature and humidity control, cabling, fire safety, and security.
Information security is also a concern, and for this reason, a data center has to offer a secure environment that minimizes the chances of a security breach.
Obsolescence and modernization
Industry research company International Data Corporation (IDC) puts the average age of a data center at nine years old.
Gartner, another research company, says data centers older than seven years are obsolete. The growth in data (163 zettabytes by 2025) is one factor driving the need for data centers to modernize. Focus on
modernization is not new: rapid obsolescence of data center equipment was a concern by at least 2007, and in 2011
Uptime Institute was concerned about aging equipment.
Industry standards
The Telecommunications Industry Association's Telecommunications Infrastructure Standard for Data Centers specifies the minimum requirements for telecommunications infrastructure of data centers and computer rooms, including single-tenant enterprise data centers and multi-tenant Internet hosting data centers. The topology proposed in this document is intended to be applicable to any size data center. Telcordia GR-3160,
NEBS Requirements for Telecommunications Data Center Equipment and Spaces, provides guidelines for data center spaces within telecommunications networks, and environmental requirements for the equipment intended for installation in those spaces. These criteria were developed jointly by Telcordia and industry representatives. They may be applied to data center spaces housing data processing or Information Technology (IT) equipment. The equipment may be used to:
• Operate and manage a carrier's telecommunication network
• Provide data center based applications directly to the carrier's customers
• Provide hosted applications for a third party to provide services to their customers
• Provide a combination of these and similar data center applications
Reliability of electrical power supply
Backup or continuous onsite power supplies consist of one or more uninterruptible power supplies, battery banks, and diesel, gas-turbine, or gas-engine generating sets. Greater primary fuel energy efficiency can be achieved with the use of
cogeneration technology, generating electricity, heating and cooling onsite. To prevent
single points of failure, all elements of the electrical systems, including backup systems, are typically given
redundant copies, and critical servers are connected to both the
A-side and
B-side power feeds. This arrangement is often made to achieve
N+1 redundancy in the systems.
Static transfer switches are sometimes used to ensure instantaneous switchover from one supply to the other in the event of a power failure.
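The N+1 arrangement described above can be illustrated with a minimal sketch. The capacity figures are hypothetical, and a real redundancy assessment would model much more than per-unit capacity; this only shows the core idea that the system must survive the loss of any single unit:

```python
# Sketch: checking N+1 redundancy for a power system (illustrative model,
# not an engineering tool). "units" is a list of per-unit capacities in kW;
# the system survives a single failure if the remaining units can still
# carry the load after the largest unit (the worst case) drops out.

def survives_single_failure(units, load_kw):
    """Return True if the load can still be carried after any one unit fails."""
    if not units:
        return False
    return sum(units) - max(units) >= load_kw

# Hypothetical example: four 500 kW UPS modules feeding a 1500 kW critical
# load. Three modules suffice (N = 3); the fourth is the "+1" spare.
print(survives_single_failure([500, 500, 500, 500], 1500))  # True
print(survives_single_failure([500, 500, 500], 1500))       # False: no spare
```

The same check, applied to both the A-side and B-side feeds independently, is why dual-corded servers can ride through the loss of an entire distribution path.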
Low-voltage cable routing
Options for low-voltage cable routing might include:
• data cabling routed through overhead cable trays;
• raised-floor cabling, both for security reasons and to avoid the extra cost of cooling systems over the racks;
• anti-static tiles for flooring, especially in low-cost data centers.
Environmental control
Maintaining suitable temperature and humidity levels is critical to preventing equipment damage caused by overheating. Overheating can cause components, usually the silicon or copper of the wires or circuits, to melt, causing loose connections and fire hazards. Typical temperature control methods include:
• Air conditioning
• Indirect cooling, such as the use of outside air, indirect evaporative cooling units, and seawater cooling.
Airflow management is the practice of achieving data center
cooling efficiency by preventing the recirculation of hot exhaust air and by reducing bypass airflow. Common approaches include hot-aisle/cold-aisle containment and the deployment of in-row cooling units, which position cooling directly between server racks to intercept exhaust heat before it mixes with room air. Humidity control does more than prevent moisture-related issues: excess humidity can cause dust to adhere more readily to fan blades and heat sinks, impeding air cooling and leading to higher temperatures.
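As a rough illustration of the monitoring behind such environmental control, the sketch below checks readings against an assumed operating envelope. The 18–27 °C and 20–80 % figures are illustrative defaults (roughly the commonly cited ASHRAE-recommended range), not values from this article:

```python
# Sketch: an environmental monitoring check. The default envelope is an
# assumption for illustration; real facilities tune these limits to their
# equipment and cooling design.

def environment_ok(temp_c, rel_humidity_pct,
                   temp_range=(18.0, 27.0), rh_range=(20.0, 80.0)):
    """Return a list of alarm strings; an empty list means within limits."""
    alarms = []
    if not temp_range[0] <= temp_c <= temp_range[1]:
        alarms.append(f"temperature {temp_c} C outside {temp_range}")
    if not rh_range[0] <= rel_humidity_pct <= rh_range[1]:
        alarms.append(f"humidity {rel_humidity_pct} % outside {rh_range}")
    return alarms

print(environment_ok(22.0, 45.0))   # [] -> within the envelope
print(environment_ok(35.0, 90.0))   # two alarms: too hot and too humid
```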
Aisle containment
Cold aisle containment is done by exposing the rear of equipment racks, while the fronts of the servers are enclosed with doors and covers. This is similar to how large-scale food companies refrigerate and store their products. Computer cabinets/server farms are often organized for containment of hot/cold aisles. Proper air duct placement prevents the cold and hot air from mixing. Rows of cabinets are paired to face each other so that cool air intakes and hot air exhausts do not mix, which would severely reduce cooling efficiency. Alternatively, a range of underfloor panels can create efficient cold air pathways directed to the raised-floor vented tiles. Either the cold aisle or the hot aisle can be contained. Another option is fitting cabinets with vertical exhaust duct
chimneys. Hot exhaust pipes/vents/ducts can direct the air into a plenum space above a dropped ceiling and back to the cooling units or to outside vents. With this configuration, a traditional hot/cold aisle layout is not a requirement.
Fire protection
Data centers feature fire protection systems, including passive and active design elements, as well as implementation of fire prevention programs in operations.
Smoke detectors are usually installed to provide early warning of a fire at its incipient stage. Although the main room usually does not allow wet-pipe systems due to the fragile nature of circuit boards, there still exist systems that can be used in the rest of the facility or in cold/hot aisle air circulation systems that are closed systems, such as:
• sprinkler systems
• misting, using high pressure to create extremely small water droplets, which can be used in sensitive rooms due to the nature of the droplets.
However, there also exist other means to put out fires, especially in sensitive areas, usually using gaseous fire suppression, of which halon gas was the most popular until the negative effects of producing and using it were discovered.
Security
Physical access is usually restricted. Layered security often starts with fencing,
bollards and
mantraps.
Video camera surveillance and permanent
security guards are almost always present if the data center is large or contains sensitive information. Fingerprint-recognition mantraps are becoming commonplace. Logging access is required by some data protection regulations; some organizations tightly link this to access control systems. Multiple log entries can occur at the main entrance, entrances to internal rooms, and at equipment cabinets. Access control at cabinets can be integrated with intelligent
power distribution units, so that locks are networked through the same appliance.
Transformation
Data center transformation takes a step-by-step approach through integrated projects carried out over time. This differs from the traditional method of data center upgrades, which takes a serial and siloed approach. The typical projects within a data center transformation initiative include standardization/consolidation,
virtualization,
automation, and security. Data center consolidation consists of reducing the number of data centers and avoiding server sprawl (both physical and virtual), and often includes replacing aging data center equipment. This process is aided by standardization, which makes systems follow a uniform set of configurations in order to simplify management and improve efficiency. Lastly, security initiatives integrate the protection of virtual systems with the existing security of physical infrastructures.
Raised floor
The first
raised floor computer room was made by
IBM in 1956 to allow access for wiring. During the 1970s, raised floors became more common because they allow cool air to circulate more efficiently. A raised floor standards guide (GR-2930) was developed by
Telcordia Technologies, a subsidiary of
Ericsson.
Lights out
The
lights-out data center, also known as a darkened or a dark data center, is a data center that, ideally, has all but eliminated the need for direct access by personnel, except under extraordinary circumstances. Because of the lack of need for staff to enter the data center, it can be operated without lighting. All of the devices are accessed and managed by remote systems, with automation programs used to perform unattended operations. In addition to the energy savings, reduction in staffing costs and the ability to locate the site further from population centers, implementing a lights-out data center reduces the threat of malicious attacks upon the infrastructure.
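The unattended operation described above can be sketched with a toy remediation loop. The device model here is hypothetical; a real lights-out facility would drive remote power-cycling through a management interface such as IPMI or Redfish rather than an in-process class:

```python
# Sketch: the kind of automated remediation a lights-out data center
# relies on. Server is a toy stand-in for a remotely managed device;
# reboot() represents a remote power-cycle, assumed here to succeed.

class Server:
    def __init__(self, name):
        self.name = name
        self.responsive = True

    def reboot(self):
        # Remote power-cycle; in this toy model it restores responsiveness.
        self.responsive = True

def remediate(servers):
    """Reboot unresponsive servers remotely; return the names that were fixed."""
    fixed = []
    for s in servers:
        if not s.responsive:
            s.reboot()
            fixed.append(s.name)
    return fixed

a, b = Server("rack1-node1"), Server("rack1-node2")
b.responsive = False
print(remediate([a, b]))  # ['rack1-node2']
```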
Noise levels
Generally speaking, local authorities prefer noise levels at data centers to be "10 dB below the existing night-time background noise level at the nearest residence." OSHA regulations require monitoring of noise levels inside data centers if noise exceeds 85 decibels. The average noise level in server areas of a data center may reach as high as 92–96 dB(A). Residents living near data centers have described the sound as "a high-pitched whirring noise 24/7", saying "It's like being on a
tarmac with an airplane engine running constantly ... Except that the airplane keeps idling and never leaves." External sources of noise include HVAC equipment and energy generators.
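Because sound levels combine on a logarithmic scale, the figures above can be checked with a short calculation. This is the standard formula for combining independent (incoherent) sources, shown as a sketch:

```python
# Sketch: combining independent sound levels in decibels and checking the
# 85 dB monitoring threshold mentioned above.
# L_total = 10 * log10( sum( 10 ** (L_i / 10) ) )
import math

def combine_db(levels):
    """Combine independent sound pressure levels (dB) into a single level."""
    return 10.0 * math.log10(sum(10.0 ** (lv / 10.0) for lv in levels))

# Two identical 92 dB(A) server rows together are only about 3 dB louder,
# not "twice as loud" numerically.
total = combine_db([92.0, 92.0])
print(round(total, 1))  # 95.0
print(total > 85.0)     # True -> above the OSHA monitoring threshold
```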
Site selection
Location factors include proximity to power grids, telecommunications infrastructure, networking services, transportation lines, and emergency services. Other considerations include flight paths, neighboring power drains, geological risks, and climate (associated with cooling costs). Local political considerations, such as availability of subsidies and lack of opposition, are also important factors in locating data centers. The costs of avoiding downtime should not exceed the cost of the downtime itself.
Dynamic infrastructure
Dynamic infrastructure provides the ability to intelligently, automatically, and securely move workloads within a data center anytime, anywhere, for migrations, provisioning, performance enhancement, or building co-location facilities. It also facilitates performing routine maintenance on either physical or virtual systems, all while minimizing interruption. A related concept is composable infrastructure, which allows for the dynamic reconfiguration of the available resources to suit needs, only when needed. Side benefits include:
• reducing cost
• facilitating business continuity and high availability
• enabling cloud and grid computing.
Software/data backup
Non-mutually exclusive options for data backup are:
• Onsite
• Offsite
Onsite backup is traditional, and one of its major advantages is immediate availability.
Offsite backup storage
Data backup techniques include having an encrypted copy of the data offsite. Methods used for transporting data are:
• Having the customer write the data to a physical medium, such as magnetic tape, and then transporting the tape elsewhere.
• Directly transferring the data to another site during the backup, using appropriate links.
• Uploading the data "into the cloud".
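Whichever transport method is used, the copy is normally verified on arrival. A minimal stdlib-only sketch of that verification step is shown below using SHA-256 checksums; note that a real offsite scheme would also encrypt the payload, which this sketch deliberately omits:

```python
# Sketch: verifying backup integrity after transport or upload.
# The payload and workflow here are illustrative only.
import hashlib

def checksum(data: bytes) -> str:
    """SHA-256 hex digest of a backup payload."""
    return hashlib.sha256(data).hexdigest()

backup = b"example database dump"         # hypothetical payload
digest_before = checksum(backup)          # recorded before the tape leaves
digest_after = checksum(backup)           # recomputed at the offsite location
print(digest_before == digest_after)      # True -> the copy arrived intact
```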
Network infrastructure
Communications in data centers today are most often based on
networks running the
Internet protocol suite. Data centers contain a set of
routers and
switches that transport traffic between the servers and to the outside world, connected according to the
data center network architecture.
Redundancy of the internet connection is often provided by using two or more upstream service providers (see
Multihoming). Some of the servers at the data center are used for running the basic internet and
intranet services needed by internal users in the organization, e.g., e-mail servers,
proxy servers, and
DNS servers. Network security elements are also usually deployed:
firewalls,
VPN gateways,
intrusion detection systems, and so on. Also common are monitoring systems for the network and some of the applications. Additional off-site monitoring systems are also typical, in case of a failure of communications inside the data center.
==Energy use==