The ATLAS detector is 46 metres long, 25 metres in diameter, and weighs about 7,000 tonnes; it contains some 3,000 km of cable.
Components of the ATLAS detector (numbered as in the schematic):
• Detector systems: (1) forward regions (end-caps); (1) barrel region
• Magnet system: (2) toroid magnets; (3) solenoid magnet
• Inner Detector: (4) Transition Radiation Tracker; (5) Semi-Conductor Tracker; (6) Pixel Detector
• Calorimeters: (7) Liquid Argon Calorimeter; (8) Tile Calorimeter
Inner Detector The Inner Detector begins a few centimetres from the proton beam axis, extends to a radius of 1.2 metres, and is 6.2 metres in length along the beam pipe. Its basic function is to track charged particles by detecting their interactions with material at discrete points, revealing detailed information about the types of particles and their momenta. The Inner Detector has three parts, which are explained below. The
magnetic field surrounding the entire inner detector causes charged particles to curve; the direction of the curve reveals a particle's charge and the degree of curvature reveals its momentum. The starting points of the tracks yield useful information for
identifying particles; for example, if a group of tracks seem to originate from a point other than the original proton–proton collision, this may be a sign that the particles came from the decay of a hadron with a
bottom quark (see
b-tagging).
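The curvature–momentum relation described above can be made concrete: for a unit-charge particle in a solenoidal field of B tesla, p_T [GeV/c] ≈ 0.3·B·R, with R the radius of curvature in metres (2 T is the nominal field of the ATLAS central solenoid). A minimal sketch:

```python
def transverse_momentum_gev(radius_m: float, b_tesla: float = 2.0) -> float:
    """p_T [GeV/c] ~= 0.3 * B [tesla] * R [metres] for a unit-charge track.

    The 2 T default is the nominal ATLAS central-solenoid field strength.
    """
    return 0.3 * b_tesla * radius_m

# A track curving with a 1 m radius in the solenoid field:
print(transverse_momentum_gev(1.0))  # 0.6 GeV/c
```

The sign of the curvature (not modelled here) gives the particle's charge sign, as described above.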
Pixel Detector The Pixel Detector, the innermost part of the detector, contains four concentric layers and three disks on each end-cap, with a total of 1,744
modules, each measuring 2 centimetres by 6 centimetres. The detecting material is 250 μm thick
silicon. Each module contains 16 readout
chips and other electronic components. The smallest unit that can be read out is a pixel (50 by 400 micrometres); there are roughly 47,000 pixels per module. The minute pixel size is designed for extremely precise tracking very close to the interaction point. In total, the Pixel Detector has over 92 million readout channels, which is about 50% of the total readout channels of the whole detector. Having such a large count created a considerable design and engineering challenge. Another challenge was the
radiation to which the Pixel Detector is exposed because of its proximity to the interaction point, requiring that all components be
radiation hardened in order to continue operating after significant exposures.
Semi-Conductor Tracker The Semi-Conductor Tracker (SCT) is the middle component of the inner detector. It is similar in concept and function to the Pixel Detector but with long, narrow strips rather than small pixels, making coverage of a larger area practical. Each strip measures 80 micrometres by 12 centimetres. The SCT is the most critical part of the inner detector for basic tracking in the plane perpendicular to the beam, since it measures particles over a much larger area than the Pixel Detector, with more sampled points and roughly equal (albeit one-dimensional) accuracy. It is composed of four double layers of silicon strips, and has 6.3 million readout channels and a total area of 61 square metres.
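As a consistency check on the figures above, 6.3 million strips of 80 micrometres by 12 centimetres do indeed add up to roughly the quoted 61 square metres:

```python
n_strips = 6.3e6                  # SCT readout channels (one per strip)
strip_area_m2 = 80e-6 * 0.12      # each strip: 80 micrometres wide, 12 cm long
total_area_m2 = n_strips * strip_area_m2

print(round(total_area_m2, 1))  # 60.5, consistent with the quoted ~61 m^2
```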
Transition Radiation Tracker The Transition Radiation Tracker (TRT), the outermost component of the inner detector, is a combination of a
straw tracker and a
transition radiation detector. The detecting elements are drift tubes (straws), each four millimetres in diameter and up to 144 centimetres long. The uncertainty of track position measurements (position resolution) is about 200 micrometres. This is less precise than the other two detectors, but necessary to reduce the cost of covering a larger volume and to provide transition-radiation detection capability. Each straw is filled with gas that becomes
ionized when a charged particle passes through. The straws are held at about −1,500 V, driving the negative ions to a fine wire down the centre of each straw, producing a current pulse (signal) in the wire. The wires with signals create a pattern of 'hit' straws that allow the path of the particle to be determined. Between the straws, materials with widely varying
indices of refraction cause ultra-relativistic charged particles to produce
transition radiation and leave much stronger signals in some straws.
Xenon and
argon gas is used to increase the number of straws with strong signals. Since the amount of transition radiation is greatest for highly
relativistic particles (those with a speed very near the
speed of light), and because particles of a particular energy have a higher speed the lighter they are, particle paths with many very strong signals can be identified as belonging to the lightest charged particles:
electrons and their antiparticles,
positrons. The TRT has about 298,000 straws in total.
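The electron identification described here can be sketched as a simple counting rule: a track is electron-like when a sufficient fraction of its straws fired with a strong (transition-radiation) signal. The amplitude threshold and the 0.15 fraction below are illustrative placeholders, not the calibrated ATLAS values:

```python
def likely_electron(hit_amplitudes, high_threshold=300.0, min_fraction=0.15):
    """Flag a track as electron-like if enough of its straw hits are strong.

    hit_amplitudes: signal amplitudes (arbitrary units) of the straws on one
    track. Both cut values are illustrative, not ATLAS calibration constants.
    """
    if not hit_amplitudes:
        return False
    strong = sum(1 for a in hit_amplitudes if a > high_threshold)
    return strong / len(hit_amplitudes) >= min_fraction

print(likely_electron([100, 350, 90, 400, 120, 310, 80, 95]))  # True: 3 of 8 hits are strong
print(likely_electron([100, 90, 110, 95, 105, 98, 102, 97]))   # False: no strong hits
```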
Calorimeters The calorimeters sit outside the solenoid magnet that surrounds the Inner Detector; their purpose is to measure the energy of particles by absorbing it. There are two basic systems: an inner electromagnetic calorimeter and an outer hadronic calorimeter. Both are
sampling calorimeters; that is, they absorb energy in high-density metal and periodically sample the shape of the resulting
particle shower, inferring the energy of the original particle from this measurement.
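The sampling idea can be illustrated with a toy reconstruction: sum the energy actually recorded in the active layers and scale by the sampling fraction, the known fraction of the shower energy that the active medium sees. The 2% sampling fraction below is purely illustrative:

```python
def reconstruct_energy_gev(sampled_gev, sampling_fraction=0.02):
    """Scale the energy seen in the active layers up to the full shower energy.

    sampling_fraction (2% here) is an illustrative placeholder, not an
    ATLAS calibration constant.
    """
    return sum(sampled_gev) / sampling_fraction

# Layer-by-layer samples totalling 1 GeV imply a ~50 GeV incident particle:
print(reconstruct_energy_gev([0.25, 0.5, 0.125, 0.125]))  # 50.0
```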
Electromagnetic calorimeter The electromagnetic (EM) calorimeter absorbs energy from particles that interact
electromagnetically, which include charged particles and photons. It has high precision, both in the amount of energy absorbed and in the precise location of the energy deposited. The angle between the particle's trajectory and the detector's beam axis (or more precisely the
pseudorapidity) and its angle within the perpendicular plane are both measured to within roughly 0.025
radians. The barrel EM calorimeter has accordion-shaped electrodes, and the energy-absorbing materials are
lead and
stainless steel, with liquid
argon as the sampling material, and a
cryostat is required around the EM calorimeter to keep it sufficiently cool.
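Pseudorapidity, mentioned above, is defined from the polar angle θ measured from the beam axis as η = −ln tan(θ/2):

```python
import math

def pseudorapidity(theta_rad: float) -> float:
    """eta = -ln(tan(theta / 2)), theta being the polar angle from the beam axis."""
    return -math.log(math.tan(theta_rad / 2))

print(round(pseudorapidity(math.pi / 2), 12))      # 0.0  (perpendicular to the beam)
print(round(pseudorapidity(math.radians(10)), 2))  # 2.44 (close to the beam axis)
```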
Hadron calorimeter The hadron calorimeter absorbs energy from particles that pass through the EM calorimeter but do interact via the strong force; these particles are primarily hadrons. It is less precise, both in energy magnitude and in localization (within about 0.1 radians only).
Solenoid Magnet The inner solenoid surrounding the Inner Detector produces a two-tesla magnetic field. This high magnetic field allows even very energetic particles to curve enough for their momentum to be determined, and its nearly uniform direction and strength allow measurements to be made very precisely. Particles with momenta below roughly 400 MeV will be curved so strongly that they will loop repeatedly in the field and will most likely not be measured; however, this energy is very small compared to the several TeV of energy released in each proton collision.
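The ~400 MeV figure can be recovered from the standard curvature relation p_T ≈ 0.3·B·R: a track loops inside the tracker when the full diameter of its helix, 2R, fits within the bore. Using the nominal 2 T ATLAS central-solenoid field and the 1.2 m inner-detector radius quoted earlier:

```python
B_TESLA = 2.0     # nominal ATLAS central-solenoid field
BORE_R_M = 1.2    # outer radius of the Inner Detector

# p_T ~= 0.3 * B * R, so a full loop (diameter 2R) fits in the bore when
# 2 * p_T / (0.3 * B) < BORE_R_M, i.e. p_T < 0.15 * B * BORE_R_M
loop_pt_gev = 0.15 * B_TESLA * BORE_R_M
print(loop_pt_gev)  # 0.36 GeV, i.e. roughly the 400 MeV quoted above
```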
Toroid Magnets The outer toroidal magnetic field is produced by eight very large air-core superconducting barrel loops and two smaller end-cap air toroidal magnets, for a total of 24 barrel loops, all situated outside the calorimeters and within the muon system.
Forward detectors The detector is complemented by four smaller systems located at very small angles to the beam, far from the interaction point (IP):
• LUCID (LUminosity Cherenkov Integrating Detector) is the first of these detectors designed to measure luminosity; it is located in the ATLAS cavern, 17 m from the interaction point, between the two muon end-caps.
• ZDC (Zero Degree Calorimeter) is designed to measure neutral particles on-axis to the beam; it is located 140 m from the IP in the LHC tunnel, where the two beams are split back into separate beam pipes.
• AFP (ATLAS Forward Proton) is designed to tag diffractive events; it is located at 204 m and 217 m.
• ALFA (Absolute Luminosity For ATLAS) is designed to measure elastic proton scattering; it is located at 240 m, just before the bending magnets of the LHC arc.
Data systems
Data generation Earlier particle detector read-out and event detection systems were based on parallel shared
buses such as
VMEbus or
FASTBUS. Since such a bus architecture cannot keep up with the data requirements of the LHC detectors, all the ATLAS data acquisition systems rely on high-speed point-to-point links and switching networks. Even with advanced
electronics for data reading and storage, the ATLAS detector generates too much raw data to read out or store everything: about 25
MB per raw event, multiplied by 40 million
beam crossings per second (40
MHz) in the centre of the detector. This produces a total of 1
petabyte of raw data per second. By not writing out the empty segments of each event (zero suppression), which contain no physical information, the average size of an event is reduced to 1.6
MB, for a total of 64
terabytes of data per second. The trigger system uses fast event reconstruction to identify, in real time, the most interesting
events to retain for detailed analysis. In the second data-taking period of the LHC, Run 2, there were two distinct trigger levels:
• The Level 1 trigger (L1), implemented in custom hardware at the detector site. The decision to save or reject an event is made in less than 2.5 μs. It uses reduced-granularity information from the calorimeters and the muon spectrometer, and reduces the rate of events in the read-out from 40 MHz to 100 kHz. The L1 rejection factor is therefore equal to 400.
• The High Level Trigger (HLT), implemented in software, runs on a computing farm of approximately 40,000 CPUs. To decide which of the 100,000 events per second coming from L1 to save, dedicated analyses of each collision are carried out within 200 μs. The HLT reconstructs limited regions of the detector, so-called Regions of Interest (RoI), with the full detector granularity, including tracking, which allows energy deposits to be matched to tracks. The HLT rejection factor is 100: after this step, the rate of events is reduced from 100 kHz to 1
kHz. The remaining data, corresponding to about 1,000 events per second, are stored for further analyses.
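The data-volume and trigger-rate figures above are mutually consistent, as a little arithmetic confirms:

```python
crossings_per_s = 40e6          # 40 MHz bunch-crossing rate

# Event sizes before and after zero suppression
raw_pb_per_s = 25 * crossings_per_s / 1e9    # 25 MB/event -> petabytes per second
zs_tb_per_s = 1.6 * crossings_per_s / 1e6    # 1.6 MB/event -> terabytes per second
print(raw_pb_per_s, zs_tb_per_s)             # 1.0 64.0

# Trigger chain: L1 in hardware, then the HLT in software
l1_out_hz = crossings_per_s / 400            # 40 MHz -> 100 kHz
hlt_out_hz = l1_out_hz / 100                 # 100 kHz -> 1 kHz
print(l1_out_hz, hlt_out_hz)                 # 100000.0 1000.0
print(crossings_per_s / hlt_out_hz)          # 40000.0 (overall rejection factor)
```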
Analysis process ATLAS permanently records more than 10
petabytes of data per year. Offline
event reconstruction is performed on all permanently stored events, turning the pattern of signals from the detector into physics objects, such as
jets,
photons, and
leptons.
Grid computing is being used extensively for event reconstruction, allowing the parallel use of university and laboratory computer networks throughout the world for the
CPU-intensive task of reducing large quantities of raw data into a form suitable for physics analysis. The
software for these tasks has been under development for many years, and refinements are ongoing, even after data collection has begun. Individuals and groups within the collaboration are continuously writing their own
code to perform further analyses of these objects, searching the patterns of detected particles for particular physical models or hypothetical particles. This activity requires processing 25
petabytes of data every week.