X-ray reflectivity measurements are analyzed by fitting to the measured data a simulated curve calculated using the recursive Parratt formalism combined with the rough interface formula. The fitting parameters are typically layer thicknesses, densities (from which the index of refraction n and, in turn, the wavevector z component k_{j,z} are calculated) and interfacial roughnesses. Measurements are typically normalized so that the maximum reflectivity is 1, but the normalization factor can be included in the fit as well. Additional fitting parameters may be the background radiation level and the limited sample size, due to which the beam footprint at low angles may exceed the sample and thus reduce the measured reflectivity. Several fitting algorithms have been attempted for X-ray reflectivity, some of which find only a local optimum instead of the global optimum. The
Levenberg-Marquardt method finds a local optimum. Because the reflectivity curve has many interference fringes, the method converges to incorrect layer thicknesses unless the initial guess is very good. The derivative-free
simplex method also finds a local optimum. To find the global optimum, global optimization algorithms such as simulated annealing are required. Unfortunately, simulated annealing may be hard to parallelize on modern multicore computers. Given enough time,
simulated annealing can be shown to find the global optimum with probability approaching 1, but such a convergence proof does not mean that the required time is reasonably low. In 1998, it was found that
genetic algorithms are robust and fast fitting methods for X-ray reflectivity. Consequently, genetic algorithms have been adopted by the software of practically all X-ray diffractometer manufacturers and also by open source fitting software.

Fitting a curve requires a function usually called the fitness function, cost function, fitting error function or figure of merit (FOM). It measures the difference between the measured curve and the simulated curve, so lower values are better. When fitting, the measurement and the best simulation are typically represented in logarithmic space.

From a mathematical standpoint, the \chi^2 fitting error function takes the effects of Poisson-distributed photon counting noise into account in a mathematically correct way:

: F = \sum_i \frac{(x_{simul,i} - x_{meas,i})^2}{x_{meas,i}} .

However, this \chi^2 function may give too much weight to high-intensity regions. If the high-intensity regions are important (for example, when finding the mass density from the critical angle), this may not be a problem, but the fit may visually disagree with the measurement at low-intensity, high-angle ranges.

Another popular fitting error function is the 2-norm in logarithmic space. It is defined in the following way:

: F = \sqrt{\sum_i (\log x_{simul,i} - \log x_{meas,i})^2} .

Data points with zero measured photon counts must be removed from the sum, since the logarithm is undefined there. The 2-norm in logarithmic space can be generalized to a p-norm in logarithmic space. Its drawback is that it may give too much weight to regions where the relative photon counting noise is high.

== Neural network analysis of XRR ==
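The two error functions defined above can be sketched as follows; this is a minimal illustration, not any particular software's implementation. The function names are hypothetical, and base-10 logarithms are assumed (the choice of base only rescales the log-space norm).

```python
import numpy as np

def chi2_fom(simul, meas):
    """Chi-squared figure of merit for Poisson counting noise:
    the sum of (x_simul - x_meas)^2 / x_meas over all data points."""
    simul = np.asarray(simul, dtype=float)
    meas = np.asarray(meas, dtype=float)
    return np.sum((simul - meas) ** 2 / meas)

def log_pnorm_fom(simul, meas, p=2):
    """p-norm of the difference in logarithmic space (p=2 gives the
    common 2-norm).  Points with zero counts are dropped, since the
    logarithm is undefined there.  Base-10 logarithms are assumed."""
    simul = np.asarray(simul, dtype=float)
    meas = np.asarray(meas, dtype=float)
    mask = (meas > 0) & (simul > 0)
    diff = np.abs(np.log10(simul[mask]) - np.log10(meas[mask]))
    return np.sum(diff ** p) ** (1.0 / p)
```

Both functions return 0 for a perfect fit and grow as the simulated curve deviates from the measurement; in the chi-squared case a deviation at high intensity contributes more, while in the log-space norm each decade of discrepancy contributes equally.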