When the functions u_i represent some abstract form of "happiness", the utilitarian rule becomes harder to interpret. For the above formula to make sense, it must be assumed that the utility functions (u_i)_{i \in I} are both cardinal and interpersonally comparable at a cardinal level.

The notion that individuals have cardinal utility functions is not especially problematic. Cardinal utility has been implicitly assumed in decision theory ever since Daniel Bernoulli's analysis of the St. Petersburg paradox. Rigorous mathematical theories of cardinal utility (with application to risky decision making) were developed by Frank P. Ramsey, Bruno de Finetti, von Neumann and Morgenstern, and Leonard Savage. However, in these theories, a person's utility function is only well-defined up to an "affine rescaling". Thus, if the utility function u_i:X\longrightarrow \mathbb{R} is a valid description of her preferences, and if r_i,s_i\in \mathbb{R} are two constants with s_i>0, then the "rescaled" utility function v_i(x) := s_i\, u_i(x) + r_i is an equally valid description of her preferences. If we define a new package of utility functions (v_i)_{i\in I} using possibly different constants r_i\in \mathbb{R} and s_i>0 for each i \in I, and we then consider the utilitarian sum

: V(x):= \sum_{i\in I} v_i(x),

then in general, the maximizer of V will
not be the same as the maximizer of U. For example, with two individuals and two alternatives, suppose u_1(x)=3, u_1(y)=0, u_2(x)=0, and u_2(y)=2; then U is maximized at x, but tripling the second individual's utility scale (s_2 = 3, r_2 = 0) gives V(y) = 6 > V(x) = 3, so V is maximized at y. Thus, in a sense, classic utilitarian social choice is not well-defined within the standard model of cardinal utility used in decision theory, unless a mechanism is specified to "calibrate" the utility functions of the different individuals.

== Relative utilitarianism ==