Person classification

Problem: classify whether a given person is male or female based on measured features. The features include height, weight, and foot size. Although the NB classifier treats them as independent, they are not in reality.
Training

Example training set below.

Person    height (feet)    weight (lbs)    foot size (inches)
male      6                180              12
male      5.92             190              11
male      5.58             170              12
male      5.92             165              10
female    5                100              6
female    5.5              150              8
female    5.42             130              7
female    5.75             150              9

The classifier created from the training set using a Gaussian distribution assumption would be (given variances are unbiased sample variances):

Person    mean (height)    variance (height)    mean (weight)    variance (weight)    mean (foot size)    variance (foot size)
male      5.855            3.5033e-02           176.25           1.2292e+02           11.25               9.1667e-01
female    5.4175           9.7225e-02           132.5            5.5833e+02           7.5                 1.6667e+00

The following example assumes equiprobable classes, so that P(male) = P(female) = 0.5. This prior probability distribution might be based on prior knowledge of frequencies in the larger population or in the training set.
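The training step can be reproduced in a few lines of Python (a minimal sketch, assuming the eight-row training set above; `statistics.variance` computes exactly the unbiased sample variance the text calls for):

```python
import statistics

# (height ft, weight lbs, foot size in) rows from the training set above
data = {
    "male":   [(6.00, 180, 12), (5.92, 190, 11), (5.58, 170, 12), (5.92, 165, 10)],
    "female": [(5.00, 100,  6), (5.50, 150,  8), (5.42, 130,  7), (5.75, 150,  9)],
}

params = {}
for sex, rows in data.items():
    features = list(zip(*rows))  # regroup rows into one tuple per feature
    # statistics.variance divides by n - 1, i.e. the unbiased sample variance
    params[sex] = [(statistics.mean(f), statistics.variance(f)) for f in features]

print(params["male"][0])  # (5.855, 0.0350333...) — the height parameters above
```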
Testing

Below is a sample to be classified as male or female.

Person    height (feet)    weight (lbs)    foot size (inches)
sample    6                130              8

In order to classify the sample, one has to determine which posterior is greater, male or female. For the classification as male the posterior is given by

\text{posterior (male)} = \frac{P(\text{male}) \, p(\text{height} \mid \text{male}) \, p(\text{weight} \mid \text{male}) \, p(\text{foot size} \mid \text{male})}{\text{evidence}}

For the classification as female the posterior is given by

\text{posterior (female)} = \frac{P(\text{female}) \, p(\text{height} \mid \text{female}) \, p(\text{weight} \mid \text{female}) \, p(\text{foot size} \mid \text{female})}{\text{evidence}}

The evidence (also termed the normalizing constant) may be calculated:

\begin{align}
\text{evidence} = {} & P(\text{male}) \, p(\text{height} \mid \text{male}) \, p(\text{weight} \mid \text{male}) \, p(\text{foot size} \mid \text{male}) \\
& + P(\text{female}) \, p(\text{height} \mid \text{female}) \, p(\text{weight} \mid \text{female}) \, p(\text{foot size} \mid \text{female})
\end{align}

However, given the sample, the evidence is a constant and thus scales both posteriors equally. It therefore does not affect classification and can be ignored.
The probability distribution for the sex of the sample can now be determined:

P(\text{male}) = 0.5

p(\text{height} \mid \text{male}) = \frac{1}{\sqrt{2\pi \sigma^2}}\exp\left(\frac{-(6-\mu)^2}{2\sigma^2}\right) \approx 1.5789,

where \mu = 5.855 and \sigma^2 = 3.5033 \cdot 10^{-2} are the parameters of the normal distribution which have been previously determined from the training set. Note that a value greater than 1 is OK here: it is a probability density rather than a probability, because height is a continuous variable.

p(\text{weight} \mid \text{male}) = \frac{1}{\sqrt{2\pi \sigma^2}}\exp\left(\frac{-(130-\mu)^2}{2\sigma^2}\right) = 5.9881 \cdot 10^{-6}

p(\text{foot size} \mid \text{male}) = \frac{1}{\sqrt{2\pi \sigma^2}}\exp\left(\frac{-(8-\mu)^2}{2\sigma^2}\right) = 1.3112 \cdot 10^{-3}

\text{posterior numerator (male)} = \text{their product} = 6.1984 \cdot 10^{-9}

P(\text{female}) = 0.5

p(\text{height} \mid \text{female}) = 2.23 \cdot 10^{-1}

p(\text{weight} \mid \text{female}) = 1.6789 \cdot 10^{-2}

p(\text{foot size} \mid \text{female}) = 2.8669 \cdot 10^{-1}

\text{posterior numerator (female)} = \text{their product} = 5.3778 \cdot 10^{-4}

Since the posterior numerator is greater in the female case, the prediction is that the sample is female.
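The testing step can likewise be checked numerically; a minimal sketch, using the class parameters from the training table above:

```python
import math

def gaussian(x, mu, var):
    """Normal probability density; may exceed 1 for small variances."""
    return math.exp(-(x - mu) ** 2 / (2 * var)) / math.sqrt(2 * math.pi * var)

sample = {"height": 6.0, "weight": 130.0, "foot": 8.0}
male   = {"height": (5.855, 3.5033e-2), "weight": (176.25, 1.2292e2), "foot": (11.25, 9.1667e-1)}
female = {"height": (5.4175, 9.7225e-2), "weight": (132.5, 5.5833e2), "foot": (7.5, 1.6667)}

def posterior_numerator(prior, params):
    """Prior times the product of per-feature Gaussian densities."""
    p = prior
    for feat, x in sample.items():
        mu, var = params[feat]
        p *= gaussian(x, mu, var)
    return p

print(posterior_numerator(0.5, male))    # ~6.20e-09
print(posterior_numerator(0.5, female))  # ~5.38e-04 -> greater, predict female
```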
Document classification

Here is a worked example of naive Bayesian classification applied to the document classification problem. Consider the problem of classifying documents by their content, for example into spam and non-spam e-mails. Imagine that documents are drawn from a number of classes of documents which can be modeled as sets of words, where the (independent) probability that the i-th word of a given document occurs in a document from class C can be written as

p(w_i \mid C)

(For this treatment, things are further simplified by assuming that words are randomly distributed in the document: that is, words are not dependent on the length of the document, position within the document with relation to other words, or other document context.)

Then the probability that a given document D contains all of the words w_i, given a class C, is

p(D \mid C) = \prod_i p(w_i \mid C)

The question that has to be answered is: "What is the probability that a given document D belongs to a given class C?" In other words, what is p(C \mid D)?

Now, by definition,

p(D \mid C) = {p(D \cap C) \over p(C)}

and

p(C \mid D) = {p(D \cap C) \over p(D)}

Bayes' theorem manipulates these into a statement of probability in terms of likelihood:

p(C \mid D) = \frac{p(C) \, p(D \mid C)}{p(D)}

Assume for the moment that there are only two mutually exclusive classes, S and ¬S (e.g. spam and not spam), such that every element (email) is in either one or the other:

p(D \mid S) = \prod_i p(w_i \mid S)

and

p(D \mid \neg S) = \prod_i p(w_i \mid \neg S)

Using the Bayesian result above, one can write:

p(S \mid D) = {p(S) \over p(D)} \, \prod_i p(w_i \mid S)

p(\neg S \mid D) = {p(\neg S) \over p(D)} \, \prod_i p(w_i \mid \neg S)

Dividing one by the other gives:

{p(S \mid D) \over p(\neg S \mid D)} = {p(S) \, \prod_i p(w_i \mid S) \over p(\neg S) \, \prod_i p(w_i \mid \neg S)}

which can be re-factored as:

{p(S \mid D) \over p(\neg S \mid D)} = {p(S) \over p(\neg S)} \, \prod_i {p(w_i \mid S) \over p(w_i \mid \neg S)}

Thus, the probability ratio p(S \mid D) / p(\neg S \mid D) can be expressed in terms of a series of likelihood ratios. The actual probability p(S \mid D) can easily be computed from \ln(p(S \mid D) / p(\neg S \mid D)) based on the observation that p(S \mid D) + p(\neg S \mid D) = 1.

Taking the logarithm of all these ratios, one obtains:

\ln {p(S \mid D) \over p(\neg S \mid D)} = \ln {p(S) \over p(\neg S)} + \sum_i \ln {p(w_i \mid S) \over p(w_i \mid \neg S)}

(This technique of "log-likelihood ratios" is a common technique in statistics. In the case of two mutually exclusive alternatives, such as this example, the conversion of a log-likelihood ratio to a probability takes the form of a sigmoid curve: see logit for details.)

Finally, the document can be classified as follows: it is spam if p(S \mid D) > p(\neg S \mid D) (i.e., \ln {p(S \mid D) \over p(\neg S \mid D)} > 0); otherwise it is not spam.
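A minimal sketch of this decision rule in Python, assuming the per-word likelihoods p(w_i | S) and p(w_i | ¬S) have already been estimated from a labeled corpus (the numbers below are invented placeholders, not real estimates):

```python
import math

# Hypothetical per-word likelihoods, assumed already estimated from training data.
p_word_spam = {"viagra": 0.90, "offer": 0.60, "meeting": 0.05}
p_word_ham  = {"viagra": 0.01, "offer": 0.30, "meeting": 0.40}
prior_spam, prior_ham = 0.5, 0.5  # p(S) and p(not S)

def log_ratio(words):
    """ln(p(S|D) / p(not S|D)) = ln(p(S)/p(not S)) plus the sum of
    per-word log-likelihood ratios, as in the formula above."""
    llr = math.log(prior_spam / prior_ham)
    for w in words:
        if w in p_word_spam and w in p_word_ham:  # skip words never seen in training
            llr += math.log(p_word_spam[w] / p_word_ham[w])
    return llr

doc = ["viagra", "offer", "meeting"]
llr = log_ratio(doc)
p_spam = 1 / (1 + math.exp(-llr))  # sigmoid: recover p(S|D) from the log-ratio
print("spam" if llr > 0 else "not spam", p_spam)
```

The final line illustrates the remark above: since p(S | D) + p(¬S | D) = 1, the probability itself is the sigmoid of the log-likelihood ratio.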
Spam filtering

Naive Bayes classifiers are a popular statistical technique for e-mail filtering. They typically use bag-of-words features to identify email spam, an approach commonly used in text classification. Naive Bayes classifiers work by correlating the use of tokens (typically words, or sometimes other things) with spam and non-spam e-mails and then using Bayes' theorem to calculate the probability that an email is or is not spam.
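For illustration, a bag-of-words representation simply counts token occurrences and discards word order; a minimal sketch:

```python
from collections import Counter
import re

def bag_of_words(text):
    """Lowercase the text, split it into word tokens, and count occurrences."""
    return Counter(re.findall(r"[a-z']+", text.lower()))

print(bag_of_words("Buy viagra now! Viagra is cheap."))
# Counter({'viagra': 2, 'buy': 1, 'now': 1, 'is': 1, 'cheap': 1})
```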
Naive Bayes spam filtering is a baseline technique for dealing with spam that can tailor itself to the email needs of individual users and give low false positive spam detection rates that are generally acceptable to users.

Bayesian algorithms were used for email filtering as early as 1996. Although naive Bayesian filters did not become popular until later, multiple programs were released in 1998 to address the growing problem of unwanted email. The first scholarly publication using the naive Bayes classifier for spam filtering was by Sahami et al. in 1998. Variants of the basic technique have been implemented in a number of research works and commercial software products. Many modern mail clients implement Bayesian spam filtering. Users can also install separate email filtering programs. Server-side email filters, such as DSPAM, Rspamd, SpamAssassin, SpamBayes, Bogofilter, and ASSP, make use of Bayesian spam filtering techniques, and the functionality is sometimes embedded within mail server software itself. CRM114, often cited as a Bayesian filter, is not intended to use a Bayes filter in production, but includes the "unigram" feature for reference.
Dealing with rare words

When a word has never been encountered during the learning phase, both the numerator and the denominator are equal to zero, both in the general formula and in the spamicity formula. The software can decide to discard such words, for which no information is available.

More generally, words that were encountered only a few times during the learning phase cause a problem, because it would be an error to blindly trust the information they provide. A simple solution is to avoid taking such unreliable words into account as well.

Applying Bayes' theorem again, and assuming the classification between spam and ham of the emails containing a given word ("replica") is a random variable with a beta distribution, some programs decide to use a corrected probability:

\Pr'(S|W) = \frac{s \cdot \Pr(S) + n \cdot \Pr(S|W)}{s + n}

where:

• \Pr'(S|W) is the corrected probability for the message to be spam, knowing that it contains a given word;
• s is the strength we give to background information about incoming spam;
• \Pr(S) is the probability of any incoming message being spam;
• n is the number of occurrences of this word during the learning phase;
• \Pr(S|W) is the spamicity of this word.

This corrected probability is used instead of the spamicity in the combining formula. The formula extends naturally to the case where n is equal to zero (where the spamicity is not defined), and evaluates in this case to \Pr(S).
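A direct transcription of the corrected-probability formula in Python; the default strength s = 3 is an assumption commonly used in practice, not something fixed by the formula:

```python
def corrected_spamicity(spamicity, n, prior_spam, s=3.0):
    """Blend the observed spamicity Pr(S|W) with the background prior Pr(S),
    weighting the prior as strongly as s virtual observations.
    With n == 0 (word never seen in training), this evaluates to Pr(S)."""
    return (s * prior_spam + n * spamicity) / (s + n)

# A word seen only twice is pulled strongly toward the prior:
print(corrected_spamicity(0.95, n=2, prior_spam=0.5))  # 0.68
```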
Other heuristics "Neutral" words like "the", "a", "some", or "is" (in English), or their equivalents in other languages, can be ignored. These are also known as
Stop words. More generally, some bayesian filtering filters simply ignore all the words which have a spamicity next to 0.5, as they contribute little to a good decision. The words taken into consideration are those whose spamicity is next to 0.0 (distinctive signs of legitimate messages), or next to 1.0 (distinctive signs of spam). A method can be for example to keep only those ten words, in the examined message, which have the greatest
absolute value |0.5 −
pI|. Some software products take into account the fact that a given word appears several times in the examined message, others don't. Some software products use
patterns (sequences of words) instead of isolated natural languages words. For example, with a "context window" of four words, they compute the spamicity of "Viagra is good for", instead of computing the spamicities of "Viagra", "is", "good", and "for". This method gives more sensitivity to context and eliminates the Bayesian noise better, at the expense of a bigger database.
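A sketch of the ten-most-decisive-words heuristic mentioned above, assuming a hypothetical mapping from each token in the message to its spamicity p_i:

```python
def most_decisive(spamicities, n=10):
    """Keep the n tokens whose spamicity lies furthest from the neutral
    value 0.5; words near 0.5 contribute little to the decision."""
    ranked = sorted(spamicities.items(),
                    key=lambda kv: abs(kv[1] - 0.5), reverse=True)
    return ranked[:n]

tokens = {"viagra": 0.99, "meeting": 0.10, "the": 0.50, "offer": 0.62}
print(most_decisive(tokens, n=2))  # [('viagra', 0.99), ('meeting', 0.1)]
```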
Disadvantages

Depending on the implementation, Bayesian spam filtering may be susceptible to Bayesian poisoning, a technique used by spammers in an attempt to degrade the effectiveness of spam filters that rely on Bayesian filtering. A spammer practicing Bayesian poisoning will send out emails with large amounts of legitimate text (gathered from legitimate news or literary sources). Spammer tactics include insertion of random innocuous words that are not normally associated with spam, thereby decreasing the email's spam score and making it more likely to slip past a Bayesian spam filter. However, with (for example) Paul Graham's scheme only the most significant probabilities are used, so that padding the text out with non-spam-related words does not affect the detection probability significantly.

Words that normally appear in large quantities in spam may also be transformed by spammers. For example, "Viagra" would be replaced with "Viaagra" or "V!agra" in the spam message. The recipient of the message can still read the changed words, but each of these words is encountered more rarely by the Bayesian filter, which hinders its learning process. As a general rule, this spamming technique does not work very well, because the derived words end up recognized by the filter just like the normal ones.

Another technique used to try to defeat Bayesian spam filters is to replace text with pictures, either directly included or linked. The whole text of the message, or some part of it, is replaced with a picture where the same text is "drawn". The spam filter is usually unable to analyze this picture, which would contain the sensitive words like "Viagra". However, since many mail clients disable the display of linked pictures for security reasons, the spammer sending links to remotely hosted pictures might reach fewer targets. Also, a picture's size in bytes is bigger than the equivalent text's size, so the spammer needs more bandwidth to send messages directly including pictures. Some filters are more inclined to decide that a message is spam if it has mostly graphical contents. A solution used by Google in its Gmail email system is to perform OCR (optical character recognition) on every mid- to large-size image, analyzing the text inside.