, the training data is labelled with the expected answers, while in
unsupervised learning, the model identifies patterns or structures in unlabelled data. Machine learning approaches are traditionally divided into three broad categories, which correspond to learning paradigms, depending on the nature of the "signal" or "feedback" available to the learning system:
• Supervised learning: The computer is presented with example inputs and their desired outputs, given by a "teacher", and the goal is to learn a general rule that maps inputs to outputs.
• Unsupervised learning: No labels are given to the learning algorithm, leaving it on its own to find structure in its input. Unsupervised learning can be a goal in itself (discovering hidden patterns in data) or a means towards an end (feature learning).
• Reinforcement learning: A computer program interacts with a dynamic environment in which it must perform a certain goal (such as driving a vehicle or playing a game against an opponent). As it navigates its problem space, the program is provided feedback analogous to rewards, which it tries to maximise.
=== Supervised learning ===
[Figure: a supervised learning model divides the data into regions separated by a linear boundary; here, the linear boundary divides the black circles from the white.]
Supervised learning algorithms build a mathematical model of a set of data that contains both the inputs and the desired outputs. The data, known as
training data, consists of a set of training examples. Each training example has one or more inputs and the desired output, also known as a supervisory signal. In the mathematical model, each training example is represented by an
array or vector, sometimes called a
feature vector, and the training data is represented by a
matrix. Through
iterative optimisation of an
objective function, supervised learning algorithms learn a function that can be used to predict the output associated with new inputs. An optimal function allows the algorithm to correctly determine the output for inputs that were not a part of the training data. An algorithm that improves the accuracy of its outputs or predictions over time is said to have learned to perform that task. Classification algorithms are used when the outputs are restricted to a limited set of values, while regression algorithms are used when the outputs can take any numerical value within a range. For example, in a classification algorithm that filters emails, the input is an incoming email, and the output is the folder in which to file the email. In contrast, regression is used for tasks such as predicting a person's height based on factors like age and genetics or forecasting future temperatures based on historical data.
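As a rough sketch of these ideas (toy data, illustrative only): each training example below is a feature vector, the training set forms a matrix, and a simple 1-nearest-neighbour rule maps new inputs to one of a limited set of outputs, i.e. classification.

```python
# Toy supervised learning: each training example is a feature vector
# paired with a desired output (the supervisory signal).

def euclidean(a, b):
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

# Training data: a "matrix" of feature vectors plus their labels.
X_train = [(1.0, 1.0), (1.2, 0.8), (5.0, 5.0), (5.2, 4.8)]
y_train = ["white", "white", "black", "black"]

def predict(x):
    # Classification: the output is restricted to the labels seen in training.
    distances = [euclidean(x, xt) for xt in X_train]
    return y_train[distances.index(min(distances))]

print(predict((0.9, 1.1)))  # near the "white" examples
print(predict((4.9, 5.1)))  # near the "black" examples
```

A regression variant would instead return a numerical value, e.g. an average of the nearest neighbours' outputs.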
Similarity learning is an area of supervised machine learning closely related to regression and classification, but the goal is to learn from examples using a similarity function that measures how similar or related two objects are. It has applications in
ranking,
recommendation systems, visual identity tracking, face verification, and speaker verification.
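As a minimal illustration of a similarity function (a hand-written cosine similarity rather than a learned one; the "embedding" vectors below are invented):

```python
# Cosine similarity: the kind of score a similarity-learning system
# produces to rank or verify pairs of objects.

def cosine_similarity(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = sum(x * x for x in a) ** 0.5
    norm_b = sum(y * y for y in b) ** 0.5
    return dot / (norm_a * norm_b)

# Two representations of the same speaker should score higher than
# representations of different speakers (values here are illustrative).
same = cosine_similarity([0.9, 0.1, 0.2], [0.8, 0.2, 0.25])
diff = cosine_similarity([0.9, 0.1, 0.2], [0.1, 0.9, 0.3])
print(same > diff)
```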
=== Unsupervised learning ===
Unsupervised learning algorithms find structures in data that has not been labelled, classified or categorised. Instead of responding to feedback, unsupervised learning algorithms identify commonalities in the data and react based on the presence or absence of such commonalities in each new piece of data. Central applications of unsupervised machine learning include clustering and dimensionality reduction.
Cluster analysis is the assignment of a set of observations into subsets (called
clusters) so that observations within the same cluster are similar according to one or more predesignated criteria, while observations drawn from different clusters are dissimilar. Different clustering techniques make different assumptions on the structure of the data, often defined by some
similarity metric and evaluated, for example, by
internal compactness, or the similarity between members of the same cluster, and
separation, the difference between clusters. Other methods are based on
estimated density and
graph connectivity. A special type of unsupervised learning called
self-supervised learning involves training a model by generating the supervisory signal from the data itself.
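A minimal k-means sketch (toy 1-D data; the similarity metric here is plain absolute distance) shows how observations end up grouped around centroids:

```python
# Toy k-means clustering: assign points to the nearest centroid, then
# move each centroid to the mean of its cluster, and repeat.

def kmeans(points, k=2, iters=10):
    centroids = points[:k]  # naive initialisation for illustration
    clusters = []
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            d = [abs(p - c) for c in centroids]   # similarity metric
            clusters[d.index(min(d))].append(p)
        centroids = [sum(c) / len(c) for c in clusters if c]
    return centroids, clusters

points = [1.0, 1.1, 0.9, 8.0, 8.2, 7.8]
centroids, clusters = kmeans(points)
print(sorted(round(c, 1) for c in centroids))  # two well-separated centroids
```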
=== Dimensionality reduction ===
Dimensionality reduction is a process of reducing the number of random variables under consideration by obtaining a set of principal variables. In other words, it is a process of reducing the dimension of the
feature set, also called the "number of features". Most of the dimensionality reduction techniques can be considered as either feature elimination or
extraction. One of the popular methods of dimensionality reduction is
principal component analysis (PCA). PCA involves changing higher-dimensional data (e.g., 3D) to a smaller space (e.g., 2D). The
manifold hypothesis proposes that high-dimensional data sets lie along low-dimensional
manifolds, and many dimensionality reduction techniques make this assumption, leading to the areas of
manifold learning and
manifold regularisation.
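A hedged PCA sketch using NumPy (the data, mixing matrix, and noise level are invented for illustration): 3-D points generated from two latent factors, i.e. lying near a 2-D plane, are projected onto their top two principal components.

```python
import numpy as np

# Generate 3-D data that is intrinsically ~2-D, then recover the 2-D
# representation with principal component analysis.
rng = np.random.default_rng(0)
latent = rng.normal(size=(100, 2))                 # 2 latent factors
mixing = np.array([[1.0, 0.5], [0.2, 1.0], [0.7, 0.7]])
X = latent @ mixing.T + 0.01 * rng.normal(size=(100, 3))

Xc = X - X.mean(axis=0)                            # centre the data
cov = np.cov(Xc, rowvar=False)                     # 3x3 covariance matrix
eigvals, eigvecs = np.linalg.eigh(cov)             # ascending eigenvalues
components = eigvecs[:, ::-1][:, :2]               # top-2 principal directions
X2 = Xc @ components                               # 100x2 reduced data

# Nearly all variance is captured by the first two components.
explained = eigvals[::-1][:2].sum() / eigvals.sum()
print(X2.shape, round(float(explained), 3))
```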
=== Semi-supervised learning ===
Semi-supervised learning falls between
unsupervised learning (without any labelled training data) and
supervised learning (with completely labelled training data). Some of the training examples are missing training labels, yet many machine-learning researchers have found that unlabelled data, when used in conjunction with a small amount of labelled data, can produce a considerable improvement in learning accuracy. In
weakly supervised learning, the training labels are noisy, limited, or imprecise; however, these labels are often cheaper to obtain, resulting in larger effective training sets.
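A toy self-training sketch (the data and class names are invented): a small labelled set pseudo-labels the unlabelled points, enlarging the effective training set.

```python
# Semi-supervised pseudo-labelling: label each unlabelled point with the
# class of the nearest labelled centroid.

labelled = {0.9: "low", 8.1: "high"}          # tiny labelled set (1-D features)
unlabelled = [1.1, 1.0, 7.9, 8.3, 8.0]

def centroid(points):
    return sum(points) / len(points)

c_low = centroid([x for x, y in labelled.items() if y == "low"])
c_high = centroid([x for x, y in labelled.items() if y == "high"])

# Pseudo-label the unlabelled data, enlarging the effective training set.
for x in unlabelled:
    labelled[x] = "low" if abs(x - c_low) < abs(x - c_high) else "high"

print(labelled[1.1], labelled[8.3])
```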
=== Reinforcement learning ===
Reinforcement learning is an area of machine learning concerned with how
software agents ought to take
actions in an environment to maximise some notion of cumulative reward. Due to its generality, the field is studied in many other disciplines, such as
game theory,
control theory,
operations research,
information theory,
simulation-based optimisation,
multi-agent systems,
swarm intelligence,
statistics and
genetic algorithms. In reinforcement learning, the environment is typically represented as a
Markov decision process (MDP). Many reinforcement learning algorithms use
dynamic programming techniques. Reinforcement learning algorithms do not assume knowledge of an exact mathematical model of the MDP and are used when exact models are infeasible. Reinforcement learning algorithms are used in autonomous vehicles or in learning to play a game against a human opponent.
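As an illustration, here is value iteration, a classic dynamic-programming technique, applied to a made-up four-state deterministic MDP where reaching state 3 yields a reward:

```python
# Value iteration on a tiny MDP: states 0..3, state 3 is terminal, and
# the agent earns reward 1.0 for reaching it.

states = [0, 1, 2, 3]
actions = ["left", "right"]
gamma = 0.9  # discount factor for cumulative reward

def step(s, a):
    # Deterministic transitions (an MDP in general allows stochastic ones).
    s2 = min(s + 1, 3) if a == "right" else max(s - 1, 0)
    reward = 1.0 if s2 == 3 and s != 3 else 0.0
    return s2, reward

V = {s: 0.0 for s in states}
for _ in range(100):  # repeated Bellman backups until values converge
    V = {s: (0.0 if s == 3 else
             max(step(s, a)[1] + gamma * V[step(s, a)[0]] for a in actions))
         for s in states}

# Greedy policy with respect to the converged values.
policy = {s: max(actions, key=lambda a: step(s, a)[1] + gamma * V[step(s, a)[0]])
          for s in states if s != 3}
print(policy)  # moving "right" everywhere maximises cumulative reward
```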
=== Other types ===
Other approaches have been developed which do not fit neatly into this three-fold categorisation, and sometimes more than one is used by the same machine learning system. Examples include topic modelling and meta-learning.
=== Self-learning ===
Self-learning, as a machine learning paradigm, was introduced in 1982 along with a neural network capable of self-learning, named crossbar adaptive array (CAA). It gives a solution to the problem of learning without any external reward, by introducing emotion as an internal reward. Emotion is used as a state evaluation of a self-learning agent. The CAA self-learning algorithm computes, in a crossbar fashion, both decisions about actions and emotions (feelings) about consequence situations. The system is driven by the interaction between cognition and emotion. The self-learning algorithm updates a memory matrix W = ||w(a,s)|| such that each iteration executes the following machine learning routine:
• in situation s perform action a
• receive a consequence situation s'
• compute the emotion of being in the consequence situation, v(s')
• update the crossbar memory: w'(a,s) = w(a,s) + v(s')
It is a system with only one input, situation s, and only one output, action (or behaviour) a. There is neither a separate reinforcement input nor an advice input from the environment. The backpropagated value (secondary reinforcement) is the emotion toward the consequence situation. The CAA exists in two environments: one is the behavioural environment, where it behaves, and the other is the genetic environment, from which it initially, and only once, receives initial emotions about situations to be encountered in the behavioural environment. After receiving the genome (species) vector from the genetic environment, the CAA learns a goal-seeking behaviour in an environment that contains both desirable and undesirable situations.
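A hedged sketch of the crossbar routine above (the toy environment, genome vector, and matrix sizes are all invented for illustration; the real CAA is more elaborate):

```python
import random

# Crossbar memory w[a][s], updated by the emotion v(s') of the
# consequence situation, as in w'(a,s) = w(a,s) + v(s').
n_actions, n_situations = 2, 3
w = [[0.0] * n_situations for _ in range(n_actions)]

# Genome vector: initial emotions about situations, received once from
# the "genetic environment". Situation 2 is desirable, situation 0 is not.
v = [-1.0, 0.0, 1.0]

def consequence(a, s):
    # Toy behavioural environment: action 1 improves the situation,
    # action 0 worsens it.
    return min(s + 1, 2) if a == 1 else max(s - 1, 0)

random.seed(0)
for _ in range(200):
    s = random.randrange(n_situations)                 # current situation
    a = max(range(n_actions), key=lambda a: w[a][s])   # decide action
    s2 = consequence(a, s)                             # consequence situation
    w[a][s] += v[s2]                                   # crossbar update

# The agent comes to prefer the action leading to the desirable situation.
print(w[1][1] > w[0][1])
```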
=== Feature learning ===
Several learning algorithms aim at discovering better representations of the inputs provided during training. Classic examples include
principal component analysis and cluster analysis. Feature learning algorithms, also called representation learning algorithms, often attempt to preserve the information in their input but also transform it in a way that makes it useful, often as a pre-processing step before performing classification or predictions. This technique allows reconstruction of the inputs coming from the unknown data-generating distribution, while not being necessarily faithful to configurations that are implausible under that distribution. This replaces manual
feature engineering, and allows a machine to both learn the features and use them to perform a specific task. Feature learning can be either supervised or unsupervised. In supervised feature learning, features are learned using labelled input data. Examples include
artificial neural networks,
multilayer perceptrons, and supervised
dictionary learning. In unsupervised feature learning, features are learned with unlabelled input data. Examples include dictionary learning,
independent component analysis,
autoencoders,
matrix factorisation and various forms of
clustering.
Manifold learning algorithms attempt to do so under the constraint that the learned representation is low-dimensional.
Sparse coding algorithms attempt to do so under the constraint that the learned representation is sparse, meaning that the mathematical model has many zeros.
Multilinear subspace learning algorithms aim to learn low-dimensional representations directly from
tensor representations for multidimensional data, without reshaping them into higher-dimensional vectors.
Deep learning algorithms discover multiple levels of representation, or a hierarchy of features, with higher-level, more abstract features defined in terms of (or generating) lower-level features. It has been argued that an intelligent machine learns a representation that disentangles the underlying factors of variation that explain the observed data. Feature learning is motivated by the fact that machine learning tasks such as classification often require input that is mathematically and computationally convenient to process. However, real-world data such as images, video, and sensory data have not yielded to attempts to algorithmically define specific features. An alternative is to discover such features or representations through examination, without relying on explicit algorithms.
=== Sparse dictionary learning ===
Sparse dictionary learning is a feature learning method where a training example is represented as a linear combination of
basis functions and assumed to be a
sparse matrix. The method is
strongly NP-hard and difficult to solve approximately. A popular
heuristic method for sparse dictionary learning is the
k-SVD algorithm. Sparse dictionary learning has been applied in several contexts. In classification, the problem is to determine the class to which a previously unseen training example belongs. For a dictionary where each class has already been built, a new training example is associated with the class that is best sparsely represented by the corresponding dictionary. Sparse dictionary learning has also been applied in
image denoising. The key idea is that a clean image patch can be sparsely represented by an image dictionary, but the noise cannot.
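As an illustration of sparse representation, here is matching pursuit, a greedy heuristic simpler than the k-SVD algorithm mentioned above (k-SVD also learns the dictionary itself; here the dictionary and signal are hand-made):

```python
# Matching pursuit: greedily pick the dictionary atom most correlated
# with the residual, so the signal is explained by few coefficients.

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

# Unit-norm dictionary atoms (the basis functions).
atoms = [(1.0, 0.0, 0.0), (0.0, 1.0, 0.0), (0.0, 0.0, 1.0),
         (0.6, 0.8, 0.0)]

signal = (3.0, 4.0, 0.0)
residual = list(signal)
coeffs = [0.0] * len(atoms)

for _ in range(2):  # allow at most 2 non-zero coefficients (sparsity)
    scores = [abs(dot(residual, atom)) for atom in atoms]
    best = scores.index(max(scores))
    c = dot(residual, atoms[best])
    coeffs[best] += c
    residual = [r - c * a for r, a in zip(residual, atoms[best])]

# A single atom, (0.6, 0.8, 0.0), explains the whole signal.
print(coeffs)
```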
=== Anomaly detection ===
In
data mining, anomaly detection, also known as outlier detection, is the identification of rare items, events or observations that raise suspicions by differing significantly from the majority of the data. Typically, the anomalous items represent an issue such as
bank fraud, a structural defect, medical problems or errors in a text. Anomalies are referred to as
outliers, novelties, noise, deviations and exceptions. In particular, in the context of abuse and network intrusion detection, the interesting objects are often not rare, but unexpected bursts of inactivity. This pattern does not adhere to the common statistical definition of an outlier as a rare object. Many outlier detection methods (in particular, unsupervised algorithms) will fail on such data unless aggregated appropriately. Instead, a cluster analysis algorithm may be able to detect the micro-clusters formed by these patterns. Three broad categories of anomaly detection techniques exist. Unsupervised anomaly detection techniques detect anomalies in an unlabelled test data set under the assumption that the majority of the instances in the data set are normal, by looking for instances that seem to fit the least to the remainder of the data set. Supervised anomaly detection techniques require a data set that has been labelled as "normal" and "abnormal" and involves training a classifier (the key difference from many other statistical classification problems is the inherently unbalanced nature of outlier detection). Semi-supervised anomaly detection techniques construct a model representing normal behaviour from a given normal training data set and then test the likelihood of a test instance being generated by the model.
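A minimal unsupervised anomaly-detection sketch (toy data): points whose z-score exceeds a threshold are flagged, under the assumption that most of the data is normal.

```python
import statistics

# Toy unsupervised outlier detection: flag values far from the mean,
# assuming the majority of instances are normal.
data = [10.1, 9.8, 10.0, 10.3, 9.9, 10.2, 10.0, 25.0]  # one outlier

mu = statistics.mean(data)
sigma = statistics.stdev(data)

def is_anomaly(x, threshold=2.0):
    return abs(x - mu) / sigma > threshold

anomalies = [x for x in data if is_anomaly(x)]
print(anomalies)
```

Note this simple rule fails exactly where the text warns: anomalies that are not rare (e.g. bursts of activity) inflate the mean and standard deviation, masking themselves.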
=== Robot learning ===
Robot learning is inspired by a multitude of machine learning methods, ranging from supervised learning and reinforcement learning to
meta-learning (e.g. MAML).
=== Association rules ===
Association rule learning is a
rule-based machine learning method for discovering relationships between variables in large databases. It is intended to identify strong rules discovered in databases using some measure of "interestingness". Rule-based machine learning is a general term for any machine learning method that identifies, learns, or evolves "rules" to store, manipulate or apply knowledge. The defining characteristic of a rule-based machine learning algorithm is the identification and utilisation of a set of relational rules that collectively represent the knowledge captured by the system. This is in contrast to other machine learning algorithms that commonly identify a singular model that can be universally applied to any instance in order to make a prediction. Rule-based machine learning approaches include
learning classifier systems, association rule learning, and
artificial immune systems. Based on the concept of strong rules,
Rakesh Agrawal,
Tomasz Imieliński and Arun Swami introduced association rules for discovering regularities between products in large-scale transaction data recorded by
point-of-sale (POS) systems in supermarkets. For example, the rule {onions, potatoes} ⇒ {burger} found in the sales data of a supermarket would indicate that if a customer buys onions and potatoes together, they are likely to also buy hamburger meat. Such information can be used as the basis for decisions about marketing activities such as promotional
pricing or
product placements. In addition to
market basket analysis, association rules are employed today in application areas including
Web usage mining,
intrusion detection,
continuous production, and
bioinformatics. In contrast with
sequence mining, association rule learning typically does not consider the order of items either within a transaction or across transactions.
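The "interestingness" of such a rule is commonly measured by its support and confidence, which can be computed directly from transaction data (the baskets below are invented):

```python
# Support and confidence for the rule {onions, potatoes} => {burger}
# over toy transaction data.

transactions = [
    {"onions", "potatoes", "burger"},
    {"onions", "potatoes", "burger", "beer"},
    {"onions", "potatoes"},
    {"milk", "bread"},
    {"burger", "beer"},
]

def support(itemset):
    # Fraction of transactions containing every item in the itemset.
    return sum(itemset <= t for t in transactions) / len(transactions)

antecedent = {"onions", "potatoes"}
consequent = {"burger"}

supp = support(antecedent | consequent)   # 2 of 5 baskets hold all three
conf = supp / support(antecedent)         # 2 of the 3 antecedent baskets
print(round(supp, 2), round(conf, 2))
```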
Learning classifier systems (LCS) are a family of rule-based machine learning algorithms that combine a discovery component, typically a
genetic algorithm, with a learning component, performing either
supervised learning,
reinforcement learning, or
unsupervised learning. They seek to identify a set of context-dependent rules that collectively store and apply knowledge in a
piecewise manner to make predictions.
Inductive logic programming (ILP) is an approach to rule learning using
logic programming as a uniform representation for input examples, background knowledge, and hypotheses. Given an encoding of the known background knowledge and a set of examples represented as a logical database of facts, an ILP system will derive a hypothesized logic program that
entails all positive and no negative examples.
Inductive programming is a related field that considers any kind of programming language for representing hypotheses (and not only logic programming), such as
functional programs. Inductive logic programming is particularly useful in
bioinformatics and
natural language processing.
Gordon Plotkin and
Ehud Shapiro laid the initial theoretical foundation for inductive machine learning in a logical setting. Shapiro built their first implementation (Model Inference System) in 1981: a
Prolog program that inductively inferred logic programs from positive and negative examples. The term
inductive here refers to
philosophical induction, suggesting a theory to explain observed facts, rather than
mathematical induction, proving a property for all members of a well-ordered set.

== Models ==