
K-means clustering

k-means clustering is a method of vector quantization, originally from signal processing, that aims to partition n observations into k clusters in which each observation belongs to the cluster with the nearest mean. This results in a partitioning of the data space into Voronoi cells. k-means clustering minimizes within-cluster variances, but not regular Euclidean distances, which would be the more difficult Weber problem: the mean optimizes squared errors, whereas only the geometric median minimizes Euclidean distances. For instance, better Euclidean solutions can be found using k-medians and k-medoids.

Description
Given a set of observations (x1, x2, ..., xn), where each observation is a d-dimensional real vector, k-means clustering aims to partition the n observations into k (≤ n) sets S = {S1, S2, ..., Sk} so as to minimize the within-cluster sum of squares (WCSS) (i.e. variance). Formally, the objective is to find:

\mathop\operatorname{arg\,min}_\mathbf{S} \sum_{i=1}^{k} \sum_{\mathbf x \in S_i} \left\| \mathbf x - \boldsymbol\mu_i \right\|^2 = \mathop\operatorname{arg\,min}_\mathbf{S} \sum_{i=1}^k |S_i| \operatorname{Var} S_i

where μi is the mean (also called centroid) of the points in S_i, i.e.

\boldsymbol{\mu}_i = \frac{1}{|S_i|} \sum_{\mathbf x \in S_i} \mathbf x,

|S_i| is the size of S_i, and \|\cdot\| is the usual L2 norm. This is equivalent to minimizing the pairwise squared deviations of points in the same cluster:

\mathop\operatorname{arg\,min}_\mathbf{S} \sum_{i=1}^{k} \, \frac{1}{|S_i|} \, \sum_{\mathbf{x}, \mathbf{y} \in S_i} \left\| \mathbf{x} - \mathbf{y} \right\|^2

The equivalence can be deduced from the identity

|S_i| \sum_{\mathbf x \in S_i} \left\| \mathbf x - \boldsymbol\mu_i \right\|^2 = \frac{1}{2} \sum_{\mathbf{x},\mathbf{y} \in S_i} \left\| \mathbf x - \mathbf y \right\|^2.

Since the total variance is constant, this is equivalent to maximizing the sum of squared deviations between points in different clusters (between-cluster sum of squares, BCSS). This deterministic relationship is also related to the law of total variance in probability theory.
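The equivalence between the centroid-based and pairwise formulations above can be checked numerically. The following NumPy sketch (with arbitrary synthetic data) verifies the identity for a single cluster:

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(50, 3))   # one cluster S_i of 50 points in R^3
mu = X.mean(axis=0)            # its centroid mu_i

# Within-cluster sum of squares around the centroid
wcss = np.sum(np.linalg.norm(X - mu, axis=1) ** 2)

# Pairwise formulation: (1 / (2 |S_i|)) * sum of all pairwise squared distances
diffs = X[:, None, :] - X[None, :, :]
pairwise = np.sum(np.linalg.norm(diffs, axis=2) ** 2)

# |S_i| * WCSS == (1/2) * sum of pairwise squared distances
assert np.isclose(len(X) * wcss, pairwise / 2)
```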
History
The term "k-means" was first used by James MacQueen in 1967, though the idea goes back to Hugo Steinhaus in 1956. The standard algorithm was first proposed by Stuart Lloyd of Bell Labs in 1957 as a technique for pulse-code modulation, although it was not published as a journal article until 1982. In 1965, Edward W. Forgy published essentially the same method, which is why it is sometimes referred to as the Lloyd–Forgy algorithm. Early applications of the k-means algorithm were primarily found in signal processing and data compression, particularly in the context of vector quantization. Lloyd's work at Bell Labs focused on representing analog signals using a limited set of discrete values, where clustering was used to reduce the amount of data required while preserving signal quality. Despite its widespread use, limitations such as sensitivity to initial centroid placement and difficulty handling non-spherical clusters were recognized early on, motivating the development of improved clustering methods and initialization techniques. Numerous extensions of k-means have since been developed to address limitations of the original algorithm, including methods such as fuzzy c-means, which allows data points to belong to multiple clusters with varying degrees of membership, and kernel k-means, which uses kernel functions to identify non-linearly separable clusters.
Algorithms
Standard algorithm (naive k-means)
The most common algorithm uses an iterative refinement technique. Due to its ubiquity, it is often called "the k-means algorithm"; it is also referred to as Lloyd's algorithm, particularly in the computer science community. It is sometimes also referred to as "naive k-means", because there exist much faster alternatives. Given an initial set of k means (see below), the algorithm proceeds by alternating between two steps:
• Assignment step: Assign each observation to the cluster with the nearest mean (centroid): that with the least squared Euclidean distance. (Mathematically, this means partitioning the observations according to the Voronoi diagram generated by the means.) S_i^{(t)} = \left\{ x_p : \left\| x_p - m^{(t)}_i \right\|^2 \le \left\| x_p - m^{(t)}_j \right\|^2 \ \forall j, 1 \le j \le k \right\}, where each x_p is assigned to exactly one S_i^{(t)}, even if it could be assigned to two or more of them.
• Update step: Recalculate the means (centroids) for the observations assigned to each cluster. This is also called refitting. m^{(t+1)}_i = \frac{1}{\left|S^{(t)}_i\right|} \sum_{x_j \in S^{(t)}_i} x_j
The objective function in k-means is the WCSS (within-cluster sum of squares). After each iteration, the WCSS monotonically decreases, giving a nonnegative monotonically decreasing sequence. This guarantees that k-means always converges, but not necessarily to the global optimum. The algorithm has converged when the assignments no longer change or, equivalently, when the WCSS has become stable. The algorithm is not guaranteed to find the optimal cluster assignment. The algorithm is often presented as assigning objects to the nearest cluster by distance. Using a distance function other than (squared) Euclidean distance may prevent the algorithm from converging. Various modifications of k-means such as spherical k-means and k-medoids have been proposed to allow using other distance measures.
Pseudocode
The pseudocode below outlines an implementation of the standard k-means clustering algorithm. Initialization of centroids, the distance metric between points and centroids, and the calculation of new centroids are design choices and will vary between implementations. In this example pseudocode, distance() returns the distance between the specified point and centroid.

function kmeans(k, points) is
    // Initialize centroids
    centroids ← list of k starting centroids
    converged ← false
    while converged = false do
        // Create empty clusters
        clusters ← list of k empty lists
        // Assign each point to the nearest centroid
        for i ← 0 to length(points) - 1 do
            point ← points[i]
            closestIndex ← 0
            minDistance ← distance(point, centroids[0])
            for j ← 1 to k - 1 do
                d ← distance(point, centroids[j])
                if d < minDistance then
                    minDistance ← d
                    closestIndex ← j
            append point to clusters[closestIndex]
        // Recompute each centroid as the mean of its cluster and
        // stop when the centroids no longer move
        newCentroids ← list of the k cluster means
        if newCentroids = centroids then
            converged ← true
        centroids ← newCentroids
    return clusters

The algorithm can be illustrated in four steps:
1. k initial "means" (in this case k = 3) are randomly generated within the data domain.
2. k clusters are created by associating every observation with the nearest mean. The partitions here represent the Voronoi diagram generated by the means.
3. The centroid of each of the k clusters becomes the new mean.
4. Steps 2 and 3 are repeated until convergence has been reached.

The algorithm does not guarantee convergence to the global optimum. The result may depend on the initial clusters. As the algorithm is usually fast, it is common to run it multiple times with different starting conditions. However, worst-case performance can be slow: in particular, certain point sets, even in two dimensions, converge in exponential time, that is 2^{\Omega(n)}. These point sets do not seem to arise in practice: this is corroborated by the fact that the smoothed running time of k-means is polynomial.
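The pseudocode above translates directly into a short NumPy sketch. This is a minimal illustration, not an optimized implementation; the `kmeans` helper, its Forgy-style random initialization, and the optional `init` parameter are choices made here for clarity:

```python
import numpy as np

def kmeans(points, k, max_iter=100, seed=0, init=None):
    """Naive k-means (Lloyd's algorithm) on an (n, d) array of points."""
    rng = np.random.default_rng(seed)
    if init is None:
        # Forgy initialization: pick k distinct observations as starting centroids
        centroids = points[rng.choice(len(points), size=k, replace=False)]
    else:
        centroids = np.asarray(init, dtype=float)
    labels = np.zeros(len(points), dtype=int)
    for _ in range(max_iter):
        # Assignment step: index of the nearest centroid for every point
        dists = np.linalg.norm(points[:, None, :] - centroids[None, :, :], axis=2)
        labels = dists.argmin(axis=1)
        # Update step: recompute each centroid as the mean of its cluster
        new_centroids = np.array([
            points[labels == j].mean(axis=0) if np.any(labels == j) else centroids[j]
            for j in range(k)
        ])
        if np.allclose(new_centroids, centroids):  # centroids stopped moving
            break
        centroids = new_centroids
    return centroids, labels
```

On two well-separated blobs seeded with one point from each blob, the algorithm recovers the blobs in a couple of iterations.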
The "assignment" step is referred to as the "expectation step", while the "update step" is a maximization step, making this algorithm a variant of the generalized expectation–maximization algorithm.

Complexity
Finding the optimal solution to the k-means clustering problem for observations in d dimensions is:
• NP-hard in general Euclidean space (of d dimensions) even for two clusters,
• NP-hard for a general number of clusters k even in the plane,
• if k and d (the dimension) are fixed, the problem can be exactly solved in time O(n^{dk+1}), where n is the number of entities to be clustered.
Thus, a variety of heuristic algorithms such as Lloyd's algorithm given above are generally used. The running time of Lloyd's algorithm (and most variants) is O(n k d i), where:
• n is the number of d-dimensional vectors (to be clustered),
• k is the number of clusters,
• i is the number of iterations needed until convergence.
On data that does have a clustering structure, the number of iterations until convergence is often small, and results only improve slightly after the first dozen iterations. Lloyd's algorithm is therefore often considered to be of "linear" complexity in practice, although it is in the worst case superpolynomial when performed until convergence. In the worst case, Lloyd's algorithm needs i = 2^{\Omega(\sqrt{n})} iterations, so the worst-case complexity of Lloyd's algorithm is superpolynomial. Lloyd's algorithm is the standard approach for this problem. However, it spends a lot of processing time computing the distances between each of the k cluster centers and the n data points. Since points usually stay in the same clusters after a few iterations, much of this work is unnecessary, making the naive implementation very inefficient. Some implementations use caching and the triangle inequality in order to create bounds and accelerate Lloyd's algorithm.
Optimal number of clusters
Finding the optimal number of clusters (k) for k-means clustering is a crucial step to ensure that the clustering results are meaningful and useful. Several techniques are available to determine a suitable number of clusters. Some commonly used methods are:
• Elbow method: This method involves plotting the explained variation as a function of the number of clusters, and picking the elbow of the curve as the number of clusters to use. However, the notion of an "elbow" is not well-defined, and this method is known to be unreliable.
• Silhouette analysis: Silhouette analysis measures the quality of clustering and provides insight into the separation distance between the resulting clusters. A higher silhouette score indicates that the object is well matched to its own cluster and poorly matched to neighboring clusters.
• Gap statistic: The gap statistic compares the total intra-cluster variation for different values of k with its expected value under a null reference distribution of the data. The optimal k is the value that yields the largest gap statistic.
• Davies–Bouldin index: The Davies–Bouldin index is a measure of how much separation there is between clusters. Lower values of the Davies–Bouldin index indicate a model with better separation.
• Calinski–Harabasz index: This index evaluates clusters based on their compactness and separation. It is calculated as the ratio of between-cluster variance to within-cluster variance, with higher values indicating better-defined clusters.
• Rand index: It calculates the proportion of agreement between two clusterings, considering both the pairs of elements that are correctly assigned to the same cluster and those correctly assigned to different clusters. Higher values indicate greater similarity and better clustering quality.
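As an illustration of one of the measures above, the mean silhouette coefficient can be computed directly from a distance matrix. The `silhouette` helper below is a self-contained NumPy sketch written for this example; real analyses would typically use a library implementation such as scikit-learn's:

```python
import numpy as np

def silhouette(X, labels):
    """Mean silhouette coefficient: s(i) = (b - a) / max(a, b) averaged over
    points, where a is the mean distance to other points in i's own cluster
    and b is the mean distance to the nearest other cluster."""
    D = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=2)
    scores = []
    for i, li in enumerate(labels):
        same = labels == li
        if same.sum() < 2:
            scores.append(0.0)  # convention: singleton clusters score 0
            continue
        a = D[i, same].sum() / (same.sum() - 1)  # exclude the point itself
        b = min(D[i, labels == lj].mean() for lj in set(labels) if lj != li)
        scores.append((b - a) / max(a, b))
    return float(np.mean(scores))
```

On two tight, well-separated blobs, the correct two-cluster labeling scores close to 1, while a labeling that mixes the blobs scores much lower.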
To provide a more accurate measure, the Adjusted Rand Index (ARI), introduced by Hubert and Arabie in 1985, corrects the Rand index by adjusting for the expected similarity of all pairings due to chance.

Variations
• Jenks natural breaks optimization: k-means applied to univariate data.
• k-medians clustering uses the median in each dimension instead of the mean, and this way minimizes the L_1 norm (Taxicab geometry).
• k-medoids (also: Partitioning Around Medoids, PAM) uses the medoid instead of the mean, and this way minimizes the sum of distances for arbitrary distance functions.
• Fuzzy C-Means Clustering is a soft version of k-means, where each data point has a fuzzy degree of belonging to each cluster.
• Gaussian mixture models trained with the expectation–maximization algorithm (EM algorithm) maintain probabilistic assignments to clusters, instead of deterministic assignments, and multivariate Gaussian distributions instead of means.
• k-means++ chooses initial centers in a way that gives a provable upper bound on the WCSS objective.
• The filtering algorithm uses k-d trees to speed up each k-means step.
• Some methods attempt to speed up each k-means step using the triangle inequality.
• Hierarchical variants such as bisecting k-means, X-means clustering and G-means clustering repeatedly split clusters to build a hierarchy, and can also try to automatically determine the optimal number of clusters in a dataset.
• Internal cluster evaluation measures such as cluster silhouette can be helpful in determining the number of clusters.
• Minkowski weighted k-means automatically calculates cluster-specific feature weights, supporting the intuitive idea that a feature may have different degrees of relevance in different clusters. These weights can also be used to re-scale a given data set, increasing the likelihood of a cluster validity index being optimized at the expected number of clusters.
• Mini-batch k-means: a k-means variation using "mini batch" samples for data sets that do not fit into memory.
• Otsu's method

Hartigan–Wong method
Hartigan and Wong's method is a variation of k-means that progresses towards a local minimum of the minimum sum-of-squares problem with a different solution update: it repeatedly relocates individual points between clusters as long as doing so improves the objective. The change in WCSS obtained by moving a point x from cluster S_n to cluster S_m is

\Delta(x,n,m) = \frac{|S_n|}{|S_n| - 1} \cdot \lVert \mu_n - x \rVert^2 - \frac{|S_m|}{|S_m| + 1} \cdot \lVert \mu_m - x \rVert^2.

Global optimization and metaheuristics
The classical k-means algorithm and its variations are known to only converge to local minima of the minimum-sum-of-squares clustering problem defined as

\mathop\operatorname{arg\,min}_\mathbf{S} \sum_{i=1}^{k} \sum_{\mathbf x \in S_i} \left\| \mathbf x - \boldsymbol\mu_i \right\|^2 .

Many studies have attempted to improve the convergence behavior of the algorithm and maximize the chances of attaining the global optimum (or at least, local minima of better quality). Initialization and restart techniques discussed in the previous sections are one alternative to find better solutions. More recently, global optimization algorithms based on branch-and-bound and semidefinite programming have produced provably optimal solutions for datasets with up to 4,177 entities and 20,531 features. As expected, due to the NP-hardness of the underlying optimization problem, the computational time of optimal algorithms for k-means quickly increases beyond this size. Optimal solutions for small- and medium-scale instances still remain valuable as a benchmark tool, to evaluate the quality of other heuristics. To find high-quality local minima within a controlled computational time but without optimality guarantees, other works have explored metaheuristics and other global optimization techniques, e.g., based on incremental approaches and convex optimization, random swaps (i.e., iterated local search), variable neighborhood search and genetic algorithms.
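The Hartigan–Wong relocation cost Δ(x, n, m) is easy to evaluate directly. The sketch below (with small hypothetical clusters; the `relocation_gain` name is our own) follows the convention that a positive Δ means moving x from S_n to S_m lowers the total WCSS:

```python
import numpy as np

def relocation_gain(x, cluster_n, cluster_m):
    """Hartigan-Wong Delta(x, n, m): decrease in total WCSS obtained by
    moving point x out of cluster_n (which contains x) into cluster_m.
    Positive values mean the relocation improves the objective."""
    mu_n = cluster_n.mean(axis=0)
    mu_m = cluster_m.mean(axis=0)
    n, m = len(cluster_n), len(cluster_m)
    return (n / (n - 1)) * np.sum((mu_n - x) ** 2) \
         - (m / (m + 1)) * np.sum((mu_m - x) ** 2)
```

A point sitting far from its own cluster's centroid but close to another cluster yields a positive gain, while relocating a well-placed point yields a negative one.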
It is indeed known that finding better local minima of the minimum sum-of-squares clustering problem can make the difference between failure and success in recovering cluster structures in feature spaces of high dimension.
Discussion
[Figure: k-means clustering result for the Iris flower data set and actual species, visualized using ELKI. Cluster means are marked using larger, semi-transparent symbols.]
[Figure: k-means clustering vs. EM clustering on an artificial dataset ("mouse"). The tendency of k-means to produce equal-sized clusters leads to bad results here, while EM benefits from the Gaussian distributions with different radii present in the data set.]

Three key features of k-means that make it efficient are often regarded as its biggest drawbacks:
• Euclidean distance is used as a metric, and variance is used as a measure of cluster scatter.
• The number of clusters k is an input parameter: an inappropriate choice of k may yield poor results. That is why, when performing k-means, it is important to run diagnostic checks for determining the number of clusters in the data set.
• Convergence to a local minimum may produce counterintuitive ("wrong") results (see example in figure).
A key limitation of k-means is its cluster model. The concept is based on spherical clusters that are separable so that the mean converges towards the cluster center. The clusters are expected to be of similar size, so that the assignment to the nearest cluster center is the correct assignment. When, for example, applying k-means with a value of k = 3 to the well-known Iris flower data set, the result often fails to separate the three Iris species contained in the data set. With k = 2, the two visible clusters (one containing two species) will be discovered, whereas with k = 3 one of the two clusters will be split into two even parts. In fact, k = 2 is more appropriate for this data set, despite the data set containing three classes. As with any other clustering algorithm, the k-means result makes assumptions that the data satisfy certain criteria. It works well on some data sets, and fails on others. The result of k-means can be seen as the Voronoi cells of the cluster means. Since data is split halfway between cluster means, this can lead to suboptimal splits, as can be seen in the "mouse" example.
The Gaussian models used by the expectation–maximization algorithm (arguably a generalization of k-means) are more flexible by having both variances and covariances. The EM result is thus able to accommodate clusters of variable size much better than k-means, as well as correlated clusters (not in this example). On the other hand, EM requires the optimization of a larger number of free parameters and poses some methodological issues due to vanishing clusters or badly conditioned covariance matrices. k-means is closely related to nonparametric Bayesian modeling.
Applications
k-means clustering is rather easy to apply to even large data sets, particularly when using heuristics such as Lloyd's algorithm. It has been successfully used in market segmentation, computer vision, and astronomy, among many other domains. It is often used as a preprocessing step for other algorithms, for example to find a starting configuration.

Biology
In biological applications, the limitations of standard k-means clustering are often addressed by adapting the distance measures and objective functions used to better reflect the structure of experimental data. Rather than relying on Euclidean distance and variance-based measures of cluster scatter, alternative similarity metrics such as the Jaccard index are sometimes used when analyzing genomic datasets with k-means. In these contexts, clustering may be performed directly on distance matrices, and medoids (representative data points that have the smallest average distance to every other point within a cluster) may be selected as cluster centers instead of arithmetic means. This allows clustering methods to capture patterns of shared genomic features or co-occurrence that are not well represented in a traditional Euclidean analysis. Clustering is widely used in biological data analysis to identify patterns in high-dimensional datasets such as gene expression profiles and genomic interaction matrices. Techniques such as k-means clustering partition observations into groups based on similarity, allowing researchers to detect structure in complex datasets. In gene expression studies, clustering is commonly used to group genes with similar expression profiles, often revealing co-regulated genes and underlying biological interactions. In addition to gene expression analysis, clustering approaches are applied to chromatin interaction data to identify regions of coordinated activity.
Similarity measures such as the Jaccard index can be used to quantify shared features between observations, and clustering of the resulting distance matrices can reveal structural organization within the genome. These patterns are commonly visualized using heatmaps, where clustered regions correspond to domains of interaction or co-segregation, providing insight into the regulatory organization of the genome.

Feature learning
The basic approach is first to train a k-means clustering representation, using the input training data (which need not be labelled). Then, to project any input datum into the new feature space, an "encoding" function, such as the thresholded matrix product of the datum with the centroid locations, computes the distance from the datum to each centroid, or simply an indicator function for the nearest centroid, or some smooth transformation of the distance. Alternatively, transforming the sample-cluster distance through a Gaussian RBF obtains the hidden layer of a radial basis function network. This use of k-means has been successfully combined with simple, linear classifiers for semi-supervised learning in NLP (specifically for named-entity recognition) and in computer vision. On an object recognition task, it was found to exhibit comparable performance with more sophisticated feature learning approaches such as autoencoders and restricted Boltzmann machines.
• GPU parallelization (OpenACC): Unlike the OpenMP implementation, the OpenACC program's parallel directive is called at each necessary step instead of at the very beginning. This exploits the SIMT (Single Instruction, Multiple Threads) architecture of GPUs to perform thousands of distance calculations simultaneously.

Astronomy
One major application is identifying stellar populations through chemical tagging.
Stars that form together in the same cluster share similar chemical compositions, so clustering algorithms applied to abundance data can recover these groupings even after the stars have spread throughout the galaxy. Studies using k-means on APOGEE infrared spectra measured abundances of up to 13 elements and successfully distinguished known star clusters from surrounding field stars.

k-means++ and scalable initialization
Proper initialization is very important for finding a good solution with a clustering algorithm. Random initialization is often used for k-means clustering, but this can result in poor cluster quality, which is why the k-means++ initialization algorithm was developed. However, because k-means++ selects its centers one at a time, it becomes prohibitively slow when clustering massive datasets into a large number of clusters. To overcome this limitation, researchers developed k-means||, a parallel version of k-means++. Instead of sampling a single point per pass, k-means|| uses an oversampling factor ℓ = Ω(k) to sample multiple points in each round. The algorithm drastically reduces the number of passes required, obtaining a nearly optimal set of centers in O(log n) rounds; in practice, as few as five rounds are needed to reach a high-quality solution. After several rounds, the algorithm typically has O(k log n) intermediate centers. These are then assigned weights based on how many points are close to them and re-clustered, often using standard k-means++, to reduce the final set to exactly k centers.

Recent developments
Recent advancements in the application of k-means clustering have explored its integration with deep learning methods, such as convolutional neural networks (CNNs) and recurrent neural networks (RNNs), to enhance the performance of various tasks in computer vision, natural language processing, and other domains.
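Returning to initialization: the D²-weighted sampling at the heart of k-means++ seeding is compact enough to sketch. The `kmeans_pp_init` helper below is an illustrative implementation written for this article, not a reference one:

```python
import numpy as np

def kmeans_pp_init(X, k, seed=0):
    """k-means++ seeding: the first center is chosen uniformly at random;
    each further center is drawn with probability proportional to its
    squared distance to the nearest center chosen so far (D^2 weighting)."""
    rng = np.random.default_rng(seed)
    centers = [X[rng.integers(len(X))]]
    for _ in range(k - 1):
        # Squared distance from every point to its nearest existing center
        d2 = np.min([np.sum((X - c) ** 2, axis=1) for c in centers], axis=0)
        probs = d2 / d2.sum()
        centers.append(X[rng.choice(len(X), p=probs)])
    return np.array(centers)
```

Because the weighting favors points far from all chosen centers, two well-separated blobs will almost always each contribute one seed; k-means|| replaces the single draw per round with an oversampled batch.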
Relation to other algorithms
Gaussian mixture model
The slow "standard algorithm" for k-means clustering, and its associated expectation–maximization algorithm, is a special case of a Gaussian mixture model: specifically, the limiting case when all covariances are fixed to be diagonal, equal, and of infinitesimally small variance. Instead of small variances, a hard cluster assignment can also be used to show another equivalence of k-means clustering to a special case of "hard" Gaussian mixture modelling. This does not mean that it is efficient to use Gaussian mixture modelling to compute k-means, but just that there is a theoretical relationship, and that Gaussian mixture modelling can be interpreted as a generalization of k-means; on the contrary, it has been suggested to use k-means clustering to find starting points for Gaussian mixture modelling on difficult data.

Principal component analysis
The relaxed solution of k-means clustering, specified by the cluster indicators, is given by principal component analysis (PCA). The intuition is that k-means describes spherically shaped (ball-like) clusters. If the data has two clusters, the line connecting the two centroids is the best one-dimensional projection direction, which is also the first PCA direction. Cutting the line at the center of mass separates the clusters (this is the continuous relaxation of the discrete cluster indicator). If the data have three clusters, the two-dimensional plane spanned by the three cluster centroids is the best 2-D projection. This plane is also defined by the first two PCA dimensions. Well-separated clusters are effectively modelled by ball-shaped clusters and thus discovered by k-means. Non-ball-shaped clusters are hard to separate when they are close. For example, two half-moon shaped clusters intertwined in space do not separate well when projected onto the PCA subspace. k-means should not be expected to do well on this data.
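The two-cluster intuition above can be checked on synthetic data: under the assumption of two well-separated spherical blobs, the direction between the two cluster centroids should coincide (up to sign) with the first principal component of the pooled data. A NumPy sketch:

```python
import numpy as np

# Two well-separated, roughly spherical blobs
rng = np.random.default_rng(0)
a = rng.normal([0, 0], 0.2, (100, 2))
b = rng.normal([6, 2], 0.2, (100, 2))
X = np.vstack([a, b])

# First principal component: top eigenvector of the (uncentered-scale)
# covariance of the mean-centered data
Xc = X - X.mean(axis=0)
eigvals, eigvecs = np.linalg.eigh(Xc.T @ Xc)
pc1 = eigvecs[:, -1]  # eigenvector for the largest eigenvalue

# Unit vector along the line connecting the two cluster centroids
sep = b.mean(axis=0) - a.mean(axis=0)
sep = sep / np.linalg.norm(sep)

# The two directions agree up to sign
assert abs(abs(pc1 @ sep) - 1.0) < 1e-2
```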
It is straightforward to produce counterexamples to the statement that the cluster centroid subspace is spanned by the principal directions.

Mean shift clustering
Basic mean shift clustering algorithms maintain a set of data points the same size as the input data set. Initially, this set is copied from the input set. All points are then iteratively moved towards the mean of the points surrounding them. By contrast, k-means restricts the set of clusters to k clusters, usually far fewer than the number of points in the input data set, and replaces each cluster center by the mean of all points in the prior cluster that are closer to that center than to any other (i.e. within the Voronoi partition of each updating point). A mean shift algorithm similar to k-means, called likelihood mean shift, replaces the set of points undergoing replacement by the mean of all points in the input set that are within a given distance of the changing set. An advantage of mean shift clustering over k-means is the detection of an arbitrary number of clusters in the data set, as there is no parameter determining the number of clusters. However, mean shift can be much slower than k-means, and still requires selection of a bandwidth parameter.

Independent component analysis
Under sparsity assumptions and when input data is pre-processed with the whitening transformation, k-means produces the solution to the linear independent component analysis (ICA) task. This aids in explaining the successful application of k-means to feature learning.

Bilateral filtering
k-means implicitly assumes that the ordering of the input data set does not matter. The bilateral filter is similar to k-means and mean shift in that it maintains a set of data points that are iteratively replaced by means. However, the bilateral filter restricts the calculation of the (kernel weighted) mean to include only points that are close in the ordering of the input data.
This makes it applicable to problems such as image denoising, where the spatial arrangement of pixels in an image is of critical importance.
Similar problems
The set of squared error minimizing cluster functions also includes the k-medoids algorithm, an approach which forces the center point of each cluster to be one of the actual points, i.e., it uses medoids in place of centroids.
Software implementations
Different implementations of the algorithm exhibit performance differences, with the fastest on a test data set finishing in 10 seconds, the slowest taking 25,988 seconds (~7 hours). The differences can be attributed to implementation quality, language and compiler differences, different termination criteria and precision levels, and the use of indexes for acceleration.

Free Software/Open Source
The following implementations are available under Free/Open Source Software licenses, with publicly available source code.
• Accord.NET contains C# implementations for k-means, k-means++ and k-modes.
• ALGLIB contains parallelized C++ and C# implementations for k-means and k-means++.
• AOSP contains a Java implementation for k-means.
• CrimeStat implements two spatial k-means algorithms, one of which allows the user to define the starting locations.
• ELKI contains k-means (with Lloyd and MacQueen iteration, along with different initializations such as k-means++ initialization) and various more advanced clustering algorithms.
• Smile contains k-means and various other algorithms and results visualization (for Java, Kotlin and Scala).
• Julia contains a k-means implementation in the JuliaStats Clustering package.
• KNIME contains nodes for k-means and k-medoids.
• Mahout contains a MapReduce-based k-means.
• mlpack contains a C++ implementation of k-means.
• Octave contains k-means.
• OpenCV contains a k-means implementation.
• Orange includes a component for k-means clustering with automatic selection of k and cluster silhouette scoring.
• PSPP contains k-means; the QUICK CLUSTER command performs k-means clustering on the dataset.
• R contains three k-means variations.
• SciPy and scikit-learn contain multiple k-means implementations.
• Spark MLlib implements a distributed k-means algorithm.
• Torch contains an unsup package that provides k-means clustering.
• Weka contains k-means and x-means.
Proprietary
The following implementations are available under proprietary license terms, and may not have publicly available source code.
• Ayasdi
• Mathematica
• MATLAB
• OriginPro
• RapidMiner
• SAP HANA
• SAS
• SPSS
• Stata