
A Fast Partitional Clustering Algorithm based on Nearest Neighbours Heuristics

Abstract

K-means, along with its many variants, is the most widely used family of partitional clustering algorithms. Generally speaking, this family of algorithms starts by initializing a number of data points as cluster centres, and then iteratively refines these centres based on the current partition of the dataset. Given a set of cluster centres, inducing the partition over the dataset involves finding the nearest (or most similar) cluster centre for each data point, which is an O(NK) operation, where N and K are the number of data points and the number of clusters, respectively. In our proposed approach, we avoid the explicit computation of these distances for sparse vectors, e.g. documents, by utilizing a fundamental operation, TOP(x), which returns a list of the vectors most similar to a given vector x. A standard way to store sparse vectors and retrieve the most similar ones for a query vector is the inverted list data structure. In our proposed method, we first use the TOP(x) function to select cluster centres that are likely to be dissimilar to each other. Second, to obtain the partition during each iteration of K-means, we avoid explicitly computing the pair-wise similarities between the centroid and the non-centroid vectors. Third, we avoid recomputing the cluster centroids by adopting a centrality-based heuristic. We demonstrate the effectiveness of our proposed algorithm on the TREC-2011 Microblog dataset, a large collection of about 14M tweets. Our experiments show that our proposed method is about 35x faster and produces more effective clusters than the standard K-means algorithm.
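
To make the abstract's central primitive concrete, the following is a minimal sketch, in Python, of a TOP(x) query answered from an inverted list over sparse vectors. It is not the authors' code: the function names (build_inverted_index, top), the choice of dot-product/cosine scoring over L2-normalised vectors, and the toy data are all illustrative assumptions; only the general idea (score just the documents that share a term with the query, instead of scanning all N vectors) comes from the abstract.

# Sketch of a TOP(x) primitive over an inverted list for sparse vectors.
# Names and the cosine/dot-product scoring are assumptions, not the paper's code.

from collections import defaultdict
import heapq
import math


def build_inverted_index(docs):
    """docs: {doc_id: {term: weight}} sparse vectors (assumed L2-normalised)."""
    index = defaultdict(list)          # term -> [(doc_id, weight), ...]
    for doc_id, vec in docs.items():
        for term, w in vec.items():
            index[term].append((doc_id, w))
    return index


def top(x, index, k=10):
    """Return the k doc ids most similar to the sparse query vector x.

    Only documents sharing at least one term with x are scored, which is the
    saving over an exhaustive O(N) scan for sparse data such as tweets.
    """
    scores = defaultdict(float)
    for term, wx in x.items():
        for doc_id, wd in index.get(term, ()):
            scores[doc_id] += wx * wd  # accumulate dot-product contributions
    return heapq.nlargest(k, scores, key=scores.get)


# Toy usage with three tiny "documents".
def _l2(vec):
    norm = math.sqrt(sum(w * w for w in vec.values()))
    return {t: w / norm for t, w in vec.items()}

docs = {
    1: _l2({"storm": 2.0, "rain": 1.0}),
    2: _l2({"storm": 1.0, "wind": 2.0}),
    3: _l2({"football": 3.0}),
}
index = build_inverted_index(docs)
print(top(_l2({"storm": 1.0}), index, k=2))   # -> [1, 2]; doc 3 is never scored

As described in the abstract, a primitive of this kind can serve both to pick initial cluster centres that are unlikely to be similar to one another and to assign points to centroids without computing every centroid/point similarity explicitly; the exact way the paper combines it with the centrality-based heuristic is detailed in the full text.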

Date

01 Sep 2018

Publication

Pattern Recognition Letters

Authors
