The centroids have not changed between iterations, indicating convergence. The final clusters are:
Cluster 1 = {d1}, Cluster 2 = {d2}
In this simplified example, we end up with two clusters of one document each. In practice, K-means is applied to much larger document sets, often with more sophisticated initialization strategies to improve convergence and clustering quality.
K-means clustering is a partitioning method that divides a dataset into K distinct, non-overlapping subsets (clusters). Each data point belongs to the cluster with the nearest mean, and the mean is recalculated as the centroid of the points in the cluster. This process is iteratively repeated until convergence.
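Equivalently, K-means can be viewed as (locally) minimizing the within-cluster sum of squared distances, often called the inertia:

J = Σ_{k=1}^{K} Σ_{i ∈ Cluster(k)} ∥x_i − μ_k∥²

Each assignment step and each update step can only decrease J, which is why the iteration converges.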
K-Means Algorithm:
Initialization:
Randomly select K initial centroids.
Assignment Step:
Assign each data point to the nearest centroid, forming K clusters.
Cluster(i) = argmin_k ∥x_i − μ_k∥²
where x_i is a data point, μ_k is the centroid of cluster k, and ∥x_i − μ_k∥ is the Euclidean distance between them.
Update Step:
Update the centroids by calculating the mean of all data points in each cluster.
μ_k = (1 / |Cluster(k)|) · Σ_{i ∈ Cluster(k)} x_i
Repeat Assignment and Update Steps:
Iteratively repeat the assignment and update steps until convergence, i.e., until the centroids no longer change significantly or a maximum number of iterations is reached (see the sketch below).
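A minimal from-scratch sketch of this loop in Python with NumPy; the function name, the tolerance `tol`, and the random seed are our own illustrative choices, not part of the algorithm's definition:

```python
import numpy as np

def kmeans(X, k, max_iters=100, tol=1e-6, seed=0):
    """Plain K-means: random init, then alternate assignment and update."""
    rng = np.random.default_rng(seed)
    # Initialization: pick K distinct data points as the starting centroids.
    centroids = X[rng.choice(len(X), size=k, replace=False)].astype(float)
    for _ in range(max_iters):
        # Assignment step: label each point with its nearest centroid.
        dists = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2)
        labels = dists.argmin(axis=1)
        # Update step: recompute each centroid as the mean of its points
        # (assumes no cluster ends up empty, which holds for this example).
        new_centroids = np.array([X[labels == j].mean(axis=0) for j in range(k)])
        # Convergence check: stop once the centroids barely move.
        if np.linalg.norm(new_centroids - centroids) < tol:
            break
        centroids = new_centroids
    return labels, centroids
```

On well-separated data like the example below, this typically recovers the clusters {A, B, C, D} and {E, F, G}.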
Example:
Let's consider a simple example with a table of data points:
Data Point   Feature 1   Feature 2
A            1           2
B            2           3
C            2           2
D            3           3
E            8           7
F            9           8
G            10          7
Initialization:
Choose K = 2 and randomly select the initial centroids: μ_1 = (2, 2) and μ_2 = (8, 7).
Iteration 1:
Assignment Step:
Assign each point to the nearest centroid. For example, A = (1, 2) lies at distance 1 from μ_1 = (2, 2) but roughly 8.6 from μ_2 = (8, 7), so it joins Cluster 1.
Cluster 1: {A, B, C, D}
Cluster 2: {E, F, G}
Update Step:
Recalculate centroids.
μ_1 = (1/4) · (1 + 2 + 2 + 3, 2 + 3 + 2 + 3) = (2, 2.5)
μ_2 = (1/3) · (8 + 9 + 10, 7 + 8 + 7) = (9, 7.333)
Iteration 2:
Repeating the assignment step with the new centroids μ_1 = (2, 2.5) and μ_2 = (9, 7.333) leaves every point in the same cluster, so the update step returns the same centroids.
Convergence:
Because the centroids no longer change, the algorithm has converged with final clusters {A, B, C, D} and {E, F, G}.
In practice, you may use Python libraries like scikit-learn to apply the K-means algorithm efficiently.
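For instance, here is a minimal sketch using scikit-learn's KMeans on the seven points from the table above (the numeric cluster labels are arbitrary and may be swapped between runs):

```python
import numpy as np
from sklearn.cluster import KMeans

# Rows correspond to points A, B, C, D, E, F, G from the table.
X = np.array([[1, 2], [2, 3], [2, 2], [3, 3],
              [8, 7], [9, 8], [10, 7]])

km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X)
print(km.labels_)           # e.g. [0 0 0 0 1 1 1]: {A, B, C, D} vs {E, F, G}
print(km.cluster_centers_)  # approx. [[2.  2.5 ], [9.  7.333]]
```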
Hierarchical Clustering:
Hierarchical Clustering is a method of cluster analysis that builds a hierarchy of clusters. It can be visualized using a tree-like diagram called a dendrogram. There are two main types of hierarchical clustering: Agglomerative (bottom-up) and Divisive (top-down).
Agglomerative Hierarchical Clustering Algorithm:
Initialization:
Start with each data point as a separate cluster.
Pairwise Similarity Calculation:
Calculate the similarity (or distance) between each pair of clusters. The choice of similarity measure depends on the nature of the data (e.g., Euclidean distance, correlation).
Merge Step:
Merge the two most similar clusters into a new cluster. Update the similarity matrix.
Repeat Steps 2-3:
Repeat the pairwise similarity calculation and merge steps until only a single cluster remains.
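As a sketch, this bottom-up procedure is available in SciPy's scipy.cluster.hierarchy module; the example below uses single linkage (the distance between two clusters is the distance between their closest members), which is one of several common merge criteria:

```python
import numpy as np
import matplotlib.pyplot as plt
from scipy.cluster.hierarchy import linkage, dendrogram

X = np.array([[1, 2], [2, 3], [2, 2], [3, 3],
              [8, 7], [9, 8], [10, 7]])

# Each row of Z records one merge: (cluster i, cluster j, distance, new size).
Z = linkage(X, method='single', metric='euclidean')

# Plot the resulting hierarchy as a dendrogram.
dendrogram(Z, labels=list('ABCDEFG'))
plt.show()
```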
Example:
Let's reuse the same seven data points, A through G, from the K-means example above.
Agglomerative Hierarchical Clustering Steps:
Step 1: Initialization:
Each data point is initially a separate cluster.
Step 2: Pairwise Similarity Calculation:
Calculate pairwise Euclidean distances between clusters.
Step 3: Merge Step:
Merge the two closest clusters.
Suppose the closest pair of clusters is A and C; merge them into a new cluster AC. (In this data several pairs tie at distance 1, so the choice depends on the tie-breaking rule.)
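As a quick check of that claim, one can compute the full pairwise distance matrix and locate its smallest entry; this snippet is an illustration, not part of the worked example:

```python
import numpy as np
from scipy.spatial.distance import pdist, squareform

X = np.array([[1, 2], [2, 3], [2, 2], [3, 3],
              [8, 7], [9, 8], [10, 7]])
labels = list('ABCDEFG')

D = squareform(pdist(X))       # 7x7 symmetric Euclidean distance matrix
np.fill_diagonal(D, np.inf)    # mask zero self-distances
i, j = np.unravel_index(D.argmin(), D.shape)
print(labels[i], labels[j], D[i, j])  # -> A C 1.0 (B-C and B-D also tie at 1.0)
```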