Clustering
1. K-Means Clustering:
K-means clustering is a partitioning method that divides a dataset into K distinct, non-overlapping subsets (clusters). Each data point belongs to the cluster with the nearest mean, and the mean is recalculated as the centroid of the points in the cluster. This process is iteratively repeated until convergence.
Formula for K-Means:
Initialization:
- Randomly select K initial centroids.
Assignment Step:
- Assign each data point to the nearest centroid, forming K clusters:
  $c(x_i) = \arg\min_{j} \| x_i - \mu_j \|^2$
- Where $x_i$ is a data point, $\mu_j$ is the centroid of cluster $j$, and $\| \cdot \|$ denotes the Euclidean distance.
Update Step:
- Update each centroid to the mean of all data points assigned to its cluster:
  $\mu_j = \frac{1}{|C_j|} \sum_{x_i \in C_j} x_i$
Repeat Assignment and Update Steps:
- Iteratively repeat the assignment and update steps until convergence (when centroids do not change significantly or a specified number of iterations is reached).
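Before reaching for a library, the whole loop fits in a few lines. Below is a minimal NumPy sketch of the four steps above; the function name, iteration cap, and seed are illustrative choices, not from any library:

```python
import numpy as np

def kmeans(X, k, n_iters=100, seed=0):
    """Plain K-means; assumes no cluster ever becomes empty."""
    rng = np.random.default_rng(seed)
    # Initialization: pick k distinct data points as the starting centroids.
    centroids = X[rng.choice(len(X), size=k, replace=False)]
    for _ in range(n_iters):
        # Assignment step: label each point with its nearest centroid.
        dists = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2)
        labels = dists.argmin(axis=1)
        # Update step: move each centroid to the mean of its points.
        new_centroids = np.array([X[labels == j].mean(axis=0) for j in range(k)])
        # Convergence: stop once the centroids no longer move.
        if np.allclose(new_centroids, centroids):
            break
        centroids = new_centroids
    return labels, centroids
```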
Example:
Let's consider a simple example with a table of data points:
Data Point | Feature 1 | Feature 2 |
---|---|---|
A | 1 | 2 |
B | 2 | 3 |
C | 2 | 2 |
D | 3 | 3 |
E | 8 | 7 |
F | 9 | 8 |
G | 10 | 7 |
Initialization:
- Choose K = 2 and randomly select two initial centroids, e.g. $\mu_1 = (1, 2)$ (point A) and $\mu_2 = (8, 7)$ (point E).
Iteration 1:
Assignment Step:
- Assign each point to the nearest centroid.
- Cluster 1: {A, B, C, D}
- Cluster 2: {E, F, G}
Update Step:
- Recalculate each centroid as the mean of its cluster's points: $\mu_1 = (2, 2.5)$, $\mu_2 = (9, 7.33)$.
Iteration 2:
- Repeat the assignment and update steps. With the updated centroids, no point changes cluster, so the centroids stay the same.
Convergence:
- Iterate until the centroids stabilize; here they are unchanged after Iteration 2, so the algorithm has converged.
In practice, you may use Python libraries like scikit-learn to apply the K-means algorithm efficiently.
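For instance, a short sketch with scikit-learn's KMeans on the example table (the parameter values here are illustrative):

```python
import numpy as np
from sklearn.cluster import KMeans

# Data points A-G from the example table (Feature 1, Feature 2).
X = np.array([[1, 2], [2, 3], [2, 2], [3, 3], [8, 7], [9, 8], [10, 7]])

# Fit K-means with K = 2; n_init controls how many random restarts are tried.
kmeans = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X)

print(kmeans.labels_)           # cluster index for each point
print(kmeans.cluster_centers_)  # final centroids, e.g. (2, 2.5) and (9, 7.33)
```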
2. Hierarchical Clustering:
Hierarchical Clustering is a method of cluster analysis that builds a hierarchy of clusters. It can be visualized using a tree-like diagram called a dendrogram. There are two main types of hierarchical clustering: Agglomerative (bottom-up) and Divisive (top-down).
Agglomerative Hierarchical Clustering Algorithm:
Initialization:
- Start with each data point as a separate cluster.
Pairwise Similarity Calculation:
- Calculate the similarity (or distance) between each pair of clusters. The choice of similarity measure depends on the nature of the data (e.g., Euclidean distance, correlation).
Merge Step:
- Merge the two most similar clusters into a new cluster. Update the similarity matrix.
Repeat Steps 2-3:
- Repeat the pairwise similarity calculation and merge steps until only a single cluster remains.
Example:
Let's consider a simple example with a table of data points:
Data Point | Feature 1 | Feature 2 |
---|---|---|
A | 1 | 2 |
B | 2 | 3 |
C | 2 | 2 |
D | 3 | 3 |
E | 8 | 7 |
F | 9 | 8 |
G | 10 | 7 |
Agglomerative Hierarchical Clustering Steps:
Step 1: Initialization:
- Each data point is initially a separate cluster.
Step 2: Pairwise Similarity Calculation:
- Calculate pairwise Euclidean distances between clusters.
Step 3: Merge Step:
- Merge the two closest clusters.
- Let's say the closest clusters are A and C; merge them into a new cluster AC.
Cluster | Feature 1 | Feature 2 |
---|---|---|
AC | 1.5 | 2 |
B | 2 | 3 |
D | 3 | 3 |
E | 8 | 7 |
F | 9 | 8 |
G | 10 | 7 |
- Repeat steps 2-3, recalculating the distances with the chosen linkage method (e.g. average-linkage) and merging the closest pair, until only one cluster remains.
How do you calculate pairwise Euclidean distances between clusters?
To calculate pairwise Euclidean distances between clusters, you need to consider the distances between the data points in different clusters. The distance between two clusters can be computed using various linkage methods, such as single-linkage, complete-linkage, or average-linkage. Let's focus on the average-linkage method for simplicity.
Here's a step-by-step guide to calculating pairwise Euclidean distances between clusters using the average-linkage method:
Example Data: Let's reuse the data points A-G and their coordinates from the table above.
Step 1: Initialization:
- Each data point starts as its own cluster: {A}, {B}, {C}, {D}, {E}, {F}, {G}.
Step 2: Pairwise Euclidean Distance Calculation:
- For average-linkage, the distance between two clusters is the mean of all cross-cluster point-to-point distances:
  $d(C_1, C_2) = \frac{1}{|C_1| \, |C_2|} \sum_{x \in C_1} \sum_{y \in C_2} d(x, y)$
- Where $C_1$ and $C_2$ are clusters, and $d(x, y)$ is the Euclidean distance between data points $x$ and $y$.
Example Calculation:
- For singleton clusters this reduces to the plain point-to-point distance, e.g. $d(\{A\}, \{C\}) = \sqrt{(1-2)^2 + (2-2)^2} = 1$.
- After merging A and C: $d(\{A, C\}, \{B\}) = \frac{d(A, B) + d(C, B)}{2} = \frac{\sqrt{2} + 1}{2} \approx 1.21$.
Repeat this process for all pairs of clusters.
Step 3: Merge Clusters:
- Merge the pair with the smallest average-linkage distance, e.g. {A} and {C} into {A, C}.
Updated Clusters:
- {A, C}, {B}, {D}, {E}, {F}, {G}
Repeat Steps 2-3 until only one cluster remains.
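To make the average-linkage formula concrete, here is a small hypothetical helper (the name average_linkage is ours, not a library's) applied to the example points:

```python
import numpy as np

def average_linkage(c1, c2):
    """Mean Euclidean distance over all cross-cluster point pairs."""
    c1, c2 = np.asarray(c1, float), np.asarray(c2, float)
    # Pairwise distances between every point in c1 and every point in c2.
    dists = np.linalg.norm(c1[:, None, :] - c2[None, :, :], axis=2)
    return dists.mean()

# Singleton clusters reduce to the plain point-to-point distance:
print(average_linkage([[1, 2]], [[2, 2]]))          # d({A}, {C}) = 1.0
# After merging A and C:
print(average_linkage([[1, 2], [2, 2]], [[2, 3]]))  # d({A,C}, {B}) ≈ 1.21
```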
In practice, hierarchical clustering algorithms, such as those implemented in Python libraries like scipy and scikit-learn, handle the details of pairwise distance calculations and clustering efficiently.
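For example, a brief sketch with scipy on the same data (cutting the hierarchy into two flat clusters mirrors the K-means example):

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

# Same data points A-G as in the table above.
X = np.array([[1, 2], [2, 3], [2, 2], [3, 3], [8, 7], [9, 8], [10, 7]])

# Build the full merge hierarchy with average linkage on Euclidean distances.
Z = linkage(X, method='average', metric='euclidean')
# scipy.cluster.hierarchy.dendrogram(Z) would draw the tree (needs matplotlib).

# Cut the hierarchy into two flat clusters.
labels = fcluster(Z, t=2, criterion='maxclust')
print(labels)  # e.g. [1 1 1 1 2 2 2]
```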