Why use stochastic models? – OMSCS 7641: Machine Learning (2024)

Table of Contents

  • k-means clustering
  • Takeaways

In this blog post, we will explore the importance of stochastic models in the context of unsupervised learning. We will start with k-means clustering, which deterministically clusters points based on heuristics, and build up to Expectation Maximization (EM), which can use any parametrized probabilistic distribution to cluster data.

k-means clustering

k-means clustering is a method used to partition $latex n$ observations into $latex k$ clusters in which each observation belongs to the cluster with the nearest mean. Given a set of observations $latex (x_1, x_2, \dots, x_n)$, we want to partition them into $latex k$ sets $latex S = \{S_1, S_2, \dots, S_k\}$. We do this by minimizing the intra-cluster squared Euclidean distance:

$latex \min_S \sum_{i=1}^k \sum_{x \in S_i} \| x - \mu_i \|^2_2$


where $latex \mu_i = \dfrac{1}{|S_i|} \sum_{x \in S_i}x$ is the mean of the points in $latex S_i$.

To give a minimal example of k-means clustering, we are going to generate data according to $latex x \sim \mathcal{N}(\mu, I)$, where $latex I$ is the identity matrix. Then we’re going to cluster the data with sklearn.cluster.KMeans.
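A minimal sketch of this setup is shown below (not the exact code behind the figures): the cluster means, sample counts, and random seed are made-up illustration values, and the label-matching step via scipy's linear_sum_assignment is just one way to compute an accuracy like the one reported next.

```python
# Sketch: cluster points drawn from spherical Gaussians with sklearn's KMeans.
# Means, sample counts, and seed below are assumed values for illustration.
import numpy as np
from scipy.optimize import linear_sum_assignment
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
means = np.array([[0.0, 0.0], [4.0, 4.0], [0.0, 5.0]])  # assumed cluster centers
n_per_cluster = 200

# x ~ N(mu, I): identity covariance, i.e. spherical clusters
X = np.vstack([rng.multivariate_normal(m, np.eye(2), n_per_cluster) for m in means])
y_true = np.repeat(np.arange(len(means)), n_per_cluster)

y_pred = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(X)

# k-means labels are arbitrary, so match predicted clusters to true ones before scoring
counts = np.array([[np.sum((y_true == t) & (y_pred == p)) for p in range(3)] for t in range(3)])
true_idx, pred_idx = linear_sum_assignment(-counts)
mapping = dict(zip(pred_idx, true_idx))
accuracy = np.mean(np.array([mapping[p] for p in y_pred]) == y_true)
print(f"accuracy: {accuracy:.1%}")
```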

[Figure 1: k-means clusters found on the spherical (identity-covariance) Gaussian data]

We obtained an accuracy of 96.8%, which is pretty good! Due to its use of Euclidean distance, k-means actually makes an assumption about the underlying data distribution: that it is spherical. Spherical distributions are multivariate distributions that are symmetric in all directions, where the density depends only on the distance from the center, not on the direction. An isotropic Gaussian (identity covariance) is one example, and it is exactly what we used to generate our data!

We can actually easily plot the covariance of these classes:
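One way such a plot could be produced (a sketch, continuing from the snippet above; the post's actual plotting code is not shown) is to draw a 2-sigma ellipse from each cluster's empirical covariance:

```python
# Sketch: draw a 2-sigma covariance ellipse per k-means cluster
# (reuses X and y_pred from the previous snippet).
import matplotlib.pyplot as plt
from matplotlib.patches import Ellipse

fig, ax = plt.subplots()
ax.scatter(X[:, 0], X[:, 1], c=y_pred, s=5)
for k in range(3):
    pts = X[y_pred == k]
    mu, cov = pts.mean(axis=0), np.cov(pts.T)
    vals, vecs = np.linalg.eigh(cov)                              # eigenvectors give the ellipse axes
    angle = np.degrees(np.arctan2(vecs[1, -1], vecs[0, -1]))
    width, height = 4 * np.sqrt(vals[-1]), 4 * np.sqrt(vals[0])   # 2 sigma on each side
    ax.add_patch(Ellipse(mu, width, height, angle=angle, fill=False, lw=2))
ax.set_aspect("equal")
plt.show()
```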

[Figure 2: the covariance of each k-means cluster on the spherical data]

However, if our data doesn't follow a spherical distribution, k-means clustering does less well. Let's now generate some new data according to

$latex x \sim \mathcal{N}(\mu, \Sigma)$, where $latex \Sigma \neq I$.

Reapplying k-means to this new data reveals very different results and an accuracy of only 46.5%.
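For illustration, such data can be drawn with the same setup as before, only with a shared non-identity covariance (the matrix below is an assumed example, not necessarily the one behind the figure):

```python
# Same means as before, but with an elongated, correlated covariance (Sigma != I)
Sigma = np.array([[3.0, 2.4],
                  [2.4, 2.5]])
X_aniso = np.vstack([rng.multivariate_normal(m, Sigma, n_per_cluster) for m in means])
y_pred_aniso = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(X_aniso)
```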

[Figure 3: k-means clusters found on the non-spherical ($latex \Sigma \neq I$) data]

Visualizing the reach of each cluster reveals exactly why k-means isn’t performing well:

[Figure 4: the reach of each k-means cluster on the non-spherical data]

No wonder k-means does poorly. Can we drop the assumption of spherical distributions and just plug in any probabilistic distribution $latex p(x)$ we want? Of course we can, as long as it is parameterized (and thus solvable): $latex p(x;\theta)$.

Since we want our distribution to best express the true data, we simply have to maximize the expected likelihood of the data $latex x$ under the distribution parameters $latex \theta$:

$latex \max_\theta \mathbb{E} \big[ p(x | \theta) \big] $

Since we are dealing with probabilities, we can instead maximize the expectation of the log-likelihood, which is much easier to optimize:

$latex \max_\theta \mathbb{E} \big[ \log p(x | \theta) \big] $

This technique is called Expectation Maximization (EM), and the resulting optimization problem can be solved with any optimization approach. If the problem is simple, we can even find an analytical solution. However, for most practical applications, we use Maximum a posteriori estimation (MAP) or simply gradient-based ascent with learning rate $latex \alpha$:

$latex \theta \leftarrow \theta + \alpha \nabla_\theta \mathbb{E} \big[ \log p(x | \theta) \big] $

Expectation Maximization has two main benefits over k-means clustering:

  • Richer data models that can capture more complex distributions.
  • Soft clustering, which computes the probability of each point belonging to a specific cluster.

If we decide to use a normal distribution, $latex p(x; \theta) := \mathcal{N}(x; \mu, \Sigma)$, then EM simply becomes:

$latex \mu \leftarrow \mu + \alpha \big( \Sigma^{-1} (x-\mu) \big) $

$latex \Sigma \leftarrow \Sigma + \dfrac{\alpha}{2} \big( \Sigma^{-1} (x-\mu) (x-\mu)^T \Sigma^{-1} - \Sigma^{-1} \big) $
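As a sketch, these per-sample updates for a single Gaussian component might look as follows (step size, epoch count, and initialization are assumed values, and the soft cluster assignments that would weight each update in a full clustering run are omitted for brevity):

```python
import numpy as np

def fit_gaussian_sgd(X, alpha=0.01, epochs=50, seed=0):
    """Fit one Gaussian to X by stochastic gradient ascent on log N(x; mu, Sigma)."""
    rng = np.random.default_rng(seed)
    d = X.shape[1]
    mu, Sigma = X[rng.integers(len(X))].copy(), np.eye(d)
    for _ in range(epochs):
        for x in rng.permutation(X):
            P = np.linalg.inv(Sigma)                       # Sigma^{-1}
            diff = x - mu
            mu = mu + alpha * (P @ diff)                   # mu update from above
            grad_Sigma = 0.5 * (P @ np.outer(diff, diff) @ P - P)
            Sigma = Sigma + alpha * grad_Sigma             # Sigma update from above
            Sigma = 0.5 * (Sigma + Sigma.T)                # keep Sigma symmetric
    return mu, Sigma
```

A small step size helps keep $latex \Sigma$ positive definite along the way; a full mixture model would additionally weight each update by the point's responsibility for that component.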

Let’s see this in action!

[Figure 5: EM clustering with full-covariance Gaussians on the non-spherical data]

Our more expressive models netted us a much higher accuracy of 89.2%! Let's now circle back: what if we fix the covariance matrix to $latex \Sigma = I$ and don't learn it? Suddenly, our EM algorithm becomes even simpler:

$latex \mu \leftarrow \mu + \alpha \big( \cancel{\Sigma^{-1}} (x-\mu) \big) $

$latex \cancel{\Sigma \leftarrow \Sigma + \dfrac{\alpha}{2} \big( \Sigma^{-1} (x-\mu) (x-\mu)^T \Sigma^{-1} - \Sigma^{-1} \big) } $

This update equation is almost exactly the update equation used by k-means clustering (which only averages over the points within each cluster). As such, k-means clustering can be seen as a special case of Expectation Maximization (EM).

Note that in practice, k-means doesn’t use gradient descent/ascent but instead estimates the means from the data points directly. You can view these two as just different optimization options to solve the same problem.
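For comparison, here is a sketch of that direct estimation: one iteration of the standard (Lloyd) k-means step, assigning each point to its nearest mean and then recomputing the means (the helper name is ours, and it assumes no cluster ends up empty):

```python
def lloyd_step(X, mus):
    # distance from every point to every current mean
    dists = np.linalg.norm(X[:, None, :] - mus[None, :, :], axis=-1)
    assign = dists.argmin(axis=1)                                   # hard assignment to nearest mean
    new_mus = np.vstack([X[assign == k].mean(axis=0) for k in range(len(mus))])
    return new_mus, assign
```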

If we restrict our previous model to have $latex \Sigma = I$, then we get:

[Figure 6: EM clustering with $latex \Sigma = I$ fixed, behaving much like k-means]

These probabilistic models are much more than classifiers. Since we are learning a probability distribution, we can now sample from it: $latex x \sim p(x)$. They are effectively generative models that can both fit data and generate new, artificial data!
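For example, with the $latex \mu$ and $latex \Sigma$ estimated earlier, drawing brand-new points is a one-liner (the sample count here is arbitrary):

```python
# Sample 100 new points from the learned Gaussian
new_points = rng.multivariate_normal(mu, Sigma, size=100)
```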

[Figure 7: new data points sampled from the learned distributions]

Takeaways

  • k-means is a simple algorithm that assumes that all data follows a spherical distribution.
  • We can parametrize each cluster as a probabilistic distribution, which gives us more expressiveness.
  • We can then optimize this with Expectation Maximization (EM).

Probabilistic models with EM allow us to:

  • Use more complex models that can capture non-spherical distributions.
  • Soft classify each data point with individual probabilities of belonging to a class.
  • Generate new artificial data.

Like all things, EM isn’t perfect and has a few limitations:

  • It is prone to getting stuck in local maxima, and the solution heavily depends on the initial values. Can you think of any techniques we studied in this course to alleviate this?
  • It still assumes that each class is uni-modal. This can be addressed by using multiple Gaussians and adding them together to form more complex distributions; these are known as Gaussian Mixture Models (GMMs).
  • Heavy reliance on the assumed underlying model. While EM is more expressive than k-means clustering, it still works on an assumption about an underlying distribution, which might not be correct. Non-parametric approaches such as Kernel Density Estimation and Gaussian Processes are alternatives that tackle this issue.