Expectation-Maximization (EM) is an iterative algorithm, used in statistics and machine learning, for finding maximum likelihood estimates of parameters in probabilistic models that depend on unobserved latent variables. The algorithm alternates between an expectation (E) step, which computes the expected log-likelihood of the complete data under the latent-variable distribution implied by the current parameter estimate, and a maximization (M) step, which updates the parameters to maximize the expected log-likelihood found in the E step.
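As a concrete illustration, the sketch below applies EM to a two-component, one-dimensional Gaussian mixture, a classic case where the latent variable is the unknown component assignment of each data point. It is a minimal example using only NumPy; the function name `em_gmm` and all variable names are illustrative choices for this sketch, not a standard API, and the initialization and stopping rule are simple assumptions rather than a recommended practice.

```python
import numpy as np

def em_gmm(x, n_iter=100, tol=1e-6, seed=0):
    """Fit a 2-component 1-D Gaussian mixture to data `x` with EM (sketch)."""
    rng = np.random.default_rng(seed)
    # Initialize mixing weights, means, and variances (simple heuristic choices).
    pi = np.array([0.5, 0.5])
    mu = rng.choice(x, size=2, replace=False)
    var = np.array([x.var(), x.var()])
    prev_ll = -np.inf

    for _ in range(n_iter):
        # E step: posterior probability (responsibility) of each component
        # for each data point, given the current parameter estimates.
        dens = np.stack([
            pi[k] * np.exp(-0.5 * (x - mu[k]) ** 2 / var[k])
                  / np.sqrt(2 * np.pi * var[k])
            for k in range(2)
        ], axis=1)                                  # shape (n, 2)
        resp = dens / dens.sum(axis=1, keepdims=True)

        # M step: re-estimate parameters by maximizing the expected
        # complete-data log-likelihood (closed form for Gaussian mixtures).
        nk = resp.sum(axis=0)
        pi = nk / len(x)
        mu = (resp * x[:, None]).sum(axis=0) / nk
        var = (resp * (x[:, None] - mu) ** 2).sum(axis=0) / nk

        # Monitor the observed-data log-likelihood; EM never decreases it,
        # so a small improvement signals convergence.
        ll = np.log(dens.sum(axis=1)).sum()
        if ll - prev_ll < tol:
            break
        prev_ll = ll

    return pi, mu, var

# Example: data drawn from two well-separated Gaussians.
rng = np.random.default_rng(1)
data = np.concatenate([rng.normal(-2.0, 1.0, 300), rng.normal(3.0, 0.5, 200)])
print(em_gmm(data))
```

In this sketch the E step produces the responsibilities (the expected latent assignments given the current parameters), and the M step has a closed-form solution: weighted sample proportions, means, and variances. Monitoring the observed-data log-likelihood is a common convergence check because EM is guaranteed not to decrease it from one iteration to the next.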