Cross-entropy quantifies the difference between two probability distributions. It is commonly used in machine learning as a loss function for classification models, where it measures how far the predicted probability distribution is from the actual (true) label distribution: the lower the cross-entropy, the closer the model's predictions are to the true labels.
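For discrete distributions p (actual) and q (predicted), cross-entropy is defined as H(p, q) = -Σ p(x) · log q(x). A minimal sketch of this computation in plain Python (the function name `cross_entropy` and the example distributions are illustrative, not from any particular library):

```python
import math

def cross_entropy(p, q):
    """Cross-entropy H(p, q) = -sum(p_i * log(q_i)) for discrete distributions.

    p: actual (true) probabilities, q: predicted probabilities.
    Terms where p_i == 0 contribute nothing and are skipped,
    which also avoids evaluating log(0) when q_i is 0 there.
    """
    return -sum(pi * math.log(qi) for pi, qi in zip(p, q) if pi > 0)

# One-hot true label (class 0) vs. a model's predicted probabilities.
p_true = [1.0, 0.0, 0.0]
q_pred = [0.7, 0.2, 0.1]

loss = cross_entropy(p_true, q_pred)  # equals -log(0.7) ≈ 0.357
```

With a one-hot true distribution, the sum collapses to -log of the probability the model assigned to the correct class, which is why this loss is often called negative log-likelihood in the classification setting.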