A model checkpoint is a snapshot of a model's state saved at a specific point during training. It lets training resume from the last saved state after an interruption, and lets the model be used later without retraining from scratch.
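As a minimal sketch of the idea, assuming a PyTorch workflow (the model, optimizer, file name `checkpoint.pt`, and epoch value below are illustrative, not prescribed): a checkpoint typically bundles the model weights with the optimizer state and some training metadata, so a later run can pick up where the earlier one stopped.

```python
import torch
import torch.nn as nn

model = nn.Linear(10, 2)                                   # placeholder model
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)   # placeholder optimizer

# Save a checkpoint: weights, optimizer state, and the current epoch,
# so training can resume exactly where it left off.
checkpoint = {
    "epoch": 5,
    "model_state_dict": model.state_dict(),
    "optimizer_state_dict": optimizer.state_dict(),
}
torch.save(checkpoint, "checkpoint.pt")

# Later (or after an interruption): restore the saved state and continue.
checkpoint = torch.load("checkpoint.pt")
model.load_state_dict(checkpoint["model_state_dict"])
optimizer.load_state_dict(checkpoint["optimizer_state_dict"])
start_epoch = checkpoint["epoch"] + 1
```

Saving the optimizer state alongside the weights matters for resuming training (learning-rate schedules, momentum buffers, and similar state would otherwise be lost); for inference-only use, the model weights alone are usually enough.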