A type of recurrent neural network (RNN) architecture used in deep learning, designed to capture long-range dependencies in sequential data. Gated Recurrent Units (GRUs) are similar to Long Short-Term Memory (LSTM) networks but use fewer gates and no separate cell state, so they have fewer parameters and are often faster to train. This makes them effective for tasks such as speech recognition, language modeling, and time series prediction.
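The gating idea can be sketched in a few lines of NumPy. This is a minimal single-step GRU cell, not any particular library's implementation: an update gate `z` decides how much of the previous hidden state to keep, a reset gate `r` decides how much of it to use when forming the candidate state, and the new state blends the two. The function names, parameter layout, and toy dimensions are illustrative choices; the equations follow the common convention `h' = (1 - z) * h + z * h_tilde` (some frameworks swap the roles of `z` and `1 - z`).

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def gru_cell(x, h_prev, params):
    """One GRU step: gates control how much of the previous
    hidden state is kept versus overwritten."""
    Wz, Uz, bz, Wr, Ur, br, Wh, Uh, bh = params
    z = sigmoid(Wz @ x + Uz @ h_prev + bz)               # update gate, in (0, 1)
    r = sigmoid(Wr @ x + Ur @ h_prev + br)               # reset gate, in (0, 1)
    h_tilde = np.tanh(Wh @ x + Uh @ (r * h_prev) + bh)   # candidate state, in (-1, 1)
    return (1 - z) * h_prev + z * h_tilde                # blend old state and candidate

# Toy dimensions (illustrative): 3-dim input, 4-dim hidden state.
rng = np.random.default_rng(0)
d_in, d_h = 3, 4
params = (
    rng.standard_normal((d_h, d_in)), rng.standard_normal((d_h, d_h)), np.zeros(d_h),  # z
    rng.standard_normal((d_h, d_in)), rng.standard_normal((d_h, d_h)), np.zeros(d_h),  # r
    rng.standard_normal((d_h, d_in)), rng.standard_normal((d_h, d_h)), np.zeros(d_h),  # h_tilde
)

h = np.zeros(d_h)
for x in rng.standard_normal((5, d_in)):  # run the cell over a 5-step sequence
    h = gru_cell(x, h, params)
print(h.shape)
```

Because the new state is a convex combination of the previous state and a tanh-bounded candidate, every entry of `h` stays in (-1, 1), and gradients can flow through the `(1 - z) * h_prev` path largely unchanged, which is what helps GRUs retain information over long sequences.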