A technique used in training neural networks where the gradient of a non-differentiable operation (such as rounding, sign, or quantization) is replaced with a simple surrogate during backpropagation, most commonly the identity, i.e. a constant derivative of 1. The forward pass keeps the exact non-differentiable output, while the backward pass lets gradients flow through unchanged, so the network can include such operations and still be trained with gradient-based optimization.
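A minimal sketch of the idea, assuming PyTorch; the helper name `binarize_ste` is hypothetical:

```python
import torch

def binarize_ste(x: torch.Tensor) -> torch.Tensor:
    """Sign/binarize in the forward pass, identity gradient in the backward pass."""
    hard = torch.sign(x)            # non-differentiable forward output
    # The detached term has no gradient, so d(output)/dx is exactly 1:
    # the gradient passes "straight through" the non-differentiable op.
    return x + (hard - x).detach()

# Usage: gradients reach x as if binarize_ste were the identity.
x = torch.randn(4, requires_grad=True)
y = binarize_ste(x)
y.sum().backward()
print(y)        # values in {-1, 0, 1}
print(x.grad)   # all ones
```

The `x + (hard - x).detach()` pattern evaluates to `hard` in the forward pass, but autograd only differentiates the leading `x`, which is what makes the surrogate gradient a constant 1.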