Error · 2 reports

Fix OutOfMemoryError in PyTorch Lightning

Solution

OutOfMemoryError in PyTorch Lightning typically occurs when the GPU runs out of memory during training. Common fixes: reduce the `batch_size` in your DataLoader; use gradient accumulation by setting `accumulate_grad_batches` in the Trainer, which keeps the effective batch size large while lowering the per-step memory footprint; or enable mixed precision by passing `precision="16-mixed"` to the Trainer, which stores activations in half precision. Note that `torch.cuda.empty_cache()` only releases cached, unused blocks back to the driver and does not free memory held by live tensors, so it rarely resolves the underlying problem. If the model still does not fit, consider a GPU with more memory or a sharded distributed strategy such as FSDP or DeepSpeed.
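A minimal sketch of these Trainer settings, assuming the Lightning 2.x package (`import lightning as L`); `TinyModel` and the random dataset are placeholders for your own module and data, and fp16 mixed precision assumes a CUDA GPU:

```python
import torch
import lightning as L  # Lightning 2.x; earlier versions import pytorch_lightning
from torch.utils.data import DataLoader, TensorDataset

class TinyModel(L.LightningModule):
    """Placeholder model; substitute your own LightningModule."""
    def __init__(self):
        super().__init__()
        self.layer = torch.nn.Linear(32, 1)

    def training_step(self, batch, batch_idx):
        x, y = batch
        return torch.nn.functional.mse_loss(self.layer(x), y)

    def configure_optimizers(self):
        return torch.optim.Adam(self.parameters())

dataset = TensorDataset(torch.randn(256, 32), torch.randn(256, 1))
# 1) A smaller per-step batch size lowers peak activation memory.
train_loader = DataLoader(dataset, batch_size=8)

trainer = L.Trainer(
    max_epochs=1,
    # 2) Accumulate gradients over 4 steps: effective batch size 32
    #    with the memory footprint of batch size 8.
    accumulate_grad_batches=4,
    # 3) fp16 mixed precision roughly halves activation memory on CUDA
    #    GPUs; use "bf16-mixed" on hardware without fp16 AMP support.
    precision="16-mixed",
)
trainer.fit(TinyModel(), train_loader)
```

Start with the batch size reduction alone, then layer on gradient accumulation and mixed precision only if memory pressure persists, so you can tell which change actually helped.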

Timeline

First reported: Jan 22, 2025
Last reported: Feb 20, 2025

Need More Help?

View the full changelog and migration guides for PyTorch Lightning
