PyTorch Lightning
Pretrain, finetune ANY AI model of ANY size on 1 or 10,000+ GPUs with zero code changes.
Release History
2.6.0 (14 fixes, 7 features)
Version 2.6.0 introduces several new features, including the WeightAveraging callback and Torch-TensorRT integration, alongside numerous bug fixes across PyTorch Lightning and Fabric components.
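As a rough illustration, the callback would be attached through the Trainer's `callbacks` list like any other; the no-argument constructor shown here is an assumption, so consult the 2.6.0 API reference for averaging-specific options:

```python
from lightning.pytorch import Trainer
from lightning.pytorch.callbacks import WeightAveraging

# Minimal sketch: attach the new WeightAveraging callback to a Trainer.
# Calling it with no arguments is an assumption -- check the 2.6.0 docs
# for averaging-specific options before relying on the defaults.
trainer = Trainer(
    max_epochs=10,
    callbacks=[WeightAveraging()],  # assumed default configuration
)
```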
2.5.6 (1 feature)
This release introduces a new `name()` function on the accelerator interface and removes support for the deprecated lightning-habana package.
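A hedged sketch of what implementing the hook on a custom accelerator might look like; whether `name()` is a static, class, or instance method is an assumption here, as is its use for registration and logging:

```python
from lightning.pytorch.accelerators import Accelerator

# Sketch of a custom accelerator exposing the new name() hook from 2.5.6.
# The method kind (staticmethod) and return contract are assumptions.
class MyAccelerator(Accelerator):
    @staticmethod
    def name() -> str:
        # Short identifier, assumed to be used when registering the
        # accelerator and when reporting it in logs.
        return "my_accelerator"

    # The remaining Accelerator abstract methods (setup_device, teardown,
    # get_device_stats, ...) still need real implementations before this
    # class can be instantiated; they are omitted to keep the sketch short.
```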
2.5.5 (6 fixes)
This patch release for PyTorch Lightning and Lightning Fabric focuses on bug fixes, including issues with `LightningCLI`, `ModelCheckpoint` saving logic, and progress bar resetting. It also includes updates for PyTorch 2.8 compatibility.
2.5.4 (5 fixes, 1 feature)
This patch release for PyTorch Lightning focuses on bug fixes across checkpointing, callbacks, and strategy integrations. Lightning Fabric also added support for NVIDIA H200 GPUs.
2.5.3 (13 fixes, 5 features)
This release brings numerous bug fixes across PyTorch Lightning and Lightning Fabric, including improvements to checkpointing, logging, profiling, and progress bar rendering. New features include more flexible `ModelCheckpoint` options and improved handling of `training_step` return values.
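Since the notes do not spell out which `ModelCheckpoint` options gained flexibility, the sketch below sticks to long-standing, documented parameters to show where such options live; treat the 2.5.3-specific changes as something to confirm in the full changelog:

```python
from lightning.pytorch import Trainer
from lightning.pytorch.callbacks import ModelCheckpoint

# Common ModelCheckpoint configuration knobs (all long-standing parameters;
# which of these 2.5.3 made more flexible is not assumed here).
checkpoint_cb = ModelCheckpoint(
    monitor="val_loss",   # metric logged via self.log(...) in the model
    mode="min",           # keep the checkpoints with the lowest val_loss
    save_top_k=3,         # retain only the three best checkpoints
    filename="epoch{epoch:02d}-val_loss{val_loss:.3f}",
    auto_insert_metric_name=False,  # the filename above already names the metric
)
trainer = Trainer(callbacks=[checkpoint_cb])
```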
2.5.2 (8 fixes, 1 feature)
This release introduces the `toggled_optimizer` context manager on LightningModule and resolves several bugs related to CLI integration, DDP synchronization, and checkpointing. Users are advised to update `fsspec` for cross-device checkpointing.
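A minimal manual-optimization sketch of how the context manager is presumably used; it is assumed to toggle `requires_grad` for the given optimizer's parameters on entry and restore them on exit, mirroring the older `toggle_optimizer()`/`untoggle_optimizer()` pair:

```python
import torch
from torch import nn
from lightning.pytorch import LightningModule

class LitTwoNets(LightningModule):
    def __init__(self):
        super().__init__()
        self.automatic_optimization = False  # required for manual stepping
        self.net_a = nn.Linear(4, 1)
        self.net_b = nn.Linear(4, 1)

    def training_step(self, batch, batch_idx):
        opt_a, opt_b = self.optimizers()
        x, y = batch

        # Assumed behavior: inside the block only net_a's parameters
        # accumulate gradients; state is restored when the block exits.
        with self.toggled_optimizer(opt_a):
            loss_a = nn.functional.mse_loss(self.net_a(x), y)
            self.manual_backward(loss_a)
            opt_a.step()
            opt_a.zero_grad()

        with self.toggled_optimizer(opt_b):
            loss_b = nn.functional.mse_loss(self.net_b(x), y)
            self.manual_backward(loss_b)
            opt_b.step()
            opt_b.zero_grad()

    def configure_optimizers(self):
        return (
            torch.optim.SGD(self.net_a.parameters(), lr=0.1),
            torch.optim.SGD(self.net_b.parameters(), lr=0.1),
        )
```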
2.5.1.post0
A post-release update following version 2.5.1; details are available in the linked comparison.
2.5.1 (10 fixes, 4 features)
This release introduces enhancements for logging integrations such as MLflow and CometML, allows customization of LightningCLI argument parsing, and fixes several bugs related to logging latency, checkpoint resumption, and logger behavior. Legacy support for `lightning run model` has been removed in favor of `fabric run`.
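The notes do not say which parsing hook 2.5.1 extended, so the sketch below uses the long-documented `add_arguments_to_parser` override to show the general customization pattern; the `--experiment_name` flag and the `batch_size` link are illustrative assumptions about the model and datamodule:

```python
from lightning.pytorch.cli import LightningCLI

class MyCLI(LightningCLI):
    def add_arguments_to_parser(self, parser):
        # Extra top-level flag (hypothetical; not tied to any Lightning API).
        parser.add_argument("--experiment_name", default="baseline")
        # Canonical customization: reuse one value in two places. Assumes
        # both the datamodule and the model accept a batch_size argument.
        parser.link_arguments("data.batch_size", "model.batch_size")

if __name__ == "__main__":
    # Model/datamodule classes are resolved from the config or CLI arguments
    # (subclass mode); pass them explicitly here to pin specific classes.
    MyCLI()
```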