
PEFT


🤗 PEFT: State-of-the-art Parameter-Efficient Fine-Tuning.

Latest: v0.18.1 · 8 releases · 3 breaking changes · View on GitHub →

Release History

v0.18.1
Jan 9, 2026

v0.18.0 · Breaking · 13 features
Nov 13, 2025

This release introduces seven new PEFT methods including RoAd, ALoRA, and DeLoRA, alongside significant enhancements like stable integration interfaces and support for negative weights in weighted LoRA merging. It also drops support for Python 3.9 and requires an upgrade for compatibility with Transformers v5.
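As a rough illustration of negative weights in weighted LoRA merging, the sketch below combines two adapters and subtracts one of them via `add_weighted_adapter`. The adapter names, paths, and weights are hypothetical, and the exact set of supported `combination_type` values may differ by version:

```python
from peft import PeftModel

# Assumes `base_model` is an already-loaded transformers model and two LoRA
# adapters have been saved locally (paths and names below are placeholders).
model = PeftModel.from_pretrained(base_model, "adapters/helpful", adapter_name="helpful")
model.load_adapter("adapters/verbose", adapter_name="verbose")

# Merge the adapters, using a negative weight to subtract the second one.
model.add_weighted_adapter(
    adapters=["helpful", "verbose"],
    weights=[1.0, -0.5],  # negative weights are what this release enables
    adapter_name="helpful_minus_verbose",
    combination_type="linear",
)
model.set_adapter("helpful_minus_verbose")
```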

v0.17.1 · Breaking · 2 fixes · 1 feature
Aug 21, 2025

This patch release fixes bugs related to the new target_parameters feature, specifically ensuring existing parameterizations are preserved and preventing incorrect behavior when loading multiple adapters.
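The multi-adapter scenario referenced here looks roughly like the following; this is a hedged sketch and the adapter paths and names are placeholders:

```python
from peft import PeftModel

# Load a first adapter, then a second one onto the same base model
# (hypothetical local paths).
model = PeftModel.from_pretrained(base_model, "adapters/task_a", adapter_name="task_a")
model.load_adapter("adapters/task_b", adapter_name="task_b")

# Switch the active adapter; the 0.17.1 fixes target correctness in setups
# like this when target_parameters is used in the adapters' configs.
model.set_adapter("task_b")
```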

v0.17.0 · 10 fixes · 4 features
Aug 1, 2025

This release introduces two major new PEFT methods, SHiRA and MiSS (which deprecates Bone), and significantly enhances LoRA by enabling direct targeting of nn.Parameter, which is crucial for Mixture-of-Experts (MoE) layers. It also adds a utility for injecting adapters directly from a state_dict.
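As a rough illustration of parameter targeting, a LoraConfig can point at weight tensors directly rather than whole modules; the parameter names below are hypothetical and depend on the model architecture:

```python
from peft import LoraConfig, get_peft_model

# Target nn.Parameter tensors (e.g., fused expert weights in an MoE block)
# instead of whole nn.Linear modules. Parameter names are illustrative only.
config = LoraConfig(
    r=8,
    lora_alpha=16,
    target_parameters=[
        "feed_forward.experts.gate_up_proj",
        "feed_forward.experts.down_proj",
    ],
)
model = get_peft_model(base_model, config)
```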

v0.16.0 · Breaking · 7 fixes · 8 features
Jul 3, 2025

This release introduces three major new PEFT methods: LoRA-FA, RandLoRA, and C3A, alongside significant enhancements like QLoRA support and broader layer compatibility for LoRA and DoRA. It also includes critical compatibility updates related to recent changes in the Hugging Face Transformers library.
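For context, combining LoRA with a 4-bit quantized base model (the QLoRA recipe) typically looks like the sketch below; the model name and hyperparameters are illustrative and not taken from this release:

```python
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model, prepare_model_for_kbit_training

# Load the base model in 4-bit (model name is a placeholder).
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)
base_model = AutoModelForCausalLM.from_pretrained(
    "some-org/some-model", quantization_config=bnb_config
)

# Prepare the quantized model for k-bit training and attach a LoRA adapter.
base_model = prepare_model_for_kbit_training(base_model)
lora_config = LoraConfig(r=16, lora_alpha=32, target_modules=["q_proj", "v_proj"])
model = get_peft_model(base_model, lora_config)
```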

v0.15.2 · 1 fix
Apr 15, 2025

This patch resolves an issue where prompt learning methods, including P-tuning, were failing to operate correctly.
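"Prompt learning methods" here refers to tuners such as P-tuning; a minimal configuration sketch follows, with illustrative hyperparameters:

```python
from peft import PromptEncoderConfig, get_peft_model

# P-tuning: learn continuous prompt embeddings through a small prompt encoder.
config = PromptEncoderConfig(
    task_type="CAUSAL_LM",
    num_virtual_tokens=20,
    encoder_hidden_size=128,
)
model = get_peft_model(base_model, config)
```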

v0.15.1 · 1 fix
Mar 27, 2025

This patch addresses a critical bug (#2450) related to saving checkpoints when using DeepSpeed ZeRO stage 3 with `modules_to_save`.
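modules_to_save marks additional modules to be fully trained and stored alongside the adapter, which is the combination the DeepSpeed ZeRO-3 fix covers; a minimal sketch with illustrative module names:

```python
from peft import LoraConfig, get_peft_model

# Train the LoRA adapter plus a fully fine-tuned copy of selected modules
# (module names are illustrative and depend on the architecture).
config = LoraConfig(
    r=8,
    target_modules=["q_proj", "v_proj"],
    modules_to_save=["lm_head"],
)
model = get_peft_model(base_model, config)
```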

v0.15.0 · 12 fixes · 6 features
Mar 19, 2025

This release introduces significant new features including CorDA initialization for LoRA and the Trainable Tokens tuner, alongside enhancements to LoRA targeting and hotswapping capabilities. It also deprecates PEFT_TYPE_TO_MODEL_MAPPING and replaces AutoGPTQ support with GPTQModel.
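Hotswapping replaces the weights of an already-loaded adapter in place, which avoids re-creating the PeftModel (and re-compiling, if the model is compiled). A heavily hedged sketch, assuming the utility lives at peft.utils.hotswap and with placeholder adapter paths:

```python
from peft import PeftModel
from peft.utils.hotswap import hotswap_adapter

# Load an initial adapter (hypothetical path), then swap another adapter's
# weights into the same slot without rebuilding the model.
model = PeftModel.from_pretrained(base_model, "adapters/style_a")
hotswap_adapter(model, "adapters/style_b", adapter_name="default")
```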