PEFT
🤗 PEFT: State-of-the-art Parameter-Efficient Fine-Tuning.
Release History
v0.18.1 / v0.18.0 (Breaking; 13 features): This release introduces seven new PEFT methods, including RoAd, ALoRA, and DeLoRA, alongside significant enhancements like stable integration interfaces and support for negative weights in weighted LoRA merging. It also drops support for Python 3.9 and requires an upgrade for compatibility with Transformers v5.
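As a rough illustration of negative weights in weighted LoRA merging, the sketch below combines two already-trained LoRA adapters with `add_weighted_adapter`, subtracting one of them. The base model ID and adapter paths are placeholders, and the `"linear"` combination assumes both adapters share the same rank.

```python
from transformers import AutoModelForCausalLM
from peft import PeftModel

# Load a base model plus two existing LoRA adapters (IDs and paths are placeholders).
base = AutoModelForCausalLM.from_pretrained("base-model-id")
model = PeftModel.from_pretrained(base, "path/to/adapter_a", adapter_name="a")
model.load_adapter("path/to/adapter_b", adapter_name="b")

# A negative weight subtracts adapter "b"'s contribution instead of adding it.
model.add_weighted_adapter(
    adapters=["a", "b"],
    weights=[1.0, -0.5],
    adapter_name="a_minus_b",
    combination_type="linear",
)
model.set_adapter("a_minus_b")
```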
v0.17.1 (Breaking; 2 fixes, 1 feature): This patch release fixes bugs related to the new `target_parameters` feature, specifically ensuring existing parametrizations are preserved and preventing incorrect behavior when loading multiple adapters.
v0.17.0 (10 fixes, 4 features): This release introduces two major new PEFT methods, SHiRA and MiSS (which deprecates Bone), and significantly enhances LoRA by enabling direct targeting of `nn.Parameter`, which is crucial for MoE layers (see the sketch below). It also adds a utility for injecting adapters directly from a `state_dict`.
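As a minimal sketch of the new `nn.Parameter` targeting, the config below uses the `target_parameters` field in place of `target_modules`; the model ID and dotted parameter names are placeholders and should be taken from your own model's `named_parameters()`.

```python
from transformers import AutoModelForCausalLM
from peft import LoraConfig, get_peft_model

# Placeholder MoE model; the dotted names below refer to raw nn.Parameter attributes
# (e.g. stacked expert weights) rather than nn.Module instances.
model = AutoModelForCausalLM.from_pretrained("some-org/some-moe-model")
config = LoraConfig(
    r=8,
    target_parameters=["feed_forward.experts.gate_up_proj", "feed_forward.experts.down_proj"],
)
peft_model = get_peft_model(model, config)
peft_model.print_trainable_parameters()
```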
v0.16.0 (Breaking; 7 fixes, 8 features): This release introduces three major new PEFT methods: LoRA-FA, RandLoRA, and C3A, alongside significant enhancements like QLoRA support and broader layer compatibility for LoRA and DoRA. It also includes critical compatibility updates related to recent changes in the Hugging Face Transformers library.
v0.15.2 (1 fix): This patch resolves an issue where prompt learning methods, including P-tuning, were failing to operate correctly.
v0.15.1 (1 fix): This patch addresses a critical bug (#2450) related to saving checkpoints when using DeepSpeed ZeRO stage 3 with `modules_to_save`.
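For context, `modules_to_save` tells PEFT to fully train and checkpoint additional non-adapter modules alongside the LoRA weights; a minimal sketch, with a hypothetical head name:

```python
from peft import LoraConfig

# "score" is a hypothetical classification-head name; use your model's actual module name.
config = LoraConfig(
    r=16,
    target_modules=["q_proj", "v_proj"],
    modules_to_save=["score"],
)
```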
v0.15.0 (12 fixes, 6 features): This release introduces significant new features, including CorDA initialization for LoRA and the Trainable Tokens tuner, alongside enhancements to LoRA targeting and hotswapping capabilities. It also deprecates `PEFT_TYPE_TO_MODEL_MAPPING` and replaces AutoGPTQ support with GPTQModel.
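The hotswapping mentioned here replaces the weights of an already-loaded adapter in place, which avoids re-compiling a `torch.compile`'d model; a minimal sketch, assuming the `hotswap_adapter` helper from `peft.utils.hotswap` and placeholder model and adapter paths:

```python
from transformers import AutoModelForCausalLM
from peft import PeftModel
from peft.utils.hotswap import hotswap_adapter

# Load the base model with a first LoRA adapter (IDs and paths are placeholders).
base = AutoModelForCausalLM.from_pretrained("base-model-id")
model = PeftModel.from_pretrained(base, "path/to/adapter_a")

# Swap in a second adapter's weights without altering the model's structure,
# so any compiled graph stays valid.
hotswap_adapter(model, "path/to/adapter_b", adapter_name="default")
```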