Repository History
Explore all analyzed open source repositories

PEFT: State-of-the-Art Parameter-Efficient Fine-Tuning
PEFT (Parameter-Efficient Fine-Tuning) is a library from Hugging Face designed to efficiently adapt large pretrained models to downstream applications. It dramatically reduces computational and storage costs by fine-tuning only a small subset of model parameters. Despite updating so few weights, these methods often match the performance of full fine-tuning, making advanced AI accessible on more modest hardware.
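As a rough sketch of where the savings come from, consider the parameter arithmetic behind LoRA, one of the adapter methods PEFT implements: instead of updating a full weight matrix of shape (d, k), a rank-r adapter trains two small matrices of shapes (d, r) and (r, k). The layer dimensions and rank below are illustrative values, not taken from any particular model.

```python
# Sketch of LoRA-style parameter savings (illustrative numbers, not PEFT API calls).

def full_finetune_params(d: int, k: int) -> int:
    """Trainable parameters when updating the full (d, k) weight matrix."""
    return d * k

def lora_trainable_params(d: int, k: int, r: int) -> int:
    """Trainable parameters for a rank-r LoRA adapter: A is (d, r), B is (r, k)."""
    return r * (d + k)

# Example: a 4096 x 4096 projection layer with a rank-8 adapter.
d = k = 4096
r = 8
full = full_finetune_params(d, k)      # 16,777,216 trainable weights
lora = lora_trainable_params(d, k, r)  # 65,536 trainable weights
print(f"full: {full:,}  lora: {lora:,}  fraction: {lora / full:.4%}")
```

In the library itself, this is what wrapping a model with `get_peft_model(model, LoraConfig(r=8, ...))` achieves: the base weights stay frozen and only the small adapter matrices receive gradients.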

LLaMA-Factory: Unified Efficient Fine-Tuning for 100+ LLMs & VLMs
LLaMA-Factory is an open-source project offering a unified and efficient framework for fine-tuning over 100 large language models (LLMs) and vision-language models (VLMs). Recognized at ACL 2024, it provides a comprehensive suite of tools and algorithms covering a range of training approaches. The project simplifies the otherwise complex process of adapting powerful models to specific tasks, with an emphasis on ease of use and scalability.