LLMBox: A Comprehensive Python Library for LLM Training and Evaluation

Summary
LLMBox is a comprehensive Python library designed for implementing Large Language Models, offering a unified training pipeline and extensive model evaluation capabilities. It provides a one-stop solution for both training and utilizing LLMs, emphasizing flexibility and efficiency. Developers can leverage its diverse training strategies and blazingly fast inference for their LLM projects.
Introduction
LLMBox is a powerful and comprehensive Python library engineered for the implementation, training, and evaluation of Large Language Models (LLMs). It stands out as a one-stop solution, providing a unified training pipeline alongside extensive model evaluation tools. Its practical library design keeps both the training and utilization stages of LLMs flexible and efficient.
Key features include diverse training strategies such as Supervised Fine-tuning (SFT), Pre-training (PT), PPO, and DPO. It supports comprehensive SFT datasets, tokenizer vocabulary merging, and advanced data construction strategies like Self-Instruct and Evol-Instruct. For efficiency, LLMBox integrates Parameter Efficient Fine-Tuning (LoRA, QLoRA), Flash Attention, and Deepspeed. On the utilization front, it offers blazingly fast inference through KV Cache management and vLLM, comprehensive evaluation across 59+ datasets and benchmarks, various evaluation methods, In-Context Learning (ICL) strategies, Chain-of-Thought (CoT) evaluation, and quantization support.
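To see why parameter-efficient methods such as LoRA matter at this scale, here is a back-of-the-envelope sketch of the trainable-parameter savings for a single weight matrix. This is plain Python for illustration, not LLMBox code, and the matrix dimensions are assumptions chosen to match a LLaMA-2 (7B)-sized attention projection:

```python
# Illustration of why LoRA shrinks the trainable parameter count.
# A full fine-tune of a d x k weight matrix updates all d * k parameters;
# LoRA instead trains two low-rank factors A (d x r) and B (r x k).

def full_params(d: int, k: int) -> int:
    """Trainable parameters for a full update of one d x k matrix."""
    return d * k

def lora_params(d: int, k: int, r: int) -> int:
    """Trainable parameters for a rank-r LoRA adapter on the same matrix."""
    return d * r + r * k

# Assumed example: a 4096 x 4096 projection with LoRA rank 8.
d = k = 4096
r = 8
print(full_params(d, k))    # 16777216 parameters for full fine-tuning
print(lora_params(d, k, r)) # 65536 parameters with LoRA (~0.4% of the above)
```

QLoRA pushes this further by also quantizing the frozen base weights, which is why these methods appear together in LLMBox's efficiency toolkit.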
Installation
Getting started with LLMBox is straightforward. Follow these steps to set up the library:
git clone https://github.com/RUCAIBox/LLMBox.git && cd LLMBox
pip install -r requirements.txt
If you are primarily evaluating OpenAI or OpenAI-compatible models, you can opt for the minimal requirements:
pip install -r requirements-openai.txt
For any installation issues, refer to the troubleshooting documentation.
Examples
LLMBox simplifies both the training and utilization of LLMs with easy-to-run examples.
Quick Start with Training
To train an SFT model, for instance, based on LLaMA-2 (7B) with DeepSpeed ZeRO stage 3:
cd training
bash download.sh
bash bash/run_ds3.sh
Quick Start with Utilization
To utilize your model or evaluate an existing one, you can run a command like this, which defaults to using OpenAI GPT-3.5 Turbo on the COPA dataset in a zero-shot manner:
python inference.py -m gpt-3.5-turbo -d copa
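Under the hood, probability-style evaluation of a multiple-choice task like COPA amounts to scoring each candidate continuation with the model and picking the argmax. The sketch below illustrates that idea only; `log_likelihood` is a toy stand-in for a real model call, and none of these names are taken from the LLMBox API:

```python
# Sketch of probability-style multiple-choice evaluation (COPA-like item):
# score each candidate continuation, then select the highest-scoring one.

def log_likelihood(prompt: str, continuation: str) -> float:
    """Toy stand-in: a real evaluator would sum token log-probs from an LLM.
    Here we simply favour continuations sharing more words with the prompt."""
    overlap = len(set(prompt.lower().split()) & set(continuation.lower().split()))
    return overlap - 0.1 * len(continuation.split())  # mild length penalty

def predict(prompt: str, choices: list[str]) -> int:
    """Return the index of the highest-scoring choice."""
    scores = [log_likelihood(prompt, c) for c in choices]
    return max(range(len(choices)), key=scores.__getitem__)

item = {
    "premise": "The man broke his toe. What was the cause?",
    "choices": ["He got a hole in his sock.", "He dropped a hammer on his toe."],
}
print(predict(item["premise"], item["choices"]))  # -> 1 (the second choice)
```

A real run replaces the toy scorer with per-token log-probabilities from the model, which is what makes this evaluation method deterministic and cheap compared to free-form generation.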
For more detailed training configurations, you can use a command similar to this example for fine-tuning LLaMA-2:
python train.py \
    --model_name_or_path meta-llama/Llama-2-7b-hf \
    --data_path data/ \
    --dataset alpaca_data_1k.json \
    --output_dir $OUTPUT_DIR \
    --num_train_epochs 2 \
    --per_device_train_batch_size 8 \
    --gradient_accumulation_steps 2 \
    --save_strategy "epoch" \
    --save_steps 2 \
    --save_total_limit 2 \
    --learning_rate 1e-5 \
    --lr_scheduler_type "constant"
Why Use LLMBox
LLMBox offers compelling reasons for developers and researchers working with LLMs:
- Unified and Comprehensive Solution: It provides a single, integrated platform for the entire LLM lifecycle, from diverse training strategies to extensive evaluation. This eliminates the need to stitch together multiple tools.
- Flexibility and Efficiency: The library's design prioritizes both flexibility in adapting to various research needs and efficiency in resource utilization, supporting techniques like Flash Attention, Deepspeed, LoRA, and QLoRA.
- Diverse Training Capabilities: With support for SFT, PT, PPO, DPO, tokenizer merging, and advanced data augmentation methods like Self-Instruct and Evol-Instruct, LLMBox caters to a wide range of training scenarios.
- Blazingly Fast Inference: Achieve significant speedups in local inference, up to 6x, by leveraging KV Cache management and integration with vLLM, crucial for rapid experimentation and deployment.
- Extensive Model Evaluation: Evaluate LLMs comprehensively across 59+ commonly used datasets and benchmarks. It accurately reproduces results from original papers and supports various evaluation methods, including perplexity, probability, and generation, along with ICL and CoT strategies.
- Ease of Use: Detailed documentation and clear examples make it easy to get started, debug, and integrate new models or datasets, streamlining the development process.
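As a rough illustration of the ICL strategies mentioned above: a k-shot prompt is simply k labelled demonstrations prepended to the test input. The helper below is a hypothetical minimal sketch of that formatting step, not LLMBox's own implementation, and the "Question:/Answer:" template is an assumption:

```python
# Build a few-shot (In-Context Learning) prompt from labelled demonstrations.

def build_icl_prompt(demos: list[tuple[str, str]], query: str) -> str:
    """Format k demonstrations plus the query into one few-shot prompt."""
    blocks = [f"Question: {q}\nAnswer: {a}" for q, a in demos]
    blocks.append(f"Question: {query}\nAnswer:")  # model completes the answer
    return "\n\n".join(blocks)

demos = [
    ("What is 2 + 2?", "4"),
    ("What is the capital of France?", "Paris"),
]
prompt = build_icl_prompt(demos, "What is 3 + 5?")
print(prompt)
```

Evaluation frameworks differ mainly in how demonstrations are selected and ordered; the prompt assembly itself stays this simple.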
Links
Explore LLMBox further through these official resources:
- GitHub Repository: https://github.com/RUCAIBox/LLMBox
- Documentation: https://github.com/RUCAIBox/LLMBox/tree/main/docs
- Research Paper: LLMBox: A Comprehensive Library for Large Language Models