KBLaM: Knowledge Base Augmented Language Models for Enhanced LLMs

Summary

KBLaM, developed by Microsoft, is the official implementation of "Knowledge Base Augmented Language Models," presented at ICLR 2025. The method enhances Large Language Models by directly integrating external knowledge bases, offering an efficient alternative to traditional Retrieval-Augmented Generation (RAG) and in-context learning. It eliminates external retrieval modules, and its computational cost scales linearly, rather than quadratically, with knowledge base size.

Repository Info

Updated on February 28, 2026

Introduction

KBLaM is the official implementation of "Knowledge Base Augmented Language Models" (ICLR 2025). The project introduces a novel approach to augmenting Large Language Models (LLMs) with external knowledge: unlike Retrieval-Augmented Generation (RAG), KBLaM requires no external retrieval module, and unlike in-context learning, its computational overhead grows linearly with knowledge base size rather than quadratically. This makes KBLaM a promising solution for integrating vast amounts of knowledge into LLMs efficiently.

Installation

Getting started with KBLaM is straightforward. Clone the repository, then install the package in editable mode from the repository root:

pip install -e .

To use Llama models, generate an access token on Hugging Face and log in via the command line:

pip install huggingface_hub
huggingface-cli login

Examples

KBLaM provides tools for synthetic dataset construction and model training. For dataset generation, you'll need an Azure OpenAI endpoint. You can construct synthetic knowledge bases and question-answer pairs using dataset_generation/gen_synthetic_data.py and generate KB embeddings with dataset_generation/generate_kb_embeddings.py. Supported embeddings include text-embedding-ada-002 and all-MiniLM-L6-v2.
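To make the key/value layout of these KB embeddings concrete, here is a minimal sketch of the kind of output such a pipeline produces. The KB schema and the `encode` helper are illustrative assumptions, not the repo's actual code; in practice the encoder would be one of the supported models (all-MiniLM-L6-v2 or text-embedding-ada-002), but a deterministic stand-in is used here so the sketch runs without model downloads.

```python
import numpy as np

def encode(texts, dim=384):
    """Stand-in encoder: one pseudo-embedding per string (not a real model)."""
    rng = np.random.default_rng(abs(hash(tuple(texts))) % (2**32))
    return rng.standard_normal((len(texts), dim)).astype(np.float32)

# Hypothetical synthetic KB of (name, property, value) triples.
kb = [
    {"name": "KBLaM", "property": "venue", "value": "ICLR 2025"},
    {"name": "KBLaM", "property": "developer", "value": "Microsoft"},
]

# Each triple yields a key string ("what is this about?") and a value string
# ("the content"); both are embedded separately.
key_strings = [f"the {t['property']} of {t['name']}" for t in kb]
value_strings = [t["value"] for t in kb]

key_embeds = encode(key_strings)      # shape: (2, 384)
value_embeds = encode(value_strings)  # shape: (2, 384)

np.save("kb_key_embeddings.npy", key_embeds)
np.save("kb_value_embeddings.npy", value_embeds)
```

The key/value split mirrors how the embeddings are later consumed: keys are matched against the query, and the corresponding values supply the content.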

An example of model training is provided:

python train.py --dataset_dir <Your dataset directory> --train_dataset synthetic --N 120000 --B 20 --total_steps 601 --encoder_spec OAI --use_oai_embd --key_embd_src key --use_data_aug --use_cached_embed

Using the --use_cached_embed flag is recommended: it avoids recomputing embeddings on every run, which can be time-consuming for large knowledge bases.
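The caching idea behind that flag can be sketched in a few lines: compute the embeddings once, write them to disk, and reload them on subsequent runs. The file name and helper below are illustrative assumptions, not the repo's actual cache logic.

```python
import os
import numpy as np

CACHE_PATH = "cached_kb_embeddings.npy"  # hypothetical cache file

def get_embeddings(texts, encode_fn):
    """Return embeddings for texts, reusing an on-disk cache when present."""
    if os.path.exists(CACHE_PATH):
        return np.load(CACHE_PATH)   # cheap: reuse a previous run's result
    embeds = encode_fn(texts)        # expensive: only done on the first run
    np.save(CACHE_PATH, embeds)
    return embeds

# Example with a trivial encoder standing in for a real embedding model:
embeds = get_embeddings(["a", "b"], lambda ts: np.ones((len(ts), 4)))
```

A real pipeline would also key the cache on the dataset and encoder so stale embeddings are never reused.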

Why Use KBLaM

KBLaM offers a compelling alternative for enhancing LLMs with knowledge. Its core strength is direct knowledge integration without external retrieval, which improves efficiency and scalability. The method trains adapters over the knowledge tokens while leaving the base LLM's weights unmodified for text input, so if no knowledge base is provided, the model behaves exactly like the base model. KBLaM is intended for research purposes, with evaluation focused on retrieval accuracy, refusal rate, and alignment of answers with the knowledge base. It currently supports popular models such as meta-llama/Meta-Llama-3-8B-Instruct, meta-llama/Llama-3.2-1B-Instruct, and microsoft/Phi-3-mini-4k-instruct.
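The linear-scaling claim can be illustrated with a toy example. With N query tokens and M knowledge tokens, attention over the KB is an N x M ("rectangular") score matrix, and the KB entries never attend to each other or to the text, so no M x M term ever appears. This is a minimal sketch of that idea under assumed shapes and names, not KBLaM's actual implementation (which integrates the KB into the attention layers of a full transformer).

```python
import numpy as np

def kb_attention(queries, kb_keys, kb_values):
    """Attend N query vectors over M KB entries: cost is O(N*M), linear in M."""
    # queries: (N, d); kb_keys, kb_values: (M, d)
    scores = queries @ kb_keys.T / np.sqrt(queries.shape[-1])    # (N, M)
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)               # softmax over KB
    return weights @ kb_values                                   # (N, d)

N, M, d = 4, 1000, 8  # M can grow large without any M x M computation
rng = np.random.default_rng(0)
out = kb_attention(rng.standard_normal((N, d)),
                   rng.standard_normal((M, d)),
                   rng.standard_normal((M, d)))
print(out.shape)  # (4, 8)
```

By contrast, placing the same M entries into the prompt for in-context learning would make them part of ordinary self-attention, incurring a quadratic M x M cost.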
