LitServe: Build Custom Inference Engines for AI Models

Summary

LitServe is a powerful framework from Lightning AI designed to help developers build custom inference engines for a wide range of AI models and systems. It provides expert-level control over serving and supports agents, multi-modal systems, RAG, and pipelines without the typical MLOps overhead. The framework offers a flexible and efficient path to deploying AI models, whether self-hosted or managed on the Lightning AI platform.

Repository Info

Updated on October 29, 2025

Introduction

LitServe, developed by Lightning AI, is a robust framework designed to empower developers to build custom inference engines with unparalleled control. It eliminates the complexities of traditional MLOps and YAML configurations, allowing you to focus on creating high-performance serving solutions for a diverse range of AI applications. Whether you're working with individual models, sophisticated agents, multi-modal systems, Retrieval Augmented Generation (RAG) pipelines, or complex inference workflows, LitServe provides the flexibility and speed you need. Written in Python, this project has garnered significant community interest, boasting 3607 stars and 247 forks on GitHub.

Installation

Getting started with LitServe is straightforward. You can install it using pip:

pip install litserve

For more installation options and detailed instructions, refer to the official documentation.
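
To verify the installation, you can import the package and print its version; a quick sanity check (assuming litserve exposes __version__, as most pip-installed packages do):

import litserve

# assumption: litserve exposes __version__ like most pip packages
print(litserve.__version__)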

Examples

LitServe simplifies the creation of inference pipelines and agents. Here are a couple of quick examples to illustrate its power:

Inference Pipeline Example

This example demonstrates a toy inference pipeline with multiple models:

import litserve as ls

# define the api to include any number of models, dbs, etc...
class InferencePipeline(ls.LitAPI):
    def setup(self, device):
        self.model1 = lambda x: x**2
        self.model2 = lambda x: x**3

    def predict(self, request):
        x = request["input"]    
        # perform calculations using both models
        a = self.model1(x)
        b = self.model2(x)
        c = a + b
        return {"output": c}

if __name__ == "__main__":
    # 12+ features like batching, streaming, etc...
    server = ls.LitServer(InferencePipeline(max_batch_size=1), accelerator="auto")
    server.run(port=8000)

Test the server with a curl command:

curl -X POST http://127.0.0.1:8000/predict -H "Content-Type: application/json" -d '{"input": 4.0}'
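
The same request can be sent from Python. Here is a minimal client sketch using the third-party requests library, mirroring the curl command above:

import requests

# POST the same JSON payload to the /predict route
response = requests.post(
    "http://127.0.0.1:8000/predict",
    json={"input": 4.0},
    timeout=10,
)
response.raise_for_status()
print(response.json())  # expected: {"output": 80.0}, since 4**2 + 4**3 = 80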

Agent Example

Here's a minimal agent that fetches news using the OpenAI API:

import os, re, requests, openai
import litserve as ls

class NewsAgent(ls.LitAPI):
    def setup(self, device):
        # read the API key from the environment rather than hardcoding it
        self.openai_client = openai.OpenAI(api_key=os.environ["OPENAI_API_KEY"])

    def predict(self, request):
        website_url = request.get("website_url", "https://text.npr.org/")
        website_text = re.sub(r'<[^>]+>', ' ', requests.get(website_url).text)

        # ask the LLM to tell you about the news
        llm_response = self.openai_client.chat.completions.create(
           model="gpt-3.5-turbo", 
           messages=[{"role": "user", "content": f"Based on this, what is the latest: {website_text}"}],
        )
        output = llm_response.choices[0].message.content.strip()
        return {"output": output}

if __name__ == "__main__":
    server = ls.LitServer(NewsAgent())
    server.run(port=8000)

Test it:

curl -X POST http://127.0.0.1:8000/predict -H "Content-Type: application/json" -d '{"website_url": "https://text.npr.org/"}'
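
Both examples parse the request and shape the response inside predict. LitAPI also provides decode_request and encode_response hooks that keep those concerns separate from inference; here is a minimal sketch (the model and field names are illustrative):

import litserve as ls

class StructuredAPI(ls.LitAPI):
    def setup(self, device):
        self.model = lambda x: x * 2

    def decode_request(self, request):
        # turn the raw JSON payload into model input
        return request["input"]

    def predict(self, x):
        # predict now deals only with model input and output
        return self.model(x)

    def encode_response(self, output):
        # shape the model output into the JSON response
        return {"output": output}

if __name__ == "__main__":
    server = ls.LitServer(StructuredAPI())
    server.run(port=8000)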

You can explore 100+ community-built templates covering various model types and use cases.

Why Use LitServe?

LitServe stands out for its unique approach to AI inference, offering several compelling advantages:

  • Deploy Any Pipeline or Model: It supports a vast array of AI systems, including agents, RAG, chatbots, and models for vision, audio, speech, and text, providing the flexibility to serve any custom logic.
  • No MLOps Glue Code: The LitAPI abstraction allows you to build complete AI systems, such as multi-model setups, agents, and RAG, all within a single, coherent framework.
  • Instant Setup: Easily connect models, databases, and data sources in just a few lines of code using the setup() method.
  • Optimized Performance: Built on FastAPI, LitServe is optimized specifically for AI workloads, delivering at least a 2x speedup over plain FastAPI. It includes features such as GPU autoscaling, intelligent batching, and streaming for efficient inference (see the batching sketch after this list).
  • Expert-Friendly Control: Unlike rigid serving engines, LitServe provides low-level control over critical aspects like batching, caching, streaming, and multi-model orchestration, enabling you to build highly customized solutions.
  • Flexible Deployment: You can self-host LitServe with full control or leverage one-click deployment to Lightning AI for managed services, including autoscaling, security, and high uptime.
  • OpenAPI and OpenAI Compatibility: Serves an OpenAPI specification and offers OpenAI-compatible endpoints, easing integration with existing tools and workflows.
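
As referenced in the batching bullet above, here is a minimal sketch of batched serving. It assumes the contract described in the LitServe docs: with max_batch_size greater than 1, incoming requests are grouped and predict receives a list of decoded inputs. The batch_timeout argument is an assumption here (the maximum time in seconds to wait while filling a batch), passed alongside max_batch_size as in the pipeline example above:

import litserve as ls

class BatchedAPI(ls.LitAPI):
    def setup(self, device):
        # a vectorized stand-in for a real model
        self.model = lambda xs: [x ** 2 for x in xs]

    def decode_request(self, request):
        return request["input"]

    def predict(self, batch):
        # with max_batch_size > 1, `batch` is a list of decoded inputs
        return self.model(batch)

    def encode_response(self, output):
        # called once per item after the batch is split back into responses
        return {"output": output}

if __name__ == "__main__":
    # assumption: batch_timeout is accepted alongside max_batch_size
    api = BatchedAPI(max_batch_size=8, batch_timeout=0.05)
    server = ls.LitServer(api, accelerator="auto")
    server.run(port=8000)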
