langcorn: Serve LangChain LLM Apps and Agents with FastAPI

Summary
Langcorn is an innovative API server designed to effortlessly deploy LangChain models and pipelines. It leverages the high-performance FastAPI framework, offering a robust and scalable solution for serving large language model applications. With features like easy installation, built-in authentication, and support for custom API keys, Langcorn streamlines the process of bringing your LLM projects to production.
Introduction
Langcorn is an API server that simplifies the deployment of LangChain models and pipelines. It integrates seamlessly with FastAPI, providing a high-performance, scalable, and robust solution for serving your language processing applications and agents. Designed for LLMOps, Langcorn automates the serving process, allowing developers to focus on building powerful LLM-powered experiences.
Key features include easy deployment, ready-to-use authentication, well-documented RESTful API endpoints, asynchronous processing, and support for custom pipelines and processing.
Installation
Getting started with Langcorn is straightforward. You can install the package using pip:
pip install langcorn
Examples
Quick Start
To run a single LangChain chain, expose it as an attribute of a Python module (e.g., a chain object in examples/ex1.py, referenced as examples.ex1:chain) and then start the Langcorn server:
langcorn server examples.ex1:chain
Alternatively, you can run it as a Python module:
python -m langcorn server examples.ex1:chain
Serving Multiple Chains
Langcorn supports serving multiple chains simultaneously. Simply list them when starting the server:
python -m langcorn server examples.ex1:chain examples.ex2:chain
Integrating with FastAPI
For more control, you can integrate Langcorn directly into an existing FastAPI application:
from fastapi import FastAPI
from langcorn import create_service
app: FastAPI = create_service("examples.ex1:chain")
# To serve multiple chains:
# app: FastAPI = create_service("examples.ex2:chain", "examples.ex1:chain")
# Then run your FastAPI app with Uvicorn:
# uvicorn main:app --host 0.0.0.0 --port 8000
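Once the server is running, each chain is exposed as an HTTP endpoint. A hedged sketch of a client call follows; the endpoint path mirroring the module path and the JSON body carrying the chain's input variables are both assumptions based on the chain reference used above:

```python
import json

# Hypothetical request shape for the served chain (path and keys assumed):
url = "http://localhost:8000/examples.ex1.chain/run"
payload = {"question": "What is LangChain?"}

# With the server running, you could send it using the third-party
# `requests` library:
#   import requests
#   print(requests.post(url, json=payload).json())
print(json.dumps(payload))
```

Check the auto-generated FastAPI docs at /docs to confirm the exact endpoint paths and request schemas for your deployment.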
Why Use It
Langcorn addresses the challenges of deploying LangChain applications by offering a streamlined, high-performance solution. Its integration with FastAPI ensures robust API serving, while features like built-in authentication and custom API key handling provide necessary security and flexibility. Developers benefit from automatic documentation via FastAPI's /docs endpoint, asynchronous processing for faster responses, and the ability to manage LLM kwargs per request. It also provides structured handling for LLM memory, making it ideal for conversational AI applications. Langcorn simplifies the operational aspects of LLM deployment, allowing you to focus on the core logic of your AI agents and applications.
Links
- GitHub Repository: https://github.com/msoedov/langcorn
- Live Example: https://langcorn-ift9ub8zg-msoedov.vercel.app/docs#/