Helicone: Open Source LLM Observability Platform and AI Gateway

Summary

Helicone is an open-source LLM observability platform and AI Gateway for AI engineers. It provides one-line code integration to monitor, evaluate, and experiment with large language models, offering features like cost tracking, prompt management, and intelligent routing. The platform supports a wide range of inference providers and frameworks, simplifying LLM development and deployment.

Repository Info

Updated on March 1, 2026

Introduction

Helicone is an open-source LLM observability platform and AI Gateway built for AI engineers. It simplifies the process of monitoring, evaluating, and experimenting with large language models through a single line of code. Developed by Y Combinator W23 alumni, Helicone provides essential tools for tracing agent interactions, tracking costs and latency, managing prompts, and intelligently routing requests across over 100 AI models.

Installation

Getting started with Helicone is straightforward. First, obtain your API key by signing up on the Helicone website and add credits at helicone.ai/credits.

Then, integrate it into your application by updating the baseURL and adding your API key, as shown in this TypeScript example:

import OpenAI from "openai";

const client = new OpenAI({
  baseURL: "https://ai-gateway.helicone.ai",
  apiKey: process.env.HELICONE_API_KEY,
});

const response = await client.chat.completions.create({
  model: "gpt-4o-mini",  // or claude-sonnet-4, gemini-2.0-flash, or any model from https://www.helicone.ai/models
  messages: [{ role: "user", content: "Hello!" }]
});

For self-hosting, Helicone offers a simple Docker setup using docker-compose. Clone the repository, navigate to the docker directory, copy .env.example to .env, and run ./helicone-compose.sh helicone up. Enterprise users can also leverage a production-ready Helm chart.
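The self-hosting steps above can be sketched as a shell session. This is a minimal sketch based on the description above; the repository URL and default file layout are assumptions, so check the project's own docs for the authoritative steps:

```shell
# Clone the repository and enter the docker directory.
git clone https://github.com/Helicone/helicone.git
cd helicone/docker

# Create a local environment file from the example template.
cp .env.example .env

# Bring up the stack with the bundled compose wrapper script.
./helicone-compose.sh helicone up
```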

Examples

The quick start guide above serves as a primary example of integrating Helicone's AI Gateway. Once integrated, you can view your logs and access over 100 models through a unified API on your Helicone dashboard.

Helicone also features a powerful Playground UI, allowing you to rapidly test and iterate on prompts, sessions, and traces, streamlining your development workflow. Its extensive integration support means you can easily connect with popular LLM providers like OpenAI, Anthropic, and Google Gemini, as well as frameworks such as LangChain and LlamaIndex.

Why Use

Helicone stands out as a comprehensive solution for LLM operations due to several key features:

  • AI Gateway: Access over 100 AI models with a single API key, benefiting from intelligent routing and automatic fallbacks.
  • Quick Integration: Effortlessly log requests from various providers and frameworks with minimal code changes.
  • Observability & Analytics: Inspect and debug traces, sessions, and track critical metrics like cost, latency, and quality.
  • Prompt Management: Version prompts using production data and deploy them via the AI Gateway without code modifications, ensuring your prompts remain under your control.
  • Fine-tuning: Integrate with partners like OpenPipe and Autonomi for efficient fine-tuning processes.
  • Enterprise Ready: Compliant with SOC 2 and GDPR standards, making it suitable for enterprise-level applications.
  • Generous Free Tier: Start monitoring your LLM applications with a free tier of 10,000 requests per month, no credit card required.
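To make the "automatic fallbacks" idea concrete, here is a client-side sketch of the logic involved: walk a preference list of models and use the first one that is available. Note that the gateway performs this routing server-side; `pickModel` and the availability set are hypothetical illustrations, not part of the Helicone API:

```typescript
// Hypothetical helper illustrating fallback routing: return the first
// model in the preference list that is currently reachable.
function pickModel(preferred: string[], available: Set<string>): string | undefined {
  return preferred.find((m) => available.has(m));
}

// Example: if gpt-4o-mini is unavailable, fall back to claude-sonnet-4.
const choice = pickModel(
  ["gpt-4o-mini", "claude-sonnet-4", "gemini-2.0-flash"],
  new Set(["claude-sonnet-4", "gemini-2.0-flash"])
);
```

In practice you simply send a request through the gateway as in the quick-start example, and the routing happens for you.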

Links