LangWatch: The Platform for LLM Evaluations and AI Agent Testing

Summary

LangWatch is an open-source platform designed for end-to-end LLM evaluations and AI agent testing. It helps teams test, simulate, evaluate, and monitor LLM-powered agents both before release and in production. Built for robust regression testing, simulations, and production observability, LangWatch eliminates the need for custom tooling.

Repository Info

Updated on April 28, 2026

Introduction

LangWatch is a comprehensive platform for LLM evaluations and AI agent testing. It empowers teams to rigorously test, simulate, evaluate, and monitor their LLM-powered agents from development through to production. Designed for teams requiring robust regression testing, detailed simulations, and production observability, LangWatch offers a unified solution without the need for fragmented custom tools.

Key features include end-to-end agent simulations, a combined loop for evaluation, observability, and prompt optimization, and an AI Gateway for governance and cost control. The platform is built on open standards like OpenTelemetry, ensuring flexibility and preventing vendor lock-in.

Installation

Getting started with LangWatch is straightforward, with options for cloud, local, and self-hosted deployments.

Cloud

The easiest way to begin is by creating a free account on the LangWatch cloud platform:

  1. Create a free account.
  2. Create a project and obtain your API key.

Local Setup

Using npx (Node.js required):

npx @langwatch/server

This command installs the necessary components (uv, Postgres, Redis, ClickHouse, AI gateway) into ~/.langwatch/, sets up environment variables, and starts all services. LangWatch will be available at http://localhost:5560.

Using Docker Compose:

git clone https://github.com/langwatch/langwatch.git
cd langwatch
cp langwatch/.env.example langwatch/.env
docker compose up -d --wait --build

After running, LangWatch will be accessible at http://localhost:5560.
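Either setup path exposes the app on port 5560, and the containers can take a little while to become ready. A quick way to confirm the stack is up is to poll the local endpoint until it answers (a minimal sketch; the URL assumes the default port above, and polling plain `/` is an assumption rather than a documented readiness endpoint):

```python
import time
import urllib.error
import urllib.request


def wait_for_langwatch(url: str = "http://localhost:5560", timeout: float = 120.0) -> bool:
    """Poll `url` until it answers with any HTTP status, or give up after `timeout` seconds."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        try:
            with urllib.request.urlopen(url, timeout=5):
                return True  # got a response: the server is accepting connections
        except urllib.error.HTTPError:
            return True  # an HTTP error status (e.g. 404) still means the server is up
        except (urllib.error.URLError, OSError):
            time.sleep(2)  # not listening yet; retry until the deadline
    return False
```

This is handy in CI scripts that bring the stack up with `docker compose up -d` and need to block until the UI is reachable before running tests.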

Deployment Options

LangWatch can also be self-hosted on your own infrastructure; see the self-hosting documentation for the supported deployment options.

Examples

After creating a free account, the quickest way to ship safer agents is to follow the getting-started guides in the documentation.

LangWatch also offers extensive integrations with popular frameworks and model providers, including LangChain, LangGraph, OpenAI, Anthropic, and many more, thanks to its OpenTelemetry-based tracing platform.
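Because the tracing platform is OpenTelemetry-based, any OTLP-capable SDK can ship spans to it via the standard exporter environment variables. The variable names below come from the OpenTelemetry specification; the endpoint value (reusing the self-hosted port) and the exact auth header format are assumptions — check the integration docs for your setup:

```python
import os

# Standard OTLP exporter variables defined by the OpenTelemetry specification.
# The endpoint value and the Authorization header format below are illustrative
# assumptions, not confirmed LangWatch values.
os.environ["OTEL_EXPORTER_OTLP_ENDPOINT"] = "http://localhost:5560"  # assumed collector URL
os.environ["OTEL_EXPORTER_OTLP_HEADERS"] = "Authorization=Bearer YOUR_API_KEY"

# Any OpenTelemetry SDK (Python, JS, Go, ...) started in this environment will
# now export its traces to the configured endpoint, regardless of which agent
# framework or model provider produced the spans.
```

This is why the integrations list is so broad: anything already instrumented with OpenTelemetry needs only an exporter configuration, not a new SDK.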

Why Use LangWatch?

LangWatch provides full visibility into agent behavior and the necessary tools to systematically enhance reliability, performance, and cost efficiency, all while maintaining control over your AI system. Its unique value proposition includes:

  • End-to-end Agent Simulations: Run realistic scenarios against your full stack to pinpoint agent failures.
  • Integrated Workflow: Combine tracing, dataset creation, evaluation, and prompt optimization in one seamless loop.
  • Open Standards: Built on OpenTelemetry, ensuring no vendor lock-in and compatibility across frameworks and LLM providers.
  • AI Gateway: An OpenAI/Anthropic-compatible proxy offering virtual keys, hierarchical budgets, inline guardrails, and automatic fallback.
  • Enhanced Collaboration: Features like run reviews, failure annotations, and GitHub integration for prompt management streamline team collaboration and accelerate fixes.
