Weave by Weights & Biases: A Toolkit for AI-Powered Applications

Summary

Weave is an open-source toolkit from Weights & Biases for building and managing AI-powered applications. It provides features for logging, debugging, and evaluating language model inputs and outputs, streamlining the development workflow for generative AI. Weave aims to bring rigor and best practices to the inherently experimental process of AI software development.

Introduction

Weave is a powerful, open-source toolkit from Weights & Biases that streamlines the development of AI-powered applications, particularly those built on Generative AI and Large Language Models (LLMs). It aims to bring structure, best practices, and composability to the inherently experimental process of building AI software, and it provides a comprehensive suite of tools for managing the entire LLM workflow, from initial experimentation through rigorous evaluation to production deployment.

Installation

To get started with Weave, you need Python 3.9 or higher and a free Weights & Biases account (on first use, weave.init() will ask you to authenticate with your W&B API key).

  • Install Weave:
    pip install weave
  • Import and initialize:
    import weave
    weave.init("my-project-name")
  • Trace your functions:
    Decorate any function you want to track with @weave.op(), as shown in the examples below.

Examples

Weave lets you trace any function, whether it calls an LLM API or performs a custom data transformation, and records a detailed trace tree of inputs and outputs.

Basic Tracing

import weave

# Initialize Weave and point it at a project.
weave.init("weave-example")

# Decorating a function with @weave.op() records its inputs and outputs.
@weave.op()
def sum_nine(value_one: int):
    return value_one + 9

@weave.op()
def multiply_two(value_two: int):
    return value_two * 2

@weave.op()
def main():
    # Calls between ops are captured as a nested trace tree.
    output = sum_nine(3)
    final_output = multiply_two(output)
    return final_output

main()
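Running this script prints a link to your Weights & Biases project, where main, sum_nine, and multiply_two appear as a nested trace tree with their recorded inputs and outputs.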

Fuller Example with OpenAI

This example demonstrates how to trace an LLM call to extract structured information.

import json

import weave
from openai import OpenAI

weave.init("intro-example")

@weave.op()
def extract_fruit(sentence: str) -> dict:
    client = OpenAI()
    response = client.chat.completions.create(
        model="gpt-3.5-turbo-1106",
        messages=[
            {
                "role": "system",
                "content": (
                    "You will be provided with unstructured data, and your "
                    "task is to parse it into one JSON dictionary with "
                    "fruit, color, and flavor as keys."
                ),
            },
            {"role": "user", "content": sentence},
        ],
        temperature=0.7,
        response_format={"type": "json_object"},
    )
    extracted = response.choices[0].message.content
    return json.loads(extracted)

sentence = (
    "There are many fruits that were found on the recently discovered planet "
    "Goocrux. There are neoskizzles that grow there, which are purple and "
    "taste like candy."
)

extract_fruit(sentence)
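Because Weave ships with integrations for popular libraries, weave.init() automatically patches the openai client: the underlying chat.completions.create call is captured as a child of the extract_fruit op, along with details such as token usage.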

Why Use Weave?

Weave addresses critical challenges in Generative AI development by enabling you to:

  • Log and debug language model inputs, outputs, and traces effectively.
  • Build rigorous, apples-to-apples evaluations for language model use cases (see the sketch after this list).
  • Organize all the information generated across the LLM workflow, from experimentation to evaluations to production.
  • Bring rigor, best practices, and composability to the inherently experimental process of developing Generative AI software, without introducing unnecessary cognitive overhead.
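
Evaluations in Weave are built with its Evaluation API. The following is a minimal sketch based on Weave's documented usage; the dataset, model, and scorer here are illustrative, and details such as the scorer's output argument name may vary across Weave versions.

import asyncio

import weave
from weave import Evaluation

weave.init("evaluation-example")

# Illustrative dataset: each row is a dict whose keys are passed by name
# to the model and the scorers.
examples = [
    {"sentence": "neoskizzles are purple and taste like candy", "target": "neoskizzles"},
    {"sentence": "loheckles are grayish-blue and taste tart", "target": "loheckles"},
]

@weave.op()
def fruit_name_model(sentence: str) -> str:
    # Stand-in for a real LLM call: naively take the first word.
    return sentence.split()[0]

@weave.op()
def exact_match(target: str, output: str) -> dict:
    # Scorers receive dataset fields plus the model's output.
    return {"correct": target == output}

evaluation = Evaluation(dataset=examples, scorers=[exact_match])
asyncio.run(evaluation.evaluate(fruit_name_model))

Because every row's fields are passed by name to both the model and the scorers, swapping in a different model keeps the comparison apples-to-apples.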
