dlt: The Open-Source Python Library for Easy Data Loading

Summary

dlt, the data load tool, is an open-source Python library designed to simplify and automate data loading tasks. It efficiently extracts, normalizes, and loads data from various sources into well-structured datasets. Highly versatile, dlt supports diverse data sources and destinations, making it suitable for deployment in a wide range of environments.

Introduction

dlt, the data load tool, is an open-source Python library designed to automate tedious data loading tasks. It simplifies the process of extracting, normalizing, and loading data from various, often messy sources into well-structured datasets. With over 4.6k stars and 400 forks on GitHub, dlt is a popular choice for data engineers and developers. It's highly versatile, capable of being deployed in diverse environments such as Google Colab notebooks, AWS Lambda functions, Airflow DAGs, or local development setups. dlt also boasts an LLM-native workflow, making it easy to integrate with AI-assisted development.

Installation

dlt supports Python 3.9 through 3.14 (support for 3.14 is experimental). Installation is straightforward with pip:

pip install dlt

Examples

Get started quickly by loading data from an API into a DuckDB destination. Here's an example that loads chess player data from the Chess.com API:

import dlt
from dlt.sources.helpers import requests

# Create a dlt pipeline that will load
# chess player data to the DuckDB destination
pipeline = dlt.pipeline(
    pipeline_name='chess_pipeline',
    destination='duckdb',
    dataset_name='player_data'
)

# Grab some player data from Chess.com API
data = []
for player in ['magnuscarlsen', 'rpragchess']:
    response = requests.get(f'https://api.chess.com/pub/player/{player}')
    response.raise_for_status()
    data.append(response.json())

# Extract, normalize, and load the data
pipeline.run(data, table_name='player')

You can also try dlt directly in the Colab demo or in the WASM-based browser playground.
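
After the run, the loaded rows can be read back from the destination in Python or SQL. Here is a minimal sketch, assuming the chess_pipeline from the example above has already been run and that you are on a recent dlt version that exposes the dataset access API; no specific field names from the Chess.com response are assumed:

import dlt

# Re-attach to the pipeline created in the example above
pipeline = dlt.pipeline(
    pipeline_name='chess_pipeline',
    destination='duckdb',
    dataset_name='player_data'
)

# Read the loaded 'player' table back as a pandas DataFrame
# (dataset access requires pandas and a recent dlt release)
players = pipeline.dataset()['player'].df()
print(players.head())

# Or run SQL directly against the DuckDB destination
with pipeline.sql_client() as client:
    rows = client.execute_sql("SELECT count(*) FROM player")
    print(rows)

Both approaches read from the same local DuckDB file that the pipeline created, so no extra connection setup is needed.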

Why Use dlt?

dlt is designed to be easy to use, flexible, and scalable, offering a comprehensive set of features for modern data pipelines:

  • Diverse Data Sources: Extracts data from a wide array of sources, including REST APIs, SQL databases, cloud storage, and Python data structures.
  • Automated Schema Management: Automatically infers schemas and data types, normalizes data, and handles complex nested data structures, simplifying data preparation.
  • Flexible Destinations: Supports a variety of popular data destinations and allows for the creation of custom destinations, enabling both ETL and reverse ETL workflows.
  • Pipeline Automation: Automates critical maintenance tasks such as incremental loading, schema evolution, and the enforcement of schema and data contracts (see the sketch after this list).
  • Data Access and Transformation: Provides Python and SQL data access, robust transformation capabilities, pipeline inspection tools, and data visualization options, including integration with Marimo Notebooks.
  • Anywhere Deployment: Runs wherever Python runs, from Airflow and serverless functions to any other cloud environment of your choice.
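
As an illustration of the incremental loading and merge behavior mentioned above, here is a minimal sketch. The issues resource, the https://api.example.com/issues endpoint, and its updated_at and id fields are hypothetical placeholders, not part of dlt:

import dlt
from dlt.sources.helpers import requests

@dlt.resource(table_name='issues', write_disposition='merge', primary_key='id')
def issues(
    updated_at=dlt.sources.incremental('updated_at', initial_value='1970-01-01T00:00:00Z')
):
    # Hypothetical endpoint: fetch only records changed since the last stored cursor value
    response = requests.get(
        'https://api.example.com/issues',
        params={'since': updated_at.last_value},
    )
    response.raise_for_status()
    yield response.json()

pipeline = dlt.pipeline(
    pipeline_name='issues_pipeline',
    destination='duckdb',
    dataset_name='issues_data'
)

# On each run, dlt advances the 'updated_at' cursor and merges rows by primary key,
# so re-running the pipeline updates existing records instead of duplicating them
pipeline.run(issues)

Schema evolution works the same way: if the source starts returning new fields, dlt adds the corresponding columns on the next run, unless a schema contract forbids it.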

Links