Riffusion (hobby): Real-time Music Generation with Stable Diffusion

Summary

Riffusion (hobby) is an innovative Python library that applies stable diffusion models to generate music and audio in real-time. This project enables creative exploration of soundscapes through spectrogram image processing, offering tools for command-line use, an interactive Streamlit app, and a Flask API server. While no longer actively maintained, it remains a significant open-source contribution to AI-driven audio synthesis.

Repository Info

Updated on November 22, 2025

Introduction

Riffusion (hobby) is a pioneering open-source project that leverages stable diffusion for real-time music and audio generation. Developed in Python, it transforms textual prompts into unique soundscapes by manipulating spectrogram images. This repository serves as the core of Riffusion's image and audio processing, offering a diffusion pipeline that combines prompt interpolation with image conditioning. It also provides utilities for converting between spectrogram images and audio clips, an interactive Streamlit application, and a Flask server for model inference via an API. Please note that this project is no longer actively maintained.
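
To make the "prompt interpolation" idea concrete: blending two text prompts typically means interpolating between their embedding vectors, and a common technique for unit-norm embeddings is spherical linear interpolation (slerp). The sketch below is illustrative only and is not taken from riffusion's source:

```python
import numpy as np

def slerp(t, v0, v1):
    """Spherical linear interpolation between two embedding vectors."""
    dot = np.clip(np.dot(v0 / np.linalg.norm(v0), v1 / np.linalg.norm(v1)), -1.0, 1.0)
    theta = np.arccos(dot)
    if np.isclose(theta, 0.0):
        # Vectors are (nearly) parallel: fall back to linear interpolation
        return (1 - t) * v0 + t * v1
    return (np.sin((1 - t) * theta) * v0 + np.sin(t * theta) * v1) / np.sin(theta)

# Blend halfway between two toy "embeddings"
a = np.array([1.0, 0.0])
b = np.array([0.0, 1.0])
midpoint = slerp(0.5, a, b)
```

Unlike straight linear interpolation, slerp keeps intermediate vectors on the same hypersphere as the endpoints, which tends to produce smoother transitions between generated outputs.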

Installation

To get started with Riffusion, it is highly recommended to use a Python virtual environment. The project has been tested with Python 3.9 and 3.10.

First, create and activate a virtual environment (e.g., using conda):

conda create --name riffusion python=3.9
conda activate riffusion

Next, install the required Python dependencies:

python -m pip install -r requirements.txt

For handling audio formats beyond WAV, ffmpeg is necessary. Install it using your system's package manager or conda:

sudo apt-get install ffmpeg          # Linux
brew install ffmpeg                  # macOS
conda install -c conda-forge ffmpeg  # Conda

Examples

Riffusion offers several ways to interact with its capabilities, from a command-line interface to an interactive web app and an API server.

Command-Line Interface (CLI)

The CLI allows for common tasks, such as converting images to audio:

python -m riffusion.cli image-to-audio --image spectrogram_image.png --audio clip.wav
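
The image-to-audio step recovers a waveform from a magnitude-only spectrogram, which requires estimating the missing phase. One standard technique for this is the Griffin-Lim algorithm; the sketch below (using numpy and scipy, and not necessarily the library's exact implementation) shows the core iteration:

```python
import numpy as np
from scipy.signal import stft, istft

def griffin_lim(magnitude, n_iter=32, nperseg=512):
    """Estimate phase for a magnitude spectrogram by iterative refinement."""
    rng = np.random.default_rng(0)
    phase = np.exp(2j * np.pi * rng.random(magnitude.shape))  # random initial phase
    spec = magnitude * phase
    for _ in range(n_iter):
        _, audio = istft(spec, nperseg=nperseg)            # back to time domain
        _, _, rebuilt = stft(audio, nperseg=nperseg)       # re-analyze the estimate
        spec = magnitude * np.exp(1j * np.angle(rebuilt))  # keep magnitude, update phase
    _, audio = istft(spec, nperseg=nperseg)
    return audio

# Demo: round-trip a sine wave's magnitude spectrogram back to audio
sr = 8000
t = np.arange(sr) / sr
_, _, Z = stft(np.sin(2 * np.pi * 440 * t), nperseg=512)
reconstructed = griffin_lim(np.abs(Z), n_iter=8)
```

Each iteration keeps the known magnitudes fixed while letting the phase converge toward values consistent with a real time-domain signal.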

Riffusion Playground (Streamlit App)

Explore Riffusion interactively using its Streamlit app:

python -m riffusion.streamlit.playground

Access the playground in your browser at http://127.0.0.1:8501/.

Model Server (Flask API)

Run Riffusion as a Flask server to provide inference via an API, enabling integration with other applications, such as the Riffusion web app:

python -m riffusion.server --host 127.0.0.1 --port 3013

The model endpoint is available at http://127.0.0.1:3013/run_inference via POST request.
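
As a sketch of how a client might call this endpoint, the snippet below builds a JSON payload and shows the POST call. The field names here are assumptions for illustration only, not a documented schema; consult the server code for the actual request format:

```python
import json

# Hypothetical payload mirroring the prompt-interpolation API described above.
# Field names ("start", "end", "alpha", ...) are assumptions, not documented.
payload = {
    "start": {"prompt": "church bells", "seed": 42},
    "end": {"prompt": "jazz saxophone", "seed": 123},
    "alpha": 0.5,                # interpolation position between the two prompts
    "num_inference_steps": 50,   # diffusion sampling steps
}
body = json.dumps(payload)

# To actually send it (requires the `requests` package and a running server):
# import requests
# resp = requests.post("http://127.0.0.1:3013/run_inference", json=payload)
```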

Why Use It

Riffusion stands out for its innovative application of stable diffusion to the domain of real-time music generation. It provides a unique platform for artists, developers, and researchers to experiment with AI-driven audio synthesis, offering granular control over soundscapes through prompt engineering and image conditioning. Despite its maintenance status, it remains a valuable resource for understanding and exploring the intersection of AI, audio processing, and creative expression, pushing the boundaries of what's possible with generative models in music.