multiresolution-time-series-transformer: Long-term Forecasting with MTST

Summary
This repository provides a PyTorch implementation of the Multi-Resolution Time-Series Transformer (MTST) for long-term forecasting. Following Zhang et al. (2024), MTST processes temporal data at multiple resolutions to capture both short-term fluctuations and long-term trends, offering a flexible and robust solution for time series prediction tasks.
Introduction
The multiresolution-time-series-transformer repository presents a PyTorch implementation of the Multi-Resolution Time-Series Transformer (MTST) model, designed for accurate long-term forecasting. Following the paper by Zhang et al. (2024), the model addresses the challenge of capturing diverse temporal patterns in time series data by processing information at multiple resolutions.
Unlike single-resolution models, MTST uses stride-based subsampling to analyze the data at high, mid, and low temporal scales, so it can capture both fine-grained short-term fluctuations and broader long-term trends. The architecture fuses these multi-resolution features through interpolation and concatenation, then feeds them into a stack of transformer blocks for sequence modeling.
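To make the mechanism concrete, here is a minimal standalone sketch of the subsample-interpolate-concatenate idea. It illustrates the approach described above rather than the repository's actual code; the stride values, the linear interpolation mode, and the tensor shapes are assumptions chosen for the example.
import torch
import torch.nn.functional as F

def multi_resolution_features(x, strides=(1, 4, 10)):
    # x: (batch, seq_len, features); strides are illustrative resolution factors
    batch, seq_len, feats = x.shape
    branches = []
    for s in strides:
        sub = x[:, ::s, :]  # stride-based subsampling along the time axis
        # interpolate each branch back to the full sequence length so they align
        up = F.interpolate(sub.transpose(1, 2), size=seq_len, mode="linear",
                           align_corners=False)
        branches.append(up.transpose(1, 2))
    # concatenate along the feature axis: (batch, seq_len, features * len(strides))
    return torch.cat(branches, dim=-1)

fused = multi_resolution_features(torch.randn(32, 30, 6))
print(fused.shape)  # torch.Size([32, 30, 18])
The fused tensor carries all three temporal views of the series, which is what lets the downstream transformer blocks attend across scales.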
Installation
To get started with the MTST implementation, follow these simple steps:
git clone https://github.com/VenkatachalamSubramanianPeriyaSubbu/multiresolution-time-series-transformer
cd multiresolution-time-series-transformer
pip install -r requirements.txt
This will clone the repository and install all necessary dependencies, preparing your environment for model training and inference.
Examples
Quick Start
Here's a quick example to initialize the MTST model and perform a forward pass:
import torch
from src.model.mtst import MTST

# Initialize the model
model = MTST(
    input_dim=6,    # number of input features
    embed_dim=64,   # embedding dimension
    heads=8,        # attention heads
    dropout=0.1,    # dropout rate
    n_layers=10,    # transformer layers
    output_len=5,   # forecast horizon
    max_len=5000    # maximum sequence length
)

# Example input: (batch_size, seq_len, input_dim)
x = torch.randn(32, 30, 6)

# Forward pass with resolution factors (subsampling strides)
forecast = model(x, high_res=1, mid_res=4, low_res=10)
print(f"Forecast shape: {forecast.shape}")  # [32, 5]
Training
To run the complete training pipeline, including data preprocessing, model initialization, and evaluation, execute the train.py script:
python train.py
This script handles data loading, model training, loss tracking, and saves the trained model and visualizations.
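If you prefer to drive training yourself instead of using train.py, a loop along the following lines works against the interface shown in the quick start. This is a sketch under stated assumptions: the optimizer, learning rate, loss, and synthetic stand-in data are illustrative choices, not the script's actual configuration.
import torch
from src.model.mtst import MTST

model = MTST(input_dim=6, embed_dim=64, heads=8, dropout=0.1,
             n_layers=10, output_len=5, max_len=5000)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = torch.nn.MSELoss()

x_all = torch.randn(64, 30, 6)  # stand-in training windows
y_all = torch.randn(64, 5)      # stand-in forecast targets

for epoch in range(10):
    optimizer.zero_grad()
    forecast = model(x_all, high_res=1, mid_res=4, low_res=10)
    loss = loss_fn(forecast, y_all)
    loss.backward()
    optimizer.step()
    print(f"epoch {epoch}: loss {loss.item():.4f}")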
Why Use MTST?
The Multi-Resolution Time-Series Transformer offers several compelling advantages for long-term forecasting tasks:
- Multi-Resolution Processing: It captures patterns at different temporal scales, from fine-grained details to long-term trends, yielding a more comprehensive representation of the data.
- Transformer-Based Architecture: By leveraging attention mechanisms, MTST can model complex dependencies within sequences.
- Intelligent Feature Fusion: The model combines features from different resolutions through interpolation and concatenation, so that information from every temporal scale contributes to the forecast.
- Robust Performance: According to the original paper's findings, MTST achieves state-of-the-art results on several benchmarks, consistently outperforming single-resolution transformers across different prediction horizons.
- Flexibility and Customization: With configurable parameters for embedding dimensions, attention heads, and layers, the model can be adapted to a wide range of time series datasets and forecasting challenges.
Links
- GitHub Repository: https://github.com/VenkatachalamSubramanianPeriyaSubbu/multiresolution-time-series-transformer
- Original Paper: https://arxiv.org/abs/2311.04147