Repository History
Explore all analyzed open source repositories

PEFT: State-of-the-Art Parameter-Efficient Fine-Tuning
PEFT (Parameter-Efficient Fine-Tuning) is a cutting-edge library from Hugging Face designed to efficiently adapt large pretrained models for various downstream applications. It dramatically reduces computational and storage costs by fine-tuning only a small subset of model parameters, yet achieves performance comparable to full fine-tuning, making advanced AI accessible on more modest hardware.
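To see why updating only a small subset of parameters can be so cheap, here is a minimal numpy sketch of the low-rank adaptation (LoRA) idea that PEFT popularized. This is an illustration of the technique, not PEFT's actual API; all names and shapes below are assumptions chosen for clarity.

```python
import numpy as np

# Illustrative sketch of low-rank adaptation: instead of updating the full
# d_out x d_in weight matrix, train two small factors A and B and add their
# product to the frozen pretrained weight. Names/shapes are hypothetical.
d_in, d_out, rank = 768, 768, 8
rng = np.random.default_rng(0)

W_frozen = rng.standard_normal((d_out, d_in))  # pretrained weight, never updated
A = 0.01 * rng.standard_normal((rank, d_in))   # trainable low-rank down-projection
B = np.zeros((d_out, rank))                    # zero-init so the adapter starts as a no-op

def adapted_forward(x):
    """Forward pass with the low-rank update added to the frozen weight."""
    return x @ (W_frozen + B @ A).T

full_params = d_out * d_in              # 589,824 parameters in the full matrix
lora_params = rank * (d_in + d_out)     # 12,288 trainable adapter parameters
print(f"trainable fraction: {lora_params / full_params:.2%}")
```

Only about 2% of the matrix's parameters are trained here, which is where the storage and compute savings come from: each fine-tuned task needs only the small A and B factors, not a full copy of the model.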

Spotlight: Deep Recommender Models with PyTorch
Spotlight is a Python library built on PyTorch for developing deep and shallow recommender models. It offers a comprehensive set of building blocks for various loss functions, representations, and utilities for handling recommendation datasets. This tool is designed for rapid exploration and prototyping of new recommender systems.
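As an example of the kind of loss-function building block such a library provides, here is a small numpy sketch of the BPR (Bayesian personalized ranking) loss, a standard objective for implicit-feedback recommenders. This is a generic illustration of the idea, not Spotlight's implementation.

```python
import numpy as np

def bpr_loss(pos_scores, neg_scores):
    """BPR loss: push each user's positive-item score above a sampled negative's.

    Minimizing -log(sigmoid(pos - neg)) rewards correct pairwise rankings.
    """
    diff = pos_scores - neg_scores
    return -np.mean(np.log(1.0 / (1.0 + np.exp(-diff))))

# Toy scores: items the user interacted with vs. sampled negatives.
pos = np.array([2.0, 1.5, 3.0])
neg = np.array([0.5, 1.0, -1.0])
loss = bpr_loss(pos, neg)  # small when positives already outrank negatives
```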

PETSA: Parameter-Efficient Test-Time Adaptation for Time Series Forecasting
PETSA offers a parameter-efficient solution for Test-Time Adaptation (TTA) in time series forecasting, addressing the performance degradation caused by non-stationary data. It adapts pre-trained models during inference by updating small calibration modules, reducing memory and compute costs. This method, which includes low-rank adapters, dynamic gating, and a specialized loss, improves forecasting accuracy across diverse backbones and datasets.
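The calibration idea described above can be sketched in a few lines: a frozen forecaster's output is corrected at inference time by a tiny low-rank module scaled by a gate. The module structure, names, and shapes below are assumptions for illustration, not PETSA's actual code.

```python
import numpy as np

# Hypothetical test-time calibration module: a low-rank correction to a
# frozen model's forecast, scaled by a gate. Only A, B, and the gate would
# be updated at test time; the backbone forecaster stays frozen.
horizon, rank = 96, 4
rng = np.random.default_rng(1)

A = 0.01 * rng.standard_normal((rank, horizon))  # low-rank down-projection
B = np.zeros((horizon, rank))                    # up-projection, zero-initialized
gate = 0.0                                       # gate starts closed: no correction

def calibrate(forecast):
    """Add the gated low-rank correction to a frozen model's forecast."""
    return forecast + gate * (B @ (A @ forecast))

base_forecast = rng.standard_normal(horizon)  # stand-in for a frozen backbone's output
calibrated = calibrate(base_forecast)         # identical to base_forecast while gate == 0
```

Because the adapter starts as a no-op, adaptation can only move the output away from the pretrained forecast as the calibration parameters are updated, which keeps memory and compute costs small relative to fine-tuning the backbone.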

pytorch-deep-learning: Learn PyTorch for Deep Learning from Zero to Mastery
This repository provides comprehensive materials for the "Learn PyTorch for Deep Learning: Zero to Mastery" course. It offers a hands-on, code-first approach to mastering PyTorch, covering everything from the fundamentals to advanced topics like computer vision and model deployment. With over 16,000 stars, it's a highly popular resource for beginners in machine learning and deep learning.

Text Generation Inference: High-Performance LLM Serving by Hugging Face
Text Generation Inference (TGI) is a robust toolkit from Hugging Face designed for deploying and serving Large Language Models (LLMs) with high performance. It powers Hugging Face's production services, including HuggingChat and their Inference API. TGI offers optimized text generation, supporting popular open-source LLMs and implementing advanced features for efficient and scalable inference.
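To give a sense of how a served model is queried, here is a minimal sketch of a generation request body for TGI's HTTP `/generate` route. The prompt, parameter values, and local URL are illustrative assumptions; consult TGI's documentation for the full parameter set.

```python
import json

# Illustrative request body for a TGI server's /generate endpoint.
# Prompt text and parameter values are arbitrary examples.
payload = {
    "inputs": "What is parameter-efficient fine-tuning?",
    "parameters": {
        "max_new_tokens": 64,   # cap on generated tokens
        "temperature": 0.7,     # sampling temperature
    },
}
request_body = json.dumps(payload)
# This JSON would be POSTed to a running server,
# e.g. http://localhost:8080/generate (assumed local deployment).
```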