FastVideo Training Infra Quick Start

2026-03-08

A quick-start guide for the new FastVideo training framework (fastvideo/train), covering code checkout, how to run training, example commands, and links to the related PR and RFC.

Pull the Code:

Try running:

git fetch origin pull/1159/head:train-clean-refactor && git checkout train-clean-refactor

Or:

git clone git@github.com:FoundationResearch/FastVideo.git && cd FastVideo && git checkout train-clean-refactor

Run Training:

  1. Edit lines 39-40 in examples/train/run.sh to point to your own Conda environment path (or simply delete those two lines).
  2. (Optional) If you want to run a finetune, download the example dataset:
bash examples/training/finetune/wan_t2v_1.3B/crush_smol/download_dataset.sh

Note: You don't need to wait for the full download to finish; pressing Ctrl-C halfway through is fine. You'll end up with a subset of the dataset, which is enough for a test run.
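If you interrupted the download, a quick sanity check is to count how many files actually landed. The sketch below assumes a hypothetical output directory (data/crush-smol); check the download script itself for the real path:

```shell
# Hypothetical sanity check after an interrupted download.
# "data/crush-smol" is an assumed output path, not confirmed by the repo.
DATASET_DIR="${DATASET_DIR:-data/crush-smol}"
mkdir -p "$DATASET_DIR"   # avoid a find error if the directory is absent
COUNT=$(find "$DATASET_DIR" -type f | wc -l | tr -d ' ')
echo "files downloaded so far: $COUNT"
```

Any nonzero count means you have a usable subset to train on.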

  3. Run:
WANDB_MODE=online WANDB_API_KEY='your wandb api key' \
bash examples/train/run.sh <run_yaml_path>

Example:

WANDB_MODE=online FASTVIDEO_ATTENTION_BACKEND=VIDEO_SPARSE_ATTN \
WANDB_API_KEY='your key' \
bash examples/train/run.sh examples/train/finetune_wan2.1_t2v_1.3B_vsa_phase3.4_0.9sparsity.yaml
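Per the RFC, each training run is driven by a single YAML file passed to run.sh. The actual schema is defined in PR #1159; the fragment below is only a hypothetical sketch of what such a run YAML might look like (every key name here is an assumption, not the real schema):

```yaml
# Hypothetical run-YAML sketch -- key names are illustrative, not the actual schema.
model:
  name: wan2.1-t2v-1.3B          # which model to finetune (assumed key)
attention:
  backend: VIDEO_SPARSE_ATTN     # mirrors the FASTVIDEO_ATTENTION_BACKEND env var
  sparsity: 0.9                  # VSA sparsity, as in the example filename
training:
  lr: 1.0e-5                     # illustrative hyperparameters
  max_steps: 10000
```

See the YAML files under examples/train/ for the real, authoritative configuration format.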

All training-related code for the new architecture lives in the fastvideo/train directory. Feedback is very welcome!

PR: [feat] Refactor training framework into fastvideo/train by alexzms · Pull Request #1159 · hao-ai-lab/FastVideo

RFC: [RFC]: Unified, YAML-Driven Training Architecture for Video Diffusion Models · Issue #1158 · hao-ai-lab/FastVideo