PatchVSR: Breaking Video Diffusion Resolution Limits with Patch-wise Video Super-Resolution

¹Tsinghua University, ²Kling Team, Kuaishou Technology, ³Beijing Institute of Technology
CVPR 2025

TL;DR: We propose PatchVSR, the first exploration of utilizing a pre-trained T2V base model for patch-level video super-resolution. To accomplish this, we propose an effective dual-branch adapter consisting of a patch condition branch and a global context branch. Additionally, our proposed multi-patch joint modulation scheme achieves consistent results across patches.


Abstract

Pre-trained video generation models hold great potential for generative video super-resolution (VSR). However, adapting them for full-size VSR, as most existing methods do, suffers from unnecessarily intensive full-attention computation and a fixed output resolution. To overcome these limitations, we make the first exploration into utilizing video diffusion priors for patch-wise VSR. This is non-trivial because pre-trained video diffusion models are not natively designed for patch-level detail generation. To mitigate this challenge, we propose an innovative approach, called PatchVSR, which integrates a dual-stream adapter for conditional guidance. The patch branch extracts features from input patches to maintain content fidelity, while the global branch extracts context features from the resized full video to bridge the generation gap caused by the incomplete semantics of patches. In particular, we also inject the patch's location information into the model to better contextualize patch synthesis within the global video frame. Experiments demonstrate that our method can synthesize high-fidelity, high-resolution details at the patch level. A tailor-made multi-patch joint modulation is further proposed to ensure visual consistency across individually enhanced patches. Thanks to the flexibility of our patch-based paradigm, we achieve highly competitive 4K VSR based on a 512×512 resolution base model, with extremely high efficiency.

Method

Flowchart of our PatchVSR. Building upon a pre-trained latent T2V model, we incorporate a patch condition branch and a global context branch. The patch condition branch extracts features from the partitioned video patches, while the global context branch takes the resized full video together with a binary mask indicating the location of the ROI patch. In particular, local patch features are added to the output of each backbone block, while the global context features are fused with the backbone features through newly introduced cross-attention modules (G-CA). For simplicity, we omit other conditional inputs such as text prompts and time steps from this diagram. The processed patches are fused via a joint modulation scheme to produce a coherent super-resolution video.
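Below is a minimal PyTorch-style sketch of how such dual-branch conditioning could be wired around a frozen backbone block. The module names (GlobalCrossAttention, DualBranchAdapterBlock), token shapes, and residual fusion shown here are illustrative assumptions, not the released implementation.

```python
# Illustrative sketch only: module names and shapes are hypothetical,
# not the official PatchVSR implementation.
import torch
import torch.nn as nn


class GlobalCrossAttention(nn.Module):
    """G-CA: backbone features attend to global context features."""

    def __init__(self, dim: int, num_heads: int = 8):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.norm = nn.LayerNorm(dim)

    def forward(self, backbone_feat, global_ctx):
        # backbone_feat: (B, N_tokens, C), global_ctx: (B, N_ctx, C)
        fused, _ = self.attn(self.norm(backbone_feat), global_ctx, global_ctx)
        return backbone_feat + fused  # residual fusion


class DualBranchAdapterBlock(nn.Module):
    """Wraps one pre-trained (typically frozen) T2V block with both conditions."""

    def __init__(self, backbone_block: nn.Module, dim: int):
        super().__init__()
        self.backbone_block = backbone_block
        self.g_ca = GlobalCrossAttention(dim)

    def forward(self, x, patch_feat, global_ctx):
        # Local patch features are added to the block output (content fidelity);
        # global context is injected via the new cross-attention (semantics).
        h = self.backbone_block(x)
        h = h + patch_feat
        return self.g_ca(h, global_ctx)
```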

Multi-patch Joint Modulation

Patch Partition Visualization. The input video is divided into non-overlapping segments, as marked by the solid blue boxes. For joint modulation, auxiliary patches are created, indicated by the red dashed boxes, resulting in an overlap ratio of 0.5.
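The sketch below illustrates this patch layout: primary non-overlapping patches plus auxiliary patches shifted by half a patch, and a simple overlap-averaging fusion of the enhanced patches. The averaging is only a placeholder for readability; it is not the paper's joint modulation scheme, and the function names and tensor layout are our own assumptions.

```python
# Illustrative sketch of the 0.5-overlap patch layout; the averaging fusion
# is a simple stand-in, not the paper's multi-patch joint modulation.
import torch


def partition_patches(video: torch.Tensor, patch: int):
    """Return top-left coordinates of primary (non-overlapping) patches and
    auxiliary patches shifted by half a patch (overlap ratio 0.5).

    video: (T, C, H, W) with H and W divisible by `patch`.
    """
    _, _, H, W = video.shape
    primary = [(y, x) for y in range(0, H, patch) for x in range(0, W, patch)]
    half = patch // 2
    auxiliary = [
        (y, x)
        for y in range(half, H - patch + 1, patch)
        for x in range(half, W - patch + 1, patch)
    ]
    return primary, auxiliary


def fuse_patches(results, coords, out_shape, patch: int):
    """Blend individually enhanced patches into a full frame by averaging
    overlapping regions (placeholder for joint modulation)."""
    out = torch.zeros(out_shape)
    weight = torch.zeros(out_shape)
    for res, (y, x) in zip(results, coords):
        out[..., y:y + patch, x:x + patch] += res
        weight[..., y:y + patch, x:x + patch] += 1.0
    return out / weight.clamp(min=1.0)
```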