VideoGigaGAN

How Does VideoGigaGAN Work?

Overview: Why is video super-resolution challenging?

Method Overview

[Figure: Method overview]

Our Video Super-Resolution (VSR) model is built upon the asymmetric U-Net architecture of the image GigaGAN upsampler.
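
For intuition, here is a minimal structural sketch of an asymmetric U-Net upsampler. The layer counts, widths, and block design are my own assumptions, not the actual GigaGAN blocks: the encoder downsamples the LR frame twice while the decoder upsamples four times, so the output comes out 4x larger than the input.

```python
import torch
import torch.nn as nn

def down(cin, cout):
    # strided convolution: halves the spatial resolution
    return nn.Sequential(nn.Conv2d(cin, cout, 3, stride=2, padding=1), nn.SiLU())

def up(cin, cout):
    # nearest-neighbor upsampling followed by a convolution: doubles the resolution
    return nn.Sequential(nn.Upsample(scale_factor=2, mode="nearest"),
                         nn.Conv2d(cin, cout, 3, padding=1), nn.SiLU())

class AsymmetricUNet(nn.Module):
    """Toy asymmetric U-Net: 2 downsampling stages, 4 upsampling stages (4x SR)."""
    def __init__(self, ch: int = 64):
        super().__init__()
        self.stem = nn.Conv2d(3, ch, 3, padding=1)
        self.enc = nn.ModuleList([down(ch, ch * 2), down(ch * 2, ch * 4)])
        self.dec = nn.ModuleList([up(ch * 4, ch * 2), up(ch * 2, ch),
                                  up(ch, ch), up(ch, ch)])
        self.head = nn.Conv2d(ch, 3, 3, padding=1)

    def forward(self, x):
        h = self.stem(x)
        skips = []
        for blk in self.enc:
            skips.append(h)            # store pre-downsampling features for skips
            h = blk(h)
        for i, blk in enumerate(self.dec):
            h = blk(h)
            if i < len(skips):         # add skip features at matching resolutions
                h = h + skips[-(i + 1)]
        return self.head(h)
```

With this toy configuration, a 64x64 LR frame is decoded into a 256x256 output.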

To enforce temporal consistency, we first inflate the image upsampler into a video upsampler by adding temporal attention layers to the decoder blocks. We also enhance consistency by incorporating features from the flow-guided propagation module.
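
As an illustration of the inflation step, here is a minimal sketch, with assumed shapes and module names rather than the released code, of a temporal self-attention layer that attends over the frame axis independently at every spatial location; the flow-guided propagation module is omitted.

```python
import torch
import torch.nn as nn

class TemporalAttention(nn.Module):
    """Self-attention over the time axis of a (B, T, C, H, W) feature tensor."""
    def __init__(self, channels: int, num_heads: int = 4):
        super().__init__()
        self.norm = nn.LayerNorm(channels)
        self.attn = nn.MultiheadAttention(channels, num_heads, batch_first=True)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, t, c, h, w = x.shape
        # one length-T token sequence per spatial position
        tokens = x.permute(0, 3, 4, 1, 2).reshape(b * h * w, t, c)
        n = self.norm(tokens)
        attended, _ = self.attn(n, n, n)
        tokens = tokens + attended      # residual connection
        return tokens.reshape(b, h, w, t, c).permute(0, 3, 4, 1, 2)
```

A layer like this can be appended after the spatial layers of each decoder block, letting features from neighboring frames agree before they are decoded.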

To suppress aliasing artifacts, we use anti-aliasing (BlurPool) blocks in the downsampling layers of the encoder.
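
For reference, a minimal BlurPool-style layer in the spirit of Zhang's "Making Convolutional Networks Shift-Invariant Again": the features are low-pass filtered with a fixed binomial kernel before striding, which is what suppresses aliasing. The 3x3 kernel size and reflect padding are my assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class BlurPool2d(nn.Module):
    """Low-pass filter with a fixed binomial kernel, then downsample by striding."""
    def __init__(self, channels: int, stride: int = 2):
        super().__init__()
        self.stride = stride
        k = torch.tensor([1.0, 2.0, 1.0])
        kernel = torch.outer(k, k)
        kernel = kernel / kernel.sum()
        # the same depthwise blur filter for every channel
        self.register_buffer("kernel", kernel.expand(channels, 1, 3, 3).clone())

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        x = F.pad(x, (1, 1, 1, 1), mode="reflect")
        return F.conv2d(x, self.kernel, stride=self.stride,
                        groups=self.kernel.shape[0])
```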

Lastly, we directly shuttle the high-frequency features to the decoder layers via skip connections to compensate for the loss of detail in the BlurPool process.
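
One way to picture the high-frequency shuttle, again as a hedged sketch with hypothetical helper names rather than the authors' implementation: split each encoder feature into the low-pass part that BlurPool keeps and the high-frequency residual it discards, and route that residual through the skip connection so the decoder can add the lost detail back.

```python
import torch
import torch.nn.functional as F

def split_high_frequency(feat: torch.Tensor) -> tuple[torch.Tensor, torch.Tensor]:
    """feat: (B, C, H, W). Returns (low-pass component, high-frequency residual)."""
    k = torch.tensor([1.0, 2.0, 1.0], device=feat.device)
    kernel = torch.outer(k, k)
    kernel = (kernel / kernel.sum()).view(1, 1, 3, 3).repeat(feat.shape[1], 1, 1, 1)
    low = F.conv2d(F.pad(feat, (1, 1, 1, 1), mode="reflect"),
                   kernel, groups=feat.shape[1])
    return low, feat - low

# In the decoder, the shuttled residual is added back at the matching resolution,
# e.g. dec_feat = upsample(dec_feat) + hf_residual
```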

Ablation study

The strong hallucination capability of the image GigaGAN leads to temporally flickering artifacts, especially aliasing caused by artifacts already present in the LR input.

We progressively add components to the base model to handle these artifacts.

[Figure: Ablation study, shown as Input, Output, and Diff comparisons]
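
For readers reproducing the Diff panels, here is a small helper that assumes (the page does not state this explicitly) that they visualize temporal flicker as absolute per-pixel differences between consecutive output frames; a temporally consistent result yields diffs that are flat except around true motion.

```python
import torch

def temporal_diffs(frames: torch.Tensor) -> torch.Tensor:
    """frames: (T, C, H, W) video in [0, 1]. Returns (T-1, C, H, W) frame-to-frame diffs."""
    return (frames[1:] - frames[:-1]).abs()
```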

Comparison with previous methods

Compared to previous models, our model provides detail-rich results with comparable temporal consistency.

[Figure: Visual comparisons with previous methods]