bloc97 / Anime4K

A High-Quality Real Time Upscaler for Anime Video
https://bloc97.github.io/Anime4K/
MIT License

Add Anime4K-WebGPU project #221

Open plasmas opened 11 months ago

plasmas commented 11 months ago

Hi,

We created a WebGPU implementation of Anime4K. It offers cross-platform APIs for integration into any WebGPU code and can be used for real-time video upscaling as well as general texture upscaling. We have published a live demo and released an NPM package.

The package currently offers a selection of pipelines for each category (deblur, upscale, etc.), which run smoothly on any modern NVIDIA GPU. It also works in Chrome on Macs, but heavy pipelines may drop frames there. Optimizations are still underway as more browser vendors ship WebGPU support.

I have added the project to README.md. Questions are welcome!

arianaa30 commented 8 months ago

@plasmas how easy is it to add other upscale model sizes like M, L, VL as well? It currently has 2xUUL only.

plasmas commented 8 months ago

> @plasmas how easy is it to add other upscale model sizes like M, L, VL as well? It currently has 2xUUL only.

As of now, conversion from the original GLSL shaders to WGSL shaders is automated, but assembling the WGSL shaders into a single pipeline requires knowledge of the model and of WebGPU.

A good example is Upscale_Shader.glsl. The steps to create its WebGPU pipeline are:

  1. Run shader.py on the GLSL shader to generate WGSL files, each containing one conv2d layer.
  2. Create UpscaleCNN.ts, which implements Anime4KPipeline. Helper pipelines such as Conv2d, Depth2Space, and Overlay are created and chained according to the GLSL stages / model architecture. The shaders generated in the previous step back the Conv2d helper pipelines.
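The Depth2Space stage in step 2 is essentially a pixel-shuffle rearrangement. Its index math can be sketched in plain TypeScript (function name and channel-last layout are illustrative assumptions, not the actual package API):

```typescript
// Sketch of the depth-to-space ("pixel shuffle") step that follows the last
// conv2d stage of a 2x upscale pipeline. Rearranges a flat H x W x (r*r)
// channel-last tensor into an (r*H) x (r*W) x 1 tensor: each pixel's group
// of r*r channels becomes an r x r spatial block.
function depthToSpace(
  input: Float32Array,
  height: number,
  width: number,
  r: number,
): Float32Array {
  const out = new Float32Array(height * r * width * r);
  for (let y = 0; y < height; y++) {
    for (let x = 0; x < width; x++) {
      for (let c = 0; c < r * r; c++) {
        // Channel index c maps to an offset (dy, dx) within the r x r block.
        const dy = Math.floor(c / r);
        const dx = c % r;
        const oy = y * r + dy;
        const ox = x * r + dx;
        out[oy * width * r + ox] = input[(y * width + x) * r * r + c];
      }
    }
  }
  return out;
}
```

For example, a 1x2 input with 4 channels per pixel becomes a 2x4 single-channel output, with each pixel's channels filling a 2x2 block.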

Note that shader.py may generate glitchy WGSL shaders for complex conv2d stages that also involve upscaling (e.g. Anime4K-v4.1-Upscale-GAN-x3-(L)-Conv-4x3x3x24 in Upscale_GAN_x3_L.glsl); these need additional tweaking.

This GLSL-to-WGSL conversion path can be buggy. A better approach would be to convert directly from the TensorFlow models (#220). Hope this helps.

arianaa30 commented 8 months ago

@plasmas Yeah, I'm only interested in the simple upscalers (no GANs). One thing, though: I measured the SSIM of your CNNx2UL (downloaded from the canvas in the web demo) and found it much lower (0.77) than that of the converted PyTorch model (0.97) in #220. So are they really equivalent? I actually measured Upscale-VL in PyTorch, but I assume UL should score even higher.

plasmas commented 8 months ago

> @plasmas Yeah, I'm only interested in the simple upscalers (no GANs). One thing, though: I measured the SSIM of your CNNx2UL (downloaded from the canvas in the web demo) and found it much lower (0.77) than that of the converted PyTorch model (0.97) in #220. So are they really equivalent? I actually measured Upscale-VL in PyTorch, but I assume UL should score even higher.

The CNNx2UL pipeline closely follows Anime4K_Upscale_CNN_x2_UL.glsl; after review, the weights and model architecture should differ little from the original GLSL shader. One possibility is that intermediate tensors are stored in FP16 due to WebGPU limits, although computation within each stage is still in FP32.
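The size of that FP16 storage error can be illustrated by round-tripping a value through IEEE-754 binary16. This is a standalone numeric sketch (the real pipeline stores texture data on the GPU, not JS numbers):

```typescript
// Encode a JS number into IEEE-754 binary16 bits (round-half-up, flushing
// subnormals to zero), then decode, to show the precision lost when
// intermediate tensors are stored as FP16.
function toHalfBits(value: number): number {
  const f32 = new Float32Array(1);
  const u32 = new Uint32Array(f32.buffer);
  f32[0] = value;
  const x = u32[0];
  const sign = (x >>> 16) & 0x8000;
  const exp = (x >>> 23) & 0xff;
  let mant = x & 0x7fffff;
  if (exp === 0xff) return sign | 0x7c00 | (mant ? 1 : 0); // Inf / NaN
  let e = exp - 127 + 15; // rebias exponent from FP32 (127) to FP16 (15)
  if (e >= 31) return sign | 0x7c00; // overflow -> Inf
  if (e <= 0) return sign;           // underflow -> 0 (subnormals dropped)
  mant += 0x1000;                    // round: half-ULP of the 13 dropped bits
  if (mant & 0x800000) {             // rounding carried into the exponent
    mant = 0;
    e += 1;
    if (e >= 31) return sign | 0x7c00;
  }
  return sign | (e << 10) | (mant >>> 13);
}

function fromHalfBits(h: number): number {
  const sign = h & 0x8000 ? -1 : 1;
  const e = (h >>> 10) & 0x1f;
  const mant = h & 0x3ff;
  if (e === 0) return sign * mant * 2 ** -24;        // zero / subnormal
  if (e === 31) return mant ? NaN : sign * Infinity; // NaN / Inf
  return sign * (1 + mant / 1024) * 2 ** (e - 15);
}

const roundTrip = (v: number) => fromHalfBits(toHalfBits(v));
```

For instance, `roundTrip(0.1)` yields 0.0999755859375, a relative error of roughly 2e-4 per stored value; such errors accumulate across pipeline stages.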

Further comparison between the recovered PyTorch model and the GLSL/WGSL shaders may be needed. For further questions, please consider opening an issue under Anime4K-WebGPU.
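As a starting point for such a comparison, a minimal single-window SSIM can be computed directly on two grayscale buffers. Note this is a simplified global variant: standard SSIM slides an 11x11 Gaussian window and averages local scores, so values reported by common tools will differ.

```typescript
// Minimal single-window SSIM between two grayscale images in [0, 255].
// Uses the standard stabilizers C1 = (0.01*L)^2 and C2 = (0.03*L)^2.
function globalSSIM(a: number[], b: number[], maxVal = 255): number {
  if (a.length !== b.length) throw new Error("size mismatch");
  const n = a.length;
  const mean = (v: number[]) => v.reduce((s, x) => s + x, 0) / n;
  const muA = mean(a);
  const muB = mean(b);
  let varA = 0;
  let varB = 0;
  let cov = 0;
  for (let i = 0; i < n; i++) {
    varA += (a[i] - muA) ** 2;
    varB += (b[i] - muB) ** 2;
    cov += (a[i] - muA) * (b[i] - muB);
  }
  varA /= n;
  varB /= n;
  cov /= n;
  const c1 = (0.01 * maxVal) ** 2;
  const c2 = (0.03 * maxVal) ** 2;
  return (
    ((2 * muA * muB + c1) * (2 * cov + c2)) /
    ((muA ** 2 + muB ** 2 + c1) * (varA + varB + c2))
  );
}
```

Identical inputs score exactly 1; running this on the WebGPU canvas output versus the PyTorch output (against the same ground truth) would localize where the 0.77 vs. 0.97 gap comes from.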

arianaa30 commented 8 months ago

@plasmas Thanks. I created an issue under the original repo.

plasmas commented 5 months ago

In v1.0.0, we removed all pipelines whose shaders are not in release v4.0.1, and added all six preset modes featured in v4.x.