-
First of all, thanks for all the work.
Currently we can use style transfer from JSON output [as given in README.md](https://github.com/DeepMotionEditing/deep-motion-editing#style-from-videos)
Can …
-
https://github.com/skq024/Real-time-Coherent-Style-Transfer-For-Videos/blob/e18020a54680ac3a4c16d41a57edf8cff78d825d/totaldata.py#L9
Hi, thanks for your good work!
When I am going to re-train a new m…
-
Getting this error when using Deforum with ControlNet Tile and TemporalNet on my videos:
Neither the base nor the masking frames for ControlNet were found. Using the regular pipeline
I'm getting …
-
We're currently deviating from the WebCL Working Draft in at least the following:
• No callbacks;
• No ‘name’ and ‘message’ fields in exceptions;
• No semi-automatic (hierarchic) release() of CL reso…
-
Hi! I just tried to run the project, and I'm getting the following error:
```
Traceback (most recent call last):
  File "network.py", line 252, in
    first_layer.set_input(input_tensor, shape=…
-
1. I want to transfer higher-pixel videos like 1K, 2K, even 4K, but the size of my server's VRAM is 8 GB (an NVIDIA GTX 1070). With cuDNN, the maximum size of image I can take from a video is 1080x720p. S…
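The question above is truncated, but the VRAM constraint it describes can be sketched as a simple budget check over candidate frame sizes. Everything here is hypothetical: the `overhead` multiplier standing in for intermediate feature maps is a rough guess to tune per model, not a property of any real network.

```python
def fits_in_vram(width, height, vram_bytes, bytes_per_pixel=4, overhead=2000):
    """Estimate whether a frame of width x height fits in the VRAM budget.

    `overhead` is an assumed multiplier covering all intermediate feature
    maps produced across the network's layers (hypothetical; profile your
    actual model to calibrate it).
    """
    return width * height * bytes_per_pixel * overhead <= vram_bytes


def max_resolution(candidates, vram_bytes):
    """Return the largest candidate (width, height) that fits, or None."""
    for w, h in sorted(candidates, key=lambda wh: wh[0] * wh[1], reverse=True):
        if fits_in_vram(w, h, vram_bytes):
            return (w, h)
    return None


VRAM_8GB = 8 * 1024 ** 3
sizes = [(3840, 2160), (2560, 1440), (1920, 1080), (1280, 720)]
print(max_resolution(sizes, VRAM_8GB))  # with these assumed numbers: (1280, 720)
```

With this (made-up) overhead factor the check agrees with the poster's observation that 720p is the ceiling on an 8 GB card; the practical workaround is the same either way: downscale frames before inference and upscale the stylized output afterwards.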
-
### Area Select
react-native-pytorch-core (core package)
### Description
I want to apply the AnimeGAN model to video using PlayTorch, since I am developing a React Native application. However:
- Ther…
-
I'm experimenting with the pre-trained style-transfer model with little success.
Why can I only get good results from the animations you provided for your demo (and the files in the xia_mocap folde…
-
Thanks for making the Gradio demo https://huggingface.co/spaces/jjeamin/ArcaneStyleTransfer; can a video version also be set up, similar to, for example, https://huggingface.co/spaces/nateraw/animegan-v…
-
I'm using v3.0.1 with Rails 5.2.0, trying to upload files to MinIO over the S3 protocol.
The file is uploaded to the Rails backend and then transferred to MinIO later; I checked this views file app/views/trestle/active_…
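For context on the MinIO side of this setup: Active Storage can target any S3-compatible server through `config/storage.yml`. A minimal sketch, assuming a local MinIO endpoint; the credentials, env-variable names, and bucket name are all placeholders:

```yaml
minio:
  service: S3
  endpoint: http://127.0.0.1:9000   # assumed local MinIO address
  access_key_id: <%= ENV["MINIO_ACCESS_KEY_ID"] %>
  secret_access_key: <%= ENV["MINIO_SECRET_ACCESS_KEY"] %>
  region: us-east-1                 # MinIO accepts an arbitrary region string
  bucket: uploads                   # placeholder bucket name
  force_path_style: true            # MinIO serves path-style URLs by default
```

Extra keys such as `force_path_style` are forwarded to the underlying `aws-sdk-s3` client, which is what lets the S3 service talk to a non-AWS endpoint.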