-
Hello! When I run ttrain/mf-lmd6remi-1.sh in the Docker image provided by the author on an RTX 3090, an error occurs:
```
Traceback (most recent call last):
  File "/opt/miniconda/bin/fairseq-train",…
```
-
Hello,
I got this error while trying to interpolate a 1-hour, 24 FPS, 576p video to `48FPS`; it failed near the end (frame 83101 of 95203):
> RuntimeError: CUDA out of memory. Tried to allocate 1.14 GiB (GP…
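A common workaround for OOM on very long videos is to process the frames in fixed-size chunks so peak memory is bounded by one chunk rather than the whole clip. This is only a minimal sketch of the chunk-boundary arithmetic; `chunk_ranges` is a hypothetical helper, not part of any interpolation tool's API.

```python
def chunk_ranges(total_frames, chunk_size):
    """Split [0, total_frames) into consecutive half-open chunks of at
    most chunk_size frames. Processing each chunk independently bounds
    peak GPU memory to roughly one chunk's worth of frames."""
    return [(start, min(start + chunk_size, total_frames))
            for start in range(0, total_frames, chunk_size)]

# Example with the frame count from the report: 95203 frames,
# 1000 frames per chunk. The last chunk is shorter (95000..95203).
ranges = chunk_ranges(95203, 1000)
```

Each chunk can then be interpolated separately and the results concatenated; overlapping chunks by a frame or two at the boundaries avoids visible seams, at the cost of a little redundant work.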
-
I have an RTX 3090 and I'm using Ubuntu over SSH from another computer. On a fresh install of A1111, I went to Extensions, added the Git URL, and clicked Install, but it stays for over 10 minutes at "pr…
-
When I used an RTX 3090 (24 GB of VRAM) for the second stage of training, I ran into insufficient GPU memory. I remember you said before that 24GB graphi…
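When a training stage doesn't fit in 24 GB, a standard lever is gradient accumulation: shrink the per-step micro-batch and take more accumulation steps so the effective batch size is unchanged. The helper below only sketches the arithmetic for picking those numbers; the batch sizes are illustrative, not taken from this project's config.

```python
def accumulation_steps(target_batch, max_micro_batch):
    """Smallest number of gradient-accumulation steps such that each
    micro-batch is at most max_micro_batch samples while the effective
    batch size stays exactly target_batch.

    Returns (steps, micro_batch) with steps * micro_batch == target_batch.
    """
    steps = -(-target_batch // max_micro_batch)  # ceiling division
    # Bump steps until it divides target_batch evenly.
    while target_batch % steps != 0:
        steps += 1
    return steps, target_batch // steps

# e.g. keep an effective batch of 64 when only ~12 samples fit per step:
steps, micro = accumulation_steps(64, 12)  # 8 steps of 8 samples each
```

In fairseq-style configs this corresponds to lowering the per-GPU batch size and raising the update-frequency setting by the same factor, which trades a longer wall-clock step for lower peak memory.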
-
### 🐛 Describe the bug
When I train my neural network on an RTX 3090 with the latest drivers (520.61.05), CUDA 11.8, and an AMD Ryzen 9 5950X 16-core processor, I get the error `Segmentation fault (…
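A segmentation fault in a native extension normally kills the process without a Python traceback, which makes it hard to tell which call crashed. The standard-library `faulthandler` module can dump the Python stack of every thread when a fatal signal arrives; enabling it at the top of the training script is a cheap first debugging step (this is a general technique, not specific to this bug report).

```python
import faulthandler
import sys

# On a fatal signal (e.g. SIGSEGV), print the Python traceback of all
# threads to stderr before the process dies, so the crashing call site
# in the training loop can be identified.
faulthandler.enable(file=sys.stderr)

# ...the rest of the training script runs unchanged after this point.
```

Running the script as `python -X faulthandler train.py` has the same effect without editing the code.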
-
Thank you very much for your efforts. Recently I tried to reproduce the training process, but found that the training time was quite long. May I ask the author how long the model ran and what kind of…
-
Hi, I want to know: what is the minimum GPU memory required? I ran inference with the model on an RTX 3090 (24 GB), but it ran out of CUDA memory. What can I do to reduce GPU memory usage? Thanks.
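Before tuning anything, it helps to estimate how big the offending tensors actually are: a tensor's footprint is just the product of its dimensions times the element size, which is why switching inference from float32 to float16 roughly halves activation memory. The shapes below are made up for illustration; `tensor_bytes` is a hypothetical helper, not an API of the model in question.

```python
def tensor_bytes(shape, bytes_per_element=4):
    """Rough memory footprint of one dense tensor: the product of its
    dimensions times the size of one element (4 bytes for float32,
    2 bytes for float16)."""
    n = 1
    for dim in shape:
        n *= dim
    return n * bytes_per_element

# A batch of 8 feature maps of shape 512x64x64:
fp32 = tensor_bytes((8, 512, 64, 64), 4)  # 67,108,864 bytes (~64 MiB)
fp16 = tensor_bytes((8, 512, 64, 64), 2)  # 33,554,432 bytes (~32 MiB)
```

Combined with running inference under a no-gradient context (so activations aren't kept for backprop) and reducing the batch or input resolution, this kind of estimate usually points at which tensor is blowing the 24 GB budget.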
-
### Prerequisite
- [X] I have searched [Issues](https://github.com/open-mmlab/mmagic/issues) and [Discussions](https://github.com/open-mmlab/mmagic/discussions) but cannot get the expected help.
-…
-
code:
```
from fastai.text.all import *  # provides TextDataLoaders, text_classifier_learner, AWD_LSTM, untar_data, URLs, accuracy

dls = TextDataLoaders.from_folder(untar_data(URLs.IMDB), valid='test')
learn = text_classifier_learner(dls, AWD_LSTM, drop_mult=0.5, metrics=accuracy)
learn.fine_tune(2, 1e-2)
```
platform:
w…
-
Hello, thank you for this wonderful work!
I have the following questions and hope you can provide assistance:
I only have one RTX 3090 graphics card with 24GB of memory, so in the training conf…