Closed · piwawa closed this 2 months ago
Can you try setting `DEBUG = True` in `django_react/settings.py`?
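For reference, that is just flipping the debug flag in the Django settings module (a minimal sketch; the real file contains many other settings):

```python
# django_react/settings.py (sketch; all other settings omitted)
# DEBUG = True makes Django show full tracebacks in the browser,
# which is what surfaces the underlying error here.
# Don't leave it enabled in production.
DEBUG = True
```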
After setting `DEBUG = True`, it worked!
But the processing time is very long: almost 5 minutes for both Demucs v4 Fine-tuned and Demucs v4.
I found that it isn't using the GPUs during processing (the listed processes belong to other programs):
(cu118) (cu118) [bingo06@zernithos ~]$ nvidia-smi
Wed May 1 21:25:14 2024
+-----------------------------------------------------------------------------------------+
| NVIDIA-SMI 550.54.14 Driver Version: 550.54.14 CUDA Version: 12.4 |
|-----------------------------------------+------------------------+----------------------+
| GPU Name Persistence-M | Bus-Id Disp.A | Volatile Uncorr. ECC |
| Fan Temp Perf Pwr:Usage/Cap | Memory-Usage | GPU-Util Compute M. |
| | | MIG M. |
|=========================================+========================+======================|
| 0 NVIDIA GeForce RTX 2080 Ti Off | 00000000:1A:00.0 Off | N/A |
| 27% 34C P8 5W / 250W | 414MiB / 11264MiB | 0% Default |
| | | N/A |
+-----------------------------------------+------------------------+----------------------+
| 1 NVIDIA GeForce RTX 4090 Off | 00000000:68:00.0 Off | Off |
| 0% 38C P8 4W / 450W | 5427MiB / 24564MiB | 0% Default |
| | | N/A |
+-----------------------------------------+------------------------+----------------------+
+-----------------------------------------------------------------------------------------+
| Processes: |
| GPU GI CI PID Type Process name GPU Memory |
| ID ID Usage |
|=========================================================================================|
| 0 N/A N/A 1798 C python3 154MiB |
| 0 N/A N/A 939429 C python 256MiB |
| 1 N/A N/A 1798 C python3 4930MiB |
| 1 N/A N/A 939429 C python 486MiB |
+-----------------------------------------------------------------------------------------+
How do I set the CUDA device?
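One common way to pin a process to a particular GPU, independent of this project's own code, is `CUDA_VISIBLE_DEVICES` (a sketch; picking device 1, the RTX 4090 in the nvidia-smi listing above, is my example choice):

```python
import os

# Must be set before torch/CUDA is first initialized in the process.
# '1' selects the RTX 4090 from the nvidia-smi output above; inside
# the process it will then show up as cuda:0.
os.environ['CUDA_VISIBLE_DEVICES'] = '1'
print(os.environ['CUDA_VISIBLE_DEVICES'])
```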
GPU separation isn't officially supported on non-Docker setups - I would consider using Docker
Can you try setting the environment variable `CPU_SEPARATION=0`?
If that doesn't work, try installing these Python dependencies:
pip install torch==1.13.1+cu116 torchaudio==0.13.1+cu116 -f https://download.pytorch.org/whl/torch_stable.html
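Those wheels are CUDA 11.6 builds. One hedged way to sanity-check which build you actually ended up with is to look at the local version tag PyTorch embeds in its version string (e.g. '1.13.1+cu116' vs '1.13.1+cpu'); the helper below is just an illustration:

```python
# Sketch: inspect a torch-style version string for its CUDA build tag.
def cuda_tag(version):
    # '1.13.1+cu116' -> 'cu116'; '1.13.1+cpu' or plain '1.13.1' -> None
    _, _, local = version.partition('+')
    return local if local.startswith('cu') else None

# In practice you would pass torch.__version__ here.
print(cuda_tag('1.13.1+cu116'))  # cu116
print(cuda_tag('1.13.1+cpu'))    # None
```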
Still not using the GPU. Does it pick up that setting after running `python manage.py collectstatic && python manage.py runserver 127.0.0.1:8000`?
It should, but it's the Celery worker that needs to be restarted:
celery -A api worker -l INFO -Q slow_queue -c 1
What if you manually change that line of code to `self.device = 'cuda'`?
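If you do hard-code it, a slightly safer variant falls back to CPU when CUDA isn't actually usable. This is a sketch: the class below is a hypothetical stand-in, and only the `self.device` attribute comes from the thread:

```python
class SeparatorStub:
    """Hypothetical stand-in for the separator class being discussed."""

    def __init__(self):
        # Instead of a bare self.device = 'cuda', fall back to 'cpu' when
        # torch is missing or no CUDA-capable build/GPU is present.
        try:
            import torch
            self.device = 'cuda' if torch.cuda.is_available() else 'cpu'
        except ImportError:
            self.device = 'cpu'

print(SeparatorStub().device)
```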
After restarting that process (`celery -A api worker -l INFO -Q slow_queue -c 1`), it successfully used the GPU! I get fast separation now!
But I have another problem: how do I make this label display the full text 'Accompaniment'?
Great! You can change the text here:
I just changed that part to this:
export const AccompShortBadge = (props: BadgeProps): JSX.Element => {
  const { faded, title, className } = props;
  return (
    // Show the full word instead of the shortened label
    <Badge pill className={className} variant={faded ? 'accomp-faded' : 'accomp'} title={title}>
      Accompaniment
    </Badge>
  );
};
And this is the resulting display:
Obviously it doesn't look good. How can I make the label fit this text?
You can play around with the width here:
I have uploaded the file on the web interface:
I can't play music on the web or separate any audio!