magenta / mt3

MT3: Multi-Task Multitrack Music Transcription
Apache License 2.0
1.41k stars · 185 forks

Unable to run the script offline #70

Open MohammedMehdiTBER opened 2 years ago

MohammedMehdiTBER commented 2 years ago

Is there any way to clone this project and run it on my pc offline?

jsphweid commented 2 years ago

I was able to do this by making a Docker image that is close to the Google Colab environment.

Maybe start with:

```dockerfile
FROM ubuntu:18.04

RUN apt-get update && apt-get install -y \
    libssl-dev openssl wget build-essential zlib1g-dev git \
    libfluidsynth1 libasound2-dev libjack-dev libffi-dev \
    libbz2-dev liblzma-dev libsqlite3-dev
```

Then install Python 3.7.13, install t5x and mt3, and create an infer.py script (or set it up however you want). You can use the Colab to find the gin files as well (ismir2021.gin, mt3.gin, model.gin), plus the checkpoint files and the SF2 soundfont file.
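The steps above could continue in the same Dockerfile roughly like this. This is only a sketch based on the Colab setup, not a verified build: the Python source URL follows python.org's standard layout, and the editable installs of t5x and mt3 mirror what the Colab does, but exact package names and flags may need adjusting.

```dockerfile
# Build Python 3.7.13 from source (Ubuntu 18.04 ships an older python3)
RUN wget https://www.python.org/ftp/python/3.7.13/Python-3.7.13.tgz && \
    tar xzf Python-3.7.13.tgz && \
    cd Python-3.7.13 && \
    ./configure --enable-optimizations && \
    make -j"$(nproc)" && make altinstall

# Install t5x and mt3 from source, mirroring the Colab cells
RUN python3.7 -m pip install --upgrade pip && \
    git clone https://github.com/google-research/t5x && \
    python3.7 -m pip install -e ./t5x && \
    git clone https://github.com/magenta/mt3 && \
    python3.7 -m pip install -e ./mt3
```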

When in doubt, just refer back to their Colab code: https://github.com/magenta/mt3/blob/main/mt3/colab/music_transcription_with_transformers.ipynb and translate it to your setup.

It's a lot. There are probably better ways to do this, but this was the only way I could make it happen. Maybe someone from Magenta will respond with a better way. Good luck!

EDIT for GPU:

Follow the instructions for setting up CUDA (https://docs.nvidia.com/cuda/cuda-installation-guide-linux/index.html): install the CUDA toolkit and the CUDA drivers,

and then cuDNN (basically following https://github.com/google/jax).

You need to make sure everything is compatible (i.e. the toolkit version is compatible with the driver, and both are compatible with the jax/jaxlib version on the image).
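For the jax side of that compatibility matrix, the jax repo's install instructions use Google's CUDA wheel index so jaxlib is built against a matching CUDA/cuDNN pair. Something like the following (the exact extras tag depends on which CUDA/cuDNN versions you installed, so treat this as a template, not the command for your box):

```shell
# Pull a CUDA-enabled jaxlib from Google's wheel index; the build chosen
# must match the CUDA toolkit and cuDNN versions installed on the image.
pip install --upgrade "jax[cuda]" \
    -f https://storage.googleapis.com/jax-releases/jax_cuda_releases.html
```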

I had issues because my server/box is Ubuntu 22.04 and cuDNN doesn't have a nice installer for that yet, but I found this amazingly helpful video: https://www.youtube.com/watch?v=4LvgOmxugFU

I also had issues with the Docker container being able to run ptxas (which apparently TF needs), so I ended up using the devel version (FROM nvidia/cuda:11.7.0-devel-ubuntu18.04): https://github.com/google/jax/discussions/6843#discussioncomment-2681311 There's probably a more correct way, however.
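The base-image swap can be sketched like this. The `which ptxas` check is my own addition to fail the build early rather than at inference time; the devel tag is what carries the CUDA compiler tools that the runtime/base tags omit:

```dockerfile
# The -devel image includes the CUDA toolchain (ptxas among it);
# the -runtime/-base variants do not.
FROM nvidia/cuda:11.7.0-devel-ubuntu18.04

# Fail the build early if ptxas is missing (XLA/TF shells out to it)
RUN which ptxas
```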

I finally got it working with my 1050 Ti (a very weak GPU) -- but my quick test still ran about 5x faster than on CPU.

yueyin85 commented 2 years ago

@jsphweid How long does it take on the 1050 Ti? Recently it has taken nearly 20 minutes on Google Colab (GPU mode). What I'm measuring here is the time to process the next step after uploading the file. In the past it was at most 5 minutes. I don't know if it's because of Google Colab or something else?

jsphweid commented 2 years ago

@yueyin85 ya, I don't know why Colab would take that long considering the cards it uses (make sure GPU/TPU is selected?). For the 1050 Ti, it takes at least a few minutes per file. I recently ran 4000 files through it and it ran for a week straight, getting most of the way through, before I upgraded to a 3090.

Also, I put together this Docker image that runs mt3 with an NVIDIA GPU via a little Flask server: https://github.com/jsphweid/mt3-docker

The 3090 takes less than a minute for pure inference (calls to the model/GPU) most of the time. Pre/post-processing takes under a second. It only really handles one request at a time though. There's probably lots that could be done. Still much easier for running a batch of files compared to Colab though.