Tetsujinfr opened 4 years ago
Thanks for this. I already managed to install librosa from source on the Xavier NX after multiple difficulties (I had to install z3 from source to build LLVM 9.0 from source, which was needed to pip3 install llvmlite, what a pain!). Finally, pip3 installed librosa with no errors.
Now I am stuck with the inference notebook error "can not find tf.contrib module". This seems to be because recent TensorFlow versions (2.x) removed the contrib module, so I am struggling with mismatched library versions. I will try to patch some of the .py files and hopefully it will work. But basically, getting Tacotron2 to work in 2020 on a brand-new Jetson Xavier NX with JetPack 4.4 is not a piece of cake...
https://github.com/NVIDIA/tacotron2/blob/master/requirements.txt
You need TF 1.x.
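For reference, a minimal sketch of pinning TF to the 1.x line (the exact version constraint is an assumption; 1.15 was the last 1.x release, and `tf.contrib` only exists on 1.x):

```shell
# Tacotron2's requirements expect TensorFlow 1.x, where tf.contrib still exists.
pip3 install 'tensorflow>=1.13,<2.0'

# Sanity check: this import only succeeds on a 1.x install
python3 -c "import tensorflow as tf; print(tf.__version__); import tensorflow.contrib"
```

Note that on a Jetson you would normally use NVIDIA's prebuilt TF wheel for JetPack rather than the generic PyPI package.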
I had a similar problem trying to get numba installed.
I finally built TBB from source, got rid of the "TBB too old" error, and could finally pip3 install numba. Here is the solution: https://stackoverflow.com/questions/10726537/how-to-install-tbb-from-source-on-linux-and-make-it-work
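A rough sketch of that approach, assuming the classic Makefile-based TBB (repository URL, job count, and build-directory glob are assumptions; the linked Stack Overflow answer has the details):

```shell
# Build classic (pre-oneTBB) Intel TBB from source
git clone https://github.com/oneapi-src/oneTBB.git tbb
cd tbb
make -j4

# Export the freshly built libraries so numba's build can find a recent TBB
source build/linux_*_release/tbbvars.sh

pip3 install numba
```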
Thanks, Patrick, for sharing your solution, that is really great.
Did you manage to get the Tacotron2 inference notebook to work?
I am stuck with an error message at the WaveGlow cell stage where it says:
inverse: MAGMA library not found in compilation. Please rebuild with MAGMA.
So I did compile MAGMA from source (took me a while) and installed it as a standalone library, then tried again, but I get the same error message. I suspect that the PyTorch wheel from NVIDIA (v1.5) that I use on the Jetson has not been compiled with MAGMA, and I would have to build PyTorch from source against it. Did you have similar difficulties, or did it just work fine for you?
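For what it's worth, you can check at runtime whether the installed PyTorch wheel was built with MAGMA before recompiling anything; `torch.cuda.has_magma` reports this. A minimal sketch (the import guard is only there so it degrades gracefully when torch is absent):

```python
import importlib.util

# Check whether the installed PyTorch wheel was compiled against MAGMA.
if importlib.util.find_spec("torch") is None:
    print("torch not installed")
else:
    import torch
    # has_magma is False on wheels built without MAGMA support
    print("MAGMA support:", torch.cuda.has_magma)
    # torch.__config__.show() also prints the full build configuration
```

If this prints False, the wheel itself lacks MAGMA and installing the library separately will not help; PyTorch would need to be rebuilt against it.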
Hello, I just needed to get numba working.
@Tetsujinfr, I have the same issue. Did you manage to make it work?
I have tried a lot of things since then, but I do not think I ever got Tacotron2 to work. The model is just too memory-intensive anyway, so I abandoned it. Those models seem to be designed for desktop GPUs. I find it surprisingly difficult to get a text-to-speech model to work on the Jetson NX. I remember I moved to FastSpeech-type models: very memory-intensive too, and still quite slow at inference, but they did work somehow. FYI, I am using a 4 GB swap file on an SSD, on top of the 8 GB of RAM on the NX.
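A swap file like the one mentioned above can be set up with standard Linux commands (the mount path and size here are assumptions, adjust to your SSD):

```shell
# Create a 4 GB swap file on the SSD
sudo fallocate -l 4G /mnt/ssd/swapfile
sudo chmod 600 /mnt/ssd/swapfile
sudo mkswap /mnt/ssd/swapfile
sudo swapon /mnt/ssd/swapfile

# Verify the swap is active
free -h
```

Add an entry to /etc/fstab if you want it to persist across reboots.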
I have been unsuccessful installing librosa on my Jetson; it just returns a bunch of errors. They seem to come from numba and TBB (which I think is an Intel library).
Can you tell me if this can work on the Jetson? Thanks.
My errors, FYI: