When following the Colab notebook, two errors arise. The first has been mentioned previously; the second relates to an import that does not appear to be used:
"AttributeError: torch._inductor.config.fx_graph_cache does not exist"
Removing that import still does not let me execute the notebook. I have also tried with a local GPU (poetry- and pip-based installs) and with CPU.
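For context, the kind of workaround I mean is a guard along these lines (just a sketch of the idea; I'm not certain it matches the fix from the previous issue):

```python
import torch._inductor.config as inductor_config

# Only set the FX graph cache flag when this torch build exposes it;
# older torch versions raise "fx_graph_cache does not exist" otherwise.
if hasattr(inductor_config, "fx_graph_cache"):
    inductor_config.fx_graph_cache = True
```

Either way, the Colab run then fails during model compilation with the traceback below: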
Using device=cuda
Loading model ...
using dtype=float16
Time to load model: 10.76 seconds
Compiling...Can take up to 2 mins.
---------------------------------------------------------------------------
BackendCompilerFailed Traceback (most recent call last)
<ipython-input-5-1ac00c833092> in <cell line: 4>()
2 from fam.llm.fast_inference import TTS
3
----> 4 tts = TTS()
50 frames
/usr/lib/python3.10/concurrent/futures/_base.py in __get_result(self)
401 if self._exception:
402 try:
--> 403 raise self._exception
404 finally:
405 # Break a reference cycle with the exception in self._exception
BackendCompilerFailed: backend='inductor' raised:
RuntimeError: Internal Triton PTX codegen error:
ptxas /tmp/compile-ptx-src-b01c43, line 70; error : Feature '.bf16' requires .target sm_80 or higher
ptxas /tmp/compile-ptx-src-b01c43, line 70; error : Feature 'cvt with .f32.bf16' requires .target sm_80 or higher
ptxas /tmp/compile-ptx-src-b01c43, line 72; error : Feature '.bf16' requires .target sm_80 or higher
ptxas /tmp/compile-ptx-src-b01c43, line 72; error : Feature 'cvt with .f32.bf16' requires .target sm_80 or higher
ptxas /tmp/compile-ptx-src-b01c43, line 74; error : Feature '.bf16' requires .target sm_80 or higher
ptxas /tmp/compile-ptx-src-b01c43, line 74; error : Feature 'cvt with .f32.bf16' requires .target sm_80 or higher
ptxas /tmp/compile-ptx-src-b01c43, line 76; error : Feature '.bf16' requires .target sm_80 or higher
ptxas /tmp/compile-ptx-src-b01c43, line 76; error : Feature 'cvt with .f32.bf16' requires .target sm_80 or higher
ptxas fatal : Ptx assembly aborted due to errors
Set TORCH_LOGS="+dynamo" and TORCHDYNAMO_VERBOSE=1 for more information
You can suppress this exception and fall back to eager by setting:
import torch._dynamo
torch._dynamo.config.suppress_errors = True
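For what it's worth, the ptxas errors above point at missing bf16 support: the '.bf16' feature needs sm_80 (Ampere) or newer, while the free Colab GPU is typically a T4 (sm_75). A quick diagnostic like the following (my own check, not part of the repo) shows the compute capability of the runtime:

```python
import torch

# bf16 Triton kernels need compute capability >= 8.0 (Ampere).
# A Colab T4 reports (7, 5), which matches the ptxas failure above.
if torch.cuda.is_available():
    major, minor = torch.cuda.get_device_capability()
    print(f"{torch.cuda.get_device_name()}: sm_{major}{minor}")
    print("bf16 supported:", torch.cuda.is_bf16_supported())
```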
I also followed the README and tried both the pip- and poetry-based installs.
Any help appreciated. I've attached my 'patch', which applies the workaround from the previous issue and removes the import: MLECO-4788-MetaVoice-model-for-speech-generation-par.txt

Thanks, Liam