Closed: BjornTheProgrammer closed this issue 1 year ago
Why are you using v0.10.0-alpha.4? Try with the latest 1.4.0.
If it still doesn't work, it's because you are using an M1 Mac, which as far as I know is not yet supported for inference. I'm not even sure training will ever be supported on Apple Silicon laptops…
Thank you, I was using an M1 MacBook, and I apologize for wasting your time. I attempted many fixes not stated here to solve the import error and apparently resolved them, but couldn't get past the final TensorFlow issue. So I have given up on running lm_optimizer.py on my Mac. Thanks again for your help.
Welcome to the 🐸STT project! We are excited to see your interest, and appreciate your support!
This repository is governed by the Contributor Covenant Code of Conduct. For more details, see the CODE_OF_CONDUCT.md file.
If you've found a bug, please provide the following information:
Describe the bug When attempting to run the optimizer Python script to determine the best alpha and beta for a custom scorer, it errors with the following output on an M1 MacBook (2020) running macOS Monterey 12.5.1.
To Reproduce Steps to reproduce the behavior:
docker pull ghcr.io/coqui-ai/stt-train:v0.10.0-alpha.4
docker run -it --name stt-test --entrypoint /bin/bash ghcr.io/coqui-ai/stt-train:v0.10.0-alpha.4
python3 generate_lm.py --input_txt /code/data/text.txt --output_dir /code/data/ --top_k 500000 --kenlm_bins /code/kenlm/build/bin/ --arpa_order 5 --max_arpa_memory "85%" --arpa_prune "0|0|1" --binary_a_bits 255 --binary_q_bits 8 --binary_type trie --discount_fallback
python3 lm_optimizer.py
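One thing worth trying on an M1 Mac (a sketch, not a confirmed fix): Docker can run x86_64 images under emulation via the `--platform` flag. It is slow, and TensorFlow may still fail, but it sometimes gets CPU-only tooling working. The `v1.4.0` tag is an assumption based on the comment above; substitute whichever release you actually use.

```shell
# Assumption: a v1.4.0 tag exists for this image; adjust as needed.
# --platform linux/amd64 forces x86_64 emulation on Apple Silicon.
docker pull --platform linux/amd64 ghcr.io/coqui-ai/stt-train:v1.4.0
docker run --platform linux/amd64 -it --name stt-test \
  --entrypoint /bin/bash ghcr.io/coqui-ai/stt-train:v1.4.0
```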
Expected behavior To receive an optimized alpha and beta for the custom scorer.
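For context on what "an optimized alpha and beta" means: lm_optimizer.py tunes the language-model weight (alpha) and word-insertion bonus (beta) by decoding a dev set and minimizing the error rate (it uses a hyperparameter search library for this, if I recall correctly). The toy grid search below only illustrates the idea; the quadratic `mock_dev_error` function is a made-up stand-in for real decoding, and its minimum at (0.9, 1.2) is chosen arbitrarily.

```python
# Illustrative sketch only, NOT the actual lm_optimizer.py implementation.
import itertools

def mock_dev_error(alpha, beta):
    # Hypothetical stand-in for "decode the dev set, return WER".
    # Minimum placed at alpha=0.9, beta=1.2 by construction.
    return (alpha - 0.9) ** 2 + (beta - 1.2) ** 2

def grid_search(alphas, betas):
    # Try every (alpha, beta) pair and keep the one with lowest error.
    return min(itertools.product(alphas, betas),
               key=lambda ab: mock_dev_error(*ab))

alphas = [round(0.1 * i, 1) for i in range(1, 21)]  # 0.1 .. 2.0
betas = [round(0.1 * i, 1) for i in range(1, 21)]   # 0.1 .. 2.0
best_alpha, best_beta = grid_search(alphas, betas)
print(best_alpha, best_beta)  # prints the grid point nearest the minimum
```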
Environment (please complete the following information):
OS Platform and Distribution (e.g., Linux Ubuntu 16.04): macOS Monterey 12.5.1
TensorFlow installed from (our builds, or upstream TensorFlow): Whichever build ships in the Docker image
TensorFlow version (use command below): Whichever version ships in the Docker image
Python version: Whichever version ships in the Docker image
Bazel version (if compiling from source): Whichever version ships in the Docker image
GCC/Compiler version (if compiling from source): Whichever version ships in the Docker image
CUDA/cuDNN version: Whichever version ships in the Docker image
GPU model and memory: No GPU being utilized
Exact command to reproduce: