Closed: omerasif-itu closed this issue 4 years ago.
Still working on that; I have not reached a good number yet.
Right. I have tested a few samples and got a few good results. I was experimenting with the Common Voice Arabic data and also with a transfer-learning approach to modeling. I wonder if I could use your current best checkpoint for further experiments; there is a checkpoint available here.
Salam everyone, @omerasif-itu what hardware config did you use to get those good results (and how much time did it take)? Thanks
@rachidio My mistake: I accidentally tested on training samples, which is why the results looked fine. I have now tested the first surah with an unseen reciter (Aziz Alili, audio files from everyayah.com) and the results are still good, with only two word errors. Setup: I used Google Colab with a GPU runtime for inference, at about 1 second of inference time on average. The Colab free tier currently provides a K80 with 12 GB RAM and a 12-hour session limit.
If you are asking about training and evaluation time, @tarekeldeeb can guide you better. And @tarekeldeeb, if you can, please take some time to release the model, checkpoints, and other details (e.g. training/evaluation time); that would be helpful.
P.S. I am just getting started with the DeepSpeech framework and deep-learning ASR.
Thank you @omerasif-itu. Indeed, I also used an Azure instance with a K80 GPU. I trained the model with run-quran.sh provided by @tarekeldeeb, after importing the Quran dataset via import-quran.sh. Unfortunately, it ran for 8 epochs (~90 minutes each) but does not seem to learn anything; you can see the step loss (train and test) in the following picture. Thanks
I did a smoke test (with only one sample: train and test on a single file):
Test on data/test/train.csv - WER: 0.600000, CER: 0.222222, loss: 24.068550
python -u DeepSpeech.py --noshow_progressbar \
--alphabet_config_path files/alphabet.txt \
--train_files data/test/train.csv \
--test_files data/test/train.csv \
--train_batch_size 1 \
--test_batch_size 1 \
--n_hidden 512 \
--epochs 100 \
--checkpoint_dir data/test/ckpt \
--scorer ''
And testing with only one reciter: AL-Husary
Test on data/quran/quran_test.csv - WER: 0.210742, CER: 0.118998, loss: 63.033562
Note: I used --learning_rate 0.001 (compared to the default of 0.0001) for faster learning, and also decreased the n_hidden units to 512.
python3 -u DeepSpeech.py \
--train_files "$COMPUTE_DATA_DIR/quran_train.csv" \
--dev_files "$COMPUTE_DATA_DIR/quran_dev.csv" \
--test_files "$COMPUTE_DATA_DIR/quran_test.csv" \
--alphabet_config_path "$COMPUTE_DATA_DIR/quran-alphabets.txt" \
--scorer "$COMPUTE_DATA_DIR/lm/quran.scorer" \
--export_dir "$COMPUTE_DATA_DIR" \
--train_batch_size 32 \
--dev_batch_size 32 \
--test_batch_size 32 \
--use_allow_growth "true" \
--noearly_stop \
--epochs 30 \
--export_language "ar" \
--n_hidden 512 \
--dropout_rate 0.5 \
--learning_rate 0.001 \
--checkpoint_dir "${COMPUTE_DATA_DIR}/checkpoints" \
--max_to_keep 2 \
"$@"
IMO, it might take a day or two to get a good training loss with at least 100 epochs and 2048 n_hidden units.
What I can suggest is to train on 1 or 2 reciters to see results in hours, and then train on all 7.
@aibrahim- can you help here please and provide a checkpoint?
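One way to act on the start-with-one-reciter suggestion (a hedged sketch, not one of the repo's scripts): subset the generated DeepSpeech-style CSV, whose columns are wav_filename, wav_filesize, transcript, to rows whose audio path contains a given reciter's name. The reciter-name-in-path layout is an assumption about how the imported files are organized.

```python
import csv
import io

def subset_by_reciter(csv_text: str, reciter: str) -> list:
    """Return only the rows whose wav path mentions the given reciter."""
    rows = csv.DictReader(io.StringIO(csv_text))
    return [r for r in rows if reciter.lower() in r["wav_filename"].lower()]

sample = """wav_filename,wav_filesize,transcript
data/quran/Husary/001001.wav,64000,bismillah
data/quran/Alafasy/001001.wav,64000,bismillah
"""
print(len(subset_by_reciter(sample, "Husary")))  # 1
```

Write the filtered rows back out with csv.DictWriter and point --train_files at the smaller file.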
@omerasif-itu how did you run that test/benchmark to reach the report: Test on data/quran/quran_test.csv - WER: 0.210742, CER: 0.118998, loss: 63.033562
Was that after training? Version 0.71? How many tests are there in your csv?
Btw, I have reached almost the same numbers before, but when I tried a new imam the numbers were much lower.
@omerasif-itu how did you run that test/benchmark to reach the report: Test on data/quran/quran_test.csv - WER: 0.210742, CER: 0.118998, loss: 63.033562
Was that after training? Version 0.71? How many tests are there in your csv?
I run this experiment with only one imam recitations (Al Husary):
Al Husary --> train.csv, dev.csv, test.csv
python3 -u DeepSpeech.py \
  --train_files "$COMPUTE_DATA_DIR/quran_train.csv" \
  --dev_files "$COMPUTE_DATA_DIR/quran_dev.csv" \
  --test_files "$COMPUTE_DATA_DIR/quran_test.csv" \
  --alphabet_config_path "$COMPUTE_DATA_DIR/quran-alphabets.txt" \
  --scorer "$COMPUTE_DATA_DIR/lm/quran.scorer" \
  --export_dir "$COMPUTE_DATA_DIR" \
  --train_batch_size 32 \
  --dev_batch_size 32 \
  --test_batch_size 32 \
  --use_allow_growth "true" \
  --noearly_stop \
  --epochs 30 \
  --export_language "ar" \
  --n_hidden 512 \
  --dropout_rate 0.5 \
  --learning_rate 0.001 \
  --checkpoint_dir "${COMPUTE_DATA_DIR}/checkpoints" \
  --max_to_keep 2
And I executed it just once (a single run on Google Colab; it ran for about an hour) with the above script. I ran it to show @rachidio some results, as he was unsure about his training run; this is not a complete training report.
Unfortunately, it was running for 8 Epochs (each ~ 90 mins), but does not seem to learn anything,
Indeed, it requires more time with this number of audio hours, as end-to-end models are data- and time-hungry.
I tested with DeepSpeech 0.7.1. I ran the experiment in Colab, so I can't check the test file count right now. But since you write the code that splits the data into train/dev/test CSVs, you can apply that ratio to the 6236 audio samples for a single reciter. For example, with a 70/20/10 split, the test set would be 6236 × 0.10 ≈ 623 audio samples.
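The split arithmetic above can be sketched as follows (a 70/20/10 split of the 6236 ayah recordings is assumed, per the message):

```python
# Rough train/dev/test sizes for one reciter's 6236 recordings, split 70/20/10.
total = 6236
test_n = int(total * 0.10)        # floor of 623.6 -> 623
dev_n = int(total * 0.20)         # 1247
train_n = total - dev_n - test_n  # remainder goes to training
print(train_n, dev_n, test_n)     # 4366 1247 623
```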
I see several problems. Make sure the scorer is LFS-downloaded. 8 epochs are enough to see a lot of learning; the loss should have reached below 40.
IMO, It might take a day or two to get good training loss with at least 100 epochs and 2048 n_hidden units.
What I can suggest is to train on 1 or 2 reciter to see results in hours and then train on all 7.
I totally agree on those parameters. The only reason I changed them was to show @rachidio that the setup is working correctly and to quickly see that the DeepSpeech model is learning something at least. Otherwise, your run-quran.sh script is totally fine.
@rachidio Are you able to resolve the training/loss issue?
Please check my latest model, trained with a mix between professional reciters and Tarteel amateur recitations.
Steps:
1. Generate CSV files using bin/import_quran.py & bin/import_quran_tusers.py
2. Run the run-quran.sh script.
Final WER:
Test on data/quran/tusers/quran_test.csv - WER: 0.099118, CER: 0.065586, loss: 39.312599
I only used 30 epochs, as in the script, but I believe there's room for improvement, since the loss was still getting better up to the last epoch.
@aibrahim- Thank you for sharing model and results. Will definitely check it out. :+1:
@omerasif-itu thank you for the help. I realized that it needs more time to reach a low loss; I will consider switching to a machine with at least two K80s to be able to run the training with the full dataset and n_hidden: 2048. @aibrahim- thank you for sharing the model; I tried it with my own recitation and it is doing well. Can you please share your setup info (GPU, RAM)?
I'm using my own machine for this, Nvidia GeForce GTX 1070 8GB and 16 GB RAM. Training with the data described above (using the default parameters for the scripts) takes about 14 minutes / epoch.
I will be sharing another model tomorrow trained with more data and more epochs.
Salam @tarekeldeeb
Make sure the scorer is LFS downloaded. 8 epochs are enough to see lots of learning, loss should have reached below 40.
When you say to make sure the scorer quran.scorer is LFS-downloaded, how do you check that after everything is installed:
apt-get update && apt-get install -y git-lfs
git lfs install
git clone https://github.com/tarekeldeeb/DeepSpeech-Quran.git
Because, when I run this command inside DeepSpeech directory:
git lfs ls-files
I get only the original scorer from the Mozilla team:
d0cf926ab9 - data/lm/kenlm.scorer
Try using git lfs pull to download the LFS objects/files.
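One way to tell whether a file was actually pulled or is still an LFS pointer (a small sketch; the helper name is mine, and the quran.scorer path follows the listing later in this thread): a pointer is a tiny text file whose first line is the LFS spec header.

```shell
# Returns success if the file is still an un-pulled Git LFS pointer.
is_lfs_pointer() {
  head -n 1 "$1" 2>/dev/null | grep -q '^version https://git-lfs.github.com/spec/v1'
}

if is_lfs_pointer data/quran/lm/quran.scorer; then
  echo "quran.scorer is still a pointer -- run: git lfs pull"
else
  echo "quran.scorer looks like real content (or is missing)"
fi
```

A real scorer is a multi-megabyte binary; a pointer is only a few lines of text, which is also why the "version https://git-lfs.github.com/spec/v1 oid sha256:... size ..." content shows up when the pull did not happen.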
Actually, I have the scorer, but apparently it is not tracked/downloaded by git-lfs:
/DeepSpeech/data/quran/lm# ls
lm.binary quran.scorer vocab-500000.txt
@omerasif-itu does git lfs ls-files list the quran scorer in your setup?
Under my setup:
$ git lfs ls-files
d0cf926ab9 * data/lm/kenlm.scorer
It seems the Quran language model files are not being tracked under git LFS at the moment. I am guessing that might not be necessary, as the scorer is only 1.7 MB in size.
$ du -sh data/quran/lm/*
732K data/quran/lm/lm.binary
1.7M data/quran/lm/quran.scorer
388K data/quran/lm/vocab-500000.txt
Meanwhile, data/lm has the large scorer file:
$ du -sh data/lm/*
4.0K data/lm/README.rst
8.0K data/lm/generate_lm.py
8.0K data/lm/generate_package.py
910M data/lm/kenlm.scorer
Yes, data/quran/lm/* is the one I used for training.
For instructions on how to reproduce these files, check the commands.txt
I've finished training an "imam" model with all reciters imported by the import_quran.sh script. It reached WER: 0.051800, CER: 0.040469, loss: 39.811401, although it does not perform very well on new non-professional data (my own recitation).
@aibrahim- I noticed that the models you shared lately are ~48 MB, while the model I generated is at least 4x that, at ~189 MB. Have you done some post-training compression/quantization of the model? What n_hidden did you use for training?
@rachidio I haven't done any post compression. I used 1024 n_hidden.
For better performance on new non-professional data, try using the bin/import_quran_tusers.py script with the imam model, and then train a new model on both datasets. Did you use parameters different from the ones in run-quran.sh?
Ok, in my case I used 2048 n_hidden and all the default parameters from run-quran.sh.
I think I will try to train a new model with 1024 n_hidden, as it yields great scores with less training time (min/epoch) and a smaller model. Thank you @aibrahim-
Here are my results:
Platform: Google Colab
Parameters:
--train_batch_size 96 \
--dev_batch_size 96 \
--test_batch_size 96 \
--use_allow_growth "true" \
--noearly_stop \
--epochs 30 \
--n_hidden 1024 \
--dropout_rate 0.5 \
--learning_rate 0.0001
Result:
WER: 0.056551, CER: 0.039540, loss: 24.844383
Assalam Alaikum @tarekeldeeb, I came across your work and tried to clone it and get the data (recordings and CSVs), but I can't download them via the links: every data file contains only a pointer such as "version https://git-lfs.github.com/spec/v1 oid sha256:f0cfe426c0853af7606dce51caa05196ae0d379831ba6b7d235fb1d32a16d997 size 3789260". I have already installed Git LFS. Please, can you tell me in clear steps how I can get the data? Thank you.
Assalam Alaikum @Modanisa-deep: Please take a look at this script in Google Colab for downloading the data [bin/import_quran.sh]. You can run it within Google Colab and then try it on your local machine. For training, you can use the bin/run-quran.sh script under the bin directory.
Thank you @omerasif-itu, I got it. Now I am looking at the workflow picture posted here: https://github.com/tarekeldeeb/DeepSpeech-Quran/issues/4 and I have a question: what is the meaning and role of the "filter user recordings" stage? Can you explain it clearly, please?
@tarekeldeeb @aibrahim- can better guide you on that. What I understand from the flow is: the irrelevant (noisy, wrong, unrelated) audio files from the Tarteel users dataset are filtered out, so as to have clean, good data for training.
Explore the following method in bin/import_quran_tusers.py:
def _eval_audio(location): ...
the irrelevant (noisy, wrong, unrelated) audio files
And it is done by looking at the difference between the Noble Quran text and the transcriptions of the tusers' recitations.
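That filtering idea can be sketched roughly like this (an illustrative stand-in for the repo's _eval_audio, not its actual implementation; the 0.8 threshold and the similarity measure are assumptions):

```python
import difflib

def keep_recording(reference: str, transcript: str, threshold: float = 0.8) -> bool:
    """Keep a user recording only if its transcript is close enough to the
    reference Quran text; noisy or unrelated audio yields a low similarity."""
    ratio = difflib.SequenceMatcher(None, reference, transcript).ratio()
    return ratio >= threshold

# A clean recitation matches the reference well; unrelated speech does not.
print(keep_recording("bismillah alrahman alraheem", "bismillah alrahman alraheem"))  # True
print(keep_recording("bismillah alrahman alraheem", "hello world"))                  # False
```

In practice a word-level WER threshold against an ASR transcription would serve the same purpose.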
@omerasif-itu Oh thank you, this makes some sense now. I will ask @tarekeldeeb for further clarification.
@omerasif-itu and @tarekeldeeb can you provide me with the WER and CER values you got for the latest versions of your models, both output_graph.pb and output_graph_imams_tusers.pb, and also the specifications of the server used?
Result for the combined dataset (imam recitations + filtered tusers data):
Final WER:
Test on data/quran/tusers/quran_test.csv - WER: 0.099118, CER: 0.065586, loss: 39.312599
Result for training on imam recitations only:
Result:
WER: 0.056551, CER: 0.039540, loss: 24.844383
Please note that results may vary in each case due to using different hyper-parameters.
what is the specification of the server that used?
You need at least a good graphics card (GPU) to train at a reasonable speed. A CPU might work, but training would be extremely slow. Since you mentioned a server: if you have a simple CPU-based server, you can try to train the model, but it will still be slow (I have tried training on an Intel Xeon based server and it was very slow compared to a GPU environment); if you have an external GPU attached to it, go for it. If you don't have a local GPU environment readily available, you can train your model using Google Colab or Kaggle Notebooks; that way you can carry out experiments quickly. Note that these platforms have usage limits: on Google Colab you can run code for only 12 hours at a time, and Kaggle provides 30 hours of GPU use per week.
I'm using my own machine for this, Nvidia GeForce GTX 1070 8GB and 16 GB RAM.
Here is one setup @aibrahim- used for training. Please read this thread from beginning so you may find useful hints for your work. Regards.
@omerasif-itu ok, is your GPU memory 16 GB? I have my own workstation with a TITAN Xp GPU (12 GB GPU memory and a 1 TB SSD disk), so it's quite good. "Please note that results may vary in each case due to using different hyper-parameters": can you provide me with the hyper-parameters for the two cases, please?
ok your GPU memory is 16 GB?
Correction: This is not my setup. Please see this reply: (https://github.com/tarekeldeeb/DeepSpeech-Quran/issues/6#issuecomment-639159802)
can you provide me with two cases hyper-parameters, please?
Please take a look at: case 1, case 2, the default hyper-parameters provided by Tarek, "Getting started with Mozilla DeepSpeech Training", and "DeepSpeech Hyper-Parameters".
Please read this thread from beginning so you may find useful hints for your work. Regards.
Correction: This is not my setup. Please see this reply: (#6 (comment))
Ok, I know it's @aibrahim-'s setup. Thank you very much @omerasif-itu.
Here are my results:
Platform: Google Colab
Parameters:
--train_batch_size 96 \
--dev_batch_size 96 \
--test_batch_size 96 \
--use_allow_growth "true" \
--noearly_stop \
--epochs 30 \
--n_hidden 1024 \
--dropout_rate 0.5 \
--learning_rate 0.0001
Result:
WER: 0.056551, CER: 0.039540, loss: 24.844383
Salam, Can you please share your Colab notebook for training? Thanks !
I train on a local machine.
Salam, Can you please share your Colab notebook for training? Thanks !
You can look at script mentioned here. (Note that It is written almost 10 Months ago, so some commands/dependencies might break.)
Assalam alaikum, I want to ask about the Tarteel data: when I run the script and download the Tarteel recordings, are those recordings clean and their CSV file correct, so that I can use them directly? If they are not clean, how do I get a correct, clean version?
@aibrahim- @tarekeldeeb @omerasif57 Is this the whole process?
1. git clone https://github.com/tarekeldeeb/DeepSpeech-Quran.git
2. git lfs pull
3. Run bin/import_quran.py & bin/import_quran_tusers.py
4. Run bin/run-quran.sh
@aibrahim- @tarekeldeeb @omerasif57 assalam alaikum, I need the checkpoint folder that was generated for the imam+tusers model. Please, can you upload it or point me to its location? Thanks.
I have updated the repo with a link to: https://drive.google.com/drive/folders/1Uzcljj1yPin9QPuNTxliOSu8haHP9Xb2?usp=sharing
Regards,
-- Tarek Eldeeb | طارق الديب https://www.linkedin.com/in/tarekeldeeb Sr. SW R&D Manager | FPGA Expert
Thanks, but are these all the files?
Best Regards, Suhad
Yes
thanks a lot
Best Regards, Suhad
Hi @tarekeldeeb, Can you please provide best-dev checkpoints for fine-tuning the model?