anasessam12 opened this issue 1 year ago
Here is the .yml file I used:
```yaml
langchar: ""
character: "0123456789!\"#$%&'()*+,-./:;<=>?@[\]^`{|}~ abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ\u0660\
  \u0661\u0662\u0663\u0664\u0665\u0666\u0667\u0668\u0669\xAB\xBB\u061F\u060C\u061B\
  \u0621\u0622\u0623\u0624\u0625\u0626\u0627\u0627\u064B\u0628\u0629\u062A\u062B\u062C\
  \u062D\u062E\u062F\u0630\u0631\u0632\u0633\u0634\u0635\u0636\u0637\u0638\u0639\u063A\
  \u0641\u0642\u0643\u0644\u0645\u0646\u0647\u0648\u0649\u064A\u064B\u064C\u064D\u064E\
  \u064F\u0650\u0651\u0652\u0653\u0654\u0670\u0671\u0679\u067E\u0686\u0688\u0691\u0698\
  \u06A9\u06AD\u06AF\u06BA\u06BE\u06C0\u06C1\u06C2\u06C3\u06C6\u06C7\u06C8\u06CB\u06CC\
  \u06D0\u06D2\u06D3\u06D5"
experiment_name: 'arabic'
train_data: 'all_data/en_train_filtered'
valid_data: 'all_data/en_val'
manualSeed: 1111
workers: 6
batch_size: 2
num_iter: 50000
valInterval: 1000
saved_model: 'saved_models/arabic.pth'
FT: False
optim: False # default is Adadelta
lr: 0.1
beta1: 0.9
rho: 0.95
eps: 0.00000001
grad_clip: 5
```

Data processing

```yaml
select_data: 'en_train_filtered' # this is dataset folder in train_data
batch_ratio: '1'
total_data_usage_ratio: 1.0
batch_max_length: 34
imgH: 64
imgW: 600
rgb: False
contrast_adjust: False
sensitive: True
PAD: True
contrast_adjust: 0.0
data_filtering_off: False
```

Model Architecture

```yaml
Transformation: 'None'
FeatureExtraction: 'ResNet'
SequenceModeling: 'BiLSTM'
Prediction: 'CTC'
num_fiducial: 20
input_channel: 1
output_channel: 512
hidden_size: 512
decode: 'greedy'
new_prediction: True
freeze_FeatureFxtraction: False
freeze_SequenceModeling: False
```
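One thing worth checking in the `character` value above: `\u0627` appears twice in a row, and `\u064B` appears in two different rows, so the alphabet contains duplicate entries. Since each entry of `character` becomes one output class of the CTC head, duplicates silently inflate the alphabet. A minimal stdlib-only sketch (the string is reproduced below as a Python literal, with the YAML line continuations removed) confirms this:

```python
# The `character` string from the config above, rewritten as a Python literal.
character = (
    "0123456789!\"#$%&'()*+,-./:;<=>?@[\\]^`{|}~ "
    "abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ"
    "\u0660\u0661\u0662\u0663\u0664\u0665\u0666\u0667\u0668\u0669"
    "\u00AB\u00BB\u061F\u060C\u061B"
    "\u0621\u0622\u0623\u0624\u0625\u0626\u0627\u0627\u064B\u0628\u0629\u062A\u062B\u062C"
    "\u062D\u062E\u062F\u0630\u0631\u0632\u0633\u0634\u0635\u0636\u0637\u0638\u0639\u063A"
    "\u0641\u0642\u0643\u0644\u0645\u0646\u0647\u0648\u0649\u064A\u064B\u064C\u064D\u064E"
    "\u064F\u0650\u0651\u0652\u0653\u0654\u0670\u0671\u0679\u067E\u0686\u0688\u0691\u0698"
    "\u06A9\u06AD\u06AF\u06BA\u06BE\u06C0\u06C1\u06C2\u06C3\u06C6\u06C7\u06C8\u06CB\u06CC"
    "\u06D0\u06D2\u06D3\u06D5"
)

# Any code point occurring more than once is a duplicate CTC class.
dupes = sorted({c for c in character if character.count(c) > 1})
print("alphabet size:", len(character))
print("duplicated code points:", [f"U+{ord(c):04X}" for c in dupes])
```

Deduplicating the string (e.g. `"".join(dict.fromkeys(character))`) before training keeps the prediction head consistent with the intended alphabet.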
Sample of dataset:
What dataset are you using?