Closed: BoneGoat closed this issue 4 years ago.
@BoneGoat I think you should increase your n_examples a lot. n_examples is essentially the amount of data the model is trained on, and 10000 examples is far too few for a new language. Similarly, you should also increase n_examples for your validation data (vx, vy).
Also, I suggest keeping epochs=15; 2 is far too few.
Thank you for the quick response! I have added a couple of zeros to n_examples and bumped epochs to 15. One epoch will now take around 8 hours. I have tensorflow-gpu installed, but the training isn't using the GPU. Is there a way to utilise the GPU for faster training?
That is odd. If tensorflow-gpu is available, it should use the GPU for training. Make sure your TensorFlow installation can actually see the GPU (https://stackoverflow.com/questions/38559755/how-to-get-current-available-gpus-in-tensorflow, https://www.tensorflow.org/api_docs/python/tf/config/list_physical_devices).
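As a quick check (assuming TensorFlow 2.1 or newer; on older versions tf.config.experimental.list_physical_devices or tf.test.is_gpu_available serve the same purpose), something like this should list at least one GPU device before you start training:

```python
import tensorflow as tf

# An empty list here means TensorFlow cannot see the GPU,
# and training will silently fall back to the CPU.
print("GPUs visible to TensorFlow:", tf.config.list_physical_devices('GPU'))
```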
You might also want to increase the batch size by a lot.
My setup was broken so it wasn't using the GPU. Thanks for your help!
**Describe the bug and error messages (if any)**
I trained DeepSegment on 1 GB of custom data in Swedish. Training completed successfully, but when I run inference the model does not segment the text.
**The code snippet which gave this error**
cc.sv.100.vec is the Facebook fastText Swedish vector file (300 dimensions) reduced to 100 dimensions.
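The report does not show how the 100-dimension file was produced. For context, here is a minimal sketch of one way to build a cc.sv.100.vec from the official 300-dimension Swedish model with the fasttext Python package's reduce_model utility; the file names and the choice of reduce_model are assumptions, not necessarily the steps used here.

```python
import fasttext
import fasttext.util

# Assumes the official Swedish model cc.sv.300.bin is already downloaded
# (e.g. via fasttext.util.download_model('sv', if_exists='ignore')).
ft = fasttext.load_model('cc.sv.300.bin')

# Reduce the embedding dimension from 300 to 100 in place.
fasttext.util.reduce_model(ft, 100)

# Dump the reduced vectors in the plain-text .vec format.
words = ft.get_words()
with open('cc.sv.100.vec', 'w', encoding='utf-8') as out:
    out.write(f"{len(words)} {ft.get_dimension()}\n")
    for word in words:
        vector = ft.get_word_vector(word)
        out.write(word + ' ' + ' '.join(f'{value:.4f}' for value in vector) + '\n')
```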
**Specify versions of the following libraries**
**Expected behavior**
I expected DeepSegment to segment the text.
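For context, this is roughly what working segmentation looks like with the bundled English model, following the usage shown in the deepsegment README; a custom Swedish model would instead be constructed from its own trained checkpoint files, so the 'en' argument here is only illustrative:

```python
from deepsegment import DeepSegment

# Bundled English model; a custom model would be loaded from its own trained files instead.
segmenter = DeepSegment('en')

print(segmenter.segment('I am Batman i live in gotham'))
# Expected output: ['I am Batman', 'i live in gotham']
```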
**Screenshots**
Nope.