Closed: jtlxlf closed this issue 1 year ago
Thanks for your attention! We train the SDT with identical training parameter settings on Chinese, English, and Japanese datasets.
Hello, when running the code you provided, CUDA memory usage seems to keep increasing during training, and it often runs out of memory partway through. Is this expected?
Hi~ Since each character consists of a different number of trajectory points, CUDA memory usage varies with the number of trajectory points in each batch of data. The released code was verified on a single RTX 3090 GPU with a batch size of 128. If you run into an out-of-memory error, you could set a smaller batch size.
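The variable footprint can be illustrated with a small sketch. This is not taken from the SDT code; the feature dimension, trajectory lengths, and function name below are assumptions for illustration only. When variable-length trajectories are padded to the longest sequence in a batch, the padded tensor size, and hence the CUDA memory for that batch, is driven by the single longest character:

```python
# Hypothetical illustration (values and names are assumed, not from the SDT repo):
# a padded batch occupies batch_size * max_len * feat_dim * dtype_bytes of memory,
# so one long trajectory inflates the whole batch.

def padded_batch_bytes(lengths, feat_dim=5, dtype_bytes=4):
    """Approximate memory of one padded float32 batch of trajectories."""
    max_len = max(lengths)  # every sequence is padded up to the longest one
    return len(lengths) * max_len * feat_dim * dtype_bytes

short_batch = [60, 80, 75, 90]   # trajectory point counts per character (assumed)
long_batch = [60, 80, 75, 300]   # same batch size, but one long character

print(padded_batch_bytes(short_batch))  # 4 * 90 * 5 * 4 = 7200 bytes
print(padded_batch_bytes(long_batch))   # 4 * 300 * 5 * 4 = 24000 bytes
```

This is why peak memory fluctuates from batch to batch even at a fixed batch size: lowering the batch size shrinks the worst-case product above.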
Hi, I was very lucky to come across your paper on handwriting font generation and am amazed at your contribution. While training with the code you provided, I have a question: are the training parameter settings for English and Japanese font generation different from those for Chinese? If they are, could you tell me the specific parameter settings for training the other languages? Thank you again.