Closed: sukun1045 closed this issue 4 years ago
Hi, the GPU I used is a GeForce RTX 2080. With an early stopping mechanism, training does not take much time (around 1-2 hours).
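For readers unfamiliar with early stopping, a minimal patience-based sketch is below. The `evaluate` callback, the epoch counts, and the parameter names are all illustrative, not taken from the repository.

```python
def train_with_early_stopping(evaluate, max_epochs=100, patience=3):
    """Stop when validation loss hasn't improved for `patience` epochs.

    `evaluate(epoch)` is a hypothetical callback that trains for one
    epoch and returns the validation loss; it stands in for whatever
    training step the actual code uses.
    """
    best_loss = float("inf")
    epochs_without_improvement = 0
    epochs_run = 0
    for epoch in range(max_epochs):
        epochs_run = epoch + 1
        val_loss = evaluate(epoch)
        if val_loss < best_loss:
            best_loss = val_loss
            epochs_without_improvement = 0
        else:
            epochs_without_improvement += 1
            if epochs_without_improvement >= patience:
                break  # no improvement for `patience` epochs: stop early
    return epochs_run, best_loss

# Example: loss improves for 3 epochs, then stalls; training stops early.
losses = [1.0, 0.9, 0.8, 0.85, 0.86, 0.87, 0.88, 0.89, 0.90, 0.91]
n_epochs, best = train_with_early_stopping(lambda e: losses[e])
print(n_epochs, best)
```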
Thanks for the reply! I have one more question: in the paper, you also run experiments on harmonic function recognition. Are you using the same Harmony Transformer to train on that multi-task problem? Is it simply a matter of using multiple fully connected layers at the end of the Transformer decoder? Is that part of the code also available?
Hi, for harmonic function recognition, I used the same Harmony Transformer with multiple FC layers at the output layer (the same approach as in my previous work, 'Functional Harmony Recognition of Symbolic Music Data with Multi-task Recurrent Neural Networks'). You can refer to the code at https://github.com/Tsung-Ping/functional-harmony
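To illustrate the idea of multiple FC heads over a shared decoder output, here is a minimal NumPy sketch. The dimensions and task names (`key`, `degree`, `quality`) are hypothetical placeholders, not the labels actually used in the paper or repository.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sizes: decoder feature dimension, number of time frames,
# and per-task class counts (illustrative only).
d_model = 128
n_frames = 16
task_sizes = {"key": 24, "degree": 7, "quality": 10}

# Shared Transformer-decoder output: one d_model vector per frame.
decoder_out = rng.standard_normal((n_frames, d_model))

# One fully connected (linear) head per task on top of the shared features.
heads = {task: (rng.standard_normal((d_model, n)) * 0.01, np.zeros(n))
         for task, n in task_sizes.items()}

def softmax(x):
    e = np.exp(x - x.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

# Each head maps the shared features to per-frame class probabilities
# for its own task; the losses would be summed during training.
predictions = {task: softmax(decoder_out @ W + b)
               for task, (W, b) in heads.items()}

for task, p in predictions.items():
    print(task, p.shape)
```

The design point is that all tasks share the decoder's representation and differ only in their final linear projections, so the heads add very few parameters.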
Thank you!
Hello, really interesting work! I am curious which GPUs you used to train the Harmony Transformer, and how long training takes. Thank you.