Hello,
Congratulations on the successful development of the SEED model! I am impressed by its capabilities and would like to reproduce it locally. However, I have run into a confusing issue. The config for training the SEED tokenizer's codebook specifies up to 500 epochs over 500M samples. Is this the intended configuration for codebook training? Running it as specified would require a very large number of GPU hours. I would be grateful if you could clarify this or offer some advice. Thanks for your generous help.