Cloud65000 opened this issue 4 months ago
Since our in-house images are all labeled in Chinese, I'm curious how to make the model support detecting them. I tried simply translating the labels from English to Chinese, but there are ambiguities that I don't think translation alone can resolve. So I decided to change "bert-base-uncased" to "bert-base-chinese". I compared the config files of the two, and the main difference is the vocabulary size. I'm not sure whether this simple swap will work, and whether I also need to fine-tune the Chinese BERT. If fine-tuning is needed, do you have any suggestions, e.g. which dataset to use: our in-house dataset, or the 13 datasets provided by the official mmdetection? Can I treat the OVD task as a downstream task and fine-tune the Chinese BERT with a small learning rate?
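For reference, the config override I have in mind looks roughly like this. I'm assuming the standard MM Grounding DINO config layout from mmdetection; the `_base_` file name is just a placeholder and field names may differ between versions:

```python
# my_grounding_dino_chinese.py -- a minimal sketch, not a verified config.
# Assumes the MM Grounding DINO configs expose the text encoder via
# `language_model=dict(type='BertModel', name=...)`; adjust to your version.
_base_ = 'grounding_dino_swin-t_pretrain_obj365.py'  # placeholder base config

lang_model_name = 'bert-base-chinese'  # swap the HuggingFace checkpoint

model = dict(
    language_model=dict(
        name=lang_model_name,
        # Unverified assumption: the special-token list may need adjusting if
        # the Chinese tokenizer treats '.' / '?' separators differently.
        special_tokens_list=['[CLS]', '[SEP]', '.', '?'],
    ))
```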
After running through all the code provided by the official mmdetection and doing some continued pretraining as a way of fine-tuning, I noticed that the official code simply downloads the original bert-base-uncased and never tunes it, so I'm not sure what to do next.
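If treating OVD as a downstream task is the right path, I assume the way to unfreeze BERT with a small learning rate would be a per-module multiplier in the optimizer wrapper, something like the sketch below. The `custom_keys` names are my guess based on other MMDetection configs (the pretrain config appears to freeze the language model, which matches what I observed):

```python
# Sketch only: give the text encoder a small non-zero learning rate instead of
# freezing it, while keeping a reduced rate for the image backbone.
optim_wrapper = dict(
    optimizer=dict(type='AdamW', lr=1e-4, weight_decay=0.0001),
    paramwise_cfg=dict(
        custom_keys={
            'backbone': dict(lr_mult=0.1),
            # assumption: 'language_model' is the parameter-name prefix of the
            # BERT text encoder in this codebase
            'language_model': dict(lr_mult=0.01),
        }))
```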
Same question here. I want to use bert-base-chinese to train mmgd (MM Grounding DINO), but I think it would require re-pretraining on the full dataset. The problem is that the dataset mmgd uses for pretraining is labeled in English, and it's quite difficult to translate into Chinese; aligning those labels takes time. Any ideas would be welcome.