Closed — junming-yang closed this issue 6 days ago
Hi, the warning message you saw is related to a naming issue in the config file, which should be fixed now in the updated model. It won't affect the actual performance, and no additional configuration is needed.
From our experience and the benchmark results, the 8B model outperforms the 13B on some tasks, so your observation is normal in some cases. But make sure you set the correct conv_mode for the model (which should be vicuna_v1 for the 13B model).
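To make the conv_mode point concrete, here is a minimal sketch of selecting a conversation template per model name, in the style LLaVA-derived repos use. This is an illustration, not the actual Cambrian API: the helper name and the 8B mapping are assumptions; only the vicuna_v1 value for the 13B model comes from the reply above.

```python
# Illustrative sketch only -- mapping and helper name are assumptions,
# not the real Cambrian code. Only "vicuna_v1" for the 13B model is
# confirmed by the maintainer's reply above.
CONV_MODES = {
    "cambrian-13b": "vicuna_v1",  # confirmed above
    "cambrian-8b": "llama_3",     # assumption: 8B is Llama-3-based
}

def pick_conv_mode(model_path: str) -> str:
    """Return the conversation mode matching a model path (illustrative)."""
    key = model_path.lower()
    for name, mode in CONV_MODES.items():
        if name in key:
            return mode
    raise ValueError(f"no conv_mode known for {model_path!r}")
```

Passing the wrong conv_mode typically degrades outputs silently (the prompt is formatted for a different chat template), which would explain the 13B model scoring below the 8B one.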
Thanks for your response!

When I use the inference.py template to run the Cambrian-13B model, a warning message appears. I also noticed that the performance of 13B is worse than 8B. Does this mean that Cambrian-13B needs additional configuration?