lm-sys / FastChat

An open platform for training, serving, and evaluating large language models. Release repo for Vicuna and Chatbot Arena.

Please help add SeaLLM-7B-v2 with 7.54 on MT-bench on the leaderboard #3013

Open nxphi47 opened 9 months ago

nxphi47 commented 9 months ago

Hi Lmsys team,

We released SeaLLM-7B-v2 last week, a multilingual model that achieves 7.54 on the English MT-bench.

Could you please verify the results and add our model to the leaderboard?

The generation and GPT-4 rating files are available here:

https://huggingface.co/SeaLLMs/SeaLLM-7B-v2/tree/main/evaluation/mt_bench
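
For reference, the average can be recomputed from the judgment file. The sketch below is mine, not part of the release: it assumes the uploaded GPT-4 ratings follow the JSONL single-answer grading format produced by FastChat's llm_judge pipeline (one record per judgment with a numeric score field), and the local file name is only a placeholder.

# Sketch: recompute the average MT-bench score from a GPT-4 judgment file.
# Assumes FastChat llm_judge single-answer grading JSONL; the file name is a placeholder.
import json

scores = []
with open("gpt-4_single.jsonl") as f:  # hypothetical local copy of the ratings file
    for line in f:
        record = json.loads(line)
        score = record.get("score", -1)  # assumed numeric rating on a 1-10 scale
        if score >= 0:
            scores.append(score)

if scores:
    print(f"Average MT-bench score over {len(scores)} judgments: "
          f"{sum(scores) / len(scores):.2f}")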

Steps to reproduce the results:

  1. Add the conversation template to conversation.py:
# SeaLLM (ChatML-style template; </s> is used as the turn separator)
register_conv_template(
    Conversation(
        name="seallm",
        system_template="""<|im_start|>system
{system_message}""",
        system_message="""You are a helpful, intelligent and safe assistant.""",
        roles=("<|im_start|>user", "<|im_start|>assistant"),
        sep_style=SeparatorStyle.CHATML,
        sep="</s>",
        stop_token_ids=[0, 1],
    )
)
  2. Add the following adapter to model_adapter.py (a quick sanity-check sketch follows these steps):

class SeaLLMAdapter(BaseModelAdapter):
    """The model adapter for SeaLLM models (e.g. SeaLLMs/SeaLLM-7B-v2)"""

    use_fast_tokenizer = False

    def match(self, model_path: str):
        # Case-insensitive substring match on the model path.
        return "sea" in model_path.lower()

    def get_default_conv_template(self, model_path: str) -> Conversation:
        return get_conv_template("seallm")
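
As a rough sanity check (my own sketch, not an official FastChat step), the adapter can be registered alongside the existing register_model_adapter() calls in model_adapter.py, and the template can be rendered once to confirm the ChatML-style prompt looks right:

# Sketch: register the adapter and print a sample prompt.
# register_model_adapter(SeaLLMAdapter) goes in model_adapter.py next to the other
# adapter registrations; the snippet below can then be run from a Python shell.
from fastchat.conversation import get_conv_template

conv = get_conv_template("seallm")
conv.append_message(conv.roles[0], "Hello!")
conv.append_message(conv.roles[1], None)  # leave the assistant turn open for generation
print(conv.get_prompt())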
nxphi47 commented 9 months ago

@merrymercy @infwinston Please help. Let me know if you have any questions.

If you could also add it to Chatbot Arena, that would be even better. Thanks!