Open erjiaxiao opened 1 month ago
Has this been solved? I am having the same issue. Could someone provide instructions (an example fine-tuning script plus inference code) on how to finetune the llama-3.1-8b-instruct model and then evaluate with the same model? I suspect the problem is with the chat template setup.
Thanks!
Hello! I used the following demo code but got garbled inference output: ['�������� |> |> |> |> |> |> |> |> |> |> |> |> |> |> |> |> |> |> |> |> |> |> |> |> |> |>+b+b+b+b+b+b+b+b+b+b+b+b+b+b+b+b+b+b/b| blog+b+b+b+b+b+b+b+b+b+b+b/b| blog|~||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||']
Thank you!
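Output like that usually means the inference prompt was not rendered with the model's chat template, i.e. the model was fine-tuned on templated conversations but then queried with raw text. Below is a minimal sketch of hand-building a prompt in the Llama 3.1 Instruct format; the special-token strings follow Meta's published Llama 3 prompt format, and the function name is just illustrative. In real code, prefer `tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)` from Hugging Face transformers so the template always matches the checkpoint:

```python
# Hand-rolled Llama 3.1 Instruct prompt formatting (sketch).
# Special tokens per Meta's Llama 3 prompt format; in practice use
# tokenizer.apply_chat_template so the template matches the model.

def build_llama3_prompt(messages):
    """Render a list of {"role", "content"} dicts into a Llama 3.1 prompt string."""
    parts = ["<|begin_of_text|>"]
    for m in messages:
        parts.append(
            f"<|start_header_id|>{m['role']}<|end_header_id|>\n\n"
            f"{m['content']}<|eot_id|>"
        )
    # Trailing assistant header cues the model to generate its reply.
    parts.append("<|start_header_id|>assistant<|end_header_id|>\n\n")
    return "".join(parts)

prompt = build_llama3_prompt([
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "Hello!"},
])
print(prompt)
```

If fine-tuning and inference use the same template (and `<|eot_id|>` is treated as an end-of-turn/EOS token when decoding), the `|>`/garbage output typically disappears.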