alibaba / FederatedScope

An easy-to-use federated learning platform
https://www.federatedscope.io
Apache License 2.0
1.26k stars 206 forks

Some questions of evaluating LLama-7b with helm #731

Closed zhentongLi closed 7 months ago

zhentongLi commented 9 months ago

Hello! May I ask: when evaluating with helm, one of the commands has some parameters whose purpose I don't understand (see the attached screenshot). Could you explain the two steps in detail? Thank you!

qbc2016 commented 9 months ago

Hello, we explained the meaning of each parameter above (see the attached screenshot).

zhentongLi commented 9 months ago

Currently I have fine-tuned the model with the dolly-15k@llm data using the configuration file llama_modelscope.yaml, and now I want to evaluate it. I see that some steps in the README.md of the eval_for_helm package aren't quite clear about deployment via Conda. For example, the directory structure of the helm_fs package is not clear, and there may be a mistake in the README.md (see the attached screenshot). Could you give me the exact structure and any other advice about the evaluation? Thank you very much!

qbc2016 commented 9 months ago

Thank you for pointing it out; the second term should be PATH_WORKDIR=~/helm_fs/src/crfm-helm. There are two steps in the evaluation with helm. The first step is to evaluate, i.e., run the code under "Start to evaluate"; you may also append the path of your yaml file to the command as --yaml xx/xx/llama_modelscope.yaml. Note that you should change the relative path in federate.save_to to an absolute path. The second step is to view the results, described under "Launch webserver to view results". You can try it out, and feel free to ask any questions.
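For reference, the two steps above might look roughly like the following shell sketch. Only PATH_WORKDIR, the --yaml flag, and the federate.save_to note come from this thread; the commented-out evaluation command is a placeholder assumption, and the actual commands are in the eval_for_helm README.

```shell
# Corrected working directory from the reply above.
PATH_WORKDIR="$HOME/helm_fs/src/crfm-helm"
# cd "$PATH_WORKDIR"   # uncomment once helm_fs is actually deployed

# Step 1: evaluate -- run the command from the "Start to evaluate"
# section of the README, appending your config. Make sure that
# federate.save_to inside llama_modelscope.yaml is an ABSOLUTE path.
# The script name below is a placeholder, not taken from the thread:
# python <start_to_evaluate_command> --yaml /abs/path/to/llama_modelscope.yaml

# Step 2: view results -- run the command from the
# "Launch webserver to view results" section of the README,
# then open the printed URL in a browser.

echo "workdir: $PATH_WORKDIR"
```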