Open wxp16 opened 6 months ago
I used the Hugging Face pipeline to run an inference task, but found that my finetuned HuggingFaceH4/zephyr-7b-beta and the base model HuggingFaceH4/zephyr-7b-beta generate exactly the same outputs. Does anyone have any clue about this error?
Given that you passed the same model id, it is most likely downloading the official Hugging Face Hub weights and running inference with those, rather than with your finetuned weights.
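As a rough sketch of why this happens: when the `model` argument is not an existing local directory, `transformers` treats it as a Hub repo id and downloads the official weights. The helper below is a simplified illustration of that resolution rule, not the library's actual internal code, and the directory names are placeholders.

```python
import os

def resolves_to_hub(model_source: str) -> bool:
    """Simplified sketch of how transformers picks a weight source:
    a string that is not an existing local directory is treated as a
    Hub repo id, so the official weights get downloaded."""
    return not os.path.isdir(model_source)

# Passing the original repo id fetches the official weights, so a
# "finetuned" pipeline created this way is really just the base model:
assert resolves_to_hub("HuggingFaceH4/zephyr-7b-beta")

# Pointing at a directory that actually exists on disk (e.g. the
# output_dir your training run saved to) loads the local weights
# instead. os.getcwd() stands in here for any real directory:
assert not resolves_to_hub(os.getcwd())
```

So to run the finetuned model, pass the local checkpoint directory (for example the `output_dir` you gave `Trainer`) as the `model` argument of `pipeline(...)`, not the original `HuggingFaceH4/zephyr-7b-beta` id.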