HazyResearch / manifest

Prompt programming with FMs.
Apache License 2.0

How to load a model in half precision (e.g. float16) when GPU memory is limited #123

Open HiddenAlaska opened 8 months ago

HiddenAlaska commented 8 months ago

Description of the bug

I cannot load the model in half precision, and I have not figured out how to move the model between CPU and GPU.

To Reproduce

Run the gpt-j-6B model as in the demo, using the local Hugging Face backend (rough sketch of what I run is below).
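
Roughly what I do, following the README; the exact server flags (especially any half-precision switch) are my assumption and may not match the current CLI:

```python
# Server (run in a separate shell), roughly as in the README:
#   python3 -m manifest.api.app --model_type huggingface \
#       --model_name_or_path EleutherAI/gpt-j-6B --device 0
# Client side:
from manifest import Manifest

manifest = Manifest(
    client_name="huggingface",
    client_connection="http://127.0.0.1:5000",  # default local API address (assumption)
)
print(manifest.run("Why is the sky blue?"))
```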

Expected behavior

The request returns a response.

Error Logs/Screenshots

requests.exceptions.HTTPError: {'message': '"LayerNormKernelImpl" not implemented for \'Half\''}
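
For what it's worth, this message usually comes from running a half-precision model on CPU: on many PyTorch builds the CPU LayerNorm kernel has no float16 implementation. A minimal sketch (independent of manifest, with arbitrary tensor sizes) that reproduces the same error and shows that moving to a CUDA device avoids it:

```python
import torch

# fp16 LayerNorm on CPU: older PyTorch builds raise
# RuntimeError: "LayerNormKernelImpl" not implemented for 'Half'
layer = torch.nn.LayerNorm(8).half()
x = torch.randn(2, 8, dtype=torch.float16)
try:
    layer(x)
except RuntimeError as err:
    print(err)

# Putting both the module and the input on a CUDA device works,
# since GPU kernels do support float16.
if torch.cuda.is_available():
    print(layer.cuda()(x.cuda()).dtype)  # torch.float16
```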

Environment (please complete the following information)

Thanks in advance.