Closed: simonw closed this 11 months ago
An interesting wart is that a lot of these models aren't configured for instructions - instead, the JSON file here https://raw.githubusercontent.com/nomic-ai/gpt4all/main/gpt4all-chat/metadata/models.json includes suggested prompts to get them to respond to a question, e.g.
```json
{
  "order": "a",
  "md5sum": "4acc146dd43eb02845c233c29289c7c5",
  "name": "Hermes",
  "filename": "nous-hermes-13b.ggmlv3.q4_0.bin",
  "filesize": "8136777088",
  "requires": "2.4.7",
  "ramrequired": "16",
  "parameters": "13 billion",
  "quant": "q4_0",
  "type": "LLaMA",
  "description": "<strong>Best overall model</strong><br><ul><li>Instruction based<li>Gives long responses<li>Curated with 300,000 uncensored instructions<li>Trained by Nous Research<li>Cannot be used commercially</ul>",
  "url": "https://huggingface.co/TheBloke/Nous-Hermes-13B-GGML/resolve/main/nous-hermes-13b.ggmlv3.q4_0.bin",
  "promptTemplate": "### Instruction:\n%1\n### Response:\n"
}
```
I'm not yet doing anything with those, but maybe I should.
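As a rough sketch of how one of those templates could be applied, here's a minimal example that substitutes a user's prompt for the `%1` placeholder in a `promptTemplate` string. The `apply_template` helper is hypothetical, not part of llm-gpt4all or gpt4all:

```python
# Abbreviated entry matching the shape of gpt4all's models.json
entry = {
    "name": "Hermes",
    "promptTemplate": "### Instruction:\n%1\n### Response:\n",
}

def apply_template(template: str, prompt: str) -> str:
    """Substitute the user's prompt for the %1 placeholder."""
    return template.replace("%1", prompt)

full_prompt = apply_template(entry["promptTemplate"], "What is the capital of France?")
print(full_prompt)
```

This yields a prompt wrapped in the `### Instruction:` / `### Response:` framing these instruction-tuned models expect.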
Originally posted by @simonw in https://github.com/simonw/llm-gpt4all/issues/1#issuecomment-1627817469
I did this in: