From the Ollama docs, "If an empty prompt is provided [to the generate endpoint], the model will be loaded into memory." This is useful so that, for example, the user can be shown that the model is loading, and can continue to edit their prompt before a response actually starts being generated.

Technically, this could be solved by just making the `prompt` attribute of the `GenerateRequest` interface optional: https://github.com/ollama/ollama-js/blob/6e7e49649892c814bc3bec6088c924717eb4d479/src/interfaces.ts#L49

However, I think it would be sensible for there to be a dedicated function, probably of the form below.
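A minimal sketch of what I have in mind (the name `load` and the standalone-wrapper shape are just illustrative, not a settled signature):

```typescript
import { Ollama } from 'ollama'

const ollama = new Ollama()

// Proposed convenience: load a model into memory by issuing a generate
// request with an empty prompt, as described in the Ollama docs.
async function load(model: string) {
  // Only `model` matters here; no other GenerateRequest attributes
  // have any effect on a load-only request.
  return ollama.generate({ model, prompt: '' })
}

// Usage: start loading as soon as the user picks a model,
// while they are still typing their prompt.
await load('llama3')
```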
This way, it is clearer why the `prompt` argument is optional (since someone might assume it would behave the same as providing an empty string as the prompt), and it is clear that none of the other `GenerateRequest` attributes have an effect when doing this.

But, because I understand that `ollama-js` is generally supposed to be 1:1 with the actual API, I want to get some feedback on this first before I file a PR.