Closed kmichal closed 1 year ago
@kmichal All LLMs, even GPT-4, do only one task: returning the next most likely token. API calls are performed by wrapper code around the model. For example, you might analyze the user's question with another NLP model, then perform whatever actions you need (such as calling an API), and then put both the API result and the user's query into the LLM's context, so the model has more information and can return better tokens.
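A minimal sketch of that wrapper pattern, assuming a hypothetical weather API and a stand-in intent classifier (every function name here is illustrative, not from any real library); the final prompt string is what you would send to the LLM:

```python
def classify_intent(user_query: str) -> str:
    """Stand-in for a small NLP model that decides whether an API call is needed."""
    if "weather" in user_query.lower():
        return "weather"
    return "none"

def call_weather_api(city: str) -> dict:
    """Hypothetical API call; a real version would use e.g. requests.get(...)."""
    return {"city": city, "temp_c": 18, "conditions": "cloudy"}

def build_prompt(user_query: str, api_result) -> str:
    """Combine API data and the user's question into one context for the LLM."""
    context = f"API data: {api_result}\n" if api_result else ""
    return f"{context}User question: {user_query}\nAnswer using the data above."

user_query = "What's the weather in Prague?"
api_result = None
if classify_intent(user_query) == "weather":
    api_result = call_weather_api("Prague")

prompt = build_prompt(user_query, api_result)
# `prompt` is then sent to the model (e.g. via a chat-completion endpoint),
# so the LLM sees both the API data and the original question.
print(prompt)
```

The model itself never calls the API; the surrounding code does, and the model only continues the text it is given.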
Hi,
Would anyone be able to tell me whether it is possible to have this kind of model perform a task like calling an API, and how one would do it?