Senthselvi opened this issue 2 months ago
The `generate_text` and `agenerate_text` methods in the `LlamaIndexLLMWrapper` class are designed to handle synchronous and asynchronous text generation, respectively. Here is the code for these methods:
```python
def generate_text(
    self,
    prompt: PromptValue,
    n: int = 1,
    temperature: float = 1e-8,
    stop: t.Optional[t.List[str]] = None,
    callbacks: Callbacks = None,
) -> LLMResult:
    kwargs = self.check_args(n, temperature, stop, callbacks)
    li_response = self.llm.complete(prompt.to_string(), **kwargs)
    return LLMResult(generations=[[Generation(text=li_response.text)]])

async def agenerate_text(
    self,
    prompt: PromptValue,
    n: int = 1,
    temperature: t.Optional[float] = None,
    stop: t.Optional[t.List[str]] = None,
    callbacks: Callbacks = None,
) -> LLMResult:
    if temperature is None:
        temperature = 1e-8
    kwargs = self.check_args(n, temperature, stop, callbacks)
    li_response = await self.llm.acomplete(prompt.to_string(), **kwargs)
    return LLMResult(generations=[[Generation(text=li_response.text)]])
```
In the provided code snippet, the `generate_text` method synchronously generates text using the `complete` method of the `llm` object, while the `agenerate_text` method asynchronously generates text using the `acomplete` method of the `llm` object. Both methods return an `LLMResult` object containing the generated text encapsulated in `Generation` objects [1].
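
For context, here is a hypothetical usage sketch of the wrapper. It assumes ragas exports `LlamaIndexLLMWrapper` from `ragas.llms`, that a LlamaIndex LLM such as `Ollama` is installed, and that `StringPromptValue` from langchain-core is an acceptable `PromptValue`; adjust imports to your versions.

```python
# Usage sketch (assumptions: ragas + llama-index-llms-ollama installed;
# import paths may differ across versions).
from ragas.llms import LlamaIndexLLMWrapper
from langchain_core.prompt_values import StringPromptValue
from llama_index.llms.ollama import Ollama

wrapper = LlamaIndexLLMWrapper(llm=Ollama(model="llama3"))
result = wrapper.generate_text(StringPromptValue(text="Say hello"))
print(result.generations[0][0].text)  # first generation for the first prompt
```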
This is my `complete` method:

```python
def complete(self, prompt: str, **kwargs: Any) -> CompletionResponse:
    payload = {
        self.ollama.prompt_key: prompt,
        "model": self.ollama.model,
        "options": self.ollama._model_kwargs,
        "stream": False,
        **kwargs,
    }
```
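
The method above builds the payload but stops before sending the request. A minimal sketch of how it might continue, assuming `self.ollama.base_url` points at a running Ollama server and the standard `/api/generate` endpoint is used (both assumptions, not something the snippet confirms):

```python
# Sketch only: finishes the truncated complete() under the stated assumptions.
from typing import Any

import requests
from llama_index.core.llms import CompletionResponse


def complete(self, prompt: str, **kwargs: Any) -> CompletionResponse:
    payload = {
        self.ollama.prompt_key: prompt,
        "model": self.ollama.model,
        "options": self.ollama._model_kwargs,
        "stream": False,
        **kwargs,
    }
    # POST to the standard Ollama generate endpoint (assumes base_url is set).
    resp = requests.post(f"{self.ollama.base_url}/api/generate", json=payload)
    resp.raise_for_status()
    data = resp.json()
    # Ollama returns the generated text under the "response" key.
    return CompletionResponse(text=data["response"], raw=data)
```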
`LLMResult` is from langchain. I am not using langchain, so `PromptValue` fails with `NameError: name 'PromptValue' is not defined`.
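
For what it's worth, ragas declares langchain-core as a dependency, so if that package is installed the missing names can be imported directly:

```python
# These import paths follow current langchain-core conventions;
# verify them against the version you have installed.
import typing as t

from langchain_core.callbacks import Callbacks
from langchain_core.outputs import Generation, LLMResult
from langchain_core.prompt_values import PromptValue
```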
What is the code for llamaindex?

```python
def generate_text(
    self,
    prompt: PromptValue,
    n: int = 1,
    temperature: float = 1e-8,
    stop: t.Optional[t.List[str]] = None,
    callbacks: Callbacks = None,
) -> LLMResult:
    return self.get_llm_result(prompt)
```
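
Not an official answer, but if the goal is a wrapper with no langchain types at all, here is one way it could be sketched: stand-in result classes defined locally, wrapping any LlamaIndex LLM that exposes `.complete()`. All names below are hypothetical, not part of ragas or LlamaIndex.

```python
import typing as t
from dataclasses import dataclass


# Local stand-ins for langchain's Generation/LLMResult.
@dataclass
class Generation:
    text: str


@dataclass
class LLMResult:
    # Outer list: one entry per prompt; inner list: the n completions.
    generations: t.List[t.List[Generation]]


class SimpleLlamaIndexWrapper:
    """Hypothetical wrapper around any LlamaIndex LLM exposing .complete()."""

    def __init__(self, llm) -> None:
        self.llm = llm

    def generate_text(self, prompt: str, n: int = 1) -> LLMResult:
        # Call the LlamaIndex LLM n times and wrap each completion.
        generations = [
            Generation(text=self.llm.complete(prompt).text) for _ in range(n)
        ]
        return LLMResult(generations=[generations])
```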