response = client.complete(
    prompt="Answer question Q. ",
    context="Q: What is the currency in Myanmar?",
    openai_key=settings.openai_api_key,
    temperature=0.0,
    data_synthesis=True,
    finetune=True,
)
This call raises a TypeError, because the `temperature` keyword argument cannot be processed:
  File "/opt/conda/lib/python3.10/threading.py", line 1016, in _bootstrap_inner
    self.run()
  File "/opt/conda/lib/python3.10/threading.py", line 953, in run
    self._target(*self._args, **self._kwargs)
  File "/opt/conda/lib/python3.10/site-packages/llm_vm/completion/optimize.py", line 43, in new_thread
    t[0] = foo()
  File "/opt/conda/lib/python3.10/site-packages/llm_vm/completion/optimize.py", line 254, in promiseCompletion
    best_completion = self.call_big(prompt, **kwargs)
  File "/opt/conda/lib/python3.10/site-packages/llm_vm/client.py", line 100, in CALL_BIG
    return self.teacher.generate(prompt, max_len,**kwargs)
TypeError: BaseOnsiteLLM.generate() got an unexpected keyword argument 'temperature'
The same error occurs when Bloom, for example, is the big model: the `temperature` parameter cannot be processed.
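The traceback comes down to an ordinary Python keyword-argument check: `CALL_BIG` forwards `**kwargs` straight into `BaseOnsiteLLM.generate()`, whose signature does not accept `temperature`. A minimal, self-contained sketch (the class and `call_big` below are simplified stand-ins, not the library's actual code) reproduces the failure and shows one possible defensive fix: filtering the forwarded keywords against the target method's signature.

```python
import inspect

class BaseOnsiteLLM:
    # Hypothetical stand-in for llm_vm's BaseOnsiteLLM: generate() takes no
    # sampling parameters, so any extra keyword raises a TypeError.
    def generate(self, prompt, max_len=256):
        return prompt[:max_len]

def call_big(model, prompt, **kwargs):
    # Defensive pattern: forward only the keywords generate() actually
    # accepts, so unsupported options like `temperature` are dropped
    # instead of crashing the worker thread.
    accepted = inspect.signature(model.generate).parameters
    safe_kwargs = {k: v for k, v in kwargs.items() if k in accepted}
    return model.generate(prompt, **safe_kwargs)

model = BaseOnsiteLLM()

# Passing temperature directly reproduces the reported error:
try:
    model.generate("What is the currency in Myanmar?", temperature=0.0)
except TypeError as e:
    print(e)  # e.g. "... got an unexpected keyword argument 'temperature'"

# The filtered call succeeds:
print(call_big(model, "What is the currency in Myanmar?", temperature=0.0))
```

Silently dropping unknown keywords hides typos, so a warning on each discarded key would arguably be friendlier; the sketch keeps the filter minimal to isolate the mechanism behind the traceback.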