chris-chatsi opened this issue 2 months ago (status: Open)
Hi @chris-chatsi, thanks for your feedback.
The KeyError occurred because the API did not return logits. Can you test your MaaS model with the following code to check whether you can print the logits of the prompt? You only need to replace the values of your_api_endpoint and your_api_key.
```python
import json
import urllib.request

import numpy as np

your_api_endpoint = ''
your_api_key = ''

def test_custom_connection(api_url, api_key, prompt='hello hello hello world.'):
    # Ask the completion endpoint to echo the prompt back with per-token logprobs.
    data = {
        "prompt": prompt,
        "temperature": 0,
        "max_tokens": 0,
        "echo": True,
        "logprobs": 0
    }
    body = json.dumps(data).encode()
    req = urllib.request.Request(
        api_url,
        body,
        {'Content-Type': 'application/json', 'Authorization': 'Bearer ' + api_key},
    )
    res = urllib.request.urlopen(req).read()
    res = json.loads(res)
    logits = -np.array(res['choices'][0]['logprobs']['token_logprobs'])
    print(logits.shape)
    return logits

print(test_custom_connection(your_api_endpoint, your_api_key))
```
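If the endpoint responds but omits the logprobs field, the indexing in the snippet above raises exactly the kind of bare KeyError seen in the report. Here is a minimal sketch of a defensive parser that surfaces a clearer error instead; the function name and error messages are my own illustration, not part of Promptflow or the Azure API:

```python
def extract_prompt_logprobs(response_json):
    """Pull per-token log-probabilities out of an OpenAI-style completion
    response, raising a descriptive error instead of a bare KeyError."""
    choices = response_json.get("choices")
    if not choices:
        raise ValueError("API response contains no 'choices'")
    logprobs = choices[0].get("logprobs") or {}
    token_logprobs = logprobs.get("token_logprobs")
    if token_logprobs is None:
        raise ValueError(
            "API response has no 'logprobs.token_logprobs'; the endpoint "
            "may not support echo/logprobs for this deployment"
        )
    # Negate, matching the sign convention of the test snippet above.
    return [-lp for lp in token_logprobs]

# Mocked response shaped like a successful completion with logprobs:
mock = {"choices": [{"logprobs": {"token_logprobs": [-0.5, -1.2, -0.1]}}]}
print(extract_prompt_logprobs(mock))  # [0.5, 1.2, 0.1]
```

Wrapping the response parsing this way makes it immediately obvious whether the failure is in the deployment (no logprobs support) or in the tool code.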
Describe the bug
I installed the custom tool into Azure Promptflow. I am using a llama-7b-text-generation MaaS running on Azure.
When testing my Promptflow, the first problem was that the torch library was not installed in the runtime environment. Once I installed it, I received an error: Run failed: KeyError: 0. I grabbed the requirements from the example found in the Promptflow GitHub but still had no luck.
For now I am going to try to use the …. Please let me know any other information I can provide.
Steps to reproduce
Expected Behavior
Receive an error with the traceback.
Logs
Additional Information
No response