Closed: ulan-yisaev closed this issue 6 months ago.
Thanks @ulan-yisaev. Do you want to contribute the change in a PR?
Hi @saratvemulapalli , Sure thing, I'll be happy to contribute.
Hi @ulan-yisaev, I see you have used embedding models beyond Cohere, Bedrock, and OpenAI. I have been trying something similar for a Hugging Face text generation model with a post_process_function, but I have not been able to get it working. Any idea how to achieve this post_process_function?
I have this:

[
  {
    "generated_text": "Your Generated text"
  }
]

and I want to convert it to the following using a post_process_function:

{
  "completion": "Your Generated text"
}
I have tried this so far:
"post_process_function": "\n def json = \"{\" +\n \"\\\"completion\\\":\\\"\" + params['response'][0].generated_text + \"\\\" }\";\n return json;\n "
Also pinging @saratvemulapalli in case you have any ideas on this.
Hi @ramda1234786, please note that I haven't tested generation models, as my work primarily focuses on embedding models. But I suppose you could try the following function:
"post_process_function": "\n def generatedText = params.response[0].generated_text;\n def json = \"{\\\"completion\\\":\\\"\" + generatedText + \"\\\"}\";\n return json;\n"
Thanks for your response @ulan-yisaev. I tried the suggestion, but no luck.
This is what I get from the Predict API without a post_process_function:
{
  "inference_results": [
    {
      "output": [
        {
          "name": "response",
          "dataAsMap": {
            "response": [
              {
                "generated_text": "The title is Rush, year is 2013, budget is 500000, earning is 300000, genere is action.What is the budget of Rush?\n\nThe budget of Rush is 500000...................."
              }
            ]
          }
        }
      ],
      "status_code": 200
    }
  ]
}
but when I add the post_process_function as you suggested, I get the error below:
{
  "error": {
    "root_cause": [
      {
        "type": "script_exception",
        "reason": "runtime error",
        "script_stack": [
          "generatedText = params.response[0].generated_text;\n def ",
          " ^---- HERE"
        ],
        "script": " ...",
        "lang": "painless",
        "position": {
          "offset": 28,
          "start": 6,
          "end": 62
        }
      }
    ],
    "type": "script_exception",
    "reason": "runtime error",
    "script_stack": [
      "generatedText = params.response[0].generated_text;\n def ",
      " ^---- HERE"
    ],
    "script": " ...",
    "lang": "painless",
    "position": {
      "offset": 28,
      "start": 6,
      "end": 62
    },
    "caused_by": {
      "type": "null_pointer_exception",
      "reason": "Cannot invoke \"Object.getClass()\" because \"callArgs[0]\" is null"
    }
  },
  "status": 400
}
Not sure how to fix this.
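The null_pointer_exception points at the dereference of `params.response`, which suggests the response object is null at the moment the script runs, i.e. the model output may not be exposed under the key the script expects. A defensive version of the transformation can be sketched in Python (hypothetical helper, not the OpenSearch API; the actual fix depends on how the connector exposes the output):

```python
import json

def safe_post_process(params):
    # Guard against "response" being absent or empty: the Python analogue
    # of the null_pointer_exception raised by the Painless script above.
    response = params.get("response")
    if not response:
        raise ValueError(
            "params.response is null/empty; check the key under which "
            "the connector exposes the model output"
        )
    return json.dumps({"completion": response[0]["generated_text"]})

print(safe_post_process({"response": [{"generated_text": "hello"}]}))
# prints {"completion": "hello"}
```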
Version 2.12 is still under development. If you need RAG with OpenSearch, my recommendation is to try Sycamore. We know that path works, though it uses 2.11. Once 2.12 is ready, we will have that working with Sycamore as well.
Closing, as the connector blueprint was added.
Is your feature request related to a problem? I'm proposing to add an AI connector blueprint for the Aleph Alpha Luminous-Base Embedding Model to the current collection of remote inference blueprints in OpenSearch ML Commons.
This model is particularly effective for German language applications, providing nuanced and contextually relevant embeddings. Given the increasing demand for robust language model solutions in different languages, integrating this model could significantly enhance your offerings for German-language processing tasks.
What solution would you like? A markdown file with an AI connector blueprint for the Aleph Alpha luminous-base embedding model.
What alternatives have you considered? I was able to write one using existing blueprints here: