I deployed this sample with a meta-textgeneration-llama-2-7b-f endpoint created from JumpStart and got the following error from the endpoint due to incorrectly formatted input:
ValueError: Error raised by inference endpoint: An error occurred (ModelError) when calling the InvokeEndpoint operation: Received client error (424) from primary with message "{
"code":424,
"message":"prediction failure",
"error":"Input payload must contain a 'inputs' key and optionally a 'parameters' key containing a dictionary of parameters."
This change adds a 'parameters' key to the model_kwargs value, which corrects the error.
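For reference, a minimal sketch of the payload shape the endpoint expects, per the error message above. The key names "inputs" and "parameters" come from the error text; the specific parameter values and the prompt are illustrative assumptions, not part of this change:

```python
import json

# Generation settings must be nested under a "parameters" key
# (values below are illustrative, not prescribed by this PR).
model_kwargs = {
    "parameters": {
        "max_new_tokens": 256,
        "top_p": 0.9,
        "temperature": 0.6,
    }
}

# The request body pairs the prompt under "inputs" with the
# "parameters" dictionary, as the endpoint's error message requires.
payload = {
    "inputs": "What is Amazon SageMaker?",  # hypothetical prompt
    **model_kwargs,
}
body = json.dumps(payload)
```

Without the 'parameters' key, the endpoint rejects the request with the 424 error shown above.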
By submitting this pull request, I confirm that you can use, modify, copy, and redistribute this contribution, under the terms of your choice.