[Closed] prototype99 closed this issue 1 month ago
Pretty sure this implies they're just plain incompatible, unless I'm missing something:
```
{'error': {'message': 'Model meta/meta-llama-3-8b-instruct is not available', 'code': 400}, 'user_id': 'user_2lVH8sxqxO3P2ybYIiNt43m90pz'}
{"model": "meta/meta-llama-3-8b-instruct", "max_tokens": 256, "temperature": 0.6, "presence_penalty": 0, "frequency_penalty": 0, "top_k": 50, "top_p": 0.9, "messages": [{"role": "system", "content": "you are a friend having a conversation with me, my name is sophie"}, {"role": "user", "content": "\nhello\n"}]}

{'error': {'message': 'Model meta/meta-llama-3-70b-instruct is not available', 'code': 400}, 'user_id': 'user_2lVH8sxqxO3P2ybYIiNt43m90pz'}
{"model": "meta/meta-llama-3-70b-instruct", "max_tokens": 256, "temperature": 0.6, "presence_penalty": 0, "frequency_penalty": 0, "top_k": 50, "top_p": 0.9, "messages": [{"role": "system", "content": "you are a friend having a conversation with me, my name is sophie"}, {"role": "user", "content": "\nhello\n"}]}

{'error': {'message': 'Model meta/meta-llama-3.1-405b-instruct is not available', 'code': 400}, 'user_id': 'user_2lVH8sxqxO3P2ybYIiNt43m90pz'}
{"model": "meta/meta-llama-3.1-405b-instruct", "max_tokens": 256, "temperature": 0.6, "presence_penalty": 0, "frequency_penalty": 0, "top_k": 50, "top_p": 0.9, "messages": [{"role": "system", "content": "you are a friend having a conversation with me, my name is sophie"}, {"role": "user", "content": "\nhello\n"}]}

{'error': {'message': 'Model google/gemini-flash-8b-1.5 is not available', 'code': 400}, 'user_id': 'user_2lVH8sxqxO3P2ybYIiNt43m90pz'}
{"model": "google/gemini-flash-8b-1.5", "max_tokens": 256, "temperature": 0.6, "presence_penalty": 0, "frequency_penalty": 0, "top_k": 50, "top_p": 0.9, "messages": [{"role": "system", "content": "you are a friend having a conversation with me, my name is sophie"}, {"role": "user", "content": "\nhello\n"}]}

{'error': {'message': 'No endpoints found for this model.', 'code': 404}, 'user_id': 'user_2lVH8sxqxO3P2ybYIiNt43m90pz'}
{"model": "cognitivecomputations/dolphin-llama-3-70b", "max_tokens": 256, "temperature": 0.6, "presence_penalty": 0, "frequency_penalty": 0, "top_k": 50, "top_p": 0.9, "messages": [{"role": "system", "content": "you are a friend having a conversation with me, my name is sophie"}, {"role": "user", "content": "\nhello\n"}]}

{'error': {'message': 'Model codestral-latest is not available', 'code': 400}, 'user_id': 'user_2lVH8sxqxO3P2ybYIiNt43m90pz'}
{"model": "codestral-latest", "max_tokens": 256, "temperature": 0.6, "presence_penalty": 0, "frequency_penalty": 0, "top_k": 50, "top_p": 0.9, "messages": [{"role": "system", "content": "you are a friend having a conversation with me, my name is sophie"}, {"role": "user", "content": "\nhello\n"}]}
```
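For context, here is a minimal sketch of how one of the payloads above can be replayed against OpenRouter's chat completions endpoint. Note that ids like `meta/meta-llama-3-8b-instruct` look like Replicate-style slugs rather than OpenRouter's catalog names, which may be why OpenRouter reports them as unavailable; `OPENROUTER_API_KEY` is a placeholder environment variable, not something from the repo.

```python
import os
import requests

# One of the exact payloads from the logs above.
payload = {
    "model": "meta/meta-llama-3-8b-instruct",
    "max_tokens": 256,
    "temperature": 0.6,
    "presence_penalty": 0,
    "frequency_penalty": 0,
    "top_k": 50,
    "top_p": 0.9,
    "messages": [
        {"role": "system", "content": "you are a friend having a conversation with me, my name is sophie"},
        {"role": "user", "content": "\nhello\n"},
    ],
}

resp = requests.post(
    "https://openrouter.ai/api/v1/chat/completions",
    headers={"Authorization": f"Bearer {os.environ['OPENROUTER_API_KEY']}"},
    json=payload,
)

# A 400 with {'error': {'message': 'Model ... is not available', ...}} means
# OpenRouter does not serve a model under that id, matching the logs above.
print(resp.status_code, resp.json())
```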
On the branch I'm working on, all of these models are removed anyway, so this issue may get closed in the near future.
Some of the models are unable to function through OpenRouter: deertick outputs `Error: 'choices'`. Affected models: I-8b, I-70b, I-405b, gemini-flash, dolphin, codestral.
gemini-pro-exp outputs this error whenever the model tells suggestive/edgy jokes or otherwise goes outside its assigned boundaries; see the sketch below.
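A minimal sketch of guarding against that `Error: 'choices'` failure, assuming it is a `KeyError` from indexing `response["choices"]` when the provider returns an error payload (or, as with gemini-pro-exp, a blocked/empty response) instead of a completion. The `parse_reply` name is hypothetical, not deertick's actual API:

```python
def parse_reply(data: dict) -> str:
    """Extract the assistant message from a chat-completions response dict."""
    if "error" in data:
        # Surface the provider's own error instead of crashing on data["choices"].
        err = data["error"]
        raise RuntimeError(f"provider error {err.get('code')}: {err.get('message')}")
    choices = data.get("choices")
    if not choices:
        # e.g. a response blocked by the provider's safety filter.
        raise RuntimeError(f"no choices in response: {data}")
    return choices[0]["message"]["content"]
```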