jozzy opened this issue 6 months ago
Groq currently offers some of the fastest LLM inference available, so please add support for models hosted on Groq (e.g. LLaMA 3).
https://console.groq.com/docs/models
There is a similar open request in Spring AI: https://github.com/spring-projects/spring-ai/issues/621
An approach that reuses such existing integrations could also be considered.
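Since Groq exposes an OpenAI-compatible REST API, one low-effort integration path is to reuse an existing OpenAI client and only override the base URL. A minimal sketch of the request shape such an integration would produce (the model id and API key below are illustrative placeholders, not values from this issue):

```python
import json

# Groq's OpenAI-compatible base URL; an existing OpenAI client can
# typically be pointed here instead of api.openai.com.
GROQ_BASE_URL = "https://api.groq.com/openai/v1"

def build_chat_request(model: str, user_message: str, api_key: str):
    """Build the URL, headers, and JSON body for a chat completion call.

    The payload follows the OpenAI chat-completions schema, which is
    what Groq's endpoint accepts.
    """
    url = f"{GROQ_BASE_URL}/chat/completions"
    headers = {
        "Authorization": f"Bearer {api_key}",   # standard bearer-token auth
        "Content-Type": "application/json",
    }
    body = json.dumps({
        "model": model,  # e.g. a LLaMA 3 model id from console.groq.com/docs/models
        "messages": [{"role": "user", "content": user_message}],
    })
    return url, headers, body

# Placeholder model id and key, for illustration only.
url, headers, body = build_chat_request("llama3-8b-8192", "Hello", "gsk-placeholder")
print(url)
```

Because the wire format matches OpenAI's, most of the work for a Groq integration is configuration (base URL, model catalog, auth) rather than a new client implementation.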