Open arun-gupta opened 3 months ago
Supporting Falcon-11B would be great.
TGI-Gaudi and vLLM support Falcon 40B and Falcon 7B. We will validate Falcon-11B.
Great, thanks for the update.
@kevinintel This is marked for the OPEA Hackathon, are you going to complete this in October? If not, can you unassign yourself so we can have someone take this on.
Question: models are set with environment variables via set_env.sh (https://github.com/opea-project/GenAIExamples/blob/main/ChatQnA/docker_compose/intel/cpu/xeon/set_env.sh).
What is our strategy? Create multiple set_env.sh files, e.g. set_env_falcon11B.sh? Or just update the README.md with instructions?
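For the README-only option, a minimal sketch of what the instructions could say (assuming the compose files read LLM_MODEL_ID as in the linked set_env.sh, that the compose file is named compose.yaml, and that the Hugging Face model id is tiiuae/falcon-11B):

```bash
# Use the existing set_env.sh, then override only the model id for Falcon-11B.
source ./set_env.sh
export LLM_MODEL_ID="tiiuae/falcon-11B"   # override the default model
docker compose -f compose.yaml up -d
```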
In the Terraform module we developed, we create our own set_env.sh and set the model there. I plan to contribute links to these modules back to OPEA via README.md links; I'll open the PR as a draft for discussion soon. https://github.com/intel/terraform-intel-aws-vm/tree/main/examples/gen-ai-xeon-opea-chatqna-falcon11B https://github.com/intel/optimized-cloud-recipes/tree/main/recipes/ai-opea-chatqna-xeon-falcon11B
@arun-gupta should be able to give some guidance.
This should really be somebody from engineering. @kding1 @mkbhanda ?
I'm also open to a call with OPEA contributors if easier to brainstorm.
I think we need to discuss at least:
On Terraform/Ansible: those have more use cases than just OPEA (and were developed before OPEA), which is why they live in other repos today. Open to discussing the best options for usability and version control.
@lucasmelogithub let us not proliferate set_env.sh files that differ only in model_id :-) That set_env.sh really is a file a user is expected to edit, with proxy, IP address, model id, and other values/choices as the case may be. I like how @kevinintel offered to verify that Falcon-11B works with the TGI and vLLM model servers; these are typically tested by the model providers as well, given how popular those two servers are. May I suggest you update the README file with a table of all the models verified to work (and add a date), because this list may go out of date all too soon! We could also provide a list of model_ids in set_env.sh (again, this can never hope to be exhaustive, just a few popular ones that we have tested) and comment out all but one as a potential default. What will be crucial is that if a model is very large, the VM instance (if using Docker) or the Kubernetes worker nodes need to be large enough. So in that sense a model choice, small/medium/large/extra large, has other ramifications.
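A rough sketch of what that commented model list inside set_env.sh could look like (model ids other than tiiuae/falcon-11B are purely illustrative placeholders for whatever ends up in the verified-models table):

```bash
# Verified LLM models (last checked: <date>); uncomment exactly one.
# export LLM_MODEL_ID="Intel/neural-chat-7b-v3-3"
# export LLM_MODEL_ID="meta-llama/Meta-Llama-3-8B-Instruct"
export LLM_MODEL_ID="tiiuae/falcon-11B"
```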
Agree with the README.md approach, thanks for the direction. I will create a PR next week with an LLM table. I have successfully tested Falcon-11B with TGI; I can test with vLLM too and will make the README reflect that.
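For anyone repeating the TGI test, a quick smoke test against TGI's standard /generate endpoint works; the port below is an assumption, adjust it to whatever the compose file maps the TGI container to:

```bash
# Send a short prompt to a running TGI instance serving Falcon-11B.
curl http://localhost:8008/generate \
  -X POST \
  -H 'Content-Type: application/json' \
  -d '{"inputs":"What is OPEA?","parameters":{"max_new_tokens":64}}'
```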
We (Intel) have partnered with TII/AWS to showcase Falcon-11B on OPEA. AWS will demo OPEA + Falcon-11B using our Intel Cloud Optimization Modules for Terraform/Ansible on AWS at a major conference (GITEX) next week. https://github.com/intel/terraform-intel-aws-vm/tree/main/examples/gen-ai-xeon-opea-chatqna-falcon11B
Update the ChatQnA example to use Falcon as the LLM.
This would require including Falcon in the validation at https://github.com/opea-project/GenAIComps/tree/main/comps/llms, and then creating an updated ChatQnA that uses this microservice to serve the Falcon LLM.
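As a starting point for that validation, a minimal sketch of serving Falcon-11B with TGI on a Xeon host, which the comps/llms microservice could then point at (image tag, port mapping, and the token line are assumptions; adjust to the actual compose files, and drop the token if the model is not gated):

```bash
# Launch TGI serving Falcon-11B; the LLM microservice would target this endpoint.
docker run -d --name tgi-falcon \
  -p 8008:80 \
  -v "$PWD/data:/data" \
  -e HF_TOKEN="$HF_TOKEN" \
  ghcr.io/huggingface/text-generation-inference:latest \
  --model-id tiiuae/falcon-11B
```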