opea-project / GenAIExamples

Generative AI Examples is a collection of GenAI examples, such as ChatQnA and Copilot, that illustrate the pipeline capabilities of the Open Platform for Enterprise AI (OPEA) project.
https://opea.dev
Apache License 2.0

Update ChatQnA example with Falcon LLM #560

Open arun-gupta opened 2 months ago

arun-gupta commented 2 months ago

Update the ChatQnA example to use Falcon as the LLM.

This would require including Falcon in the validation at https://github.com/opea-project/GenAIComps/tree/main/comps/llms, and then creating an updated ChatQnA that uses this microservice to serve the Falcon LLM.
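For illustration, the serving side could be exercised by pointing TGI at a Falcon checkpoint; a minimal sketch, assuming the Hugging Face model id `tiiuae/falcon-11B` and default ports (not validated ChatQnA defaults):

```bash
# Sketch: serve Falcon-11B with TGI; volume caches model weights,
# token is only needed if the model repo requires authentication.
docker run --shm-size 1g -p 8080:80 \
  -v "$PWD/data:/data" \
  -e HUGGING_FACE_HUB_TOKEN="$HUGGING_FACE_HUB_TOKEN" \
  ghcr.io/huggingface/text-generation-inference:latest \
  --model-id tiiuae/falcon-11B
```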

lucasmelogithub commented 2 months ago

Supporting Falcon-11B would be great.

kevinintel commented 1 month ago

TGI-Gaudi and vLLM support Falcon 40B and Falcon 7B. We will validate Falcon-11B.
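A quick smoke test along these lines could serve as the validation; a sketch, assuming vLLM's OpenAI-compatible server and the `tiiuae/falcon-11B` model id:

```bash
# Sketch: serve Falcon-11B via vLLM's OpenAI-compatible server ...
python -m vllm.entrypoints.openai.api_server \
  --model tiiuae/falcon-11B --port 8000 &

# ... then probe it with a completion request.
curl http://localhost:8000/v1/completions \
  -H "Content-Type: application/json" \
  -d '{"model": "tiiuae/falcon-11B", "prompt": "Hello", "max_tokens": 16}'
```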

lucasmelogithub commented 1 month ago

> TGI-Gaudi and vLLM support Falcon 40B and Falcon 7B. We will validate Falcon-11B.

Great, thanks for the update.

chickenrae commented 2 weeks ago

@kevinintel This is marked for the OPEA Hackathon. Are you going to complete this in October? If not, can you unassign yourself so someone else can take it on?

lucasmelogithub commented 2 weeks ago

Question: models are set with environment variables via set_env.sh, see https://github.com/opea-project/GenAIExamples/blob/main/ChatQnA/docker_compose/intel/cpu/xeon/set_env.sh

What is our strategy? Create multiple set_env.sh files, e.g. set_env_falcon11B.sh? Or just update the README.md with instructions?

In the Terraform module we developed, we create our own set_env.sh and set the model there. I plan to contribute links to these modules back to OPEA via README.md links; I'll open the PR as a draft for discussion soon.
https://github.com/intel/terraform-intel-aws-vm/tree/main/examples/gen-ai-xeon-opea-chatqna-falcon11B
https://github.com/intel/optimized-cloud-recipes/tree/main/recipes/ai-opea-chatqna-xeon-falcon11B
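One low-friction alternative to multiple files would be a single set_env.sh whose model is overridable from the caller's environment; a minimal sketch, where the variable names (`LLM_MODEL_ID` etc.) are assumptions about what ChatQnA's set_env.sh uses:

```bash
# Sketch: one set_env.sh, model overridable by the caller, e.g.
#   LLM_MODEL_ID=tiiuae/falcon-11B source ./set_env.sh
export LLM_MODEL_ID="${LLM_MODEL_ID:-Intel/neural-chat-7b-v3-3}"
export EMBEDDING_MODEL_ID="${EMBEDDING_MODEL_ID:-BAAI/bge-base-en-v1.5}"
export RERANK_MODEL_ID="${RERANK_MODEL_ID:-BAAI/bge-reranker-base}"
```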

chickenrae commented 2 weeks ago

@arun-gupta should be able to give some guidance.

arun-gupta commented 2 weeks ago

This should really be somebody from engineering. @kding1 @mkbhanda?

lucasmelogithub commented 2 weeks ago

I'm also open to a call with OPEA contributors if it's easier to brainstorm.

I think we need to discuss at least:

- On Terraform/Ansible: those have more use cases than just OPEA (and were developed before OPEA), which is why they live in other repos today. Open to discussing the best options for usability and version control.

mkbhanda commented 2 weeks ago

@lucasmelogithub let us not proliferate set_env.sh(es) that differ only in model_id :-) That set_env.sh really is a file a user is expected to edit, with proxy, IP address, model id, etc. values/choices as the case may be. I like how @kevinintel offered to verify that Falcon-11B works with the TGI and vLLM model servers; these are typically tested by the model providers too, given how popular those two model servers are.

May I suggest you update the README file with a table of all the models verified to work (and add a date, because this list may go out of date too soon)? We could also provide a list of model_ids in the set_env.sh file (again, this can never hope to be exhaustive; just a few popular ones that we have tested) and comment out all but one as a potential default.

What will be crucial is that if a model is very large, the VM instance (if using Docker) or the Kubernetes worker nodes need to be large enough. So in that sense a model choice, small/medium/large/extra large, has other ramifications.
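Concretely, the commented-out list could look like this inside set_env.sh; a sketch where the date and the set of model ids are illustrative, not a confirmed verification list:

```bash
# Sketch: a dated, curated list of verified model ids; keep exactly one
# uncommented as the default (date and entries below are illustrative).
# Verified as of <date>; see the README table for the authoritative list.
export LLM_MODEL_ID="Intel/neural-chat-7b-v3-3"    # small; default
# export LLM_MODEL_ID="tiiuae/falcon-7b"           # small
# export LLM_MODEL_ID="tiiuae/falcon-11B"          # medium; needs a larger instance
# export LLM_MODEL_ID="tiiuae/falcon-40b"          # large; size VM/worker nodes accordingly
```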

lucasmelogithub commented 2 weeks ago

> @lucasmelogithub let us not proliferate set_env.sh(es) that differ only in model_id :-) [...] May I suggest you update the README file with a table of all the models verified to work (and add a date)?

Agree with the README.md approach, thanks for the direction. I will create a PR next week with an LLM table. I have successfully tested Falcon-11B with TGI; I can test with vLLM too and will make the README reflect that.

We (Intel) have partnered with TII/AWS to showcase Falcon-11B on OPEA. AWS will demo OPEA + Falcon-11B using our Intel Cloud Optimization Modules for Terraform/Ansible on AWS at a major conference (GITEX) next week: https://github.com/intel/terraform-intel-aws-vm/tree/main/examples/gen-ai-xeon-opea-chatqna-falcon11B


lucasmelogithub commented 1 week ago

PR created: https://github.com/opea-project/GenAIExamples/pull/970