Murtuza-Chawala opened this issue 1 year ago
If you want to explore open-source coding models without a GPU, the best way to go about it is to use Google Colab with one of the GPTQ models. At this point in time only the following WizardCoder models are supported: WizardCoder-15B-1.0-GPTQ, WizardCoder-Python-7B-V1.0-GPTQ, WizardCoder-Python-13B-V1.0-GPTQ, and WizardCoder-Python-34B-V1.0-GPTQ. They offer decent performance, but only with simpler queries. Below is an example of how you can run BambooAI with a local model.
```
!pip install bambooai --upgrade
!pip install auto-gptq
!pip install accelerate
!pip install einops
!pip install xformers
!pip install bitsandbytes
```
```python
import pandas as pd
from bambooai import BambooAI

df = pd.read_csv('https://raw.githubusercontent.com/pgalko/BambooAI/a86d186477a4085e479a3168883d0a114429c87d/examples/test_activity_data.csv')
display(df.head())
```

```python
bamboo = BambooAI(df, local_code_model='WizardCoder-Python-13B-V1.0-GPTQ')
```

```python
prompt = '''
Calculate the standard deviation for the heartrate column.
'''
bamboo.pd_agent_converse(prompt)
```
Please make sure to change the runtime type and use the GPU hardware accelerator for this option to work in the Colab notebook (Runtime -> Change Runtime Type -> Hardware Accelerator = T4 GPU).
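If you are not sure whether the accelerator is actually active, a quick sanity check is to query PyTorch (which ships with the Colab runtime by default):

```python
import torch

# Confirms the notebook can see the CUDA device before loading a GPTQ model.
if torch.cuda.is_available():
    print(torch.cuda.get_device_name(0))  # e.g. 'Tesla T4'
else:
    print("No GPU detected -- check Runtime -> Change Runtime Type")
```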
At present, the local models are only called upon for code generation; all other tasks, like pseudo-code generation, summarization, error correction, and ranking, are still handled by the OpenAI models of choice.
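Because those remaining tasks go through OpenAI, the API key still has to be available in the Colab environment before you instantiate BambooAI. A minimal sketch, assuming the key is picked up from the OPENAI_API_KEY environment variable:

```python
import os

# Assumption: BambooAI reads the key from this environment variable.
# Replace the placeholder with your own OpenAI API key.
os.environ['OPENAI_API_KEY'] = 'sk-...'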
Hi @pgalko, I'd really love to use the Code-Llama LLM; I have just got access to it. I need to know a few things:

1. How can I set it up with Code-Llama? Any example would be wonderful.
2. What is the minimum CPU requirement for smooth running of the application with Code-Llama?
3. Can we use Google Colab, add the Code-Llama model to it, and run it?