Open · LMJOK opened this issue 2 months ago
usage: launch_scientist.py [-h] [--skip-idea-generation] [--skip-novelty-check] [--experiment EXPERIMENT] [--model {claude-3-5-sonnet-20240620,gpt-4o-2024-05-13,deepseek-coder-v2-0724,llama3.1-405b,qwen2bedrock/anthropic.claude-3-sonnet-20240229-v1:0,bedrock/anthropic.claude-3-5-sonnet-20240620-v1:0,bedrock/anthropic.claude-3-haiku-20240307-v1:0,bedrock/anthropic.claude-3-opus-20240229-v1:0vertex_ai/claude-3-opus@20240229,vertex_ai/claude-3-5-sonnet@20240620,vertex_ai/claude-3-sonnet@20240229,vertex_ai/claude-3-haiku@20240307}] [--writeup {latex}] [--parallel PARALLEL] [--improvement] [--gpus GPUS] [--num-ideas NUM_IDEAS]
launch_scientist.py: error: argument --model: invalid choice: 'qwen2' (choose from 'claude-3-5-sonnet-20240620', 'gpt-4o-2024-05-13', 'deepseek-coder-v2-0724', 'llama3.1-405b', 'qwen2bedrock/anthropic.claude-3-sonnet-20240229-v1:0', 'bedrock/anthropic.claude-3-5-sonnet-20240620-v1:0', 'bedrock/anthropic.claude-3-haiku-20240307-v1:0', 'bedrock/anthropic.claude-3-opus-20240229-v1:0vertex_ai/claude-3-opus@20240229', 'vertex_ai/claude-3-5-sonnet@20240620', 'vertex_ai/claude-3-sonnet@20240229', 'vertex_ai/claude-3-haiku@20240307')
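Note that two of the allowed choices in the error output are concatenated strings ('qwen2bedrock/...' and '...-v1:0vertex_ai/...'). Python joins adjacent string literals when a comma is missing between list entries, so 'qwen2' was most likely added to the choices list without a trailing comma. A minimal sketch of what a corrected --model argument could look like (the exact choices list in launch_scientist.py may differ):

```python
import argparse

parser = argparse.ArgumentParser()
parser.add_argument(
    "--model",
    type=str,
    default="claude-3-5-sonnet-20240620",
    choices=[
        "claude-3-5-sonnet-20240620",
        "gpt-4o-2024-05-13",
        "deepseek-coder-v2-0724",
        "llama3.1-405b",
        "qwen2",  # the trailing comma matters: without it, Python merges this with the next entry
        "bedrock/anthropic.claude-3-sonnet-20240229-v1:0",
        "bedrock/anthropic.claude-3-5-sonnet-20240620-v1:0",
        "bedrock/anthropic.claude-3-haiku-20240307-v1:0",
        "bedrock/anthropic.claude-3-opus-20240229-v1:0",
        "vertex_ai/claude-3-opus@20240229",
        "vertex_ai/claude-3-5-sonnet@20240620",
        "vertex_ai/claude-3-sonnet@20240229",
        "vertex_ai/claude-3-haiku@20240307",
    ],
)
args = parser.parse_args()
```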
If you use Ollama, you can modify the existing OpenAI API code like so: https://ollama.com/blog/openai-compatibility
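For reference, the approach in that blog post is to point the standard OpenAI Python client at Ollama's local OpenAI-compatible endpoint. A minimal sketch, assuming Ollama is running on its default port and the model has already been pulled (e.g. with `ollama pull qwen2`):

```python
from openai import OpenAI

# Ollama serves an OpenAI-compatible API at /v1; the api_key is required
# by the client library but is ignored by Ollama itself.
client = OpenAI(base_url="http://localhost:11434/v1", api_key="ollama")

response = client.chat.completions.create(
    model="qwen2",  # any locally pulled Ollama model tag
    messages=[{"role": "user", "content": "Say hello in one sentence."}],
)
print(response.choices[0].message.content)
```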
So I just need to change how launch_scientist.py invokes the API so that it calls Ollama's API instead?
If you use Ollama, you can modify the existing OpenAI API code like so: https://ollama.com/blog/openai-compatibility
So do I need to replace the OpenAI API calls in the code with Ollama API calls? Do I need to change how all of the models are called?
If you use Ollama, you can modify the existing OpenAI API code like so: https://ollama.com/blog/openai-compatibility
I have modified the code, but the model is still reported as not supported. Could you tell me how it should be modified?
Error:
/home/pc/anaconda3/envs/AI-scientist/bin/python /media/iplab/tony/lmj/bigmodol/AI-scientist/AI-Scientist/01launch_scientist.py
Using GPUs: [0, 1]
Using ollama with llama2.
Generating idea 1/50 Iteration 1/3 Failed to generate idea: Model meta-llama/llama-2-instruct not supported.
Generating idea 2/50 Iteration 1/3 Failed to generate idea: Model meta-llama/llama-2-instruct not supported.
[...the same failure repeats for every idea...]
Generating idea 50/50 Iteration 1/3 Failed to generate idea: Model meta-llama/llama-2-instruct not supported.
Checking novelty of idea 0: adaptive_block_size
Error: Model meta-llama/llama-2-instruct not supported. (repeated 10 times)
Checking novelty of idea 1: layerwise_learning_rates
Error: Model meta-llama/llama-2-instruct not supported. (repeated 10 times)
All ideas evaluated.
Process finished with exit code 0
You should comment out the try/except so that the real errors show up.
Also, we strongly advise against using Llama 2 models; any model weaker than the original GPT-4 will fail.
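For context, the "Model meta-llama/llama-2-instruct not supported" errors above come from the model-name dispatch in the LLM helper code, which only recognizes the model identifiers hard-coded there. A hypothetical sketch of what an added branch for an Ollama-served model could look like (the function name and structure below are illustrative, not the repo's actual API):

```python
import openai

def create_client(model: str):
    """Illustrative dispatch: return (client, model_name) for a given model string."""
    if model.startswith("ollama/"):
        # Route e.g. "ollama/qwen2" to a local Ollama server through its
        # OpenAI-compatible endpoint; the api_key is ignored by Ollama.
        client = openai.OpenAI(base_url="http://localhost:11434/v1", api_key="ollama")
        return client, model.split("/", 1)[1]
    elif model.startswith("gpt-"):
        return openai.OpenAI(), model
    raise ValueError(f"Model {model} not supported.")
```

The log above shows the same check failing in both idea generation and the novelty check, so the new model identifier has to be handled wherever the model name is dispatched.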
Suppose I want to use an open-source large model: which parameters and files do I need to set? I am currently running qwen2 locally through Ollama.