jonatananselmo opened this issue 3 months ago
Hey @jonatananselmo thanks for the feedback and Q!
I will be updating the repo with some example references on how to apply function calling, along with tutorials. I will let you know once they are updated.
Great @init27, an example using a file reference should be helpful!
Commenting to get notifications.
@init27 Great! May I ask when it will be updated?
@init27 Thanks, but I still want to know how to operate on files when using Code Interpreter. For example, given a CSV file, I want to call Code Interpreter to visualize it. How can I pass the file path to Code Interpreter?
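For example, I would expect Code Interpreter to end up running something along these lines (the path below is just an illustration, not a real file):

```python
# Illustrative only: the kind of code I expect Code Interpreter to generate
# for "visualize this CSV". The path is a placeholder, not a real file.
import pandas as pd
import matplotlib.pyplot as plt

df = pd.read_csv("/path/to/data.csv")  # <-- how do I tell the model about this path?
df.plot()                              # quick plot of the numeric columns
plt.savefig("data_plot.png")
```

My question is really how to phrase the prompt so the model knows that path exists and uses it.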
I have searched "Model Cards and Prompt Formats" and the GitHub repositories for the correct way to reference a file in the prompt so the code interpreter can use it, but I have not found an example. The paper only shows a few examples where they do it like this: file_path = "path/to/file". I'm not sure this is the right way, since it sometimes doesn't work, and it gets worse when the language is not English.

The "llama-agentic-system" repository has some examples that use files, but the code has too many layers and I couldn't get down to the actual prompt that is sent to the model.

I double-checked that the code interpreter tool was enabled via the line "Environment: ipython" in the system prompt. I also tried enabling the brave_search and wolfram_alpha tools, but the result is the same. Here is the prompt I'm trying:
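It follows the standard Llama 3.1 chat template with the code interpreter enabled; a minimal sketch of how I build it (the CSV path and the user question below are placeholders, not my exact text) looks like this:

```python
# Minimal sketch (placeholders only) of the Llama 3.1 prompt I am sending,
# with the code interpreter enabled via "Environment: ipython".
system_msg = "Environment: ipython"
user_msg = (
    "I have a CSV file at /path/to/data.csv. "
    "Please load it and create a visualization of its columns."
)

prompt = (
    "<|begin_of_text|>"
    "<|start_header_id|>system<|end_header_id|>\n\n"
    f"{system_msg}<|eot_id|>"
    "<|start_header_id|>user<|end_header_id|>\n\n"
    f"{user_msg}<|eot_id|>"
    "<|start_header_id|>assistant<|end_header_id|>\n\n"
)
```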
Playing with the temperature parameter, the model sometimes responds with the <|python_tag|> token and sometimes not. I am using Llama 3.1 70B (fp8) by NeuralMagic.
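When it does emit the tag, the generated code follows it; this is roughly how I check for and extract it (assuming `completion` is the raw decoded output string and that the call is terminated with <|eom_id|> or <|eot_id|>):

```python
# Rough sketch: detect and extract code-interpreter calls from the raw completion.
PYTHON_TAG = "<|python_tag|>"
STOP_TOKENS = ("<|eom_id|>", "<|eot_id|>")

def extract_tool_code(completion: str):
    """Return the generated Python code, or None if the model answered in plain text."""
    if PYTHON_TAG not in completion:
        return None
    code = completion.split(PYTHON_TAG, 1)[1]
    for stop in STOP_TOKENS:
        code = code.split(stop, 1)[0]
    return code.strip()
```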