Open jli113 opened 1 month ago
Tried mistral-nemo (12B) and llama3.1 (8B); mistral-nemo performs better.
There's an error message coming up in this branch, and I don't know why it's happening. @jli113, can you tell me your machine specs?
Not sure. Ubuntu 24.04, 128 GB RAM, and the GPU is a single 4070TS. My error was running out of GPU memory, since Ollama had already used all of it.
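For anyone hitting the same thing: Ollama keeps a model loaded in VRAM for several minutes after a request. A minimal sketch of asking it to unload a model immediately, assuming a default Ollama server on localhost:11434 and using its documented `keep_alive` parameter (the model name and function names here are just examples):

```python
import json
import urllib.request

OLLAMA_URL = "http://localhost:11434/api/generate"  # default Ollama endpoint

def unload_payload(model: str) -> dict:
    """Request payload asking Ollama to unload `model` from GPU memory
    right away (keep_alive=0) instead of the default ~5 minutes."""
    return {"model": model, "keep_alive": 0}

def send_unload(model: str) -> None:
    """POST the unload request to a locally running Ollama server."""
    data = json.dumps(unload_payload(model)).encode()
    req = urllib.request.Request(
        OLLAMA_URL, data=data, headers={"Content-Type": "application/json"}
    )
    urllib.request.urlopen(req)  # requires a running Ollama server
```

Freeing the model between pipeline stages may leave enough room on a single 4070-class card for the next step.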
@jli113 There's a service called "Ola Krutrim cloud" that provides computational resources, but you have to verify an Indian phone number (maybe you can try blocking that element with uBlock). They also give free access to a llama3.1 70b model deployed on their servers. I was trying to integrate it too, but got really exhausted; it threw me an error every time. I don't know, I'm new to this.
@xprabhudayal JSON format it is, problem solved.
Are you talking about the final_info.json problem? The same thing is happening to me, in dir run_i. Meanwhile, where can you find the generated paper?
The logs have TeX output, but the PDF has nothing.
@jli113 How can we eliminate this error? It's giving me the error here. Can't we integrate Ollama here for the review?
In the last part, success=False.
@xprabhudayal This relates to the do_idea part in launch_scientist.py; I updated it in yesterday's commits. Here, success=True.
I'm a bit disappointed by the model, as it doesn't generate the PDF. This is as far as I've gotten; below is the review.txt file, but no PDF.
Meanwhile, how much progress have you made so far, @jli113? :) Did you find any tweak to generate the PDF?
@xprabhudayal, see this. In the logs, there are TeX outputs, but the PDF file doesn't have any contents.
I think we have to manually filter out the TeX parts...
@jli113 It's not generating because we're using a weaker model than GPT-4o.
I tried the Groq llama3.1:70b, but got a rate limit error...
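One common workaround for rate limits is retrying with exponential backoff. A minimal, library-agnostic sketch; `RateLimitError` here is a stand-in for whatever 429 exception the client you use actually raises:

```python
import time

class RateLimitError(Exception):
    """Stand-in for the 429 error a hosted endpoint returns when throttled."""

def with_backoff(call, max_retries=5, base_delay=1.0):
    """Retry `call` with exponential backoff when it raises RateLimitError."""
    for attempt in range(max_retries):
        try:
            return call()
        except RateLimitError:
            if attempt == max_retries - 1:
                raise  # out of retries, surface the error
            time.sleep(base_delay * 2 ** attempt)  # 1s, 2s, 4s, ...
```

Wrapping each API call like this won't raise the rate limit, but it keeps a long pipeline run from dying on a transient 429.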
I'm using vasiliyeskin's code to make weak models create a full PDF; I hope it will work.
Tried llama3.1:70b, still nothing. I have attached the logs: 20240928_125325_robust_diffusion_learning_aider.txt
I don't know why it isn't writing to the .tex file.
@xprabhudayal It finally writes to the file; not pretty, but it works. Tried with mistral-small, though it requires manual copy and paste to assemble a whole file:
I'm using vasiliyeskin's code to make weak models create a full PDF; I hope it will work.
Thanks, I'll check it out!
llama3.1:70b created a whole file, but it only has the abstract.
@jli113 I was just waiting for Ollama to release llama3.2 11b in their library, because it has vision 👀.
@jli113 Hi, there's a service called Hyperbolic that was providing $10 in initial credits to use LLMs from an endpoint. Maybe we can integrate it?
When using a local LLM through Ollama, force models weaker than GPT-4 to return answers in JSON format.
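A sketch of how that can be done against Ollama's REST API using its `format: "json"` option, assuming a default server on localhost:11434 (the function names are illustrative):

```python
import json
import urllib.request

def build_json_request(model: str, prompt: str) -> dict:
    """Payload for Ollama's /api/generate that forces a JSON-only reply.
    format='json' constrains decoding so weaker models can't drift into prose."""
    return {
        "model": model,
        "prompt": prompt + "\nRespond in JSON.",
        "format": "json",   # Ollama's structured-output switch
        "stream": False,
    }

def ask_json(model: str, prompt: str) -> dict:
    """Send the request to a locally running Ollama server and parse the reply."""
    data = json.dumps(build_json_request(model, prompt)).encode()
    req = urllib.request.Request(
        "http://localhost:11434/api/generate",
        data=data,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        body = json.load(resp)
    return json.loads(body["response"])  # the model's JSON answer
```

Note that `format` only guarantees syntactically valid JSON; it's still worth telling the model in the prompt which keys you expect.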