TheBlewish / Automated-AI-Web-Researcher-Ollama

A Python program that turns an LLM running on Ollama into an automated researcher: given a single query, it determines focus areas to investigate, runs web searches, scrapes content from relevant websites, conducts the research entirely on its own, and saves its findings for you.

How can I use this project via ChatGPT's API instead of Ollama? #20

Open · endNone opened 23 hours ago

synth-mania commented 20 hours ago

This could cost a lot of money to leave running if you are using a paid API

benx13 commented 20 hours ago

Check PR #22.

nightscape commented 19 hours ago

and also https://github.com/TheBlewish/Automated-AI-Web-Researcher-Ollama/pull/1

synth-mania commented 19 hours ago

Well that sucks. Wasn't expecting @TheBlewish to want to kneecap their own project like that.

@endNone just use the fork for now I guess https://github.com/NimbleAINinja/Automated-AI-Web-Researcher-Hosted

nightscape commented 19 hours ago

It's not my fork, I'm just using it on my machine 😉 But yeah, the fork works well with LM Studio.
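
(For context: the reason one fork can target LM Studio, Ollama, and hosted OpenAI interchangeably is that all three expose the same OpenAI-style chat-completions API, so only the base URL and key change. A minimal sketch, using the documented default local endpoints; the model name is a placeholder:)

```python
from openai import OpenAI  # pip install openai

# All three backends speak the same chat-completions protocol;
# only base_url and api_key differ. Local servers ignore the key,
# but the client requires a non-empty string.
BACKENDS = {
    "openai":    {"base_url": "https://api.openai.com/v1", "api_key": "sk-..."},
    "ollama":    {"base_url": "http://localhost:11434/v1", "api_key": "ollama"},
    "lm_studio": {"base_url": "http://localhost:1234/v1",  "api_key": "lm-studio"},
}

client = OpenAI(**BACKENDS["lm_studio"])
response = client.chat.completions.create(
    model="local-model",  # placeholder: whatever model the local server has loaded
    messages=[{"role": "user", "content": "List three focus areas for researching topic X."}],
)
print(response.choices[0].message.content)
```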

synth-mania commented 19 hours ago

Apologies @nightscape, I confused your names. Glad to hear it works well; I'll probably be switching to that fork as well.

TheBlewish commented 9 hours ago

> This could cost a lot of money to leave running if you are using a paid API

This is one of my prime concerns, given the whole point is to do heaps of searches.

> Well that sucks. Wasn't expecting @TheBlewish to want to kneecap their own project like that.
>
> @endNone just use the fork for now I guess https://github.com/NimbleAINinja/Automated-AI-Web-Researcher-Hosted

Hey, I am not trying to kneecap my own project! I have like 2 exams for uni today and have just been trying to reply as best I can while studying frantically.

Also, from what I can tell, none of the pull requests would actually even work!

I really doubt that throwing some OpenAI-related settings into the config and wrapper scripts is going to be sufficient to make it actually work. It's also concerning that someone could spend heaps of money on API calls without realizing it, which would suck.

But that being said: if you want to make this program compatible with OpenAI endpoints, I would really appreciate it if you could open a thread in the Discussions tab and show that it works! If you do, I will merge the pull request as long as it doesn't harm the Ollama functionality of the program!

It would be a big help; I just don't want to merge something that, from what I can see, likely wouldn't even be functional!
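
(One way to address the runaway-cost concern raised above is a hard token budget around every API call. This is a hypothetical sketch, not part of the project; `MAX_TOKEN_BUDGET` and `budgeted_chat` are invented names:)

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

MAX_TOKEN_BUDGET = 500_000  # hypothetical hard cap for one research session
tokens_spent = 0

def budgeted_chat(messages, model="gpt-4o-mini"):
    """Run one chat completion, refusing once the session budget is spent."""
    global tokens_spent
    if tokens_spent >= MAX_TOKEN_BUDGET:
        raise RuntimeError(f"Token budget of {MAX_TOKEN_BUDGET} exhausted; halting the research loop.")
    response = client.chat.completions.create(model=model, messages=messages)
    tokens_spent += response.usage.total_tokens  # prompt + completion tokens
    return response.choices[0].message.content
```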

TheBlewish commented 9 hours ago

> It's not my fork, I'm just using it on my machine 😉 But yeah, the fork works well with LM Studio.

Sorry, I only just saw this. I have to do my exams, but rest assured, everyone, I will look into this after that. If it works well, I will merge it, as long as it doesn't harm Ollama functionality. If y'all could make a discussion with the details, that would still really help, thanks!

synth-mania commented 9 hours ago

Yeah, I don't think many of us are concerned with using web APIs specifically; it's just that we all have our preferred hosting platforms, @TheBlewish.

synth-mania commented 9 hours ago

I may have gotten the wrong idea; I was under the impression you were against adding OpenAI API support, period.

TheBlewish commented 9 hours ago

Haha, yeah no, I am all for it! I just want to make sure it works so that we don't have redundant code that isn't viable. Let me know if there's something like that which is functioning; @nightscape said there was one that worked well? Can you maybe confirm for me if you get a second? I'll have to check it later though, thanks mate!

synth-mania commented 9 hours ago

@TheBlewish I'm about to clone the fork @nightscape was referring to. I'll tag you with my findings under the pull request from the fork author.

benx13 commented 6 hours ago

> Haha, yeah no, I am all for it! I just want to make sure it works so that we don't have redundant code that isn't viable. Let me know if there's something like that which is functioning; @nightscape said there was one that worked well? Can you maybe confirm for me if you get a second? I'll have to check it later though, thanks mate!

Technically, for redundancy you could remove all the other client libraries and use just the OpenAI API client for every endpoint (Ollama, llamafile, etc.). I tested it yesterday with gpt-4o-mini; it works quite well and isn't expensive to run at $0.60 per million tokens. The reason cloud is better than local is generation speed rather than low cost; besides, you can multithread against a cloud endpoint, which is highly unlikely to help with local models (latency).
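
(A sketch of the parallelism point: against a hosted endpoint, the research loop's per-page work can be fanned out across threads, which a single local model generally can't serve concurrently. The `summarize` helper and the page list are illustrative, not part of the project:)

```python
from concurrent.futures import ThreadPoolExecutor
from openai import OpenAI

client = OpenAI()  # or any OpenAI-compatible base_url, as in the sketch above

def summarize(page_text: str) -> str:
    """One per-page research step; a hosted endpoint can serve many of these at once."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{"role": "user", "content": f"Summarize for a research report:\n{page_text}"}],
    )
    return response.choices[0].message.content

pages = ["<scraped page 1>", "<scraped page 2>", "<scraped page 3>"]  # placeholders
with ThreadPoolExecutor(max_workers=8) as pool:
    summaries = list(pool.map(summarize, pages))
```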