endNone opened this issue 23 hours ago
Check PR #22
Well that sucks. Wasn't expecting @TheBlewish to want to kneecap their own project like that.
@endNone just use the fork for now I guess https://github.com/NimbleAINinja/Automated-AI-Web-Researcher-Hosted
It's not my fork, I'm just using it on my machine 😉 But yeah, the fork works well with LM Studio.
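For context, LM Studio exposes an OpenAI-compatible local server (default `http://localhost:1234/v1`), which is presumably why the fork works with it unchanged. A minimal sketch, assuming the fork uses the official `openai` Python client; the model name below is a placeholder for whatever you have loaded:

```python
from openai import OpenAI

client = OpenAI(
    base_url="http://localhost:1234/v1",  # LM Studio's local OpenAI-compatible endpoint
    api_key="lm-studio",                  # LM Studio ignores the key, but the client requires one
)

response = client.chat.completions.create(
    model="local-model",  # placeholder; use the identifier LM Studio shows for your loaded model
    messages=[{"role": "user", "content": "Summarize the latest search results."}],
)
print(response.choices[0].message.content)
```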
Apologies @nightscape, I confused your names. Glad to hear it works well; I'll probably be switching to that fork as well.
This could cost a lot of money to leave running if you are using a paid API
This is one of my prime concerns given the whole point is to do heaps of searches.
> Well that sucks. Wasn't expecting @TheBlewish to want to kneecap their own project like that.
> @endNone just use the fork for now I guess https://github.com/NimbleAINinja/Automated-AI-Web-Researcher-Hosted
Hey, I am not trying to kneecap my own project! I have two exams for uni today and have just been trying to reply as best I can while studying frantically.
Also, from what I can tell, none of the pull requests would actually even work!
Throwing some OpenAI settings into the config and wrapper scripts is very unlikely to be sufficient to make it actually work, and it's also concerning that someone could unknowingly spend heaps of money on API calls, which would suck.
That being said, if you want to make this program compatible with OpenAI endpoints, I would really appreciate it if you could open a thread in the Discussions tab and show that it works! If you do, I will merge the pull request as long as it doesn't harm the Ollama functionality of the program.
It would be a big help; I just don't want to merge something that likely wouldn't even be functional from what I can see!
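For illustration, a provider switch could keep Ollama as the default (so existing functionality is untouched) while making an OpenAI-compatible endpoint opt-in, with a hard token budget to guard against the runaway-spend concern above. This is a minimal sketch, not code from the repo; the names `LLM_PROVIDER`, `MAX_TOKENS_BUDGET`, and the `chat` helper are all hypothetical.

```python
import os
from openai import OpenAI

# Hypothetical settings, not from the repo's actual config.
PROVIDER = os.getenv("LLM_PROVIDER", "ollama")                      # default: existing Ollama behaviour
MAX_TOKENS_BUDGET = int(os.getenv("MAX_TOKENS_BUDGET", "1000000"))  # hard cap per research run
_tokens_used = 0

if PROVIDER == "ollama":
    # Ollama also serves an OpenAI-compatible API under /v1, so one client covers both;
    # the key is required by the client but ignored by Ollama.
    client = OpenAI(base_url="http://localhost:11434/v1", api_key="ollama")
    model = os.getenv("LLM_MODEL", "llama3")
else:
    client = OpenAI(api_key=os.environ["OPENAI_API_KEY"])  # paid endpoint: real key required
    model = os.getenv("LLM_MODEL", "gpt-4o-mini")

def chat(prompt: str) -> str:
    """Send one prompt, tracking total token usage against the budget."""
    global _tokens_used
    if _tokens_used >= MAX_TOKENS_BUDGET:
        raise RuntimeError(f"Token budget of {MAX_TOKENS_BUDGET} exhausted; stopping the research loop.")
    resp = client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": prompt}],
    )
    _tokens_used += resp.usage.total_tokens
    return resp.choices[0].message.content
```

The point of the budget check is that an unattended research loop fails loudly once it hits the cap instead of silently billing a paid API.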
> It's not my fork, I'm just using it on my machine 😉 But yeah, the fork works well with LM Studio.
Sorry, I only just saw this. I have to do my exams, but rest assured everyone, after that I will look into this. If it works well I will merge it, as long as it doesn't harm Ollama functionality. If y'all could make a discussion with the details, that would still really help, thanks!
Yeah, I don't think many of us are concerned with using web APIs specifically; it's just that we all have our preferred hosting platforms, @TheBlewish.
I may have gotten the wrong idea; I was under the impression you were against adding OpenAI API support, period.
Haha, yeah no, I am all for it! I just want to make sure it works so that we don't have redundant code that isn't viable. Let me know if there's something like that which is functioning; @nightscape said there was one that worked well? Can you maybe confirm for me if you get a second? I'll have to check it later though, thanks mate!
@TheBlewish I'm about to clone the fork @nightscape was referring to. I'll tag you with my findings under the pull request from the fork author
> Haha, yeah no, I am all for it! I just want to make sure it works so that we don't have redundant code that isn't viable. Let me know if there's something like that which is functioning; @nightscape said there was one that worked well? Can you maybe confirm for me if you get a second? I'll have to check it later though, thanks mate!
Technically, for redundancy you could remove all the other libraries and just use the OpenAI API for all the other endpoints, such as Ollama, llamafile, … I tested it yesterday with gpt-4o-mini; it works quite well and isn't expensive to run at $0.60 per million tokens. The reason cloud is better than local is generation speed rather than low cost; besides, you could multithread it with cloud, which is highly unlikely with local models (latency).
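To make the point concrete: the `openai` client is just an HTTP client, so swapping `base_url` covers the cloud, Ollama, llamafile, and friends with one code path, and a cloud endpoint can also be hit concurrently. A sketch under those assumptions (the endpoint URLs are the documented defaults; the queries are illustrative):

```python
from concurrent.futures import ThreadPoolExecutor
from openai import OpenAI

# The same code path works against any of these:
#   OpenAI(api_key=...)                                        -> OpenAI cloud
#   OpenAI(base_url="http://localhost:11434/v1", api_key="x")  -> Ollama
#   OpenAI(base_url="http://localhost:8080/v1",  api_key="x")  -> llamafile
client = OpenAI()  # cloud; reads OPENAI_API_KEY from the environment

def ask(query: str) -> str:
    resp = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{"role": "user", "content": query}],
    )
    return resp.choices[0].message.content

queries = ["research question A", "research question B", "research question C"]

# Concurrency is what the cloud buys you: a single local model would serialize
# these requests anyway, while a hosted endpoint handles them in parallel.
with ThreadPoolExecutor(max_workers=3) as pool:
    results = list(pool.map(ask, queries))
```

At the $0.60 per million tokens quoted above, even a heavy run of, say, five million tokens across many searches would land around $3, which supports the claim that speed rather than cost is the main trade-off.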