Closed: Specterfox109 closed this issue 1 year ago
@Specterfox109 can you let us know if you are still having issues? We run on donated GPUs, so occasionally they are full, but they seem to be working now.
They’re working fine now, thanks.
Not working fine for me and it hasn't in weeks
Use SFT-6. The SFT-7 and RLHF-2 models are down much more often because they run on preemptible compute.
SFT-6 is incapable of using plugins. Regardless, this issue is hardly fixed if two of the three models remain affected. If the hardware is having a hard time keeping up, failures should be handled in a way that doesn't require copy-pasting the prompt into an entirely new chat just to try again. I expect the server to retry until it succeeds, or at least to offer the option of retrying. Right now there isn't even a "regenerate" option on the initial prompt.
There is no point in "retrying" because when SFT-7 and RLHF-2 are down, it's not because the hardware is overloaded, it's because the hardware is not available at all. We run those two models on donated compute, which is sometimes unavailable because its owners have a commercial use for it.
It would be good for the website/server to handle failures better and to offer a regenerate option when the initial prompt generation fails, but that would require someone to implement the feature. If you want to work on it, feel free to submit a pull request.
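As a rough illustration of what such a feature might look like on the website side, here is a minimal TypeScript sketch of retrying a failed generation and falling back to another model. The endpoint path (`/api/generate`), the payload fields, the model names, and the assumption that a 503 status means "model backend unavailable" are all hypothetical and do not reflect the project's actual API.

```typescript
// Sketch only: endpoint, payload shape, and status-code semantics are assumptions.
interface GenerateResult {
  ok: boolean;
  text?: string;
  error?: string;
}

async function generateWithFallback(
  prompt: string,
  models: string[], // ordered by preference, e.g. ["SFT-7", "RLHF-2", "SFT-6"]
  retriesPerModel = 2,
): Promise<GenerateResult> {
  for (const modelName of models) {
    for (let attempt = 0; attempt < retriesPerModel; attempt++) {
      try {
        const res = await fetch("/api/generate", { // hypothetical endpoint
          method: "POST",
          headers: { "Content-Type": "application/json" },
          body: JSON.stringify({ prompt, modelName }),
        });
        if (res.ok) {
          const data = await res.json();
          return { ok: true, text: data.text };
        }
        // Assumed meaning: 503 = model backend not available at all,
        // so stop retrying this model and fall back to the next one.
        if (res.status === 503) break;
      } catch {
        // Network error: fall through and retry the same model.
      }
    }
  }
  return {
    ok: false,
    error: "All models are currently unavailable. Please try again later.",
  };
}
```

The point of the sketch is the behaviour users are asking for: retry automatically a couple of times, tell the user explicitly when a model is entirely unavailable, and offer to regenerate with a different model instead of forcing a new chat.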
There is no point in "retrying" because when SFT-7 and RLHF-2 are down, it's not because the hardware is overloaded, it's because the hardware is not available at all
I see, thanks for clearing that up. As you say, it would be good if such failures were handled better. It should probably state that the model is entirely unavailable and allow retrying with a different one.
It can be a bit glitchy from what I’ve learned, give it time
For all of the AI models I get a "server is busy, please try again" notice. Originally it was just with OA_SFT_Llama_30B_7 and OA_RLHF_Llama_30B_2_7K, but now it's with all of them, which makes the AI completely unusable for me. I'm using the browser version. Please don't just give me an "unplanned 3515 message", or if you do, at least clarify what it means, because I don't understand it. Thank you for your time.