Bruno-Alumn opened this issue 2 months ago
Related issues:
> The top answer does suggest implementing a "back-off" feature, similar to rate-limit prevention techniques. Is it possible to do this? Or is there another cause to the issue to do with my machine?
@Bruno-Alumn Based on a quick skim of the error + StackOverflow page, I think you're right in thinking this is more of a problem with Google + humanify not having a good 'error recovery mechanism' rather than anything specific to your machine.
Thank you very much for the reply @0xdevalias! I'll look into contributing to the project to solve the issue, though it seems to have already been mentioned in the thread you linked. Either way, for my current situation, I decided to use OpenAI. It is true that the speed is noticeably slower, though I still have one question: does the following seem like an accurate ratio between input and output tokens?
> Does the following seem like an accurate ratio between input and output tokens?
@Bruno-Alumn I don't really know what the ratios should look like specifically, but I would definitely expect input to be MUCH higher, as that's going to be providing all of the code being referenced, whereas output should essentially just be giving the variable renames back; so that doesn't look overly surprising to me.
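For what it's worth, one way to sanity-check that ratio is to log the `usage` block the API returns with each completion: `prompt_tokens` counts the surrounding code being sent in, `completion_tokens` counts the short rename coming back. A minimal sketch (not humanify's actual code; the model name and prompt shape here are just placeholders):

```ts
import OpenAI from "openai";

const client = new OpenAI({ apiKey: process.env.OPENAI_API_KEY });

// Logs input vs. output token counts for a single rename-style request,
// so the ratio can be checked directly rather than eyeballed from dashboard totals.
async function renameWithUsage(prompt: string): Promise<string | null> {
  const response = await client.chat.completions.create({
    model: "gpt-4o-mini",
    messages: [{ role: "user", content: prompt }],
  });

  const usage = response.usage;
  if (usage) {
    const ratio = usage.prompt_tokens / Math.max(usage.completion_tokens, 1);
    console.log(
      `input: ${usage.prompt_tokens}, output: ${usage.completion_tokens}, ratio: ${ratio.toFixed(1)}:1`
    );
  }

  return response.choices[0].message?.content ?? null;
}
```

With big chunks of surrounding code going in and a one-word rename coming back, input-to-output ratios of hundreds to one are exactly what you'd expect.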
I'm trying to process a pretty huge file and just ran into this:
```
RateLimitError: 429 Rate limit reached for gpt-4o-mini in organization org-abcdefghijklmnopqrstuvwx on requests per day (RPD): Limit 10000, Used 10000
```
I'm going to see about improving the rate limiting here:
```diff
 // /src/plugins/openai/openai-rename.ts
+import Bottleneck from "bottleneck/light";
+
+// Math.floor(10_000 / 24) requests/hour
+const limiter = new Bottleneck({
+  "reservoir": Math.floor(10_000 / 24),
+  "reservoirRefreshAmount": Math.floor(10_000 / 24),
+  "reservoirRefreshInterval": 3_600_000
+});

 export function openaiRename({
   apiKey,
   baseURL,
   model,
   contextWindowSize
 }: {
   apiKey: string;
   baseURL: string;
   model: string;
   contextWindowSize: number;
 }) {
   const client = new OpenAI({ apiKey, baseURL });

+  const wrapped = limiter.wrap(async (code: string): Promise<string> => {
     return await visitAllIdentifiers(
       code,
       async (name, surroundingCode) => {
         verbose.log(`Renaming ${name}`);
         verbose.log("Context: ", surroundingCode);
         const response = await client.chat.completions.create(
           toRenamePrompt(name, surroundingCode, model)
         );
         const result = response.choices[0].message?.content;
         if (!result) {
           throw new Error("Failed to rename", { cause: response });
         }
         const renamed = JSON.parse(result).newName;
         verbose.log(`Renamed to ${renamed}`);
         return renamed;
       },
       contextWindowSize,
       showPercentage
     );
+  });
+  return wrapped;
 }
```
Originally posted by @brianjenkins94 in https://github.com/jehna/humanify/issues/167#issuecomment-2430584145
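One related note (this concerns the `openai` Node SDK's own options, not something humanify currently wires up): the client already retries certain failures, including 429s, with exponential backoff, and that can be tuned via `maxRetries`. That helps with transient per-minute limits, but once a daily quota (RPD) is exhausted, retrying won't recover, so proactive throttling like the reservoir above is still the right fix. A small sketch:

```ts
import OpenAI from "openai";

// Built-in retry/backoff in the openai Node SDK; the default is 2 retries.
// Useful for transient 429s, but no help once the daily RPD budget is gone.
const client = new OpenAI({
  apiKey: process.env.OPENAI_API_KEY,
  maxRetries: 5,
});
```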
So I'm trying to de-obfuscate a file that has 3781204 characters. I know it's quite a bit... I decided to go with Gemini due to the disclaimer about speed with OpenAI. Firstly, the progress is incredibly slow (probably due to the file size). Anyway, after running this command for two hours (and getting to 3% completion), I get this error:
Now, from what I understand, this is completely Google's issue, as stated in this thread. The top answer does suggest implementing a "back-off" feature, similar to rate-limit prevention techniques. Is it possible to do this? Or is there another cause to the issue to do with my machine? tysm
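For what it's worth, the "back-off" the top answer describes is usually just a retry wrapper around each API call: on a 429 (or similar transient error), wait an exponentially growing delay plus some jitter, then try again. A rough sketch of what that could look like around humanify's per-identifier rename calls; `withBackoff`, `isRetryable`, and the `status` check are my own assumptions here, not existing humanify code:

```ts
// Hedged sketch of exponential backoff with jitter (not existing humanify code).
// Assumes the underlying SDK error exposes an HTTP-like `status` field; adjust
// the check for whatever the Gemini/OpenAI client actually throws.
function isRetryable(err: unknown): boolean {
  const status = (err as { status?: number })?.status;
  return status === 429 || status === 500 || status === 503;
}

async function withBackoff<T>(
  fn: () => Promise<T>,
  maxAttempts = 6,
  baseDelayMs = 1_000
): Promise<T> {
  for (let attempt = 0; ; attempt++) {
    try {
      return await fn();
    } catch (err) {
      if (attempt >= maxAttempts - 1 || !isRetryable(err)) throw err;
      // 1s, 2s, 4s, ... plus random jitter so retries don't land in lockstep.
      const delayMs = baseDelayMs * 2 ** attempt + Math.random() * 250;
      await new Promise((resolve) => setTimeout(resolve, delayMs));
    }
  }
}

// Usage idea: wrap each rename request, e.g.
//   const response = await withBackoff(() => client.chat.completions.create(prompt));
```

Note that back-off only smooths over short rate-limit windows; if a hard daily quota is hit (as in the OpenAI RPD error above), the only options are to throttle below the quota or wait for it to reset.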