TaxyAI / browser-extension

Automate your browser with GPT-4

Theory on how to alleviate the token max issue. #28

Open xbwtyz opened 1 year ago

xbwtyz commented 1 year ago

I see you are already working on a method that uses the viewport to cut down on the token count, but what if we applied the following as well?

This would of course introduce additional room for error and ultimately slow down the extension, but maybe it would help in niche cases?

CryptoMitch commented 1 year ago

```js
async function getRelevantDomSegments(dom, instructions, llm) {
  const segmentSize = 50; // or any other reasonable size
  let masterMessage = "";
  let startIndex = 0;
  let endIndex = segmentSize;

  while (startIndex < dom.length) {
    const segment = dom.slice(startIndex, endIndex);
    const modifiedInstructions = `What portions of the following message are relevant to ${instructions}: ${segment}`;

    // assuming llm is an async function that sends the text to the LLM and returns the result
    const relevantPortion = await llm(modifiedInstructions);
    masterMessage += relevantPortion;

    startIndex += segmentSize;
    endIndex += segmentSize;
  }

  // Combine the relevant portions and run the original instructions over them.
  const finalInstructions = `${instructions} ${masterMessage}`;
  const result = await llm(finalInstructions);

  return result;
}
```
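
For context, here is a minimal sketch of how this might be wired up, assuming a hypothetical `llm()` wrapper around the OpenAI chat-completions API; the key handling, model name, and example task are placeholders, not part of the extension:

```js
// Placeholder key for illustration; a real extension would manage this securely.
const OPENAI_API_KEY = "sk-...";

// Hypothetical llm() helper: sends a single prompt to the OpenAI
// chat-completions endpoint and returns the reply text.
async function llm(prompt) {
  const response = await fetch("https://api.openai.com/v1/chat/completions", {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      Authorization: `Bearer ${OPENAI_API_KEY}`,
    },
    body: JSON.stringify({
      model: "gpt-4",
      messages: [{ role: "user", content: prompt }],
    }),
  });
  const data = await response.json();
  return data.choices[0].message.content;
}

// Example: extract the relevant parts of the current page for a task.
async function run() {
  const dom = document.body.outerHTML;
  const result = await getRelevantDomSegments(dom, "Click the login button", llm);
  console.log(result);
}
```

One practical caveat: with `segmentSize` at 50 characters this makes one model call per 50 characters of DOM, so a much larger (ideally token-based) segment size would be needed for it not to be painfully slow.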

ibrahimsk1 commented 1 year ago

@xbwtyz cool solution!

My suggestion is, after simplifying the HTML, to create indexes for the elements and keep them. Afterwards, ask questions about the user's interactions and retrieve only the relevant parts.
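
A rough sketch of what that indexing could look like; the element filter and index format here are assumptions for illustration, not the extension's actual simplification step:

```js
// Build an index of interactive elements from the (already simplified) page.
// Each entry keeps a numeric id, the tag, and the visible text, so later
// prompts about user interactions only need to reference the small index
// instead of the full HTML.
function buildElementIndex(root = document) {
  const elements = root.querySelectorAll("a, button, input, select, textarea");
  return Array.from(elements).map((el, i) => ({
    id: i,
    tag: el.tagName.toLowerCase(),
    text: (el.innerText || el.getAttribute("placeholder") || "").trim(),
  }));
}

// The index can then be embedded in a prompt such as:
// "Given these elements: <index JSON>, which ids are relevant to: <user task>?"
const index = buildElementIndex();
```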

tluyben commented 1 year ago

It seems that keeping this in HTML format to send to GPT takes up a lot of space (tokens). Wouldn't it be better to just generate a flat list of [id, type, text] and send that over? So something like [1833, 'c', 'Read More'], where you can have types like 'c' for clickable, 'i' for inputable, etc.

Edit: doing some testing here https://github.com/tluyben/browser-extension/tree/feature/token-limit-attempt ; it seems to work better for the sites I have tested.
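
A minimal sketch of the flat [id, type, text] idea, assuming 'c' for clickable and 'i' for inputable as suggested above; the element selection heuristics are illustrative and not the code in the linked branch:

```js
// Produce a compact list of [id, type, text] tuples instead of raw HTML,
// e.g. [1833, 'c', 'Read More'] for a clickable element.
function flattenInteractiveElements(root = document) {
  const out = [];
  let nextId = 0;

  for (const el of root.querySelectorAll("a, button, input, select, textarea")) {
    const isInput = ["INPUT", "SELECT", "TEXTAREA"].includes(el.tagName);
    const type = isInput ? "i" : "c"; // 'i' = inputable, 'c' = clickable
    const text = (
      el.innerText ||
      el.getAttribute("aria-label") ||
      el.getAttribute("placeholder") ||
      ""
    ).trim();
    out.push([nextId++, type, text]);
  }
  return out;
}

// JSON.stringify(flattenInteractiveElements()) can then be sent to the model,
// which is far fewer tokens than the corresponding HTML.
```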