TechNickAI / AICodeBot

AI-powered tool for developers, simplifying coding tasks and improving workflow efficiency. 🤖
GNU Affero General Public License v3.0

sidekick - whole project files view #46

Open · hanselke opened this issue 1 year ago

hanselke commented 1 year ago

I believe storing the files in a vector DB and then searching for only the relevant portions would work around the input context limit.

https://python.langchain.com/docs/modules/chains/additional/vector_db_text_generation

We could break the work into multiple queries, probably using something like chain-of-thought prompting to split up the different parts. That seems to be what the community is doing.

falcon-40b-code seems interesting. The encoder can be used standalone to run the vector search before piping that output to the decoder.
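
For concreteness, a minimal sketch of that retrieval idea, assuming LangChain with OpenAI embeddings and a FAISS store; the repo path, glob pattern, and chunk sizes are illustrative:

```python
from pathlib import Path

from langchain.embeddings import OpenAIEmbeddings
from langchain.text_splitter import RecursiveCharacterTextSplitter
from langchain.vectorstores import FAISS

# Split repo files into overlapping chunks small enough to embed.
splitter = RecursiveCharacterTextSplitter(chunk_size=1000, chunk_overlap=100)
texts, metadatas = [], []
for path in Path("my_repo").rglob("*.py"):  # hypothetical repo location
    for chunk in splitter.split_text(path.read_text()):
        texts.append(chunk)
        metadatas.append({"source": str(path)})

store = FAISS.from_texts(texts, OpenAIEmbeddings(), metadatas=metadatas)

# At question time, retrieve only the relevant chunks instead of the whole
# project, keeping the prompt under the model's input context limit.
for doc in store.similarity_search("Where is the config parsed?", k=4):
    print(doc.metadata["source"], doc.page_content[:80])
```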

TechNickAI commented 1 year ago

Thanks for the input!

As a neat coincidence, I started playing around with vector stores and embeddings tonight, with a new learn command that will allow you to feed in external repos.

https://github.com/gorillamania/AICodeBot/commit/7f5ae4aa9563c1b94308d295a96b6c63216cbf5b

It's not working for large repos yet; I'm hitting OpenAI's API token limits. I'd like to explore a local embedding solution anyway.

Once I get the learn command working, I'll look into having sidekick use a similar approach for the local repo.
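
For the local embedding route, a minimal sketch assuming LangChain's HuggingFaceEmbeddings wrapper around sentence-transformers; the model name is just a common default, not what AICodeBot ships:

```python
from langchain.embeddings import HuggingFaceEmbeddings
from langchain.vectorstores import FAISS

# Embeddings are computed locally, so indexing a large repo is bounded by
# time and RAM rather than a remote API's rate or token limits.
embeddings = HuggingFaceEmbeddings(
    model_name="sentence-transformers/all-MiniLM-L6-v2"
)
store = FAISS.from_texts(["def foo(): ...", "def bar(): ..."], embeddings)
print(store.similarity_search("foo", k=1))
```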

TechNickAI commented 1 year ago

I'm also exploring using ctags to pass along as context.
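
A sketch of how that might look: ctags produces a compact symbol map (name, kind, line, file) that costs far fewer tokens than the source itself. This assumes Universal Ctags is installed; the helper below is hypothetical, not part of AICodeBot:

```python
import subprocess

def repo_symbol_map(repo_path: str) -> str:
    """Return a one-line-per-symbol cross-reference of the repo."""
    # -R recurses into the tree; -x prints a human-readable cross-reference
    # (symbol name, kind, line number, file) to stdout.
    result = subprocess.run(
        ["ctags", "-R", "-x", repo_path],
        capture_output=True, text=True, check=True,
    )
    return result.stdout

# The symbol map could be prepended to the prompt as cheap context.
print(repo_symbol_map(".")[:500])
```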

hanselke commented 1 year ago

Mmm, what are you trying to improve with ctags? Using them as a sort of summary to reduce the context window?

I'm not sure I want to lose resolution, though. For the 'fix my current code' use case, I really do not want it hallucinating and regenerating the things that currently work.

Do you have a test case where the required context is too large for the context window? I'd like to take a stab at using prompt engineering to break the problem into smaller questions that fit.

I saw you have a basic unit test setup; I'm thinking what it really needs is https://github.com/openai/human-eval

I've only just started mucking with Python recently and am not familiar with the library ecosystem yet, but if you have suggestions on how to get a proper 'code generation test' going, I think that's step 1 before taking pot shots at improving it.
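
For reference, a sketch of how human-eval plugs in; read_problems and write_jsonl come from the human-eval repo, while generate_completion is a hypothetical hook into whatever model the bot calls:

```python
from human_eval.data import read_problems, write_jsonl

def generate_completion(prompt: str) -> str:
    # Hypothetical hook: call whatever model the bot is configured with.
    raise NotImplementedError

problems = read_problems()
samples = [
    dict(task_id=task_id, completion=generate_completion(problem["prompt"]))
    for task_id, problem in problems.items()
]
write_jsonl("samples.jsonl", samples)

# Then score the generated samples with the repo's CLI:
#   evaluate_functional_correctness samples.jsonl
```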


TechNickAI commented 1 year ago

Still working on this.

In the meantime, I added commands to the sidekick prompt so you can add/drop files for context without restarting:

```
/add file    # add a file to the context
/drop file   # drop a file from the context
/files       # list files
```

TechNickAI commented 1 year ago

FYI - I got aicodebot learn working.

And then built a sidekick-agent command that accepts a --learned-repo $repo argument.

But the results are currently terrible: the text that comes back from the VectorStore is a summarized answer to the question, not the actual code/documents.

I think the next thing to try is a custom vector store tool that searches the vector store database and returns the relevant code verbatim for the LLM to use.
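
One way that could look, sketched as a LangChain agent Tool; `store` is the previously built vector store, and the tool name is made up:

```python
from langchain.agents import Tool

def search_codebase(query: str) -> str:
    """Return the matching source chunks verbatim, tagged with file paths."""
    docs = store.similarity_search(query, k=4)  # store: existing vector store
    return "\n\n".join(
        f"# {doc.metadata.get('source', '?')}\n{doc.page_content}"
        for doc in docs
    )

# Handing back raw code, not a summarized answer, lets the LLM quote and
# edit the actual source.
code_search_tool = Tool(
    name="codebase_search",
    func=search_codebase,
    description="Search the learned repo and return the relevant code verbatim.",
)
```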

hanselke commented 1 year ago

Mm, I think I see your problem.

I think it would be best resolved by breaking the code up into functions. One function per file seems like the easy prototype.

Then you would probably want a 'fill context' method that tries to add all the dependent functions once you can identify the functions that matter.

Once that limit is hit, multi-shot approaches seem like the way forward, e.g. asking the model:

"What other functions do you need information on?"

TechNickAI commented 1 year ago

Makes sense, and I'm headed in that direction.

In the meantime, the current aicodebot sidekick command is pretty good, because you can add files for context. Today I put a lot of energy into making that a better process, including the ability to add/drop files cleanly, better management of the token economy, etc.

My current setup is that I use openrouter.ai so that I can use gpt4-32k. With this amount of context, I'm able to solve most of my programming tasks very efficiently.

I'll sit down to solve a problem, think about which files will be needed to do it, load those in, and then chat with those files.
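
For anyone wanting to reproduce that setup, a sketch of pointing LangChain at OpenRouter's OpenAI-compatible endpoint; the base URL and model slug follow OpenRouter's conventions as I understand them, not AICodeBot's actual config:

```python
import os

from langchain.chat_models import ChatOpenAI

llm = ChatOpenAI(
    model_name="openai/gpt-4-32k",  # OpenRouter model slug (assumed)
    openai_api_base="https://openrouter.ai/api/v1",
    openai_api_key=os.environ["OPENROUTER_API_KEY"],
    temperature=0,
)
```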

hanselke commented 1 year ago

I think that's prohibitively expensive, though. It costs USD $2-3 per call at 32k input, and when you're actively using it you're probably calling it 5-10 times an hour.

That being said, the models that can be run on 24GB consumer GPUs probably suck compared to gpt4. Heh, only one way to find out. I need to figure out how to hook into the LLM/ChatModel class to make a custom remote call.
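
For that custom remote call, LangChain supports custom models by subclassing its base LLM class and implementing _call. A sketch, where the endpoint URL and response shape are assumptions about a self-hosted model server, not a real API:

```python
from typing import Any, List, Optional

import requests
from langchain.llms.base import LLM

class LocalServerLLM(LLM):
    """Forward prompts to a self-hosted model on a consumer GPU box."""

    endpoint: str = "http://localhost:5000/generate"  # assumed server URL

    @property
    def _llm_type(self) -> str:
        return "local-server"

    def _call(self, prompt: str, stop: Optional[List[str]] = None, **kwargs: Any) -> str:
        resp = requests.post(self.endpoint, json={"prompt": prompt, "stop": stop})
        resp.raise_for_status()
        return resp.json()["text"]  # assumed response shape

llm = LocalServerLLM()
# print(llm("Write a haiku about ctags"))  # requires the server to be running
```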
