aleeusgr / nix-things

a toolbox

developer productivity tools #56

Closed · aleeusgr closed this 6 months ago

aleeusgr commented 1 year ago
aleeusgr commented 9 months ago

https://github.com/KillianLucas/open-interpreter/

https://nixos.wiki/wiki/Python
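For context, open-interpreter is a Python package, so on NixOS it would live inside a Python environment like the ones the wiki describes. A minimal usage sketch, assuming the `interpreter.chat` entry point shown in the project's README at the time and an API key in the environment:

```python
# Minimal sketch of driving open-interpreter programmatically.
# Assumes the package is available in the Python environment
# (e.g. via a nix-shell or poetry2nix setup) and that
# OPENAI_API_KEY is set.
import interpreter

# The agent generates code and asks for confirmation in the
# terminal before executing it locally.
interpreter.chat("Print the first ten Fibonacci numbers.")
```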

aleeusgr commented 8 months ago

https://huggingface.co/blog/personal-copilot

aleeusgr commented 8 months ago

https://github.com/cursorless-dev/cursorless ❌ depends on Talon, which is closed-source software.

I could look for an open-source alternative, or check whether cursorless can be used with another engine.

aleeusgr commented 7 months ago

https://gpt4all.io/index.html https://discourse.nixos.org/t/gpt4all-nix-derivation/27744
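Until a proper Nix derivation exists, the gpt4all Python bindings are one way to try it out. A sketch assuming `pip install gpt4all`; the model filename is an example from the gpt4all catalog and is downloaded on first use:

```python
# Sketch: running a local quantized model via the gpt4all bindings.
# The model file (~2 GB here) is fetched on first run; inference
# runs on CPU by default, no GPU required.
from gpt4all import GPT4All

model = GPT4All("orca-mini-3b-gguf2-q4_0.gguf")  # example model name
print(model.generate("Name three Nix commands.", max_tokens=100))
```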

aleeusgr commented 7 months ago

https://github.com/aleeusgr/nix-things/issues/73

aleeusgr commented 6 months ago

DeepSeek-Chat and DeepSeek-Coder are regarded as the best coding models, afaik: https://huggingface.co/deepseek-ai/deepseek-coder-33b-instruct
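A quick sketch of loading it with Hugging Face transformers, following the usual chat-model pattern. Note the 33B weights are roughly 66 GB in bf16, so this realistically needs multiple GPUs or a quantized build:

```python
# Sketch: running deepseek-coder-33b-instruct with transformers.
# device_map="auto" shards the model across available GPUs.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "deepseek-ai/deepseek-coder-33b-instruct"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)

messages = [{"role": "user", "content": "Write quicksort in Python."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
out = model.generate(inputs, max_new_tokens=256)
print(tokenizer.decode(out[0][inputs.shape[1]:], skip_special_tokens=True))
```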

aleeusgr commented 6 months ago

How to choose a model: https://www.tensorops.ai/post/what-are-quantized-llms

While we would expect that reducing the precision results in a reduction in accuracy, Meta researchers have demonstrated that in some cases, not only does the quantized model demonstrate superior performance, but it also allows for reduced latency and enhanced throughput. The same trend can be observed when comparing an 8-bit 13B model with a 16-bit 7B model. In essence, when comparing models with similar inference costs, the larger quantized models can outperform their smaller, non-quantized counterparts. This advantage becomes even more pronounced with larger networks, as they exhibit a smaller quality loss when quantized.
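A back-of-the-envelope check of that "similar inference costs" comparison, counting only weight storage (activations and KV cache ignored):

```python
# Rough weight-memory comparison: quantized 13B vs full-precision 7B.
def weight_gb(params_billions: float, bits_per_weight: int) -> float:
    """Approximate GB needed to store the weights alone."""
    return params_billions * 1e9 * bits_per_weight / 8 / 1e9

print(f"13B @  8-bit: {weight_gb(13, 8):.1f} GB")   # ~13 GB
print(f" 7B @ 16-bit: {weight_gb(7, 16):.1f} GB")   # ~14 GB
# Similar footprint, but per the article the quantized 13B model
# tends to score higher, since larger networks lose less quality
# when quantized.
```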

aleeusgr commented 6 months ago

You might be able to get a better experience using EXL2/GPTQ quantization and bigger models.
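For example, transformers can load GPTQ checkpoints directly when `optimum` and `auto-gptq` are installed; the repo name below is an example of a community GPTQ build, substitute whichever one you want:

```python
# Sketch: loading a 4-bit GPTQ build of deepseek-coder-33b-instruct.
# At ~4 bits/weight the 33B model drops from ~66 GB (bf16) to
# roughly 17 GB of weights, within reach of a single 24 GB GPU.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "TheBloke/deepseek-coder-33B-instruct-GPTQ"  # example repo
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")
```

EXL2 builds go through the exllamav2 library instead, but the tradeoff is the same: more parameters at fewer bits per weight.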

aleeusgr commented 6 months ago