Open bbkgh opened 2 weeks ago
Using LLMs for analysis is certainly interesting, but I'm not sure asking the LLM to perform the analysis directly is the right solution, because LLMs are much slower than regular rules. Instead, we should explore whether the LLM can derive the necessary checks once, ahead of time, so that Ruff can then run them as part of its engine. This would also remove any need to integrate an LLM directly into Ruff itself.
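To make the ahead-of-time idea concrete, here is a minimal sketch (not an actual Ruff API — the rule spec shape, the rule id, and the function names are all hypothetical): the LLM is asked once to emit a declarative, machine-checkable rule spec, and a deterministic checker then applies that spec on every lint run without ever calling the model again.

```python
import ast

# Hypothetical output of a one-time LLM pass: a declarative check spec
# that a linter engine can execute deterministically, with no LLM call
# at lint time. The field names here are illustrative only.
DERIVED_RULE = {
    "id": "LLM001",
    "message": "mutable default argument",
    # AST node types that are mutable literals when used as defaults
    "forbidden_default_nodes": (ast.List, ast.Dict, ast.Set),
}


def run_derived_rule(source: str, rule: dict) -> list[tuple[int, str]]:
    """Apply a derived rule over Python source; returns (lineno, message) pairs."""
    findings = []
    for node in ast.walk(ast.parse(source)):
        if isinstance(node, ast.FunctionDef):
            for default in node.args.defaults:
                if isinstance(default, rule["forbidden_default_nodes"]):
                    findings.append((default.lineno, rule["message"]))
    return findings
```

Usage: `run_derived_rule("def f(x=[]):\n    pass\n", DERIVED_RULE)` flags line 1, while a function with only immutable defaults produces no findings. The point of the split is that the expensive LLM step happens once per rule definition, not once per file per lint run.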
I also think LLMs should be used very sparingly. E.g. the first rule is already covered by Ruff.
Hi, I would like to suggest adding a new class of rules to Ruff that uses local or remote LLM APIs to run rules over code files. I believe this could greatly improve code quality and result in fewer bugs. Additionally, writing rules in plain text is more convenient. For example, we could add an `llm_rules.yaml` file to the project containing something like this:
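The original example file was not included here, but a hypothetical sketch of what such an `llm_rules.yaml` might look like (every field name and rule below is illustrative, not a real Ruff configuration schema):

```yaml
# Hypothetical llm_rules.yaml -- schema and rule texts are illustrative only
model: local-llm-endpoint   # local or remote LLM API to use
rules:
  - id: LLM001
    description: "Flag mutable default arguments in function definitions."
    severity: error
  - id: LLM002
    description: "Function docstrings must mention every parameter by name."
    severity: warning
```

The appeal is that each rule is just a plain-text description, so adding a new check would not require writing any Rust or Python code.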