As it stands right now, this is a great idea, but the project needs a few guiding parameters to enable more and better contributions:
- A roadmap with the desired features, maybe classified by difficulty or urgency.
- Some kind of benchmark or success report for different projects and language conversion settings:
  - This is common in emulator repos, where compatibility can be measured by running the emulated software. If you pick some popular repos or projects that have stable releases in both the source and target languages, you can compare what works and what doesn't (see the first sketch below).
- Some decisions on how certain features will be added, for example: which code preprocessors should be supported and which ignored? Should specific prompt hints be added for certain functions or libraries to make the AI's answers less prone to errors or hallucination? (see the second sketch below)
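For illustration only, here is a rough sketch in Python of what a per-repo compatibility entry could look like. All names here (`BenchmarkResult`, `run_target_tests`, etc.) are made up and not part of the current codebase; it assumes the converted repo still ships a runnable test suite.

```python
# Hypothetical sketch: record how well a converted repo holds up by running
# its own test suite, similar to the compatibility tables emulator projects keep.
import json
import subprocess
from dataclasses import dataclass, asdict

@dataclass
class BenchmarkResult:
    repo: str               # e.g. a popular project with stable releases
    source_lang: str        # language of the original code
    target_lang: str        # language the tool converted it to
    tests_passed: int
    tests_failed: int

def run_target_tests(workdir: str, test_cmd: list[str]) -> tuple[int, int]:
    """Run the converted project's test suite and count passes/failures.

    Placeholder logic: a real report would parse the test runner's output
    instead of collapsing everything into a single pass/fail bit.
    """
    proc = subprocess.run(test_cmd, cwd=workdir, capture_output=True, text=True)
    return (1, 0) if proc.returncode == 0 else (0, 1)

def report(results: list[BenchmarkResult]) -> str:
    """Serialize results so they can be published as a compatibility report."""
    return json.dumps([asdict(r) for r in results], indent=2)
```

Published per repo and language pair, something like this would give the same at-a-glance "what works, what doesn't" view that emulator repos maintain.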
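And a similarly rough sketch of the prompt-hint idea, again with invented names and a purely illustrative hint table, assuming the tool builds its LLM prompt from the source file plus some extra context:

```python
# Hypothetical sketch: append library/function-specific hints to the
# conversion prompt to reduce predictable translation mistakes.
HINTS = {
    "numpy": "Map NumPy array operations to the target language's numeric "
             "library instead of translating loops literally.",
    "malloc": "If the target language is garbage-collected, drop manual "
              "free() calls rather than inventing equivalents.",
}

def collect_hints(source_code: str) -> list[str]:
    """Return hints whose trigger (library or function name) appears in the code."""
    return [hint for trigger, hint in HINTS.items() if trigger in source_code]

def build_prompt(source_code: str, source_lang: str, target_lang: str) -> str:
    """Assemble a conversion prompt with any applicable hints appended."""
    hints = collect_hints(source_code)
    hint_block = "\n".join(f"- {h}" for h in hints)
    return (
        f"Convert this {source_lang} code to {target_lang}.\n"
        f"{source_code}\n"
        + (f"Keep these constraints in mind:\n{hint_block}" if hints else "")
    )
```

Whether hints are keyed by library, by function, or inferred some other way is exactly the kind of decision worth writing down up front.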
These additions would funnel contributor effort into specific areas and tasks, rather than everyone pushing for their own unique needs that only loosely overlap with a more structured set of features.