Hi,
Thank you for your exceptional work; it has truly inspired me. I am eager to replicate your results, but unfortunately I lack the necessary GPT resources.
Have you tested the effectiveness of your approach on any open-source LLMs, such as LLaMA or Qwen? If so, it would be greatly appreciated if you could share the code and experimental results for those models.