An LLM reads a paper and produces a working prototype, even for papers published without code!
No LlamaIndex, no LangChain, just get back to the root of the idea.
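
For illustration, a minimal sketch of what "no framework" means here, assuming the OpenAI Python client and a plain-text dump of the paper (the file name `paper.txt` and the model name are placeholders, not fixed choices): one direct API call that asks the model to turn the paper into a runnable prototype.

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

# Hypothetical input: a plain-text export of the paper to be implemented.
paper_text = open("paper.txt", encoding="utf-8").read()

response = client.chat.completions.create(
    model="gpt-4o",  # placeholder model name
    messages=[
        {
            "role": "system",
            "content": "You are a research engineer. Turn the paper below "
                       "into a minimal, runnable prototype with comments.",
        },
        {"role": "user", "content": paper_text},
    ],
)

# The generated prototype code comes back as plain text.
print(response.choices[0].message.content)
```

That is the whole pipeline in spirit: the paper goes in, prototype code comes out, with nothing but the model API in between.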