kyegomez / tree-of-thoughts

Plug in and Play Implementation of Tree of Thoughts: Deliberate Problem Solving with Large Language Models that Elevates Model Reasoning by at least 70%
https://discord.gg/qUtxnK2NMf
Apache License 2.0
4.16k stars · 350 forks

link to the official implementation: https://github.com/ysymyth/tree-of-thought-llm #54

Closed · ysymyth closed this 1 year ago

ysymyth commented 1 year ago

hi @kyegomez thanks for implementing our work!

as pointed out in https://github.com/ysymyth/tree-of-thought-llm/issues/17, would you mind linking to our official repo in your README.md to avoid any confusion? thanks in advance!

ysymyth commented 1 year ago

Hi, why would you close it without resolving it?

Ber666 commented 1 year ago

The response is self-contradicting 😅. If you want to claim your repo is "radically different from" the Tree of Thoughts paper, you should not use the paper's title and even its figure as clickbait.

CyrusOfEden commented 1 year ago

@kyegomez imagine how @lucidrains would respond in this situation.

kyegomez commented 1 year ago

Probably a lot nicer, @CyrusOfEden, if someone had not outright commanded them to reference their repo. If they had not commanded me, we would not be here arguing and wasting time instead of improving the algorithm.

CyrusOfEden commented 1 year ago

@kyegomez I see you did everything lucidrains would do to attribute, but I imagine if he were here that he'd work with @ysymyth to find a common ground.

The winds of the world blow, and it is up to you to adjust your sail. There is a way things flow, and when we don't mind the flow we can find ourselves at odds with the world.

As a builder I appreciate your code more than the original repo. I was even excited to collaborate with you, but now I'm not so sure based on your behaviour.

I believe the better move for the long term game would be to update the README to say something akin to "inspired by Shunyu et al's work on Tree of Thoughts (original implementation here)". Keep the repo name, and hats off to you, you have the PyPI package as well.

It would be an honourable thing to do mate, you're tarnishing your own reputation right now with your wringing. Take what you have. Play the long game.

danny-avila commented 1 year ago

> @kyegomez I see you did everything lucidrains would do to attribute, but I imagine if he were here that he'd work with @ysymyth to find a common ground.
>
> The winds of the world blow, and it is up to you to adjust your sail. There is a way things flow, and when we don't mind the flow we can find ourselves at odds with the world.
>
> As a builder I appreciate your code more than the original repo. I was even excited to collaborate with you, but now I'm not so sure based on your behaviour.
>
> I believe the better move for the long term game would be to update the README to say something akin to "inspired by Shunyu et al's work on Tree of Thoughts (original implementation here)". Keep the repo name, and hats off to you, you have the PyPI package as well.
>
> It would be an honourable thing to do mate, you're tarnishing your own reputation right now with your wringing. Take what you have. Play the long game.

Thank you for your level-headed comment. I don't mean to resurrect any conflict, but from a neutral developer's standpoint, do you or anyone else have a TL;DR on how the two implementations differ?

I do agree that this repo is a little clearer to adapt and read. I will have to test both and compare results.

CyrusOfEden commented 1 year ago

@danny-avila

@ysymyth's repo provides 3 specific implementations of tree of thoughts for 3 different games. After 25 minutes it was unclear to me how I would use it in my own projects. It is the code for one of the first of a handful of recent papers discovering that the generation of the next token can be guided as a sort of graph search through the possible completions.

Like if you wanted to get an LLM to solve Sudoku: instead of rolling the dice with a zero-shot prompt and hoping it works, or even adding "let's think step by step", you use the LLM to generate the next number repeatedly, backtracking whenever the partial solution is incorrect.
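The backtracking idea can be sketched generically. In this hypothetical sketch, `propose_next` stands in for an LLM call that suggests extensions of a partial solution; the validity and completion checks are toy stand-ins, not anything from either repo:

```python
def solve(partial, propose_next, is_valid, is_complete):
    """Depth-first search over partial solutions, backtracking on dead ends.

    propose_next(partial) -> candidate next steps (in a real system, an LLM call)
    is_valid(partial)     -> prune obviously wrong partial solutions
    is_complete(partial)  -> True when partial is a full solution
    """
    if is_complete(partial):
        return partial
    for step in propose_next(partial):
        candidate = partial + [step]
        if not is_valid(candidate):
            continue  # abandon this branch
        result = solve(candidate, propose_next, is_valid, is_complete)
        if result is not None:
            return result
    return None  # no extension worked; caller backtracks


# Toy stand-in for an LLM: pick distinct digits that sum to exactly 10.
found = solve(
    partial=[],
    propose_next=lambda p: range(1, 10),
    is_valid=lambda p: sum(p) <= 10 and len(set(p)) == len(p),
    is_complete=lambda p: sum(p) == 10,
)
```

The key property is that an incorrect partial solution is rejected early, before the model wastes tokens extending it.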

The papers are finding that graph search algorithms like Breadth First Search, Depth First Search, and others are a useful "layer of reasoning" to apply on top of LLM generation.

@kyegomez's repo implements BFS and DFS tree of thoughts in a manner where you can upgrade your generative LLM apps to use it. It was quickly apparent to me how I would use it in my projects.
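A minimal sketch of the BFS variant described above, assuming hypothetical `generate` and `score` callables in place of real LLM calls (neither name is taken from either repo):

```python
import heapq

def tot_bfs(root, generate, score, beam_width=3, max_depth=3):
    """Breadth-first tree-of-thoughts: at each depth, expand every state in
    the frontier, score the children, and keep only the top-k (the beam)."""
    frontier = [root]
    for _ in range(max_depth):
        children = [c for state in frontier for c in generate(state)]
        if not children:
            break
        # keep the beam_width highest-scoring thoughts
        frontier = heapq.nlargest(beam_width, children, key=score)
    return max(frontier, key=score)


# Toy example: "thoughts" are numbers, each step adds 1, 2, or 3,
# and the score prefers values close to a target of 10.
best = tot_bfs(
    root=0,
    generate=lambda s: [s + 1, s + 2, s + 3],
    score=lambda s: -abs(10 - s),
    beam_width=2,
    max_depth=4,
)
```

In a real pipeline, `generate` would prompt the LLM to propose candidate next thoughts and `score` would prompt it (or a heuristic) to rate each partial chain; DFS swaps the frontier loop for a stack with backtracking.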

ddxgz commented 1 year ago

It's a lot easier to simply add a link to the paper's github repo, instead of arguing.

Collin-Budrick commented 1 year ago

How hard is it to mention the original inspiration for your repository? Even if it's modified (which is what a fork of any project is). You look better to the community when you're transparent about crediting your work.

I really can't believe this had to be stated.

Your work had the potential to create a collaborative community with the original authors and other developers; but that bridge fell and now other developers who stumble across this will have second thoughts about collaborating with you.

Just seems like a disappointment that it turned out this way. There's still time to make things right by reaching out to him and setting things straight, but I have a feeling that's wishful thinking.

lucidrains commented 1 year ago

ah hey all, just noticed i was tagged on this

@kyegomez i think what you are doing is valuable work. however, you should empathize with the authors here, as it takes a lot of effort to get to the point of contributing even a single paper. the authors are also staking their future careers on each paper they publish. for their idea to be fairly evaluated in the academic framework, reviewers cannot be misled into thinking this is the official repository. therefore, i encourage you to add a single link to the official repository; that is all that you need to do

CyrusOfEden commented 1 year ago

kyegomez commented 9 months ago

@lucidrains I have referenced their implementations in the readme.