kyegomez / tree-of-thoughts

Plug-and-Play Implementation of Tree of Thoughts: Deliberate Problem Solving with Large Language Models that Elevates Model Reasoning by at least 70%
https://discord.gg/qUtxnK2NMf
Apache License 2.0

A* search error #67

Closed luca-git closed 1 year ago

luca-git commented 1 year ago

Not sure which parameters the different search algorithms take, so I had to go by trial and error. The following works, but it inevitably runs into this error at some point (see also the signature-inspection aside after the snippet):

import os
from tree_of_thoughts.openaiModels import OpenAILanguageModel
from tree_of_thoughts.treeofthoughts import TreeofThoughts, OptimizedTreeofThoughts
from tree_of_thoughts import MonteCarloTreeofThoughts, TreeofThoughtsBFS, TreeofThoughtsDFS, TreeofThoughtsBEST, TreeofThoughtsASearch
import openai
from dotenv import load_dotenv
load_dotenv()

api_key = os.getenv("OPENAI_API_KEY")
api_model = "gpt-3.5-turbo"

model = OpenAILanguageModel(api_key=api_key, api_model=api_model)

#tree_of_thoughts= TreeofThoughts(model) #search_algorithm)
# Initialize the MonteCarloTreeofThoughts class with the model
#tree_of_thoughts = MonteCarloTreeofThoughts(model)
#tree_of_thoughts = TreeofThoughtsDFS(model)
# tree_of_thoughts = TreeofThoughtsBEST(model)
# tree_of_thoughts = TreeofThoughtsBFS(model)
tree_of_thoughts = TreeofThoughtsASearch(model)
# Note: to reproduce the same results as the Tree of Thoughts paper, if not better,
# craft a 1-shot chain-of-thought prompt for your task below

initial_prompt = """Envision a group of three experts working in unison to tackle a question by employing a tree of thoughts strategy. 
Each expert will thoroughly explain their line of thinking at every step, while also considering the insights provided by their peers. 
They will openly recognize any mistakes and build upon the group's shared understanding. They will focus on logic and take care to avoid 
ungrounded statements or conclusions. Think systemically, strategically and creatively, use logic, explore extreme scenarios.
This iterative process will continue until a definitive solution is reached. Structure the entire response as a markdown table. 
The question is: 

    What are the strategic implications of generative AI agents being able to consistently provide a correct (meaningful, relevant) 
    answer more than 50% of the time? Use logic, think at scale"""

# Solve a problem with the TreeofThoughts

num_thoughts = 3
max_steps = 3
max_states = 4
pruning_threshold = 0.5

solution = tree_of_thoughts.solve(
    initial_prompt=initial_prompt,
#    num_thoughts=num_thoughts, 
    max_steps=max_steps, 
  #  max_states=max_states, 
    #value_threshold=pruning_threshold,
    pruning_threshold=pruning_threshold,
    # sleep_time=sleep_time
)
print(f"solution: {solution}")
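(Aside: since the solve signatures differ between the search classes, one way to see which keyword arguments a given class actually accepts, instead of guessing, is Python's standard inspect module. This is a generic snippet, not part of the library's API:)

import inspect
from tree_of_thoughts import TreeofThoughtsASearch

# Print the parameter list of the solve method so the call above can be
# matched to the real signature instead of found by trial and error.
print(inspect.signature(TreeofThoughtsASearch.solve))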

error:


  File ~\...\treeofthoughts.py:267 in reconstruct_path
    path = self.reconstruct_path(came_from, current_state)

TypeError: reconstruct_path() missing 1 required positional argument: 'initial_prompt'
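The traceback suggests the call at treeofthoughts.py:267 passes only came_from and current_state, while reconstruct_path also requires initial_prompt. A minimal sketch of what the fix likely looks like (the exact signature is an assumption inferred from the error message, not taken from the library source):

# Hypothetical: if the method is declared as
#     def reconstruct_path(self, came_from, current_state, initial_prompt):
# then the failing call needs the prompt forwarded as well:
path = self.reconstruct_path(came_from, current_state, initial_prompt)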
clementzhao commented 1 year ago

Actually there are still other issues. For example, with TreeofThoughtsDFS it reports an error for the num_thoughts parameter, because the code uses self.num_thoughts instead of num_thoughts. After you fix that, it reports a max() error. The experience is quite bad.
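To illustrate the kind of mismatch described above, here is a hypothetical reconstruction (not the actual library code) of how a solve method that reads self.num_thoughts can fail even though num_thoughts is passed in, if the constructor never stores that attribute:

# Hypothetical illustration of the reported bug, not the library source.
class DFSExample:
    def __init__(self, model):
        self.model = model
        # num_thoughts is never stored on self here ...

    def solve(self, initial_prompt, num_thoughts=3, max_steps=3):
        # ... so referring to self.num_thoughts instead of the num_thoughts
        # argument fails at runtime (e.g. with an AttributeError).
        for _ in range(self.num_thoughts):
            pass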