jesse-lane-ai opened this issue 3 months ago
Is there a workflow to just return the graph?
Also, what testing has been done on inputting raw source HTML to the source parameter?
It's not apparent how to request a series of prompts on the knowledge graph, e.g. if I wanted to ask a series of questions. I don't want to make multiple API calls on the same website. Maybe I'm not understanding something.
How do I save the knowledge graph and then iterate prompt requests on it without actually calling the website again?
hi @jesse-lane-ai,
one way to achieve this is to use the cache attribute to store the contents of a website: you fetch and parse the site once, then call only the language model on it multiple times, once per search question you might have, instead of re-running the whole pipeline each time (this is handled automatically under the hood).
see more on the cache in the graph config's additional parameters section in the documentation.
> Also what testing has been done on inputting raw source html to the source parameter?
that should always be possible in the fetch node; any graph using it should work just fine.
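conceptually, a fetch node can distinguish raw HTML from a URL in its source parameter like this (a minimal sketch with made-up names, not the library's actual implementation):

```python
# Hypothetical sketch of a fetch node accepting either a URL or raw HTML
# in its "source" parameter. Names are illustrative, not the library API.

def fetch(source: str) -> str:
    """Return page HTML, whether `source` is raw HTML or a URL."""
    if source.lstrip().startswith("<"):   # looks like raw HTML: use it as-is
        return source
    # otherwise treat it as a URL (the network call is stubbed out here)
    return f"<html>downloaded from {source}</html>"

fetch("<html><body>hi</body></html>")  # passed through untouched, no network call
fetch("https://example.com")           # would be downloaded in a real node
```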
> Is there a workflow to just return the graph ?
you mean returning the vector store / KG?
not yet, but we already had some requests for it, @VinciGit00