Closed AnkitaKota closed 9 months ago
π€
Hello,
Thank you for reaching out with your issue. From the error message you provided, it seems like you're trying to save an instance of SemanticSimilarityExampleSelector
, which is currently not supported.
The SemanticSimilarityExampleSelector
is designed to select examples based on semantic similarity at runtime. It doesn't have built-in support for saving or loading its state because it's typically used in a stateless manner. When you create an instance of SemanticSimilarityExampleSelector
, it's configured with a specific set of examples and parameters. These are used to select examples each time you call select_examples()
, but the selector itself doesn't change or store any additional state that would need to be saved or loaded.
If you need to reuse the same SemanticSimilarityExampleSelector
configuration across multiple sessions or scripts, I would recommend encapsulating the configuration in a function or a separate script that you can call to create a new instance of the selector each time you need it.
Here's an example of how you might do this:
from langchain.prompts.example_selector.semantic_similarity import SemanticSimilarityExampleSelector
from langchain.vectorstore import VectorStore
from langchain.embeddings import Embeddings
def create_selector():
# Replace these with your actual examples and embeddings
examples = [...]
embeddings = Embeddings(...)
# Create and return a new selector
return SemanticSimilarityExampleSelector.from_examples(
examples,
embeddings,
VectorStore,
k=4,
input_keys=None
)
Then, in your main script or session, you can call create_selector()
to get a new instance of the selector with the same configuration each time.
I hope this helps! If you have any other questions or need further clarification, please let me know.
This response is meant to be useful and save you time. It is not meant to be a precise solution, but rather a starting point for your own research.
Help me be more useful! Please leave a π if this is helpful and π if it is irrelevant.
If you want to continue the conversation, start your reply with @dosu-bot.
Hello,
I'm not trying to pass the instance directly, but instead what I'm trying to do is prompt = FewShotPromptTemplate( example_selector=example_selector, example_prompt=example_prompt, suffix="Question: {input}", input_variables=["input"] ) and then passing this prompt to db_chain which is indeed passed to agent.
π€
Hello,
Thank you for reaching out with your issue. The error you're encountering, "ValueError: Saving an example selector is not currently supported", is raised when you try to convert a FewShotPromptWithTemplates
object to a dictionary using the dict()
method and the FewShotPromptWithTemplates
object has an example_selector
attribute. This is because the dict()
method in the FewShotPromptWithTemplates
class checks if the example_selector
attribute is present and raises a ValueError if it is.
Here is the relevant code from the FewShotPromptWithTemplates
class:
def dict(self, **kwargs: Any) -> Dict:
"""Return a dictionary of the prompt."""
if self.example_selector:
raise ValueError("Saving an example selector is not currently supported")
return super().dict(**kwargs)
To resolve this issue, you can avoid calling the dict()
method on a FewShotPromptWithTemplates
object that has an example_selector
attribute. If you need to serialize the FewShotPromptWithTemplates
object, you might need to implement a custom serialization method that can handle the example_selector
attribute.
You can find more information about this in the LangChain source code, specifically in the FewShotPromptWithTemplates
class in the few_shot.py and few_shot_with_templates.py files.
I hope this helps! If you have any other questions or need further clarification, please don't hesitate to ask.
Best, Dosu
This response is meant to be useful and save you time. It is not meant to be a precise solution, but rather a starting point for your own research.
Help me be more useful! Please leave a π if this is helpful and π if it is irrelevant.
If you want to continue the conversation, start your reply with @dosu-bot.
@dosu-bot I'm not calling the dict method explictly. This is how I have it prompt = FewShotPromptTemplate( example_selector=example_selector, example_prompt=example_prompt, suffix="Question: {input}", input_variables=["input"] )
π€
Hi there,
The error you're encountering is due to the dict()
method in the FewShotPromptWithTemplates
class. This method is designed to return a dictionary representation of the prompt. However, if the example_selector
attribute is set, it raises a ValueError
because saving an example selector is not currently supported.
Here's the implementation of the dict()
method in the FewShotPromptWithTemplates
class:
def dict(self, **kwargs: Any) -> Dict:
"""Return a dictionary of the prompt."""
if self.example_selector:
raise ValueError("Saving an example selector is not currently supported")
return super().dict(**kwargs)
Even though you're not explicitly calling the dict()
method on the FewShotPromptTemplate
object, it might be getting called internally somewhere in your code or in the LangChain library itself.
To resolve this issue, you have a couple of options:
Avoid calling the dict()
method on a FewShotPromptWithTemplates
object that has an example_selector
attribute. This means you should not try to convert such an object to a dictionary.
If you need to convert a FewShotPromptWithTemplates
object to a dictionary, you should ensure that the example_selector
attribute is None. You can do this by not setting the example_selector
attribute when creating the FewShotPromptWithTemplates
object or by setting the example_selector
attribute to None before calling the dict()
method.
Here's an example of how you can create a FewShotPromptWithTemplates
object without an example_selector
attribute:
few_shot_prompt = FewShotPromptWithTemplates(
examples=[...], # provide your examples here
example_prompt=..., # provide your example_prompt here
suffix=..., # provide your suffix here
input_variables=..., # provide your input_variables here
# do not provide an example_selector
)
And here's an example of how you can set the example_selector
attribute to None before calling the dict()
method:
few_shot_prompt.example_selector = None
few_shot_prompt_dict = few_shot_prompt.dict()
I hope this helps! Let me know if you have any other questions.
This response is meant to be useful and save you time. It is not meant to be a precise solution, but rather a starting point for your own research.
Help me be more useful! Please leave a π if this is helpful and π if it is irrelevant.
If you want to continue the conversation, start your reply with @dosu-bot.
@dosu-bot without passing the example selector how can I choose the few short prompt templates like MaxMarginalRelevanceExampleSelector or SemanticSimilarityExampleSelector as this is crucial part which will help to selecting the most closest semantic example example_selector = MaxMarginalRelevanceExampleSelector.from_examples(
examples,
# This is the embedding class used to produce embeddings which are used to measure semantic similarity.
OpenAIEmbeddings(deployment = "am-trial"),
# This is the VectorStore class that is used to store the embeddings and do a similarity search over.
Chroma,
# This is the number of examples to produce.
k=1
)
π€
Hello,
Thank you for reaching out and providing detailed information about your issue. The error you're encountering is due to the fact that the dict()
method is indeed being called internally by the LangChain library when you're trying to use the MaxMarginalRelevanceExampleSelector
or SemanticSimilarityExampleSelector
with the FewShotPromptTemplate
.
The dict()
method is used to serialize the object to a dictionary, and currently, the LangChain framework does not support serializing example_selector
objects. This is why you're seeing the ValueError: Saving an example selector is not currently supported
error message.
However, you can still use MaxMarginalRelevanceExampleSelector
or SemanticSimilarityExampleSelector
with the FewShotPromptTemplate
without calling the dict()
method. Here's an example of how you can do it:
from langchain.prompts.example_selector import MaxMarginalRelevanceExampleSelector
from langchain.vectorstores import FAISS
from langchain.embeddings import OpenAIEmbeddings
from langchain.prompts import FewShotPromptTemplate, PromptTemplate
example_prompt = PromptTemplate(
input_variables=["input", "output"],
template="Input: {input}\nOutput: {output}",
)
examples = [
{"input": "happy", "output": "sad"},
{"input": "tall", "output": "short"},
{"input": "energetic", "output": "lethargic"},
{"input": "sunny", "output": "gloomy"},
{"input": "windy", "output": "calm"},
]
example_selector = MaxMarginalRelevanceExampleSelector.from_examples(
examples,
OpenAIEmbeddings(),
FAISS,
k=2,
)
mmr_prompt = FewShotPromptTemplate(
example_selector=example_selector,
example_prompt=example_prompt,
prefix="Give the antonym of every input",
suffix="Input: {adjective}\nOutput:",
input_variables=["adjective"],
)
print(mmr_prompt.format(adjective="worried"))
In this example, the MaxMarginalRelevanceExampleSelector
is used to dynamically select examples based on the input. The FewShotPromptWithTemplates
is used to format the selected examples into a chat conversation. The dict()
method is not called, so no ValueError
is raised.
I hope this helps! If you have any other questions or issues, feel free to ask.
Best, Dosu
This response is meant to be useful and save you time. It is not meant to be a precise solution, but rather a starting point for your own research.
Help me be more useful! Please leave a π if this is helpful and π if it is irrelevant.
If you want to continue the conversation, start your reply with @dosu-bot.
@dosu-bot I have it exactly same , do you see anything inbelow code that needs a change ?
π€
Hello,
The error you're encountering, ValueError: Saving an example selector is not currently supported
, is raised when the dict()
method is called on a FewShotPromptWithTemplates
object that has an example_selector
attribute. This is because the dict()
method in this class does not currently support serialization of the example_selector
attribute.
Here is the relevant code from the FewShotPromptWithTemplates
class:
def dict(self, **kwargs: Any) -> Dict:
"""Return a dictionary of the prompt."""
if self.example_selector:
raise ValueError("Saving an example selector is not currently supported")
return super().dict(**kwargs)
In your case, it seems like the dict()
method is being called internally by the LangChain library. This could be happening when you're trying to print the mmr_prompt
object. The print()
function in Python calls the __str__()
method of the object, which in turn might be calling the dict()
method.
To resolve this issue, you can avoid printing the mmr_prompt
object directly. If you need to inspect the object, you can print its attributes individually. If you need to serialize the object, you might need to implement your own serialization method that can handle the example_selector
attribute.
As for the MaxMarginalRelevanceExampleSelector
and SemanticSimilarityExampleSelector
classes, they are used for selecting examples based on different criteria. The SemanticSimilarityExampleSelector
selects examples based on semantic similarity, while the MaxMarginalRelevanceExampleSelector
selects examples based on Max Marginal Relevance, which was shown to improve performance in a research paper.
I hope this helps! Let me know if you have any other questions.
This response is meant to be useful and save you time. It is not meant to be a precise solution, but rather a starting point for your own research.
Help me be more useful! Please leave a π if this is helpful and π if it is irrelevant.
If you want to continue the conversation, start your reply with @dosu-bot.
@dosu-bot I don't think the suggestions provided is helping. I'm not printing anything. I have provided my code above. Do you see any issues with it?
π€
Hello,
Thank you for reaching out and providing a detailed description of your issue. The error you're encountering, ValueError: Saving an example selector is not currently supported
, is raised when the dict()
method is called on a FewShotPromptTemplate
object that has an example_selector
attribute. This is because the FewShotPromptTemplate
class does not currently support serializing example_selector
objects to a dictionary.
Here is the relevant code from the FewShotPromptTemplate
class:
def dict(self, **kwargs: Any) -> Dict:
"""Return a dictionary of the prompt."""
if self.example_selector:
raise ValueError("Saving an example selector is not currently supported")
return super().dict(**kwargs)
Even though you're not explicitly calling the dict()
method, it's possible that it's being called implicitly somewhere in your code or in the LangChain framework. For example, if you're trying to serialize the FewShotPromptTemplate
object (e.g., to JSON), the dict()
method would be called.
To resolve this issue, you could either avoid serializing the FewShotPromptTemplate
object, or modify the dict()
method to handle example_selector
objects in a way that's appropriate for your use case. However, please note that modifying the dict()
method might have other implications on the LangChain framework, so it should be done with caution.
As for the MaxMarginalRelevanceExampleSelector
and SemanticSimilarityExampleSelector
classes, they are used for selecting examples based on semantic similarity and max marginal relevance, respectively. You can find more details about these classes in the LangChain framework documentation or directly in the source code.
If you're following the steps provided in the LangChain documentation and still encountering the error, it's likely that the dict()
method is being called implicitly somewhere in your code or in the LangChain framework. If you're unable to identify where this is happening, could you please provide more details about your use case or the context in which you're using the FewShotPromptTemplate
? This could help in identifying the root cause of the issue.
I hope this helps! Let me know if you have any other questions or need further clarification.
Best, Dosu
This response is meant to be useful and save you time. It is not meant to be a precise solution, but rather a starting point for your own research.
Help me be more useful! Please leave a π if this is helpful and π if it is irrelevant.
If you want to continue the conversation, start your reply with @dosu-bot.
@dosu-bot Here is the full code. I have replaced it sample examples, but what I want is that it should select the closest example and then pass it to prompt. The abive suggestions is not helping here much
π€
Hello,
Thank you for reaching out and providing a detailed description of your issue. The error you're encountering, ValueError: Saving an example selector is not currently supported
, is raised when the dict()
method is called on an instance of FewShotPromptWithTemplates
that has an example_selector
attribute. This happens because the dict()
method in FewShotPromptWithTemplates
class checks if self.example_selector
is not None and raises a ValueError if it is.
Here is the relevant code from the FewShotPromptWithTemplates
class:
def dict(self, **kwargs: Any) -> Dict:
"""Return a dictionary of the prompt."""
if self.example_selector:
raise ValueError("Saving an example selector is not currently supported")
return super().dict(**kwargs)
Even though you're not explicitly calling the dict()
method in your code, it might be getting called implicitly somewhere else in your code or in the LangChain framework. This could happen, for example, when trying to serialize the FewShotPromptWithTemplates
instance to JSON, or when trying to print or log the instance.
The FewShotPromptTemplate
class handles the example_selector
attribute during its initialization by first checking if either examples
or example_selector
is provided, but not both. If both are provided or neither is provided, it raises a ValueError.
@root_validator(pre=True)
def check_examples_and_selector(cls, values: Dict) -> Dict:
"""Check that one and only one of examples/example_selector are provided."""
examples = values.get("examples", None)
example_selector = values.get("example_selector", None)
if examples and example_selector:
raise ValueError(
"Only one of 'examples' and 'example_selector' should be provided"
)
if examples is None and example_selector is None:
raise ValueError(
"One of 'examples' and 'example_selector' should be provided"
)
return values
Then, when the format
method is called, it uses the _get_examples
method to get the examples to use for formatting the prompt. If example_selector
is not None, it calls the select_examples
method of the example_selector
with the provided keyword arguments.
def _get_examples(self, **kwargs: Any) -> List[dict]:
"""Get the examples to use for formatting the prompt.
Args:
**kwargs: Keyword arguments to be passed to the example selector.
Returns:
List of examples.
"""
if self.examples is not None:
return self.examples
elif self.example_selector is not None:
return self.example_selector.select_examples(kwargs)
else:
raise ValueError(
"One of 'examples' and 'example_selector' should be provided"
)
To resolve your issue, you might want to check if there's any part of your code or any library that you're using that could be trying to serialize the FewShotPromptWithTemplates
instance to JSON, or trying to print or log the instance. If you find such a part, you might want to modify it to avoid calling the dict()
method on the FewShotPromptWithTemplates
instance.
I hope this helps! If you have any further questions or if the issue persists, please feel free to ask.
Best, Dosu
This response is meant to be useful and save you time. It is not meant to be a precise solution, but rather a starting point for your own research.
Help me be more useful! Please leave a π if this is helpful and π if it is irrelevant.
If you want to continue the conversation, start your reply with @dosu-bot.
@dosu-bot I understand the logic of fewshort template. As mentioned in earlier messages, I'm nethier callign dict nor printing or serializing it it. I have provided the whole code above to understand if there is any issue. I don't see any issue in the code but it is still failing. Error Message
ValueError Traceback (most recent call last) Cell In[171], line 1 ----> 1 agent.run("worried ")
File ~/.local/lib/python3.9/site-packages/langchain/chains/base.py:440, in run(self, callbacks, tags, metadata, *args, *kwargs)
431 raise ValueError(
432 f"run
not supported when there is not exactly "
433 f"one output key. Got {self.output_keys}."
434 )
435 return self.output_keys[0]
437 def run(
438 self,
439 args: Any,
--> 440 callbacks: Callbacks = None,
441 tags: Optional[List[str]] = None,
442 metadata: Optional[Dict[str, Any]] = None,
443 **kwargs: Any,
444 ) -> Any:
445 """Convenience method for executing chain.
446
447 The main difference between this method and Chain.__call__
is that this
(...)
479 # -> "The temperature in Boise is..."
480 """
481 # Run at start to make sure this is possible/defined
File ~/.local/lib/python3.9/site-packages/langchain/chains/base.py:243, in call(self, inputs, return_only_outputs, callbacks, tags, metadata, include_run_info)
232 def call(
233 self,
234 inputs: Union[Dict[str, Any], Any],
(...)
241 include_run_info: bool = False,
242 ) -> Dict[str, Any]:
--> 243 """Execute the chain.
244
245 Args:
246 inputs: Dictionary of inputs, or single input if chain expects
247 only one param. Should contain all inputs specified in
248 Chain.input_keys
except for inputs that will be set by the chain's
249 memory.
250 return_only_outputs: Whether to return only outputs in the
251 response. If True, only new keys generated by this chain will be
252 returned. If False, both input keys and new keys generated by this
253 chain will be returned. Defaults to False.
254 callbacks: Callbacks to use for this chain run. These will be called in
255 addition to callbacks passed to the chain during construction, but only
256 these runtime callbacks will propagate to calls to other objects.
257 tags: List of string tags to pass to all callbacks. These will be passed in
258 addition to tags passed to the chain during construction, but only
259 these runtime tags will propagate to calls to other objects.
260 metadata: Optional metadata associated with the chain. Defaults to None
261 include_run_info: Whether to include run info in the response. Defaults
262 to False.
263
264 Returns:
265 A dict of named outputs. Should contain all outputs specified in
266 Chain.output_keys
.
267 """
268 inputs = self.prep_inputs(inputs)
269 callback_manager = CallbackManager.configure(
270 callbacks,
271 self.callbacks,
(...)
276 self.metadata,
277 )
File ~/.local/lib/python3.9/site-packages/langchain/chains/base.py:237, in call(self, inputs, return_only_outputs, callbacks, tags, metadata, include_run_info)
214 """Asynchronously execute the chain.
215
216 This is a private method that is not user-facing. It is only called within
(...)
228 Chain.output_keys
.
229 """
230 raise NotImplementedError("Async call not supported for this chain type.")
232 def call(
233 self,
234 inputs: Union[Dict[str, Any], Any],
235 return_only_outputs: bool = False,
236 callbacks: Callbacks = None,
--> 237 *,
238 tags: Optional[List[str]] = None,
239 metadata: Optional[Dict[str, Any]] = None,
240 run_name: Optional[str] = None,
241 include_run_info: bool = False,
242 ) -> Dict[str, Any]:
243 """Execute the chain.
244
245 Args:
(...)
266 Chain.output_keys
.
267 """
268 inputs = self.prep_inputs(inputs)
File ~/.local/lib/python3.9/site-packages/langchain/agents/agent.py:987, in _call(self, inputs, run_manager) 985 tool = name_to_tool_map[agent_action.tool] 986 return_direct = tool.return_direct --> 987 color = color_mapping[agent_action.tool] 988 tool_run_kwargs = self.agent.tool_run_logging_kwargs() 989 if return_direct:
File ~/.local/lib/python3.9/site-packages/langchain/agents/agent.py:850, in _take_next_step(self, name_to_tool_map, color_mapping, inputs, intermediate_steps, run_manager) 848 text = str(e) 849 if isinstance(self.handle_parsing_errors, bool): --> 850 if e.send_to_llm: 851 observation = str(e.observation) 852 text = str(e.llm_output)
File ~/.local/lib/python3.9/site-packages/langchain/tools/base.py:320, in run(self, tool_input, verbose, start_color, color, callbacks, tags, metadata, kwargs) 317 # TODO: maybe also pass through run_manager is _run supports kwargs 318 new_arg_supported = signature(self._run).parameters.get("run_manager") 319 run_manager = callback_manager.on_tool_start( --> 320 {"name": self.name, "description": self.description}, 321 tool_input if isinstance(tool_input, str) else str(tool_input), 322 color=start_color, 323 kwargs, 324 ) 325 try: 326 tool_args, tool_kwargs = self._to_args_and_kwargs(parsed_input)
File ~/.local/lib/python3.9/site-packages/langchain/tools/base.py:292, in run(self, tool_input, verbose, start_color, color, callbacks, tags, metadata, *kwargs) 287 else: 288 return (), tool_input 290 def run( 291 self, --> 292 tool_input: Union[str, Dict], 293 verbose: Optional[bool] = None, 294 start_color: Optional[str] = "green", 295 color: Optional[str] = "green", 296 callbacks: Callbacks = None, 297 , 298 tags: Optional[List[str]] = None, 299 metadata: Optional[Dict[str, Any]] = None, 300 **kwargs: Any, 301 ) -> Any: 302 """Run the tool.""" 303 parsed_input = self._parse_input(tool_input)
File ~/.local/lib/python3.9/site-packages/langchain/tools/base.py:442, in _run(self, run_manager, *args, **kwargs) 441 class Tool(BaseTool): --> 442 """Tool that takes in function or coroutine directly.""" 444 description: str = "" 445 func: Optional[Callable[..., str]]
File /opt/conda/lib/python3.9/inspect.py:3113, in signature(obj, follow_wrapped) 3111 def signature(obj, *, follow_wrapped=True): 3112 """Get a signature object for the passed callable.""" -> 3113 return Signature.from_callable(obj, follow_wrapped=follow_wrapped)
File /opt/conda/lib/python3.9/inspect.py:2862, in Signature.from_callable(cls, obj, follow_wrapped) 2859 @classmethod 2860 def from_callable(cls, obj, *, follow_wrapped=True): 2861 """Constructs Signature for the given callable object.""" -> 2862 return _signature_from_callable(obj, sigcls=cls, 2863 follow_wrapper_chains=follow_wrapped)
File /opt/conda/lib/python3.9/inspect.py:2328, in _signature_from_callable(obj, follow_wrapper_chains, skip_bound_arg, sigcls) 2322 if isfunction(obj) or _signature_is_functionlike(obj): 2323 # If it's a pure Python function, or an object that is duck type 2324 # of a Python function (Cython functions, for instance), then: 2325 return _signature_from_function(sigcls, obj, 2326 skip_bound_arg=skip_bound_arg) -> 2328 if _signature_is_builtin(obj): 2329 return _signature_from_builtin(sigcls, obj, 2330 skip_bound_arg=skip_bound_arg) 2332 if isinstance(obj, functools.partial):
File /opt/conda/lib/python3.9/inspect.py:1875, in _signature_is_builtin(obj)
1866 def _signature_is_builtin(obj):
1867 """Private helper to test if obj
is a callable that might
1868 support Argument Clinic's __text_signature__ protocol.
1869 """
1870 return (isbuiltin(obj) or
1871 ismethoddescriptor(obj) or
1872 isinstance(obj, _NonUserDefinedCallables) or
1873 # Can't test 'isinstance(type)' here, as it would
1874 # also be True for regular python classes
-> 1875 obj in (type, object))
File /opt/conda/lib/python3.9/site-packages/pydantic/main.py:911, in pydantic.main.BaseModel.eq()
File ~/.local/lib/python3.9/site-packages/langchain/chains/base.py:555, in dict(self, **kwargs)
515 """Convenience method for executing chain.
516
517 The main difference between this method and Chain.__call__
is that this
(...)
550 # -> "The temperature in Boise is..."
551 """
552 if len(self.output_keys) != 1:
553 raise ValueError(
554 f"run
not supported when there is not exactly "
--> 555 f"one output key. Got {self.output_keys}."
556 )
557 elif args and not kwargs:
558 if len(args) != 1:
File /opt/conda/lib/python3.9/site-packages/pydantic/main.py:449, in pydantic.main.BaseModel.dict()
File /opt/conda/lib/python3.9/site-packages/pydantic/main.py:868, in _iter()
File /opt/conda/lib/python3.9/site-packages/pydantic/main.py:743, in pydantic.main.BaseModel._get_value()
File ~/.local/lib/python3.9/site-packages/langchain/chains/base.py:555, in dict(self, **kwargs)
515 """Convenience method for executing chain.
516
517 The main difference between this method and Chain.__call__
is that this
(...)
550 # -> "The temperature in Boise is..."
551 """
552 if len(self.output_keys) != 1:
553 raise ValueError(
554 f"run
not supported when there is not exactly "
--> 555 f"one output key. Got {self.output_keys}."
556 )
557 elif args and not kwargs:
558 if len(args) != 1:
File /opt/conda/lib/python3.9/site-packages/pydantic/main.py:449, in pydantic.main.BaseModel.dict()
File /opt/conda/lib/python3.9/site-packages/pydantic/main.py:868, in _iter()
File /opt/conda/lib/python3.9/site-packages/pydantic/main.py:743, in pydantic.main.BaseModel._get_value()
File ~/.local/lib/python3.9/site-packages/langchain/prompts/few_shot.py:133, in dict(self, kwargs) 121 def format(self, kwargs: Any) -> str: 122 """Format the prompt with the inputs. 123 124 Args: 125 kwargs: Any arguments to be passed to the prompt template. 126 127 Returns: 128 A formatted string. 129 130 Example: 131 132 .. code-block:: python --> 133 134 prompt.format(variable1="foo") 135 """ 136 kwargs = self._merge_partial_and_user_variables(kwargs) 137 # Get the examples to use.
ValueError: Saving an example selector is not currently supported β
π€
Hello,
Thank you for reaching out and providing a detailed description of your issue. The error you're encountering, ValueError: Saving an example selector is not currently supported
, is raised when you try to convert an instance of the FewShotPromptWithTemplates
class into a dictionary using the dict()
method, and the instance has an example_selector
attribute. This is because the dict()
method in the FewShotPromptWithTemplates
class raises a ValueError
if the example_selector
attribute is not None
.
However, from the code snippet you've shared, it doesn't seem like you're explicitly calling the dict()
method on the FewShotPromptWithTemplates
instance. This suggests that the dict()
method might be getting called somewhere else in your code or in the LangChain framework itself.
One possible cause could be the print
statement at the end of your code. In Python, when you try to print an object, Python internally calls the __str__
or __repr__
method of that object. If the __str__
or __repr__
method of the FewShotPromptWithTemplates
class is implemented in a way that it calls the dict()
method, this could be causing the error.
To resolve this issue, you can either:
Avoid calling the print
function (or any other function that might be internally calling the dict()
method) on instances of the FewShotPromptWithTemplates
class that have an example_selector
attribute.
Or, if you need to print the instance or convert it into a dictionary, you can set the example_selector
attribute to None
before doing so.
I hope this helps! If you're still encountering the issue or if you have any other questions, please feel free to share more details.
Best, Dosu
This response is meant to be useful and save you time. It is not meant to be a precise solution, but rather a starting point for your own research.
Help me be more useful! Please leave a π if this is helpful and π if it is irrelevant.
If you want to continue the conversation, start your reply with @dosu-bot.
The code I provided is entire code, i'm not printing anything
@AnkitaKota dosu is an AI bot, and you are posting here code examples as images. So you will have much better chances if you paste it here as code.
I'm still having issue. Can you please check the code and let me know the issue @dosu-bot
examples = [ {"input": "happy", "output": "sad"}, {"input": "tall", "output": "short"}, {"input": "energetic", "output": "lethargic"}, {"input": "sunny", "output": "gloomy"}, {"input": "windy", "output": "calm"}, ]
example_selector = SemanticSimilarityExampleSelector.from_examples(
examples,
# This is the embedding class used to produce embeddings which are used to measure semantic similarity.
OpenAIEmbeddings(model_kwargs={"deployment_id" : "am-trial"}),
# This is the VectorStore class that is used to store the embeddings and do a similarity search over.
Chroma,
# This is the number of examples to produce.
k=2
)
example_prompt = PromptTemplate( input_variables=["input", "output"], template="Input: {input}\nOutput: {output}", )
prompt = FewShotPromptTemplate(
example_selector=example_selector,
example_prompt=example_prompt,
prefix="Give the antonym of every input",
suffix="Input: {adjective}\nOutput:",
input_variables=["adjective"],
)
llm = OpenAI(temperature=0,model_kwargs={"deployment_id":"text-davinci-003"})
db = SQLDatabase.from_uri( "bigquery://xxxxβ/yyyy, include_tables=[βxzzzz])
tool_name = βtest toolkit" tool_desc = βββtesttttttβββ
db_chain = SQLDatabaseChain.from_llm( llm, db, verbose = True, top_k = 20, prompt = prompt, use_query_checker=True, )
from langchain.tools import Tool
tool_orders = Tool.from_function( func = db_chain, name = tool_name, description = tool_desc )
tools = [tool_orders]
from langchain.agents.initialize import initialize_agent from langchain.agents.agent_types import AgentType
agent = initialize_agent( tools, llm, agent = AgentType.ZERO_SHOT_REACT_DESCRIPTION, verbose=True, handle_parsing_errors=True, use_query_checker=True )
agent.run("worried ")
ValueError: Saving an example selector is not currently supported
π€
Hello,
The error you're encountering is due to the fact that the MaxMarginalRelevanceExampleSelector
class, which you're using in your code, contains non-serializable attributes. Specifically, the vectorstore
attribute, which is an instance of the VectorStore
class, cannot be serialized.
When you're trying to save or serialize an instance of the MaxMarginalRelevanceExampleSelector
class, Python attempts to convert all attributes to a serializable format. Since VectorStore
instances cannot be serialized, Python raises a ValueError
.
To resolve this issue, you have a couple of options:
Implement a method for serializing and deserializing the VectorStore
instances. This would involve defining a custom serialization method for the VectorStore
class, and then using this method when serializing and deserializing instances of the MaxMarginalRelevanceExampleSelector
class.
Exclude the vectorstore
attribute from the serialization process. This would involve modifying the MaxMarginalRelevanceExampleSelector
class to exclude the vectorstore
attribute when serializing and deserializing instances of the class. Please note that this approach will result in loss of the vectorstore
data when the MaxMarginalRelevanceExampleSelector
instance is restored.
Unfortunately, I wasn't able to find specific documentation or code examples in the LangChain repository that demonstrate how to implement these solutions. However, you might find the Python documentation on data persistence and customizing attribute access helpful.
I hope this helps! Let me know if you have any other questions.
This response is meant to be useful and save you time. It is not meant to be a precise solution, but rather a starting point for your own research.
Help me be more useful! Please leave a π if this is helpful and π if it is irrelevant.
If you want to continue the conversation, start your reply with @dosu-bot.
@dosu-bot Sorry, I'm using SemanticSimilarityExampleSelector -> I just updated the code.
On a side note isn't this issue bcause of below
def dict(self, kwargs: Any) -> Dict: """Return a dictionary of the prompt.""" if self.example_selector: raise ValueError("Saving an example selector is not currently supported") return super().dict(kwargs)
π€
Hello,
Thank you for reaching out and providing a detailed explanation of your issue. The error you're encountering is due to the dict()
method being called on an instance of the FewShotPromptTemplate
class that has an example_selector
attribute set. This method is called implicitly in several places within the LangChain framework, even if you're not explicitly calling it in your code.
The dict()
method is designed to convert an instance of the FewShotPromptTemplate
class to a dictionary. However, it currently doesn't support instances that have an example_selector
attribute set, hence the ValueError
you're seeing.
Here's the relevant code from the FewShotPromptTemplate
class:
def dict(self, **kwargs: Any) -> Dict:
"""Return a dictionary of the prompt."""
if self.example_selector:
raise ValueError("Saving an example selector is not currently supported")
return super().dict(**kwargs)
To avoid this error, you have a couple of options:
Don't set the example_selector
attribute: When creating an instance of the FewShotPromptTemplate
class, you can leave out the example_selector
attribute. This will prevent the ValueError
from being raised when the dict()
method is called.
Avoid converting the instance to a dictionary: If you need to use an example_selector
, you can avoid calling the dict()
method on instances of the FewShotPromptTemplate
class that have this attribute set. This might involve changing other parts of your code or the LangChain framework to avoid serialization of these instances.
I hope this helps! If you have any other questions or need further clarification, please don't hesitate to ask.
Best, Dosu
This response is meant to be useful and save you time. It is not meant to be a precise solution, but rather a starting point for your own research.
Help me be more useful! Please leave a π if this is helpful and π if it is irrelevant.
If you want to continue the conversation, start your reply with @dosu-bot.
@dosu-bot I need to pass example-selector as I want to use semanticsearch example_selector = SemanticSimilarityExampleSelector.from_examples(
examples,
# This is the embedding class used to produce embeddings which are used to measure semantic similarity.
OpenAIEmbeddings(model_kwargs={"deployment_id" : "am-trial"}),
# This is the VectorStore class that is used to store the embeddings and do a similarity search over.
Chroma,
# This is the number of examples to produce.
k=1
)
also, I'm not passing any dict. Can you please review and code and see if i'm doing anything wrong
π€
Hello,
The error message you're encountering, ValueError: Saving an example selector is not currently supported
, is not directly related to the dict()
method. It's likely that you're trying to save or serialize an instance of the SemanticSimilarityExampleSelector
class, which is not currently supported.
The SemanticSimilarityExampleSelector
class is a Pydantic model used for data validation and settings management. However, this class contains non-serializable attributes like vectorstore
, which is an instance of the VectorStore
class. When you try to save or serialize these classes, Python attempts to convert all attributes to a serializable format. Since VectorStore
instances cannot be serialized, Python raises a ValueError
.
To resolve this issue, you need to implement a method for serializing and deserializing the VectorStore
instances, or exclude the vectorstore
attribute from the serialization process. Please note that excluding vectorstore
from serialization will result in loss of its data when the SemanticSimilarityExampleSelector
instance is restored.
Here's the relevant part of the SemanticSimilarityExampleSelector
class implementation for your reference:
class SemanticSimilarityExampleSelector(BaseExampleSelector, BaseModel):
"""Example selector that selects examples based on SemanticSimilarity."""
vectorstore: VectorStore
"""VectorStore than contains information about examples."""
...
You can see that vectorstore
is an attribute of the SemanticSimilarityExampleSelector
class, and it's an instance of the VectorStore
class.
I hope this helps! If you have any other questions, feel free to ask.
Best, Dosu
This response is meant to be useful and save you time. It is not meant to be a precise solution, but rather a starting point for your own research.
Help me be more useful! Please leave a π if this is helpful and π if it is irrelevant.
If you want to continue the conversation, start your reply with @dosu-bot.
@dosu-bot but SemanticSimilarityExampleSelector is pre-built and uses the vector store. Are you suggesting to write own class of SemanticSimilarityExampleSelector?
π€
Hello,
The error you're encountering, ValueError: Saving an example selector is not currently supported
, is raised when you try to convert a FewShotPromptTemplate
or FewShotChatMessagePromptTemplate
object to a dictionary using the dict()
method, and the object has an example_selector
attribute. This is because the dict()
method in these classes does not currently support serialization of the example_selector
attribute.
Even though you're not explicitly calling the dict()
method in your code, it might be called internally by the LangChain framework. For instance, if you're trying to save or serialize a FewShotPromptTemplate
object, the dict()
method would be called, leading to this error.
To resolve this issue, you can avoid calling the dict()
method on a FewShotPromptTemplate
or FewShotChatMessagePromptTemplate
object that has an example_selector
attribute. If you need to serialize the object, you might need to implement your own serialization method that can handle the example_selector
attribute.
Here is the relevant code that raises the error:
class FewShotPromptTemplate(_FewShotPromptTemplateMixin, StringPromptTemplate):
# ...
def dict(self, **kwargs: Any) -> Dict:
"""Return a dictionary of the prompt."""
if self.example_selector:
raise ValueError("Saving an example selector is not currently supported")
return super().dict(**kwargs)
In this code, if self.example_selector
is not None
, a ValueError
is raised with the message "Saving an example selector is not currently supported".
I hope this helps! If you have any other questions or need further clarification, feel free to ask.
This response is meant to be useful and save you time. It is not meant to be a precise solution, but rather a starting point for your own research.
Help me be more useful! Please leave a π if this is helpful and π if it is irrelevant.
If you want to continue the conversation, start your reply with @dosu-bot.
@dosu-beta We are going back and forth. Please help me understand in the code I have provided if something looks wrong. I haven't used the dict nor I do think it's getting called internally. I have sued this and passed it to agent https://python.langchain.com/docs/modules/model_io/prompts/example_selectors/similarity
@dosu-beta Also just adding when I try to print(prompt("worried")) it works absoluetly fine but when we bake it with agent, it breaks Saving an example selector is not currently supported
@dosu-beta the langchain version I'm using is 0.0.240. Is that an issue ?
π€
Hello,
The error you're encountering is related to the saving process of the agent in LangChain. The save
method in the BaseSingleActionAgent
class is responsible for saving the agent. It does this by calling the dict()
method to get a dictionary representation of the agent and then saving this dictionary to a file. The error message suggests that the agent you are trying to save includes an example selector, which is not currently supported by the saving process.
To resolve this issue, you need to ensure that the agent you are trying to save does not include an example selector. If the example selector is not necessary for your application, you can remove it from the agent. If it is necessary, you might need to modify the saving process to support example selectors, or find a different way to save and load your agent.
However, it's also mentioned in the context that saving is not supported for agent executors. If you are trying to save the agent, you should use the .save_agent(...)
method instead of the .save(...)
method. Here is the relevant code for the save
and save_agent
methods:
def save(self, file_path: Union[Path, str]) -> None:
"""Raise error - saving not supported for Agent Executors."""
raise ValueError(
"Saving not supported for agent executors. "
"If you are trying to save the agent, please use the "
"`.save_agent(...)`"
)
def save_agent(self, file_path: Union[Path, str]) -> None:
"""Save the underlying agent."""
return self.agent.save(file_path)
You can find this in the libs/langchain/langchain/agents/agent.py file.
I hope this helps! Let me know if you have any other questions.
This response is meant to be useful and save you time. It is not meant to be a precise solution, but rather a starting point for your own research.
Help me be more useful! Please leave a π if this is helpful and π if it is irrelevant.
If you want to continue the conversation, start your reply with @dosu-bot.
@dosu-bot - I want to use semantic selector which enforces to pass it in the example-selector. I don't think I'm saving the agent , the way I have the agent is below agent = initialize_agent( tools, llm, agent = AgentType.ZERO_SHOT_REACT_DESCRIPTION, verbose=True, handle_parsing_errors=True, use_query_checker=True )
agent.run("what does the weather look like in Atlanta ")
On side note: we are using 0.0.240 langchain version. Could that be issue ?
@dosu-bot any update?
π€
Hello,
The error message you're encountering, ValueError: Saving an example selector is not currently supported
, suggests that somewhere in your code or in the LangChain framework, there's an attempt to serialize the SemanticSimilarityExampleSelector
object, which is not currently supported.
The SemanticSimilarityExampleSelector
object contains a VectorStore
object, which is likely not serializable. This is because the VectorStore
object contains the embeddings, which are large numpy arrays that cannot be serialized to JSON by default.
The initialize_agent
function you're using might be trying to save the agent configuration, which includes the SemanticSimilarityExampleSelector
object, and this is where the error is coming from.
Unfortunately, without the ability to serialize the SemanticSimilarityExampleSelector
, you won't be able to save and load your agent with this configuration.
As a workaround, you could try to initialize the SemanticSimilarityExampleSelector
within the agent's run
method, so it's not part of the agent's initial configuration and doesn't need to be serialized. Here's an example of how you could do this:
def run(self, input_variables):
example_selector = SemanticSimilarityExampleSelector.from_examples(
examples,
OpenAIEmbeddings(model_kwargs={"deployment_id" : "am-trial"}),
Chroma,
k=1
)
self.prompt.example_selector = example_selector
super().run(input_variables)
Please note that this is just a workaround and might not be the best solution for your use case. It's also possible that there have been changes to the SemanticSimilarityExampleSelector
class or the initialize_agent
function in the latest version of LangChain (v0.0.292) that could resolve this issue. I recommend checking the release notes or the commit history for these changes.
I hope this helps! Let me know if you have any other questions.
This response is meant to be useful and save you time. It is not meant to be a precise solution, but rather a starting point for your own research.
Help me be more useful! Please leave a π if this is helpful and π if it is irrelevant.
If you want to continue the conversation, start your reply with @dosu-bot.
@dosu-bot Are you saying instead of running agent.run("question") , and wrap the SemanticSimilarityExampleSelector in the run method ? also what is input_variables?
π€
Hello,
Thank you for reaching out and providing a detailed description of your issue. The error you're encountering is due to the fact that the SemanticSimilarityExampleSelector
does not currently support saving. This is a known limitation of the LangChain framework as of version 0.0.240.
The SemanticSimilarityExampleSelector
is designed to select examples based on semantic similarity, but it does not have the capability to be saved or loaded from disk. This is because the underlying model used for semantic similarity (in this case, OpenAIEmbeddings
) may not be serializable, and thus cannot be saved or loaded.
As a workaround, you could try to initialize the SemanticSimilarityExampleSelector
each time you run your application, rather than trying to save and load it. Here's how you could modify your code:
# ... rest of your code ...
# Initialize the SemanticSimilarityExampleSelector
example_selector = SemanticSimilarityExampleSelector.from_examples(
examples,
OpenAIEmbeddings(model_kwargs={"deployment_id" : "am-trial"}),
Chroma,
k=1
)
# ... rest of your code ...
# Initialize the agent
agent = initialize_agent(
tools, llm, agent = AgentType.ZERO_SHOT_REACT_DESCRIPTION, verbose=True, handle_parsing_errors=True,
use_query_checker=True
)
# Run the agent
agent.run("what does the weather look like in Atlanta ")
In this modified code, the SemanticSimilarityExampleSelector
is initialized each time you run your application, so there's no need to save and load it.
I hope this helps! If you have any other questions or run into any other issues, please don't hesitate to ask.
This response is meant to be useful and save you time. It is not meant to be a precise solution, but rather a starting point for your own research.
Help me be more useful! Please leave a π if this is helpful and π if it is irrelevant.
If you want to continue the conversation, start your reply with @dosu-bot.
@dosu-bot How is this different from the code I have provided ? I have the same Initialization for SemanticSimilarityExampleSelector ? How is this different ? Am I missing anything?
π€
Hello,
Thank you for reaching out and providing a detailed description of your issue. The error message you're encountering, ValueError: Saving an example selector is not currently supported
, suggests that the SemanticSimilarityExampleSelector
is trying to be saved, which is not currently supported.
From the code you've shared, it seems like the initialize_agent
function might be trying to save the agent, which includes the SemanticSimilarityExampleSelector
. This could be the source of the error.
Unfortunately, I don't have the specific changes made to the SemanticSimilarityExampleSelector
and initialize_agent
function between version 0.0.240 and the latest version 0.0.306 of LangChain. However, it's possible that the issue you're encountering has been addressed in a more recent version of LangChain.
As a first step, I would recommend updating your LangChain version to the latest one (v0.0.306) and see if the issue persists. If it does, it would be helpful to know more about the context in which you're running this code. Are you trying to save the agent or the SemanticSimilarityExampleSelector
at any point in your code or environment?
Please let me know if updating LangChain resolves the issue or if you can provide more context. I'm here to help!
Best, Dosu
This response is meant to be useful and save you time. It is not meant to be a precise solution, but rather a starting point for your own research.
Help me be more useful! Please leave a π if this is helpful and π if it is irrelevant.
If you want to continue the conversation, start your reply with @dosu-bot.
Hi, @AnkitaKota,
I'm helping the LangChain team manage their backlog and am marking this issue as stale. From the conversation, it seems like the issue involved using the SemanticSimilarityExampleSelector
to create a prompt and passing it to the agent, resulting in a ValueError: Saving an example selector is not currently supported
. The conversation includes suggestions from the bot to avoid calling the dict()
method on instances of the FewShotPromptTemplate
class that have an example_selector
attribute, and to implement a method for serializing and deserializing the VectorStore
instances or exclude the vectorstore
attribute from the serialization process. The user also mentioned using the LangChain version 0.0.240 and asked if that is an issue.
Could you please confirm if this issue is still relevant to the latest version of the LangChain repository? If it is, please let the LangChain team know by commenting on the issue. Otherwise, feel free to close the issue yourself, or the issue will be automatically closed in 7 days. Thank you!
System Info
I have used the SemanticSimilarityExampleSelector and created a prompt. When I try to pass this to agent it fails saying: ValueError: Saving an example selector is not currently supported
ValueError: Saving an example selector is not currently supported to create prompt I have used https://python.langchain.com/docs/modules/model_io/prompts/example_selectors/similarity
Who can help?
No response
Information
Related Components
Reproduction
Use the above code provided along with prompt selector provided in the original docuementation
Expected behavior
To produce the output for the question asked