langchain-ai / langchain

🦜🔗 Build context-aware reasoning applications
https://python.langchain.com
MIT License
94.85k stars 15.36k forks source link

Performing a Graph SPARQL Query with a Local LLM #13107

Closed ZhShihang closed 8 months ago

ZhShihang commented 1 year ago

image

Hello all, I'm attempting to perform a SPARQL graph query using my local LLM, but it appears that something is amiss. Please feel free to share any helpful tips or guidance.

graph = RdfGraph( source_file="http://www.w3.org/People/Berners-Lee/card", standard="rdf", local_copy="test1109.ttl", ) tokenizer = AutoTokenizer.from_pretrained('C:\\data\\llm\\chatglm-6b-int4', trust_remote_code=True) model = AutoModel.from_pretrained('C:\\data\\llm\\chatglm-6b-int4', trust_remote_code=True).half().cuda().eval() chain = GraphSparqlQAChain.from_llm(model, graph=graph, verbose=True) question = "What is Tim Berners-Lee's work homepage?" result = chain.run(question)

File "pydantic\main.py", line 341, in pydantic.main.BaseModel.__init__ pydantic.error_wrappers.ValidationError: 2 validation errors for LLMChain llm instance of Runnable expected (type=type_error.arbitrary_type; expected_arbitrary_type=Runnable) llm instance of Runnable expected (type=type_error.arbitrary_type; expected_arbitrary_type=Runnable)

dosubot[bot] commented 1 year ago

🤖

Hello @SHAWNZHANGSH,

Nice to meet you! I'm Dosu, an AI bot here to assist you with your issues and questions regarding the LangChain repository. I can also guide you on how to contribute to our community. While you're waiting for a human maintainer, I'm here to help!

I'm currently reviewing your issue related to performing a SPARQL graph query with your local LLM. I'll get back to you with a full response shortly. Please hang tight!

dosubot[bot] commented 1 year ago

🤖

To perform a SPARQL graph query using your local LLM in the LangChain framework, you can follow these steps:

  1. Import the necessary modules:
from langchain.chat_models import ChatOpenAI
from langchain.chains import GraphSparqlQAChain
from langchain.graphs import RdfGraph
  1. Create an instance of the RdfGraph class, specifying the source file, the standard, and the local copy:
graph = RdfGraph(
    source_file="http://www.w3.org/People/Berners-Lee/card",
    standard="rdf",
    local_copy="test.ttl",
)

Note that providing a local_file is necessary for storing changes locally if the source is read-only.

  1. Load the schema of the graph:
graph.load_schema()
  1. Create an instance of the GraphSparqlQAChain class, using the from_llm method:
chain = GraphSparqlQAChain.from_llm(
    ChatOpenAI(temperature=0), graph=graph, verbose=True
)
  1. Run the chain with your query:
chain.run("What is Tim Berners-Lee's work homepage?")

This will return the result of the SPARQL query.

You can also update the graph, i.e., insert triples, using natural language:

chain.run(
    "Save that the person with the name 'Timothy Berners-Lee' has a work homepage at 'http://www.w3.org/foo/bar/'"
)

To verify the results, you can run a SPARQL query:

query = (
    """PREFIX foaf: <http://xmlns.com/foaf/0.1/>\n"""
    """SELECT ?hp\n"""
    """WHERE {\n"""
    """    ?person foaf:name "Timothy Berners-Lee" . \n"""
    """    ?person foaf:workplaceHomepage ?hp .\n"""
    """}"""
)
graph.query(query)

You can find more details in the following notebooks:

If you're still encountering issues, could you please provide more details about the exact error message or issue you're encountering? This will help in diagnosing the problem more accurately.

Sources


This response is meant to be useful and save you time. It is not meant to be a precise solution, but rather a starting point for your own research.

Help me be more useful! Please leave a 👍 if this is helpful and 👎 if it is irrelevant.

If you want to continue the conversation, start your reply with @dosu-bot.

ZhShihang commented 1 year ago

I believe your code is still utilizing an online LLM instead of a local one. Please modify the following line: chain = GraphSparqlQAChain.from_llm( ChatOpenAI(temperature=0), graph=graph, verbose=True ) @dosu-bot

dosubot[bot] commented 1 year ago

🤖

From the error message you provided, it seems like the model object you're passing to GraphSparqlQAChain.from_llm is not an instance of Runnable, which is expected by the LLMChain class.

The from_llm method in the GraphSparqlQAChain class is designed to take an instance of a language model that is a subclass of BaseLanguageModel and is also Runnable. The Runnable interface is a contract that ensures the object can be run or executed, which is a requirement for the language model in this context.

In your case, you're using the AutoModel.from_pretrained method from the Hugging Face Transformers library to load your local language model. However, the returned model is an instance of a PyTorch model, which is not Runnable and therefore not compatible with the from_llm method.

To resolve this issue, you need to wrap your PyTorch model in a class that extends BaseLanguageModel and implements the Runnable interface. Here's an example of how you can do this:

from langchain.schema.language_model import BaseLanguageModel
from langchain.schema.runnable import Runnable
from transformers import AutoModel, AutoTokenizer

class MyLocalLanguageModel(BaseLanguageModel, Runnable):
    def __init__(self, model_path):
        self.tokenizer = AutoTokenizer.from_pretrained(model_path, trust_remote_code=True)
        self.model = AutoModel.from_pretrained(model_path, trust_remote_code=True).half().cuda().eval()

    def run(self, inputs):
        # Implement the logic to run your model here
        pass

# Initialize your local language model
llm = MyLocalLanguageModel('C:\\data\\llm\\chatglm-6b-int4')

# Initialize GraphSparqlQAChain with the local LLM
chain = GraphSparqlQAChain.from_llm(llm, graph=graph, verbose=True)

In this example, replace the run method with the actual logic to run your model. After you've created this wrapper class, you can use it to initialize your GraphSparqlQAChain.

Please note that this is a simplified example and you might need to adjust it to fit your specific use case. For instance, you might need to implement additional methods in your MyLocalLanguageModel class depending on the requirements of your model and the Runnable interface.

I hope this helps! Let me know if you have any other questions.

Sources


This response is meant to be useful and save you time. It is not meant to be a precise solution, but rather a starting point for your own research.

Help me be more useful! Please leave a 👍 if this is helpful and 👎 if it is irrelevant.

If you want to continue the conversation, start your reply with @dosu-bot.

Barry1915 commented 12 months ago

🤖

From the error message you provided, it seems like the model object you're passing to GraphSparqlQAChain.from_llm is not an instance of Runnable, which is expected by the LLMChain class.

The from_llm method in the GraphSparqlQAChain class is designed to take an instance of a language model that is a subclass of BaseLanguageModel and is also Runnable. The Runnable interface is a contract that ensures the object can be run or executed, which is a requirement for the language model in this context.

In your case, you're using the AutoModel.from_pretrained method from the Hugging Face Transformers library to load your local language model. However, the returned model is an instance of a PyTorch model, which is not Runnable and therefore not compatible with the from_llm method.

To resolve this issue, you need to wrap your PyTorch model in a class that extends BaseLanguageModel and implements the Runnable interface. Here's an example of how you can do this:

from langchain.schema.language_model import BaseLanguageModel
from langchain.schema.runnable import Runnable
from transformers import AutoModel, AutoTokenizer

class MyLocalLanguageModel(BaseLanguageModel, Runnable):
    def __init__(self, model_path):
        self.tokenizer = AutoTokenizer.from_pretrained(model_path, trust_remote_code=True)
        self.model = AutoModel.from_pretrained(model_path, trust_remote_code=True).half().cuda().eval()

    def run(self, inputs):
        # Implement the logic to run your model here
        pass

# Initialize your local language model
llm = MyLocalLanguageModel('C:\\data\\llm\\chatglm-6b-int4')

# Initialize GraphSparqlQAChain with the local LLM
chain = GraphSparqlQAChain.from_llm(llm, graph=graph, verbose=True)

In this example, replace the run method with the actual logic to run your model. After you've created this wrapper class, you can use it to initialize your GraphSparqlQAChain.

Please note that this is a simplified example and you might need to adjust it to fit your specific use case. For instance, you might need to implement additional methods in your MyLocalLanguageModel class depending on the requirements of your model and the Runnable interface.

I hope this helps! Let me know if you have any other questions.

Sources

This response is meant to be useful and save you time. It is not meant to be a precise solution, but rather a starting point for your own research.

Help me be more useful! Please leave a 👍 if this is helpful and 👎 if it is irrelevant.

If you want to continue the conversation, start your reply with @dosu-bot.

TypeError: Can't instantiate abstract class MyLocalLanguageModel with abstract methods agenerate_prompt, apredict, apredict_messages, generate_prompt, invoke, predict, predict_messages

dosubot[bot] commented 8 months ago

Hi, @SHAWNZHANGSH,

I'm helping the LangChain team manage their backlog and am marking this issue as stale. From what I understand, you encountered a validation error when attempting to perform a SPARQL graph query using a local LLM. Dosu-bot provided a detailed response with code examples and references to relevant notebooks in the LangChain repository. After some back and forth, it seems that the issue was identified as the model object not being an instance of Runnable, as expected by the LLMChain class, and a code example was provided to resolve the issue.

Could you please confirm if this issue is still relevant to the latest version of the LangChain repository? If it is, please let the LangChain team know by commenting on the issue. Otherwise, feel free to close the issue yourself, or the issue will be automatically closed in 7 days. Thank you!