run-llama / llama_index

LlamaIndex is a data framework for your LLM applications
https://docs.llamaindex.ai
MIT License
35.8k stars 5.06k forks

[Question]: #12444

Closed Sridevi17j closed 2 months ago

Sridevi17j commented 6 months ago

Question Validation

Question

How can I give a persona to my LLM? Here is my code:

    from llama_index.core import VectorStoreIndex, SimpleDirectoryReader
    from llama_index.llms.ollama import Ollama
    from llama_index.embeddings.huggingface import HuggingFaceEmbedding
    from llama_index.core import Settings

    llm = Ollama(model="llama2", request_timeout=2000.0)
    Settings.embed_model = HuggingFaceEmbedding(
        model_name="sentence-transformers/all-MiniLM-L6-v2"
    )
    Settings.llm = llm

    documents = SimpleDirectoryReader(input_files=["temp.docx"]).load_data()
    index = VectorStoreIndex.from_documents(documents)

    # text_qa_template and refine_template are assumed to be defined earlier
    query_engine = index.as_query_engine(
        streaming=True,
        verbose=True,
        llm=llm,
        text_qa_template=text_qa_template,
        refine_template=refine_template,
        similarity_top_k=1,
        system_prompt=(
            "You are a chatbot, and your name is Chiti, you are able to "
            "have normal interactions, as well as talk"
        ),
    )
    response = query_engine.query("What is your name")
    print(response)

I tried to give my chatbot a name, so that if anyone asks "what is your name," it replies with that name. Above is my code, but it does not work: when I ask for the name, it either says "I am an AI model, I don't have a name," or that the question is out of context. How can I give a persona to my LLM using llama_index?

dosubot[bot] commented 6 months ago

Hey @Sridevi17j! Great to see you back here. Hope you're ready to give Chiti some personality!

To make your LLM (large language model) respond with its given name "Chiti" when asked, you can add a check in the _run and _arun methods of the LlamaIndexTool class that intercepts questions about the model's name and returns "Chiti" directly, before the query engine is consulted. This lets you customize responses for specific questions without altering the underlying query engine or its configuration.

Here's how you can modify the _run and _arun methods to achieve this:

# Modified version of the existing LlamaIndexTool (from LlamaIndex's
# LangChain integration); BaseTool, BaseQueryEngine, IndexToolConfig, and
# _get_response_with_sources are assumed to come from the original module.
class LlamaIndexTool(BaseTool):
    """Tool for querying a LlamaIndex."""

    query_engine: BaseQueryEngine
    return_sources: bool = False

    @classmethod
    def from_tool_config(cls, tool_config: IndexToolConfig) -> "LlamaIndexTool":
        """Create a tool from a tool config."""
        return_sources = tool_config.tool_kwargs.pop("return_sources", False)
        return cls(
            query_engine=tool_config.query_engine,
            name=tool_config.name,
            description=tool_config.description,
            return_sources=return_sources,
            **tool_config.tool_kwargs,
        )

    def _run(self, input: str) -> str:
        # Check if the input is asking for the model's name
        if "what is your name" in input.lower():
            return "My name is Chiti."

        response = self.query_engine.query(input)
        if self.return_sources:
            return _get_response_with_sources(response)
        return str(response)

    async def _arun(self, input: str) -> str:
        # Check if the input is asking for the model's name
        if "what is your name" in input.lower():
            return "My name is Chiti."

        response = await self.query_engine.aquery(input)
        if self.return_sources:
            return _get_response_with_sources(response)
        return str(response)

In this modification, both the _run and _arun methods first check whether the input string contains the phrase "what is your name" (case-insensitively). If it does, the method returns "My name is Chiti." immediately, bypassing the query engine, so the model answers with "Chiti" whenever it is asked its name.
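One caveat worth noting: an exact-substring check is brittle, because users phrase the question in many ways ("what's your name", "who are you"). A small regex broadens the match using only the standard library; `NAME_PATTERNS` and `is_name_question` are illustrative names for this sketch, not part of llama_index:

```python
import re

# Matches several common phrasings of a "what is your name" question.
# Broaden or narrow the alternatives to suit your chatbot.
NAME_PATTERNS = re.compile(
    r"(what('s| is) your name|who are you|tell me your name)",
    re.IGNORECASE,
)


def is_name_question(text: str) -> bool:
    """Return True if the text looks like a question about the bot's name."""
    return bool(NAME_PATTERNS.search(text))
```

The `_run` and `_arun` methods above could then call `is_name_question(input)` instead of the hard-coded substring check.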

This solution is straightforward and does not require changes to the query engine or its configuration. It's a practical way to customize responses for specific queries. You can find more details about the LlamaIndexTool class in the source code.
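A related approach, sketched here with plain string formatting (`PERSONA` and `build_qa_prompt` are illustrative names for this sketch, not llama_index API), is to bake the persona directly into the `text_qa_template` that the original code already passes, so the persona reaches the LLM on every query rather than only for hard-coded questions:

```python
# Persona instruction prepended to every prompt sent to the LLM.
PERSONA = (
    "You are a chatbot named Chiti. Always answer as Chiti, and if asked "
    "your name, reply that your name is Chiti."
)

# Template shape loosely mirroring a retrieval QA prompt: persona first,
# then retrieved context, then the user's query.
QA_TEMPLATE = (
    "{persona}\n"
    "Context information is below.\n"
    "---------------------\n"
    "{context_str}\n"
    "---------------------\n"
    "Given the context information and not prior knowledge, "
    "answer the query.\n"
    "Query: {query_str}\n"
    "Answer: "
)


def build_qa_prompt(context_str: str, query_str: str) -> str:
    """Return the final prompt string, with the persona always included."""
    return QA_TEMPLATE.format(
        persona=PERSONA, context_str=context_str, query_str=query_str
    )
```

In recent LlamaIndex versions, a template string like this would typically be wrapped in `llama_index.core.PromptTemplate` (with `{context_str}` and `{query_str}` placeholders) and passed as `text_qa_template` to `as_query_engine`.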

Regarding your question about the Ollama class and its interaction with the HuggingFaceEmbedding class, unfortunately, I wasn't able to find specific information about this within the LlamaIndex repository. It might be helpful to review the documentation or source code of these classes for more details.


#### About Dosu

This response is meant to be useful and save you time. It is **not meant to be a precise solution**, but rather a starting point for your own research. Help me be more useful! Please leave a 👍 if this is helpful and 👎 if it is irrelevant. If you want to continue the conversation, start your reply with @dosu-bot.