torvalds-dev / llama_index

LlamaIndex (formerly GPT Index) is a data framework for your LLM applications
https://docs.llamaindex.ai
MIT License

Implementation of NeMo Guardrails for Content Safety and Topical Safety #14

Open Raghavan1988 opened 7 months ago

Raghavan1988 commented 7 months ago

Feature Description

Currently, the LLM application answers questions on anything, without restricting responses to the vectorised topics. It would be a great add-on if a NeMo Guardrails-like feature were built into LlamaIndex for content and topical safety.

Reason

Content and topical safety.

Value of Feature

This will be used by every user who is putting the application into production.


torvalds-dev commented 7 months ago

torvalds.dev is analyzing the ticket

torvalds-dev commented 7 months ago

The task is to implement a NeMo Guardrails-style feature for content and topical safety in the LLM application. Currently, the application can answer questions on any topic without restriction. The proposed feature would add a safety layer that restricts the topics the application will answer, which is valuable for users deploying the application in a production environment.

The relevant files for this task are:

  1. llama_index/indices/keyword_table/utils.py: This file contains utility functions for keyword extraction. These functions could help identify and restrict certain keywords or topics (see the sketch after this list).

  2. llama_index/llms/llama_cpp.py: This file contains the implementation of the LlamaCPP class, a custom language model. The NeMo Guardrails-style feature might need to be integrated into this class.

  3. llama_index/llms/monsterapi.py: This file contains the implementation of the MonsterLLM class, another custom language model. The same integration would likely be needed here.

  4. llama_index/evaluation/relevancy.py: This file contains the implementation of the RelevancyEvaluator class, which evaluates how relevant retrieved contexts and responses are to a query. This class could be used to evaluate the effectiveness of the guardrails feature.
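To make the keyword-based gating concrete, here is a minimal sketch, assuming the `simple_extract_keywords` utility from llama_index/indices/keyword_table/utils.py. `ALLOWED_TOPICS`, `REFUSAL`, and `guarded_query` are hypothetical names introduced only for illustration, and the allow-list would in practice be derived from the indexed documents rather than hard-coded:

```python
# A minimal sketch of query-time topic gating, not a final design.
# `simple_extract_keywords` comes from llama_index's keyword table utils;
# `ALLOWED_TOPICS`, `REFUSAL`, and `guarded_query` are hypothetical.
from llama_index.indices.keyword_table.utils import simple_extract_keywords

# Hypothetical allow-list; in practice it would be built from keywords
# extracted from the vectorised documents at index time.
ALLOWED_TOPICS = {"billing", "invoice", "refund", "payment"}

REFUSAL = "I can only answer questions about the indexed topics."

def guarded_query(query_engine, query: str) -> str:
    """Answer only if the query shares a keyword with the allowed topics."""
    query_keywords = simple_extract_keywords(query)  # set of keywords
    if not query_keywords & ALLOWED_TOPICS:
        return REFUSAL  # topical guardrail: refuse off-topic questions
    return str(query_engine.query(query))
```

For files 2 and 3, the same check could be pushed inside the LlamaCPP and MonsterLLM completion methods so the gate applies regardless of which query path is used.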

The first step in implementing the NeMo Guardrails-style feature could be to identify and categorize unsafe or off-topic subjects; the keyword extraction functions in llama_index/indices/keyword_table/utils.py could be used for this purpose. The next step could be to modify the LlamaCPP and MonsterLLM classes in llama_index/llms/llama_cpp.py and llama_index/llms/monsterapi.py to refuse to generate content related to the identified unsafe topics. Finally, the RelevancyEvaluator class in llama_index/evaluation/relevancy.py could be used to evaluate the effectiveness of the implemented feature, as sketched below.
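A similarly hedged sketch of the evaluation step, assuming the RelevancyEvaluator API from llama_index/evaluation/relevancy.py (an `evaluate_response` method returning a result with a `passing` flag) and reusing the hypothetical `guarded_query` and `REFUSAL` from the earlier sketch; `query_engine` and the probe queries are assumptions as well:

```python
# Hedged sketch: probing the guardrail with on- and off-topic queries.
from llama_index.evaluation import RelevancyEvaluator

evaluator = RelevancyEvaluator()  # assumes the default service context / LLM

# An off-topic probe should be refused before it ever reaches the LLM.
assert guarded_query(query_engine, "Write me a poem about pirates.") == REFUSAL

# An on-topic probe should still yield an answer relevant to the query.
query = "How do I request a refund?"  # hypothetical probe
response = query_engine.query(query)
result = evaluator.evaluate_response(query=query, response=response)
print(f"on-topic answer judged relevant: {result.passing}")
```

Checking refusals by string comparison keeps the off-topic test cheap and deterministic, while the LLM-based relevancy check is reserved for queries the guardrail lets through.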