Open Raghavan1988 opened 1 year ago
torvalds.dev is analyzing the ticket
The task is to implement a NeMo Guardrails-style feature for content and topical safety in the LLM application. Currently, the application can answer questions on any topic without restriction. The proposed feature adds a safety layer that restricts the topics the application will answer, which benefits users deploying the application in a production environment.
The relevant files for this task are:

- `llama_index/indices/keyword_table/utils.py`: utility functions for keyword extraction, which could help identify and restrict certain keywords or topics.
- `llama_index/llms/llama_cpp.py`: the `LlamaCPP` class, a custom language model into which the guardrails feature might need to be integrated.
- `llama_index/llms/monsterapi.py`: the `MonsterLLM` class, another custom language model that might also need the integration.
- `llama_index/evaluation/relevancy.py`: the `RelevancyEvaluator` class, which evaluates the relevancy of retrieved contexts and responses to a query and could be used to measure the effectiveness of the guardrails.
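The keyword-based topic restriction sketched above can be illustrated with a small standalone filter. This is a simplified stand-in, not the actual utilities in `llama_index/indices/keyword_table/utils.py`; the `UNSAFE_TOPICS` deny-list and function names are hypothetical:

```python
import re

# Hypothetical deny-list of restricted topic keywords (illustrative only).
UNSAFE_TOPICS = {"violence", "weapons", "self-harm"}

def extract_keywords(text: str) -> set:
    """Naive keyword extraction: lowercase word tokens.

    A simplified stand-in for the keyword-extraction utilities in
    llama_index/indices/keyword_table/utils.py.
    """
    return set(re.findall(r"[a-z]+", text.lower()))

def is_query_allowed(query: str) -> bool:
    """Return False when the query mentions a restricted topic."""
    return extract_keywords(query).isdisjoint(UNSAFE_TOPICS)

print(is_query_allowed("How do I bake bread?"))      # True
print(is_query_allowed("Where can I buy weapons?"))  # False
```

A real implementation would likely use the library's own extraction helpers plus a configurable topic list, but the allow/deny decision would take this general shape.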
The first step in implementing the feature could be to identify and categorize unsafe or inappropriate topics; the keyword-extraction functions in `llama_index/indices/keyword_table/utils.py` could be used for this purpose. The next step could be to modify the `LlamaCPP` and `MonsterLLM` classes in `llama_index/llms/llama_cpp.py` and `llama_index/llms/monsterapi.py`, respectively, to restrict the generation of content related to the identified unsafe topics. Finally, the `RelevancyEvaluator` class in `llama_index/evaluation/relevancy.py` could be used to evaluate the effectiveness of the implemented feature.
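The second step, integrating the check into the LLM classes, might look something like the wrapper below. The `GuardedLLM` class, the `EchoLLM` stand-in, and the refusal message are all assumptions for illustration, not the actual `LlamaCPP`/`MonsterLLM` API:

```python
# Illustrative deny-list; a real integration would make this configurable.
UNSAFE_KEYWORDS = {"violence", "weapons", "self-harm"}

class EchoLLM:
    """Stand-in for LlamaCPP / MonsterLLM: any object exposing complete()."""
    def complete(self, prompt: str) -> str:
        return f"Answer to: {prompt}"

class GuardedLLM:
    """Hypothetical guardrail wrapper that refuses restricted prompts
    before delegating to the wrapped model."""
    REFUSAL = "I can't help with that topic."

    def __init__(self, inner):
        self.inner = inner

    def complete(self, prompt: str) -> str:
        words = set(prompt.lower().split())
        if words & UNSAFE_KEYWORDS:
            return self.REFUSAL
        return self.inner.complete(prompt)

llm = GuardedLLM(EchoLLM())
print(llm.complete("tell me about pasta"))    # Answer to: tell me about pasta
print(llm.complete("tell me about weapons"))  # I can't help with that topic.
```

A wrapper keeps the guardrail logic out of each individual LLM class, so the same check could sit in front of both `LlamaCPP` and `MonsterLLM` without duplicating code.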
Feature Description

Currently the LLM application answers questions on anything without restricting itself to the vectorised topics. It would be a great add-on if a NeMo Guardrails-like feature were built into LlamaIndex for content and topical safety.

Reason

Content and topical safety.

Value of Feature

Will be used by every user who puts the application into production.