Closed: unoriginalscreenname closed this issue 1 year ago
I like it :)
Feel free to open a PR with it if you want
TestUser @.***> wrote on Wed, May 3, 2023, at 13:27:
Hey, I made a model loader class that takes an LLM model file as a new parameter from the config and then dynamically looks for the function or class to load. I think this is kinda cool because it lets you swap in a different model across all of your examples without importing the model directly; you just change one config file. Maybe this is a bit too much, but I thought I'd share.
```python
import importlib
import inspect

from servers.load_config import Config


def load_llm(config: Config = None):
    """Dynamically load an LLM based on the `model_loader` config value."""
    if config is None:
        config = Config()
    try:
        module = importlib.import_module(
            f"langchain_app.models.{config.model_loader}"
        )
        # Prefer a factory function named build_*; otherwise fall back to
        # the first class defined in the module itself.
        build_function = None
        found_class = None
        for name, obj in inspect.getmembers(module):
            if inspect.isfunction(obj) and name.startswith("build_"):
                build_function = obj
                break
            elif (
                inspect.isclass(obj)
                and found_class is None
                and obj.__module__ == module.__name__
            ):
                found_class = obj
        if build_function is not None:
            return build_function()
        elif found_class is not None:
            return found_class()
        else:
            raise ValueError(f"Invalid model loader: {config.model_loader}")
    except ImportError:
        raise ValueError(f"Invalid model loader: {config.model_loader}")
```
```python
from langchain_app.models.model_loader import load_llm

llm = load_llm()
```
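For context, the config side could be as simple as this; a minimal sketch, assuming a JSON config file and a `model_loader` attribute (the real `Config` in `servers/load_config.py` may look different, and `vicuna_request_llm` is just an example module name):

```python
import json


class Config:
    """Minimal sketch of a config loader (assumption: the actual Config
    in servers/load_config.py may differ)."""

    def __init__(self, path: str = "config.json"):
        with open(path) as f:
            data = json.load(f)
        # Name of the module under langchain_app/models to import,
        # e.g. {"model_loader": "vicuna_request_llm"} in config.json.
        self.model_loader = data.get("model_loader", "vicuna_request_llm")
```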
The Vicuna model response contains the prompt and answer twice:

```json
{
  "response": "\nWhat is the fourth planet from the sun? \nThe fourth planet from the sun is Jupiter. \nWhat is the fourth planet from the sun? \nThe fourth planet from the sun is Jupiter."
}
```
Is there anything I'm missing?
Hey, can you share more details about which prompt and server you're using? Oobabooga's or my server?
@paolorechia The prompt is "What is the fourth planet from the sun?" and I'm using the Vicuna server.
Are you using a specific agent from LangChain? You need to give it a stop token or else it will just keep generating. The default stop token is meant to be used with the ReAct zero-shot agent.
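For example, with a LangChain LLM wrapper you can pass stop sequences explicitly; a minimal sketch, assuming this repo's `VicunaLLM` wrapper (the import path is an assumption; any LangChain LLM accepts `stop` the same way):

```python
# Sketch: pass an explicit stop sequence so generation halts after the
# first answer instead of echoing the prompt and answer again.
# VicunaLLM and its import path are assumptions based on this repo's layout;
# substitute whatever LLM wrapper you are actually using.
from langchain_app.models.vicuna_request_llm import VicunaLLM

llm = VicunaLLM()
answer = llm("What is the fourth planet from the sun?", stop=["\n\n"])
print(answer)
```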
@paolorechia I'm not using any agent from LangChain, just chatting with the model, and I gave `None` as the stop token.
I see, the code samples in this repo are meant for a different type of use
Cool, nice job. Always feel free to open a PR to support your use case if you'd like. In the end this repository is just a bunch of code snippets / prompts to try, so there's no contribution guideline.
Sankethgadadinni @.***> wrote on Sat, May 6, 2023, at 14:08:
Yeah, I changed it a bit. It works now:
```python
# FastChat imports; exact module paths follow the FastChat version in use
# at the time and may differ in newer releases.
from fastchat.conversation import conv_templates
from fastchat.serve.cli import ChatIO, SimpleChatIO
from fastchat.serve.inference import chatglm_generate_stream, generate_stream


def chat_one_shot(
    model, tokenizer, params: dict, device: str, chatio: ChatIO = SimpleChatIO()
):
    message = params["prompt"]
    temperature = float(params.get("temperature", 1.0))
    max_new_tokens = int(params.get("max_new_tokens", 256))
    stop_str = params.get("stop", None)
    model_name = params["model_name"]
    # echo=False keeps the prompt out of the returned text, which avoids
    # the duplicated prompt/answer in the response.
    echo = params.get("echo", False)
    stop_token_ids = params.get("stop_token_ids", None) or []
    stop_token_ids.append(tokenizer.eos_token_id)

    is_chatglm = "chatglm" in str(type(model)).lower()

    # Build a single-turn conversation from the model's prompt template.
    conv = conv_templates[model_name].copy()
    conv.append_message(conv.roles[0], message)
    conv.append_message(conv.roles[1], None)

    if is_chatglm:
        generate_stream_func = chatglm_generate_stream
        prompt = conv.messages[conv.offset:]
    else:
        generate_stream_func = generate_stream
        prompt = conv.get_prompt()

    gen_params = {
        "prompt": prompt,
        "temperature": temperature,
        "max_new_tokens": max_new_tokens,
        "stop": stop_str,
        "stop_token_ids": stop_token_ids,
        "echo": echo,
    }

    chatio.prompt_for_output(conv.roles[1])
    output_stream = generate_stream_func(model, tokenizer, gen_params, device)
    outputs = chatio.stream_output(output_stream)
    conv.messages[-1][-1] = outputs.strip()

    return outputs
```
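For reference, a call could look like this; a sketch, assuming FastChat's `load_model` helper and the `vicuna_v1.1` template key (both may differ by version, and the model path is just an example):

```python
# Hypothetical usage sketch: load_model follows FastChat's API at the time;
# the model path and template key are examples, not values from this thread.
from fastchat.serve.inference import load_model

model, tokenizer = load_model("/path/to/vicuna-7b", device="cuda", num_gpus=1)

response = chat_one_shot(
    model,
    tokenizer,
    params={
        "prompt": "What is the fourth planet from the sun?",
        "model_name": "vicuna_v1.1",
        "echo": False,  # leave the prompt out of the returned text
    },
    device="cuda",
)
print(response)
```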