benman1 / generative_ai_with_langchain

Build large language model (LLM) apps with Python, ChatGPT and other models. This is the companion repository for the book on generative AI with LangChain.
https://amzn.to/43PuIkQ
MIT License
600 stars 242 forks

4 Building Capable Assistants #22

Closed theAfricanQuant closed 8 months ago

theAfricanQuant commented 8 months ago

[screenshot attached]

I am wondering how to tackle this issue

benman1 commented 8 months ago

Hi @theAfricanQuant, thanks for posting this. This must have crept in during editing; sorry about that. I think there was supposed to be another example about product information that I decided to leave out. The following works with the joke template:

from langchain.chains import LLMChain
from langchain.llms import OpenAI
from langchain.prompts import PromptTemplate

# One dict of template variables per generation
input_list = [
    {"topic": "socks"},
    {"topic": "computers"},
    {"topic": "shoes"},
]

# generate() runs the chain once per input and returns an LLMResult
LLMChain(
    llm=OpenAI(model="gpt-3.5-turbo-instruct"),
    prompt=PromptTemplate.from_template("Tell me a joke about {topic}!"),
).generate(input_list)

The important part is to use the LLMChain class instead of the LangChain Expression Language (LCEL).
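For illustration, here is a minimal stdlib-only sketch of the prompt expansion that LLMChain.generate performs over the input list before calling the model — no LangChain or API key needed; Python's str.format stands in for PromptTemplate's variable substitution:

```python
# Plain-Python sketch of the prompt-expansion step (no LangChain, no API call).
# str.format stands in for PromptTemplate's {topic} substitution.
template = "Tell me a joke about {topic}!"
input_list = [
    {"topic": "socks"},
    {"topic": "computers"},
    {"topic": "shoes"},
]

# One formatted prompt per input dict, as generate() would produce internally
prompts = [template.format(**inputs) for inputs in input_list]
print(prompts[0])  # Tell me a joke about socks!
```

Each formatted prompt is then sent to the LLM, and the results are collected into a single LLMResult like the one shown below.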

Output:

LLMResult(generations=[[Generation(text='\n\nWhy was the sock sad?\n\nBecause it was feeling un-paired!', generation_info={'finish_reason': 'stop', 'logprobs': None})], [Generation(text='\n\nWhy did the computer go to the doctor?\n\nBecause it had a virus!', generation_info={'finish_reason': 'stop', 'logprobs': None})], [Generation(text='\n\nWhy did the shoe go to the doctor?\nBecause it was feeling a little sole.', generation_info={'finish_reason': 'stop', 'logprobs': None})]], llm_output={'token_usage': {'total_tokens': 70, 'completion_tokens': 49, 'prompt_tokens': 21}, 'model_name': 'gpt-3.5-turbo-instruct'}, run=[RunInfo(run_id=UUID('0c199ddc-0dc9-4ee1-a6fe-ef1f5ba55db1')), RunInfo(run_id=UUID('48ef6752-fe7e-494f-b4a2-6c7bc7170726')), RunInfo(run_id=UUID('196bae8a-6da3-4703-95dd-614b124e1444'))])
theAfricanQuant commented 8 months ago

Thanks, your new code worked smoothly.