Greetings :),
When I invoke my llamafactory fine-tuned qwen2-7B model through ollama.chat(), the model does not recognize the system prompt. It also sometimes fails to generate any response at all for certain input phrases.
Here are two scenarios that illustrate the issues:
code
import ollama

model_list = ['qwen2', 'glm4', 'lawdamo2']
model = model_list[2]  # here is my own model

def LLM_Process(model, sys_prom, usr_prom):
    messages = [
        {'role': 'user', 'content': usr_prom},
        {'role': 'system', 'content': sys_prom}
    ]
    options = {
        'temperature': 0.1
    }
    resp = ollama.chat(model, messages, options=options)
    print(resp)

LLM_Process(model, 'You are a Dark Tyrannosaur War God', 'Design a stylish slogan for yourself.')
output
{'model': 'lawdamo2', 'created_at': '2024-09-23T06:50:51.4908085Z', 'message': {'role': 'assistant', 'content': 'As an AI language model, I don\'t have a physical appearance or personal style to showcase, but here\'s a suggestion for a slogan that could represent the essence of my capabilities:\n\n"Unleashing intelligence, empowering knowledge - your ultimate cognitive companion."'}, 'done_reason': 'stop', 'done': True, 'total_duration': 9875967000, 'load_duration': 8973791800, 'prompt_eval_count': 19, 'prompt_eval_duration': 25338000, 'eval_count': 51, 'eval_duration': 871361000}
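One thing I noticed while preparing this report: LLM_Process places the user turn before the system turn in the messages list. I do not know whether that is the cause, but most chat templates expect the system message to come first, so a reordered variant may be worth testing (a minimal sketch; LLM_Process_ordered is a hypothetical name, not part of my actual script):
code
import ollama

# Hypothetical variant of LLM_Process above: the only change is that
# the system message now precedes the user message.
def LLM_Process_ordered(model, sys_prom, usr_prom):
    messages = [
        {'role': 'system', 'content': sys_prom},
        {'role': 'user', 'content': usr_prom}
    ]
    resp = ollama.chat(model, messages, options={'temperature': 0.1})
    print(resp['message']['content'])

LLM_Process_ordered('lawdamo2', 'You are a Dark Tyrannosaur War God',
                    'Design a stylish slogan for yourself.')
I have not yet confirmed whether this ordering is what the fine-tuned model trips over.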
code
import ollama
from tqdm import tqdm
import pandas as pd

model_list = ['qwen2', 'glm4', 'lawdamo2']
model = model_list[2]  # here is my own model

def LLM_Process(model, sys_prom, usr_prom):
    messages = [
        {'role': 'user', 'content': usr_prom},
        {'role': 'system', 'content': sys_prom}
    ]
    options = {
        'temperature': 0.1
    }
    resp = ollama.chat(model, messages, options=options)
    print(resp)

inputdir = './data/crime_data.csv'
# Output path template; the {start_line}/{end_line} placeholders are
# filled in elsewhere and not used in this excerpt.
outputdir = './output/processed_data_{start_line}_{end_line}.txt'
sysP = 'As a criminal geographer, please analyze the legal document I provide to you and output the addresses where the crimes occurred, separated by line breaks if there are multiple addresses. If no detailed address information is provided in the document or if you are unable to discern the addresses, simply output NaN. Do not reply with any content other than the addresses of the crimes. Thank you for your cooperation!'

allin = pd.read_csv(inputdir)
keys = ['UUID', '正文']  # '正文' is the CSV column holding the document body text
usrP = []
for index, row in tqdm(allin.iterrows(), total=allin.shape[0]):
    content1 = row[keys[0]]  # UUID (collected but unused here)
    content2 = row[keys[1]]
    usrP.append(content2)

LLM_Process(model, sysP, usrP[0])
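If the message ordering is not the cause, the chat template baked into the exported model is my next suspect: if the llamafactory export produced a Modelfile whose TEMPLATE lacks a {{ .System }} placeholder, Ollama has nowhere to render the system prompt no matter how the messages are ordered. A quick way to inspect this (assuming the show() call of the Python client, which mirrors `ollama show lawdamo2` on the CLI):
code
import ollama

# Dump the model metadata; in the printed result, check whether the
# TEMPLATE contains a {{ .System }} placeholder and whether a SYSTEM
# string is set at all.
info = ollama.show('lawdamo2')
print(info)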
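Finally, for the second symptom (no response at all on some inputs), a retry wrapper like the following might at least keep a batch run moving. This is only a mitigation sketch, not a fix, assuming a failed call either raises or returns an empty content string:
code
import ollama

def chat_with_retry(model, sys_prom, usr_prom, retries=3):
    # Retry the chat call a few times; fall back to the same 'NaN'
    # marker the system prompt asks for when every attempt fails.
    messages = [
        {'role': 'system', 'content': sys_prom},
        {'role': 'user', 'content': usr_prom}
    ]
    for attempt in range(retries):
        try:
            resp = ollama.chat(model, messages, options={'temperature': 0.1})
            content = resp['message']['content']
            if content.strip():  # treat an empty reply as a failed attempt
                return content
        except Exception as exc:
            print(f'attempt {attempt + 1} failed: {exc}')
    return 'NaN'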