Closed: Jason-233 closed this issue 4 months ago
I don't understand what you mean. There is no data in this response; what would you expect to use here?
Sorry, I thought that the response started with ```json, so it was ignored for not being in the appropriate format. Then I took a further look at the response. That cannot be resolved, right?
{ "tags": [ "天际未来科技有限公司", "Top 10客户名单", "排名", "排名变化", "企业名称", "市值(亿人民币)", "价值变化", "城市", "行业", "Infinity Innovations Inc.", "13800", "-3000", "洛杉矶", "金融科技" ], ... // Additional tags based on the content of the image }
@Jason-233 In the prompt for images, we have
> Don't wrap the response in a markdown code.
which I added explicitly to ask the model to not add this wrapping. I think we can probably add it in the prompt for text as well.
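For illustration, a minimal sketch of what that could look like on the text side (this is not hoarder's actual prompt code; `buildTextPrompt` and the surrounding prompt wording are hypothetical):

```typescript
// Hypothetical sketch only: append the same anti-markdown instruction that the
// image prompt already uses to a text prompt. The prompt wording and the helper
// name are illustrative, not hoarder's real implementation.
function buildTextPrompt(content: string): string {
  return [
    "Extract relevant tags from the following content and respond with a JSON object.",
    // The instruction already present in the image prompt:
    "Don't wrap the response in a markdown code.",
    "",
    content,
  ].join("\n");
}
```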
@MohamedBassem thanks for the explanation. glm-4v, the vision model I use for picture inference, just kept adding this wrapping. So does that mean I need a new vision model, or should I extract the JSON from the markdown code block?
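Something like this rough sketch is what I mean by extracting the JSON (just an idea, assuming the model returns a plain string; `extractJson` is a made-up helper, not existing hoarder code):

```typescript
// Rough sketch: strip an optional markdown code fence (e.g. a leading ```json
// and a trailing fence) from the model response before parsing it as JSON.
// The `{3}` in the regex matches a literal run of three backticks.
function extractJson(raw: string): unknown {
  const fenced = raw.trim().match(/^`{3}(?:json)?\s*([\s\S]*?)\s*`{3}$/);
  const candidate = fenced ? fenced[1] : raw.trim();
  return JSON.parse(candidate);
}
```

Of course, that only helps when the fence is the only problem; a response with `...` and a `//` comment in it, like the one above, would still fail to parse.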
@Jason-233 Yeah, if the model is ignoring the prompt, I'm not a big fan of explicitly handling the markdown, to be honest. Can you try llava, for example, for the picture inference?
I'm currently using qwen1.5-14b-chat-awq and glm-4v for inference. I got these three responses in a row for a picture inference with glm-4v, and I wondered whether it would be possible for hoarder to accept these responses too. Sometimes the same response happened with qwen1.5-14b-chat-awq. I really appreciate your work. Below are the logs of the responses.
Here's a sneak peek from the response:

```json
hoarder-workers-1 | {
hoarder-workers-1 |   "tags":
```