piscopancer opened 1 year ago
Hi there 👋 Thanks for the additional context. Do you have a particular model in mind for this task? At the moment, Transformers.js models are quite limited (since most are used within a browser context).
Also, I think your question would be best directed at the original Transformers (Python) library (link). Transformers.js is a JS port of the Python library, and we try to avoid adding features that aren't available in the original repo.
Hope that helps! 🤗
A super interesting story of mine
As I have recently started learning to use models from Hugging Face, I also tried directing models to follow templates for their expected responses. So far I had only created a prompt that concatenated instructions with the user's input, shaped like this:
Prompt = You are a model who plays the role of a mother. Your child addresses you with the following phrase: {input}. Respond to it while being loving, caring, and affectionate.
This would work, but as I learned later, it is not how things are done. To take more control over a model, there are so-called system messages, which, as I understand it, are a feature of conversational/chat models ONLY. I still do not quite understand the difference between the two approaches, though.
As I understand it, system messages are a distinct feature: the model consumes them first, much like a ruler's will or a god's word. And unlike system messages, simply expanding a prompt with an extra clarification like "please do blablabla using this data: {input}" is absolutely not how responses are meant to be templated, right?
I would rather send the model both a system message and the user's prompt every time. First, the model reads a system message defining the response template, along these lines:
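A sketch of what I mean, using the common `{ role, content }` chat-message shape (the wording and the object itself are my own illustration, not any specific API):

```javascript
// Hypothetical system message (my own wording), in the widely used
// { role, content } chat-message shape.
const systemMessage = {
  role: 'system',
  content:
    'You convert object descriptions into JSON. ' +
    'Reply with a single valid JSON object only, no prose, no markdown.',
};
```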
With the system message attached, I also pass it the user's prompt as-is:
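Putting the two together, the message list I have in mind looks roughly like this (again, just the common chat-message shape with my own example content):

```javascript
// Hypothetical messages array: the system message comes first,
// then the user's prompt exactly as the user typed it.
const messages = [
  { role: 'system', content: 'Reply with a single valid JSON object only.' },
  { role: 'user', content: 'a house with 3 doors and 2 windows' },
];
```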
My current situation
Now you may ask why I bother with system messages and directing a model. I do it because I need the model to return an object generated from the user's description (e.g. a house with 3 doors and 2 windows) as a pure JSON string ONLY. Yes, I am creating a test app that turns a description into a rendered JSON object. Why do I need accuracy? Because the model's response is not going to appear in a chat box (there is none, actually); instead, it is going to be consumed by a function, and I want to maximise the chance, ideally up to 100%, that the string response is a valid JSON object convertible to a JavaScript object via JSON.parse(res).
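A minimal sketch of that guard (the function name is my own):

```javascript
// Parse the model's raw response; return null instead of throwing,
// so the caller can retry or re-prompt on invalid JSON.
function parseModelResponse(res) {
  try {
    return JSON.parse(res);
  } catch {
    return null;
  }
}

// parseModelResponse('{"doors":3,"windows":2}') yields { doors: 3, windows: 2 }
// parseModelResponse('Sure! Here is your JSON: ...') yields null
```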
The question is
How do I normally use a Hugging Face chat model and feed it system + user messages?
Also, did I get everything right?