Closed shizheng-rlfresh closed 4 months ago
Hi @shizheng-rlfresh.
The request handler property is perfect for this. Once you have added transformers.js to your application, all you will need to do is process the pipeline within it.
Example (vanilla JS):
import { pipeline } from '@xenova/transformers';

chatElementRef.request = {
  handler: async (body, signals) => {
    try {
      const pipe = await pipeline('sentiment-analysis');
      const result = await pipe(body.messages[0].text);
      signals.onResponse({text: result[0].label});
    } catch (e) {
      console.error(e);
      signals.onResponse({text: 'Failed to process pipeline'});
    }
  }
};
Let me know if you have any further questions.
Thank you @OvidijusParsiunas! Could you please show an example of using the request handler in Svelte?
I also want to mention that I have considered creating a special property for transformers.js, just like webModel. However, the current issue is that the configuration to run a model differs by the model, not by the task, hence it is very difficult and cumbersome to maintain multiple different tasks that require custom configs for different models. Another problem is that there is currently only one text-generation model that behaves like an LLM (Xenova/Qwen1.5-0.5B-Chat), so creating and maintaining an interface for an ecosystem that does not have much to offer in the chatbot domain doesn't currently yield much benefit to Deep Chat. Nevertheless, I may revisit this in the future.
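For completeness, a chat-style handler built on that Xenova/Qwen1.5-0.5B-Chat model might look like the sketch below. This is a hedged example, not an official Deep Chat integration: the helper names (toChatMessages, createQwenHandler) are illustrative, and the assumption that Deep Chat's body.messages entries have {role, text} fields with 'ai' marking model replies should be checked against the Deep Chat docs.

```javascript
// Map Deep Chat's message format ({role, text}) to the {role, content}
// shape that Transformers.js chat templates expect.
// Assumption: Deep Chat uses 'ai' as the role for model messages.
function toChatMessages(deepChatMessages) {
  return deepChatMessages.map(({role, text}) => ({
    role: role === 'ai' ? 'assistant' : 'user',
    content: text,
  }));
}

// Factory for a Deep Chat request handler that lazily loads
// Transformers.js inside the handler, so nothing is fetched until
// the first message is sent.
function createQwenHandler() {
  return async (body, signals) => {
    try {
      const {pipeline} = await import('@xenova/transformers');
      const generator = await pipeline('text-generation', 'Xenova/Qwen1.5-0.5B-Chat');
      // Build a single prompt string from the full conversation history.
      const prompt = generator.tokenizer.apply_chat_template(toChatMessages(body.messages), {
        tokenize: false,
        add_generation_prompt: true,
      });
      const output = await generator(prompt, {max_new_tokens: 128});
      // Note: depending on the Transformers.js version, generated_text
      // may include the prompt; trim it if needed.
      signals.onResponse({text: output[0].generated_text});
    } catch (e) {
      console.error(e);
      signals.onResponse({text: 'Failed to run text generation'});
    }
  };
}

// Usage: chatElementRef.request = {handler: createQwenHandler()};
```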
Here is some sample code for Svelte:
<script>
  import { DeepChat } from "deep-chat";
  import { pipeline } from "@xenova/transformers";
</script>

<main>
  <deep-chat
    request={{
      handler: async (body, signals) => {
        try {
          const pipe = await pipeline("sentiment-analysis");
          const result = await pipe(body.messages[0].text);
          signals.onResponse({ text: result[0].label });
        } catch (e) {
          console.error(e);
          signals.onResponse({ text: "Failed to process pipeline" });
        }
      },
    }}
  />
</main>
Thank you @OvidijusParsiunas!!!
I will be closing this issue since the topic of discussion has been resolved. Nevertheless feel free to comment below or create a new issue for anything else. Thanks!
I am wondering if I can use models imported from transformers.js instead of using the existing three options? I understand the requestInterceptor and responseInterceptor are sort of pre/post-processing, but they require a connect or directConnect. Suppose I import a model/pipeline through transformers.js, then how do I use them directly in Deep Chat?