Open tongyx361 opened 1 year ago
I also really need this API to extract embeddings; starting the modification from `forward` would be too painful.
@WoosukKwon @simon-mo @zhuohan123 is this a feature that you'd like to see implemented?
I have a demo using the existing vLLM API: https://github.com/WuNein/vllm4mteb/blob/main/vllm-new.py
@WuNein It looks great! Will you create a PR to the main vLLM repo so that we can use vLLM to serve embedding models?
It would be more useful if we could support decoder-based embedding models with a v1/embedding API, like the OpenAI embedding API.
How should I put it: someone has already done something along these lines in [Model][Misc] Add e5-mistral-7b-instruct and Embedding API #3734, but I don't think that approach makes sense.
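For context on what such an embedding endpoint would compute: decoder-only embedding models like e5-mistral-7b-instruct typically take the final-layer hidden state of the last real (non-padding) token as the sequence embedding. A minimal NumPy sketch of that pooling step (the function name and shapes here are illustrative, not vLLM's actual API):

```python
import numpy as np

def last_token_pool(hidden_states: np.ndarray, attention_mask: np.ndarray) -> np.ndarray:
    """Pool a batch of last_hidden_states into one embedding per sequence.

    hidden_states:  (batch, seq_len, hidden_dim) final-layer activations
    attention_mask: (batch, seq_len) with 1 for real tokens, 0 for padding
    """
    # Index of the last non-padding token in each sequence.
    last_idx = attention_mask.sum(axis=1) - 1          # (batch,)
    batch_idx = np.arange(hidden_states.shape[0])
    pooled = hidden_states[batch_idx, last_idx]        # (batch, hidden_dim)
    # L2-normalize, as embedding APIs commonly return unit vectors.
    return pooled / np.linalg.norm(pooled, axis=1, keepdims=True)

# Toy example: batch of 2, seq_len 3, hidden_dim 4.
hs = np.arange(24, dtype=np.float64).reshape(2, 3, 4)
mask = np.array([[1, 1, 0], [1, 1, 1]])  # sequence 0 ends one token early
emb = last_token_pool(hs, mask)
print(emb.shape)  # (2, 4)
```

The point is only that the embedding comes from the hidden states that the current sampling path throws away.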
I looked into the source code and found that the `Sampler` class discards the prefix in `last_hidden_states`. Is it possible for me to start from `Sampler` and expose `last_hidden_states` as an optional output? Could the development team or anyone else familiar with vLLM provide some guidance and suggestions? In short: I want to obtain `last_hidden_states`. Is there a corresponding interface? If not, which parts of the code should I modify to implement it?
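To make the proposal concrete, here is a toy sketch of the "optional output" idea; all names (`SamplerOutput`, `sample`, the flag) are hypothetical stand-ins, not vLLM's real `Sampler` interface. Instead of dropping the hidden states after projecting them to logits, the sampler would pass them through when a flag is set:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class SamplerOutput:
    token_ids: list
    # Optional: final-layer activations, kept only when requested.
    last_hidden_states: Optional[list] = None

def sample(hidden_states, lm_head_weights, return_hidden_states=False):
    """Toy greedy sampler.

    hidden_states:   (batch, hidden) as nested lists
    lm_head_weights: (hidden, vocab) as nested lists
    """
    # Project hidden states to vocabulary logits (plain Python for self-containment).
    logits = [
        [sum(h * w for h, w in zip(hs, col)) for col in zip(*lm_head_weights)]
        for hs in hidden_states
    ]
    token_ids = [max(range(len(row)), key=row.__getitem__) for row in logits]
    # Today the hidden states are effectively discarded at this point;
    # the proposal is to optionally attach them to the output instead.
    return SamplerOutput(
        token_ids=token_ids,
        last_hidden_states=hidden_states if return_hidden_states else None,
    )

out = sample([[1.0, 0.0]], [[0.2, 0.9], [0.5, 0.1]], return_hidden_states=True)
print(out.token_ids, out.last_hidden_states)  # [1] [[1.0, 0.0]]
```

Defaulting the flag to off keeps the existing sampling path and its memory footprint unchanged for callers that do not need embeddings.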