Open niutech opened 4 months ago
This is more or less what VS Code does. See https://code.visualstudio.com/api/extension-guides/language-model and https://code.visualstudio.com/api/references/vscode-api#lm
VS Code extension devs can use LLMs, and they do this by first choosing a model from a predefined list with `selectChatModels`.
LLMs are contributed by other extensions, although I don't think the docs say how yet.
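For reference, the extension-side flow looks roughly like the sketch below. `selectChatModels` is the real VS Code API (`vscode.lm`); the mock `lm` object here is only a stand-in for the `vscode` module so the shape can be demonstrated outside the editor:

```javascript
// A VS Code extension picks a chat model by filtering the predefined list
// with a selector (vendor/family/version/id). This mirrors
// vscode.lm.selectChatModels; the mock below is NOT the real module.
async function pickModel(lm) {
  // selectChatModels resolves to an array of matching models;
  // it can be empty if nothing matches or the user denied access.
  const [model] = await lm.selectChatModels({ vendor: 'copilot', family: 'gpt-4o' });
  return model;
}

// Illustrative stand-in mirroring the API surface:
const mockLm = {
  async selectChatModels(selector = {}) {
    const available = [{ vendor: 'copilot', family: 'gpt-4o', id: 'gpt-4o' }];
    return available.filter(m =>
      (!selector.vendor || m.vendor === selector.vendor) &&
      (!selector.family || m.family === selector.family));
  },
};

pickModel(mockLm).then(m => console.log('selected:', m && m.id));
```

In a real extension the chosen `model` then handles requests via `model.sendRequest(...)`; contributing models from other extensions is the part the docs don't yet cover.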
Thanks for the detailed proposal.
While enabling developers to register custom LLMs via web extensions offers interesting possibilities, we need to carefully consider the implications of strong identifiers, which might limit flexibility: they risk an explosion of models (or of versions of a given model) when what's already available could be sufficient, and they risk locking developers into whatever was popular at a given time even after better options have become available. Portability across browsers is also a concern (e.g. I imagine the chrome-extension://[id] scheme may only make sense for Chrome). Especially with large models, it seems important to minimize over-reliance on a specific version of a model, or on a specific "location" (an origin, or an extension ID) for the model.
See the related discussion in issue #5, which goes beyond the built-in AI APIs. We encourage you to engage there to contribute to a more future-proof solution.
Let's allow developers to register a new LLM in a web browser as a web extension, which could then be chosen in #8. The model would be in the TFLite FlatBuffers format, so that it is compatible with MediaPipe LLM Inference as a possible fallback for unsupported browsers (compatible with Gemini Nano).
The method to register/add a custom model could be invoked by a web extension like this:
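A sketch of what that registration call could look like. The `browser.ml` namespace, the `registerModel` name, and the metadata fields are all hypothetical placeholders, not an existing WebExtensions API; a tiny in-memory stand-in makes the shape concrete:

```javascript
// Hypothetical registry stand-in for the proposed browser.ml API.
const browser = {
  ml: {
    _models: new Map(),
    // An extension registers a TFLite FlatBuffers model it bundles.
    async registerModel(descriptor) {
      if (!descriptor.id || !descriptor.url) {
        throw new TypeError('id and url are required');
      }
      this._models.set(descriptor.id, descriptor);
      return descriptor.id;
    },
  },
};

// Called from the extension's background script:
browser.ml.registerModel({
  id: 'example-llm-3b',                           // hypothetical identifier
  name: 'Example LLM 3B',
  version: '1.0.0',
  format: 'tflite',                               // TFLite FlatBuffers, per the proposal
  url: 'chrome-extension://abc123/model.tflite',  // illustrative extension URL
}).then(id => console.log('registered:', id));
```

Note the `chrome-extension://` URL here is exactly the kind of Chrome-specific "location" the portability concern above is about.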
Then it could be listed by web apps like this:
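Again a hypothetical sketch: `navigator.ml.listModels` is an invented name for illustration, and a small in-memory stub stands in for the browser-side registry:

```javascript
// Hypothetical sketch of a web page enumerating registered models.
const navigator = {
  ml: {
    _models: [
      { id: 'example-llm-3b', name: 'Example LLM 3B', format: 'tflite' },
      { id: 'another-llm-1b', name: 'Another LLM 1B', format: 'tflite' },
    ],
    // Resolves to lightweight descriptors, not the model weights themselves.
    async listModels() {
      return this._models.map(({ id, name, format }) => ({ id, name, format }));
    },
  },
};

navigator.ml.listModels().then(models => {
  for (const m of models) console.log(m.id, '-', m.name);
});
```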
The model metadata could be accessed like this:
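And a sketch of reading per-model metadata. `getModelMetadata` and the metadata fields (`sizeBytes`, `contextLength`, `license`) are invented for illustration; a stub registry keeps the example self-contained:

```javascript
// Hypothetical stub registry holding one registered model's metadata.
const registry = new Map([
  ['example-llm-3b', {
    id: 'example-llm-3b',
    name: 'Example LLM 3B',
    version: '1.0.0',
    format: 'tflite',           // TFLite FlatBuffers, per the proposal
    sizeBytes: 3_200_000_000,   // rough download size (illustrative)
    contextLength: 4096,
    license: 'Apache-2.0',
  }],
]);

async function getModelMetadata(id) {
  const model = registry.get(id);
  if (!model) throw new Error(`unknown model: ${id}`);
  // Return a copy so callers cannot mutate the registry entry.
  return { ...model };
}

getModelMetadata('example-llm-3b').then(meta => {
  console.log(`${meta.name} v${meta.version}, context ${meta.contextLength}`);
});
```

Exposing size and license up front would let a web app decide whether a model is worth downloading before committing to it.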