-
Hi Team,
Can you please clarify the dimensions of \(V\) and \(H_{i}\)? Assuming the dimension of the LLM is \(d\) and the number of privacy neurons is \(m\), how is the dimension of \(V\) shown as …
-
https://pltrees.github.io/publication/VecDataComp.pdf
-
Hi,
It would be great to have the option to integrate a local LLM, such as Llama 3.2, to help minimize the costs associated with using translation and OCR services. This feature could be highly beneficial…
-
### Describe the feature you'd like to request
Currently the end user has no control over or even knowledge of which LLM is being used in the assistant. This is a potential privacy concern as the use…
-
**What would you like to be added/modified**:
Research benchmarks for evaluating LLMs and LLM Agents
Develop a personalized LLM Agent using lifelong learning on the KubeEdge-Ianvs edge-cloud colla…
-
### Have I written custom code (as opposed to using a stock example script provided in MediaPipe)
None
### OS Platform and Distribution
Firebase Hosting
### MediaPipe Tasks SDK version
_No respon…
-
As per the discussion with the team, the following is a summary of the new docs structure:
The docs will be organized into top-level sections via navbar links, defined like [here](https://github.com/c…
-
# Description of the new feature/enhancement
The Windows Terminal Chat currently only supports Azure OpenAI Service. This restriction limits developers who work with or are developing their own…
-
API calls to hosted providers are not private. Privacy can be supported by running the LLMs locally on your laptop.
This ensures that nothing leaves your laptop.
Ollama integration with Alwrity i…
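For reference, Ollama exposes a local HTTP API (by default on `http://localhost:11434`), so an integration along these lines could be sketched as follows. This is a minimal illustration, assuming an Ollama server is already running locally with the named model pulled; it is not Alwrity's actual implementation:

```python
import json
import urllib.request

# Ollama's default local endpoint; no data leaves the machine.
OLLAMA_URL = "http://localhost:11434/api/generate"

def build_request(model: str, prompt: str) -> dict:
    """Build the JSON payload for Ollama's /api/generate endpoint."""
    # stream=False asks for a single JSON response instead of chunks.
    return {"model": model, "prompt": prompt, "stream": False}

def generate(model: str, prompt: str) -> str:
    """Send a prompt to a locally running Ollama instance and return its reply."""
    payload = json.dumps(build_request(model, prompt)).encode("utf-8")
    req = urllib.request.Request(
        OLLAMA_URL, data=payload, headers={"Content-Type": "application/json"}
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]

if __name__ == "__main__":
    # Requires e.g. `ollama pull llama3.2` and a running Ollama daemon.
    print(generate("llama3.2", "Summarize this draft in one sentence."))
```

The model name (`llama3.2`) and the one-shot prompt here are placeholders; a real integration would also handle connection errors when the daemon is not running.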
-
Is there any way to preprocess input before sending it to an LLM?
For LangChain I found an example of how to use Presidio to anonymize data before passing it to an LLM.
https://python.langchain.com/…
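To make the idea concrete, here is a minimal regex-based sketch of that preprocessing step. It is not Presidio itself (Presidio uses NER-backed recognizers and is far more robust); it only illustrates the shape of a scrub-before-send hook:

```python
import re

# Hypothetical minimal PII scrubber: replaces emails and US-style phone
# numbers with placeholder tags before the text is sent to an LLM.
PATTERNS = {
    "<EMAIL>": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "<PHONE>": re.compile(r"\b\d{3}[-.\s]?\d{3}[-.\s]?\d{4}\b"),
}

def scrub(text: str) -> str:
    """Return text with recognized PII replaced by placeholder tags."""
    for tag, pattern in PATTERNS.items():
        text = pattern.sub(tag, text)
    return text

# The scrubbed text, not the original, is what gets passed to the LLM call.
prompt = scrub("Summarize the email from john@example.com about 555-123-4567.")
```

In the LangChain example, this role is played by `PresidioAnonymizer` wired in front of the chain, which also supports reversible (de-anonymizable) replacements.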