-
## Introduction
Browsers and operating systems are increasingly expected to gain access to a language model. ([Example](https://developer.chrome.com/docs/ai/built-in), [example](https://blogs.windo…
-
There is another Git project from Microsoft for LLM prompt/query compression that reduces the query prompt by about 20% and optimizes it; it also lowers LLM cost and speeds up performance.
https://…
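To illustrate the idea (this is a toy sketch, not Microsoft's implementation or API; real compressors typically score and drop tokens with a small language model rather than a hand-written stopword list):

```python
import re

# Hypothetical filler-word list for illustration only.
STOPWORDS = {"the", "a", "an", "of", "to", "is", "are", "that", "this",
             "please", "kindly", "very", "really", "just"}

def compress_prompt(prompt: str) -> str:
    """Toy prompt compression: collapse whitespace and drop
    low-information filler words to cut the input token count."""
    text = re.sub(r"\s+", " ", prompt).strip()
    kept = [w for w in text.split(" ") if w.lower() not in STOPWORDS]
    return " ".join(kept)

before = "Please summarize the following   report, and really just keep it short."
after = compress_prompt(before)
```

The shorter prompt is what gets billed, which is where the cost and latency savings come from.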
-
tl;dr
> I started experimenting with downscaling images. If the purpose is to pipe content to an LLM, I think it would be beneficial to reduce the number of input tokens we send. (the LLM doesn't…
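A minimal sketch of that downscaling step, assuming Pillow and a hypothetical cap of 768 px on the longest side (the right cap depends on the target model's image tokenizer):

```python
from io import BytesIO
from PIL import Image

def downscale_for_llm(path: str, max_side: int = 768, quality: int = 80) -> bytes:
    """Shrink an image so its longest side is <= max_side and re-encode
    as JPEG; a smaller image generally means fewer input image tokens."""
    img = Image.open(path).convert("RGB")
    img.thumbnail((max_side, max_side))  # in-place, preserves aspect ratio
    buf = BytesIO()
    img.save(buf, format="JPEG", quality=quality)
    return buf.getvalue()
```

The resulting bytes can then be base64-encoded and attached to the request as usual.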
-
Have you tried building the spectrogram and encoder output in smaller chunks and appending? I think the spectrogram should generate fairly easily with minimal noise depending on the size of the chunk,…
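One way to sketch the chunked approach (a NumPy-only illustration, not the project's actual code; if chunk boundaries are aligned to the hop size and chunks overlap by one window, the concatenated frames match the full computation exactly):

```python
import numpy as np

def stft_frames(x, n_fft=400, hop=160):
    """Magnitude spectrogram: Hann-windowed frames every `hop` samples."""
    win = np.hanning(n_fft)
    n_frames = 1 + (len(x) - n_fft) // hop
    frames = np.stack([x[i * hop : i * hop + n_fft] * win
                       for i in range(n_frames)])
    return np.abs(np.fft.rfft(frames, axis=1))

def chunked_spectrogram(x, chunk_frames=100, n_fft=400, hop=160):
    """Build the same spectrogram chunk by chunk and append.
    Each chunk carries n_fft - hop extra samples so frames at the
    boundary are identical to the full computation."""
    out = []
    chunk_samples = chunk_frames * hop
    start = 0
    while start + n_fft <= len(x):
        end = min(start + chunk_samples + n_fft - hop, len(x))
        out.append(stft_frames(x[start:end], n_fft, hop))
        start += chunk_samples
    return np.concatenate(out, axis=0)
```

Because each frame is windowed independently, the chunked result is bit-identical to the one-shot spectrogram; any "noise" would only come from misaligned chunk boundaries.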
-
**EDIT**: This has become more of a temporary devblog and less of a TODO list.
:exclamation: **Development can move fast. Old posts are old.** :exclamation: See the **latest** posts below for what …
-
### What happened + What you expected to happen
I am using ray tune on the [LUMI supercomputer](https://lumi-supercomputer.eu/) on one whole [GPU node](https://docs.lumi-supercomputer.eu/hardware/lum…
-
Pose your questions for [Nilam Ram](https://profiles.stanford.edu/nilam-ram) for his talk **Modeling at Multiple Time-Scales: Screenomics and Other Super-Intensive Longitudinal Paradigms**. _Abstract_…
-
### Your current environment
```text
The output of `python collect_env.py`
(vllm-openvino) yongshuai_wang@cpu-10-48-1-249:~/models$ python collect_env.py
Collecting environment information...
…
-
Hello, I wanted to share my opinion.
memary is based on knowledge-graph expansion; there is compression in the pipeline, but in the knowledge graph itself, compression by data storage techni…
-
Hi there,
Thank you for providing the open-source library for preprocessing large-scale datasets.
I have a question regarding the size of storage used by the MDS format.
Specifically, I have a …