-
Description:
In-memory compression for animation
Use case:
Numerous and long animations take up a lot of memory. Some basic runtime compression/key-reduction could help reduce that.
2017-06-26 14:0…
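The key-reduction idea above can be sketched in a few lines: drop any keyframe whose value is recoverable by linearly interpolating its neighbours. This is a minimal illustrative sketch, not the engine's actual implementation; the `keys`/`reduce_keys` names and the `(time, value)` representation are assumptions.

```python
# Hypothetical sketch of runtime keyframe reduction: drop any key whose
# value can be recovered by linearly interpolating its neighbours.
# `keys` is a list of (time, value) pairs; names are illustrative only.

def reduce_keys(keys, tolerance=1e-4):
    if len(keys) <= 2:
        return list(keys)
    kept = [keys[0]]
    for i in range(1, len(keys) - 1):
        t0, v0 = kept[-1]
        t1, v1 = keys[i]
        t2, v2 = keys[i + 1]
        # Value predicted by interpolating between the last kept key
        # and the next key; keep the key only if the prediction is off.
        alpha = (t1 - t0) / (t2 - t0)
        predicted = v0 + alpha * (v2 - v0)
        if abs(predicted - v1) > tolerance:
            kept.append(keys[i])
    kept.append(keys[-1])
    return kept
```

On linear segments this collapses the track to its endpoints, which is where the memory savings for long animations would come from.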
-
Add a `doctr.models.utils` module to compress existing models and improve their latency / memory load for inference purposes on CPU. Some interesting leads to investigate:
- [x] FP conversion (#10)
…
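As a rough illustration of the FP-conversion lead, casting weights to half precision halves their in-memory footprint at some cost in numeric precision. A minimal numpy sketch of the idea (doctr's actual `models.utils` implementation may differ; the array here is a stand-in for a weight tensor):

```python
import numpy as np

# Toy "weight tensor"; a real model would hold many of these.
weights_fp32 = np.random.randn(256, 256).astype(np.float32)

# FP conversion: cast to half precision for inference.
weights_fp16 = weights_fp32.astype(np.float16)

# Memory halves; values are only approximated (~3 decimal digits).
assert weights_fp16.nbytes == weights_fp32.nbytes // 2
```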
-
| Field | Value |
| --- | --- |
| Bugzilla Link | [477598](https://bugs.eclipse.org/bugs/show_bug.cgi?id=477598) |
| Status | NEW |
| Importance | P3 enhancement |
| Reported | Sep 16, 2015 11:12 EDT |
| Modified …
-
### Describe the bug
The `TokenClfDataset` is [initialized without a `model_name`](https://github.com/microsoft/LLMLingua/blob/60abc0f94939b24e000fe6a33a954de72055fa0c/llmlingua/prompt_compressor.p…
-
**Is your feature request related to a problem? Please describe.**
GGUF is becoming the mainstream method for large model compression and accelerated inference. Transformers currently supports the lo…
-
**Describe the bug**
Following discussion https://github.com/f3d-app/f3d/issues/1705, here is a dedicated thread for adding support for the glTF `KHR_texture_basisu` extension - which can be referenced…
-
I was thinking the hosted files (i.e. models) could use compression like brotli. Since they are all static files, this could be done once instead of on a per-request basis.
For example, [decoder_model_merge…
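The precompression idea can be sketched with Python's stdlib `gzip` (brotli would work the same way, just with a different codec; the payload here is made up):

```python
import gzip

# Stand-in for a static model file that is currently served as-is.
payload = b"onnx-model-bytes " * 2048

# Compress once at deploy time...
compressed = gzip.compress(payload, compresslevel=9)

# ...then cache and serve the compressed blob; clients decompress
# transparently via the Content-Encoding mechanism.
assert gzip.decompress(compressed) == payload
assert len(compressed) < len(payload)
```

The point of doing it ahead of time is that the (expensive) highest compression level is paid once, not on every request.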
-
Dear authors,
@shuyansy @UnableToUseGit
I kindly suggest that you discuss VoCo-LLaMA [1] in the "Intro" section of your paper, at the very least.
As I find the citation and discussions related to …
-
Previously we used boost::serialization + boost::iostreams for compressed portable binary archives. Now, with cereal portable binary archives, we need a lightweight alternative to boost::iostreams fo…
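What boost::iostreams provided here is essentially a filter that compresses bytes on their way into the archive stream; the pattern itself is small. A minimal Python sketch of that wrapper shape (zlib standing in for whatever codec a cereal archive would actually use; the `CompressedWriter` class is hypothetical):

```python
import io
import zlib

class CompressedWriter:
    """File-like wrapper that deflates everything written through it."""

    def __init__(self, sink):
        self.sink = sink
        self._codec = zlib.compressobj()

    def write(self, data):
        # Feed bytes through the streaming compressor into the sink.
        self.sink.write(self._codec.compress(data))

    def close(self):
        # Flush the codec's internal buffer to finish the stream.
        self.sink.write(self._codec.flush())

sink = io.BytesIO()
writer = CompressedWriter(sink)
writer.write(b"portable binary archive payload " * 100)
writer.close()

# Round-trip check: the sink now holds a valid zlib stream.
assert zlib.decompress(sink.getvalue()) == b"portable binary archive payload " * 100
```

A lightweight C++ equivalent would wrap a `std::ostream` around zlib's `deflate` in the same way, without pulling in the rest of boost::iostreams.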
-
## 🐞Describing the bug
I have a PyTorch model which I'm able to convert to CoreML and compress. If I add ClassificationConfig during conversion, I'm unable to compress the model. The error is:
`"id…