-
We have the following board to track the issues:
https://github.com/orgs/ITISFoundation/projects/17/views/2
Priority decreases from left to right.
The short-term priorities would be solving breaki…
-
Hi, thank you for sharing your work. I am using single_turn_eval.py for chart summarization tasks, but my model keeps getting killed while initializing the model: ' model = MetaModel(args.llama_type, args.l…
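For what it's worth, a process being "killed" during model initialization is very often the OS OOM killer running out of RAM. A quick back-of-the-envelope estimate of the memory needed just to hold the weights can confirm this; the 7B parameter count and dtype sizes below are illustrative assumptions, not from the report:

```python
# Rough memory needed just to hold model weights at load time.
# The parameter count is an assumption for illustration (e.g. a 7B model).
PARAMS = 7e9

for dtype, bytes_per_param in [("fp32", 4), ("fp16/bf16", 2), ("int8", 1)]:
    gib = PARAMS * bytes_per_param / 1024**3
    print(f"{dtype}: ~{gib:.1f} GiB")
```

If the host has less free memory than the fp32 figure and the loader materializes weights in fp32 before casting, the kernel will kill the process with no Python traceback, which matches the symptom described.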
-
(Note this is a WIP and will be added to)
This is a meta issue to keep track of the body of work required to:
* migrate keybinding dispatch to app-model
* enable npe2 plugin keybinding contribu…
-
```[tasklist]
### Tasks
- [ ] https://github.com/mozilla/bugbug/issues/4269
- [ ] https://github.com/mozilla/bugbug/issues/4281
- [ ] https://github.com/mozilla/bugbug/issues/4297
```
-
## WIP PRs
- [ ] #665 -- Attempts to compile Lux models to XLA. While more generally scoped to allow models that cannot be compiled via Reactant, this is particularly hard because we need to give o…
-
-
[meta engineering blog post](https://engineering.fb.com/2024/06/12/data-infrastructure/training-large-language-models-at-scale-meta/)
- Meta requires massive computational power to train large lang…
-
### What happened?
The latest llama.cpp produces bad outputs for CodeShell, which previously performed well when merged into llama.cpp.
After updating `convert-hf-to-gguf.py` and `convert-hf-to-g…
-
### What happened?
The model fails to work.
### Name and Version
version: 3222 (48e6b92c)
built with cc (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0 for x86_64-linux-gnu
### What operating system are you seei…
-
### What happened?
I am using the llama-2-7b-chat.Q4_K_M.gguf model and trying to run it with llama-cpp,
but I am not getting the actual output. The model emits only `#`, not any meaningful string.
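One common cause of degenerate output like repeated `#` is prompting a chat-tuned model without the prompt template it was fine-tuned on. Llama-2-chat models expect the `[INST]`/`<<SYS>>` format; a minimal sketch of building such a prompt (the system message here is illustrative):

```python
# Llama-2-chat models were trained with a specific prompt template;
# sending a bare prompt often yields garbage tokens.
def llama2_chat_prompt(system: str, user: str) -> str:
    """Wrap a user message in the Llama-2 chat template."""
    return f"<s>[INST] <<SYS>>\n{system}\n<</SYS>>\n\n{user} [/INST]"

prompt = llama2_chat_prompt("You are a helpful assistant.", "Summarize this chart.")
print(prompt)
```

If the bindings being used expose a chat-completion API that applies the model's template automatically, using that instead of a raw completion call is usually the simpler fix.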
### Nam…