-
Hi there 👋
Let's translate the course into `Arabic` so that the whole community can benefit from this resource 🌎!
Below are the chapters and files that need translating - let us know here if you'd…
-
### Feature request
Flash Attention 2 is a library that provides attention operation kernels for faster and more memory-efficient inference and training: https://github.com/Dao-AILab/flash-attentio…
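For context, a minimal sketch of how Flash Attention 2 is typically opted into via the `attn_implementation` flag that `transformers` exposes; the checkpoint name is only an example, and the fused kernels additionally require the `flash-attn` package and a supported GPU:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Load a causal LM with the Flash Attention 2 kernels instead of the
# default eager attention. Half-precision weights (fp16/bf16) are
# required for the fused kernels.
model = AutoModelForCausalLM.from_pretrained(
    "mistralai/Mistral-7B-v0.1",              # example checkpoint
    torch_dtype=torch.float16,
    attn_implementation="flash_attention_2",
)
tokenizer = AutoTokenizer.from_pretrained("mistralai/Mistral-7B-v0.1")
```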
-
[UMD](https://github.com/umdjs/umd) is used by many libraries to support legacy module systems. Migrating to esbuild is difficult when relying on this format.
There are some workarounds, but they ha…
-
I have a library that is designed for use in a browser (HuggingFace's Transformers.js). It uses the Web Fetch API, so I've been trying to monkey-patch node-fetch in, with no success so far, due t…
-
## Failing module
- **GitHub**: https://github.com/xenova/transformers.js
- **npm**: https://www.npmjs.com/package/@xenova/transformers
```js
import { env, pipeline } from "@xenova/transformer…
```
-
Hi all, just a heads-up: I filed an [issue](https://github.com/huggingface/transformers/issues/29466) with `huggingface/transformers` requesting model support for BASED via their library.
My engage…
-
- [x] Learn the basics
- [x] Write a simple hello world
- [x] Write pipeline code using `transformers` (sketched below)
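A minimal sketch of that last step using the `transformers` pipeline API; the task and input are arbitrary examples:

```python
from transformers import pipeline

# Build a sentiment-analysis pipeline; downloads a default model on first use.
classifier = pipeline("sentiment-analysis")

# Run inference on a single string.
print(classifier("Hello, Transformers!"))
# e.g. [{'label': 'POSITIVE', 'score': 0.99...}]
```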
-
Hi OpenLM team! Is there interest in making OpenLM models loadable using just HF?
I see some OpenLM [models](https://huggingface.co/mlfoundations/open_lm_7B_1.25T) up on HF, but they are not readil…
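A hypothetical sketch of the desired usage. The `trust_remote_code` route is one common way custom architectures become loadable, but nothing here works today; it only illustrates the ask:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Hypothetical: this only works once OpenLM publishes config/modeling code
# alongside the checkpoint (e.g. via trust_remote_code) or the architecture
# is upstreamed into transformers.
model = AutoModelForCausalLM.from_pretrained(
    "mlfoundations/open_lm_7B_1.25T",
    trust_remote_code=True,
)
tokenizer = AutoTokenizer.from_pretrained(
    "mlfoundations/open_lm_7B_1.25T",
    trust_remote_code=True,
)
```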
-
### Feature request
The underlying peft library supports setting multiple adapters:
```python
model.set_adapters(["adapter_a", "adapter_b"])
```
It would be nice if the pipeline supported the…
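A hypothetical sketch of what pipeline-level support could look like; the `adapter_names` argument is invented here purely to illustrate the request and is not an existing `transformers` parameter:

```python
from transformers import pipeline

# Hypothetical API: let the pipeline activate several PEFT adapters at once.
# Neither the argument nor the behavior exists today; it mirrors the
# model-level call quoted above.
generator = pipeline(
    "text-generation",
    model="some/base-model",                   # placeholder model id
    adapter_names=["adapter_a", "adapter_b"],  # invented parameter
)
```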
-
### Feature request
We would like to propose the addition of a new, widely adopted scheduler strategy for language model pretraining to the Transformers repository. Upon reviewing the current schedulers …
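For context, a minimal sketch of how the existing schedulers are consumed, which any new strategy would presumably slot into; the cosine schedule here is just a stand-in:

```python
import torch
from transformers import get_cosine_schedule_with_warmup

# Dummy optimizer over a single parameter, just to drive the scheduler.
param = torch.nn.Parameter(torch.zeros(1))
optimizer = torch.optim.AdamW([param], lr=3e-4)

# Existing schedulers follow this factory signature; a new strategy would
# presumably be exposed the same way (or via get_scheduler, by name).
scheduler = get_cosine_schedule_with_warmup(
    optimizer,
    num_warmup_steps=1_000,
    num_training_steps=100_000,
)

for step in range(10):
    optimizer.step()
    scheduler.step()
```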