-
When users log into the modeling app, they click their OAuth provider (e.g. Google) and sign in, and then they see this.
Users have to ignore the big code and instead press the little …
-
### Feature request
Some models do not receive `position_ids` in their forward pass: for example, OPTModel does not take `position_ids`, while GPTJModel does. Most ne…
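One way to check this programmatically is to inspect the forward signature. A minimal sketch, using toy stand-in functions rather than the real `OPTModel.forward` / `GPTJModel.forward` (so it runs without transformers installed):

```python
import inspect

# Toy stand-ins for the two kinds of forward signatures (hypothetical;
# a real check would pass e.g. OPTModel.forward or GPTJModel.forward).
def opt_like_forward(input_ids, attention_mask=None):
    ...

def gptj_like_forward(input_ids, attention_mask=None, position_ids=None):
    ...

def accepts_position_ids(forward_fn):
    """Return True if the forward signature has a position_ids parameter."""
    return "position_ids" in inspect.signature(forward_fn).parameters

print(accepts_position_ids(opt_like_forward))   # → False
print(accepts_position_ids(gptj_like_forward))  # → True
```

The same check works on bound methods of actual model classes.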
-
Hi,
Llama 3 is trained like this:
> We trained the models on sequences of 8,192 tokens, using a mask to ensure self-attention does not cross document boundaries.
I see you have something like this…
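The masking described in that quote can be sketched as follows (a NumPy illustration of the idea, not the actual training code): given per-token document ids in a packed sequence, token i may attend to token j only if j precedes i and both tokens belong to the same document.

```python
import numpy as np

def doc_boundary_causal_mask(doc_ids):
    """Causal self-attention mask that never crosses document boundaries:
    entry [i, j] is True iff j <= i and doc_ids[i] == doc_ids[j]."""
    doc_ids = np.asarray(doc_ids)
    same_doc = doc_ids[:, None] == doc_ids[None, :]
    causal = np.tril(np.ones((len(doc_ids), len(doc_ids)), dtype=bool))
    return same_doc & causal

# Two packed documents of lengths 3 and 2.
mask = doc_boundary_causal_mask([0, 0, 0, 1, 1])
```

With this mask, token 3 (start of the second document) cannot attend to any token of the first document, even though those tokens precede it.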
-
First off -- AMAZING TTS!!!
I know I'm repeating several other issues that have been opened, but I've spent several days testing and code tweaking to try to resolve the issues I have found, and wan…
-
The new sequence QR code method suggests a new feature that would be very handy. Kiosk could sort modeling photos into folders that have the proper names (by archaeological identifier) of the thing mo…
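A hypothetical sketch of what that sorting could look like, assuming the decoded identifier is embedded in each photo's filename (the `ARCH-…` pattern and `.jpg` extension here are made-up examples, not Kiosk's actual conventions):

```python
from pathlib import Path
import re
import shutil

# Hypothetical: filenames embed the identifier from the QR code,
# e.g. "ARCH-0042_view1.jpg".
ID_PATTERN = re.compile(r"^(ARCH-\d+)")

def sort_photos(photo_dir: Path) -> None:
    """Move each photo into a subfolder named after its identifier."""
    for photo in list(photo_dir.glob("*.jpg")):
        match = ID_PATTERN.match(photo.name)
        if match is None:
            continue  # leave photos without a recognizable identifier alone
        dest = photo_dir / match.group(1)
        dest.mkdir(exist_ok=True)
        shutil.move(str(photo), str(dest / photo.name))
```

Photos that don't match the pattern are left in place rather than guessed at.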
-
In cobra/cobra/models/mamba/modeling_mamba.py line 1772:
```python
assert hidden_states.shape[1] == 1, "Only support decoding with 1 token at a time for now"
```
which prevents me from generating s…
-
Thanks for the great review! If you are interested, please check out our new sequence modeling work, 'TOEPLITZ NEURAL NETWORK', published at ICLR 2023, in which we use only relative positional encoding to achieve…
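The core idea of parameterizing token mixing purely by relative position can be sketched in a few lines of NumPy (a simplified illustration, not the paper's implementation): the mixing matrix T satisfies T[i, j] = w[i - j], which makes it a Toeplitz matrix.

```python
import numpy as np

def toeplitz_mixing_matrix(rel_weights, n):
    """Build an n x n token-mixing matrix with T[i, j] = rel_weights[i - j],
    i.e. a Toeplitz matrix parameterized only by relative position.
    rel_weights covers offsets -(n-1)..(n-1), so it has length 2n - 1."""
    offsets = np.arange(n)[:, None] - np.arange(n)[None, :]  # i - j
    return rel_weights[offsets + (n - 1)]

n = 4
rel_weights = np.arange(2 * n - 1, dtype=float)  # one scalar per offset
T = toeplitz_mixing_matrix(rel_weights, n)
# Token mixing: y = T @ x, a relative-position-weighted sum over the sequence.
```

Because every diagonal of T is constant, the layer needs only 2n - 1 parameters instead of n², and it contains no absolute-position information at all.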
-
I encountered a runtime error while using the transformers-interpret library with a fine-tuned Llama-2 model that includes LoRA adapters for sequence classification. The error occurs when invoking the…
-
Should a sequence of given names, e.g. “Johan Ludvig” be modelled with several forenames (components of the Linguistic Appellation), and if so, how do we indicate the order of names, only by the indic…
-
`rules.build_model` currently assumes multiple sequences per FASTA file, where each FASTA file contains sequences all belonging to the family specified in the filename.
Assessment of model performa…
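A minimal sketch of the assumption as stated (hypothetical helper names, not the actual `rules.build_model` code): each FASTA file may hold many sequences, and the family label comes from the filename.

```python
from pathlib import Path

def read_fasta(path: Path):
    """Yield (header, sequence) pairs from one FASTA file."""
    header, chunks = None, []
    for line in path.read_text().splitlines():
        if line.startswith(">"):
            if header is not None:
                yield header, "".join(chunks)
            header, chunks = line[1:].strip(), []
        elif line.strip():
            chunks.append(line.strip())
    if header is not None:
        yield header, "".join(chunks)

def sequences_by_family(fasta_dir: Path):
    """Group sequences by family, taking the family name from the filename
    (the assumption build_model currently makes)."""
    return {p.stem: dict(read_fasta(p)) for p in sorted(fasta_dir.glob("*.fasta"))}
```

Under this layout, every sequence in `PF00001.fasta` is treated as a member of family `PF00001`, with no per-sequence family annotation consulted.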