-
I could not reproduce the Overall results well. I ran the experiments on a V100; here are my parameter settings and results. What could be the reason, or how should I reproduce it c…
-
I tried to instantiate a BERT model with the following code:
```rust
use candle_core::DType;
use candle_lora::LoraConfig;
use candle_lora_transformers::bert::{BertModel, Config};
use candle_nn::{…
```
-
See notebook, section "Word-level timestamps using attention weights":
https://github.com/openai/whisper/blob/main/notebooks/Multilingual_ASR.ipynb
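The notebook's core idea is that each decoded token's cross-attention distribution over audio frames peaks near the moment the token is spoken. A minimal plain-Python sketch of that mapping, assuming one attention row per token and a fixed frame stride of 0.02 s (both are simplifying assumptions; the notebook additionally applies dynamic time warping and head selection):

```python
# Hedged sketch: turning per-token attention over audio frames into timestamps.
# Each row of `attn` is one token's attention weights over frames; each frame
# is assumed to cover 0.02 s (an assumption matching Whisper's frame stride).
FRAME_SECONDS = 0.02

def token_timestamps(attn):
    times = []
    for weights in attn:
        # Take the frame where this token's attention peaks.
        peak = max(range(len(weights)), key=lambda i: weights[i])
        times.append(peak * FRAME_SECONDS)
    return times

attn = [
    [0.1, 0.7, 0.2, 0.0],  # token 0 attends mostly to frame 1
    [0.0, 0.2, 0.6, 0.2],  # token 1 attends mostly to frame 2
]
print(token_timestamps(attn))  # [0.02, 0.04]
```

The real notebook refines these raw peaks with dynamic time warping to enforce monotonic alignment; the argmax here is only the intuition.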
-
Apply ASR (e.g., Whisper) to an existing speech dataset to establish word-level timings, and add said timings as an additional column for the dataset. With this new column, add an option to data/datas…
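A minimal sketch of the column-adding step, using a hypothetical stand-in aligner (`fake_asr_align` is invented for illustration; a real pipeline would call an ASR model such as Whisper, and the column schema shown is an assumption):

```python
# Hedged sketch: adding a word-timings column to each dataset row.
# `fake_asr_align` is a hypothetical placeholder for a real ASR aligner.

def fake_asr_align(transcript):
    # Pretend every word takes 0.5 s, back to back (illustration only).
    words = transcript.split()
    return [{"word": w, "start": i * 0.5, "end": (i + 1) * 0.5}
            for i, w in enumerate(words)]

def add_word_timings(example):
    # Map-style function: enrich one row with the new column.
    example["word_timings"] = fake_asr_align(example["text"])
    return example

dataset = [{"text": "hello world"}]
dataset = [add_word_timings(ex) for ex in dataset]
print(dataset[0]["word_timings"])
```

With a Hugging Face `datasets.Dataset`, the same function could be passed to `Dataset.map` instead of a list comprehension.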
-
Time Pressure Setup & Visual Signal:
- Define "too much" pressure: Since individual tolerance for pressure varies, it’s important to assess how much pressure participants feel during the task. Adding…
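One simple way to drive such a visual signal is to map the fraction of elapsed time to a color. A sketch under assumed thresholds (the cutoffs and colors below are illustrative, not from the study design):

```python
# Hedged sketch: mapping elapsed time to a visual pressure signal.
# Thresholds (0.5, 0.8) and colors are assumptions for illustration.

def pressure_signal(elapsed, limit):
    frac = elapsed / limit
    if frac < 0.5:
        return "green"   # comfortable
    if frac < 0.8:
        return "yellow"  # mounting pressure
    return "red"         # high pressure

print(pressure_signal(30, 120), pressure_signal(100, 120))  # green red
```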
-
How can I ensure that the "Align" feature, which aligns plain text or tokens with audio at the word level, avoids occasional errors in its output? This feature is great because it can output final result…
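One common mitigation is to attach a confidence score to each aligned word and drop or flag words below a threshold rather than emit them. A minimal sketch, assuming each aligned word carries a `score` field (an assumption; the actual Align output schema may differ):

```python
# Hedged sketch: filtering out low-confidence word alignments.
# The `score` field and 0.5 threshold are assumptions for illustration.

def filter_alignments(words, min_score=0.5):
    return [w for w in words if w["score"] >= min_score]

aligned = [
    {"word": "hello", "start": 0.0, "end": 0.4, "score": 0.95},
    {"word": "world", "start": 0.4, "end": 0.6, "score": 0.12},  # likely wrong
]
print(filter_alignments(aligned))  # keeps only "hello"
```

Flagging instead of dropping (so a human can review the suspect spans) is often preferable when the downstream task needs complete coverage.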
-
After updating the TGI version to
ghcr.io/huggingface/text-generation-inference:latest-intel-cpu
the codegen test failed with the following two models:
ise-uiuc/Magicoder-S-DS-6.7B
m-a-p/OpenCodeInterpr…
-
Using a hierarchical network (word-level + sentence-level embeddings; most likely an LSTM/RNN architecture), we can potentially overfit on a single sample by attempting to generate the summary …
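The single-sample overfit is a standard sanity check: if the model cannot drive the loss to near zero on one example, something is wrong in the pipeline. A sketch of the loop with a one-parameter linear model standing in for the hierarchical LSTM (purely illustrative):

```python
# Hedged sketch of the single-sample overfitting sanity check,
# using a trivial linear model w*x instead of the hierarchical network.

def overfit_single_sample(x, y, lr=0.1, steps=200):
    w = 0.0
    for _ in range(steps):
        pred = w * x
        grad = 2 * (pred - y) * x  # derivative of squared error w.r.t. w
        w -= lr * grad
    return w, (w * x - y) ** 2

w, loss = overfit_single_sample(x=2.0, y=6.0)
print(round(w, 3), loss < 1e-6)  # 3.0 True
```

The same idea applies to the real model: train on one (document, summary) pair until the loss is near zero before scaling up.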
-
As far as I have read, the implementations of attention at both the word and sentence level are WRONG:
```python
## The word RNN model for generating a sentence vector
class WordRNN(nn.Module):
…
-
Kindly observe the video, where I restored the seed "abandon" x 25, but specifically only "aband" x 25:
[restore-bug-3.webm](https://github.com/user-attachments/assets/6ca1fe50-db35-4bb3-8a2f-10b79ba6a5f0)
Expected…