-
Hello, thank you for your great work. I am reproducing your work on the **DTU** dataset **using an MLP-parameterized SDF.**
Do I need to **use positional encoding, as in NeRF, when pretraining the MLP**, and **…
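For reference, this is the NeRF-style positional encoding I have in mind; a minimal sketch (not your code), with `L = 10` frequency bands as NeRF uses for positions, and inputs assumed normalized to [-1, 1]:
```python
import math
import torch

def positional_encoding(x, L=10):
    """Map each coordinate to [sin(2^k * pi * x), cos(2^k * pi * x)] for k = 0..L-1."""
    freqs = (2.0 ** torch.arange(L)) * math.pi       # (L,) frequency bands
    angles = x[..., None] * freqs                    # (..., dim, L)
    enc = torch.cat([torch.sin(angles), torch.cos(angles)], dim=-1)
    return enc.flatten(-2)                           # (..., dim * 2 * L)

pts = torch.rand(4096, 3) * 2 - 1                    # assumed: points normalized to [-1, 1]
print(positional_encoding(pts).shape)                # torch.Size([4096, 60])
```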
-
I am constructing a `TimeSeriesDataset` to train a `TemporalFusionTransformer`. This dataset has a variable, call it `int_var`, whose values lie in {1, ..., 72}. I would lik…
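For context, a minimal sketch of my setup using pytorch-forecasting's `TimeSeriesDataSet` (the column names, lengths, and toy data below are illustrative only); here `int_var` is declared as a known categorical, cast to string as the library expects:
```python
import numpy as np
import pandas as pd
from pytorch_forecasting import TimeSeriesDataSet

# Toy single-series frame; "series", "target", and the lengths are made up.
df = pd.DataFrame({
    "time_idx": np.arange(144),
    "series": "A",
    "target": np.random.randn(144),
    "int_var": (np.arange(144) % 72) + 1,   # values in {1, ..., 72}
})
df["int_var"] = df["int_var"].astype(str)   # categorical columns must be strings

dataset = TimeSeriesDataSet(
    df,
    time_idx="time_idx",
    target="target",
    group_ids=["series"],
    max_encoder_length=24,
    max_prediction_length=6,
    time_varying_known_categoricals=["int_var"],
    time_varying_unknown_reals=["target"],
)
```
(If `int_var` should instead enter the model as a continuous input, it would go in `time_varying_known_reals` without the string cast.)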
-
## Description:
Hello! I’ve been following the development of this repository and appreciate the efforts to benchmark various efficient Transformer variants. I’d like to propose the implementation of…
-
There is a class `CatersianGrid` in `/mmgen/models/architectures/positional_encoding.py`.
It seems that there is a typo: `CatersianGrid` -> `CartesianGrid`.
-
For `embedding` and `positional_encoding`,
```python
if scale:
    outputs = outputs * (num_units ** 0.5)
```
What does the scale `num_units ** 0.5` mean?
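For context, a minimal sketch (not the repository's code; the PyTorch embedding and dimensions below are assumptions) of where this scaling usually sits: the usual motivation is to keep the embedding magnitude comparable to the positional encodings added afterwards, and the same `sqrt(d_model)` factor appears in "Attention Is All You Need".
```python
import torch
import torch.nn as nn

num_units = 512                                   # model / embedding dimension (assumed)
vocab_size = 10000                                # assumed vocabulary size
embedding = nn.Embedding(vocab_size, num_units)

tokens = torch.randint(0, vocab_size, (2, 16))    # (batch, seq_len)
outputs = embedding(tokens)

scale = True
if scale:
    outputs = outputs * (num_units ** 0.5)        # the scaling in question
# positional encodings would then be added to `outputs`
```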
-
Hi.
To the best of my understanding, this line of code should be like this:
```python
self.has_pos_emb = position_infused_attn or rel_pos_bias or rotary_pos_emb or alibi_pos_bias
```
https://git…
-
Hello, I have a few questions:
1)
```python
if args.prompt_ST==1:
    if args.file_load_path != '':
        model.load_state_dict(torch.load('{}'.format(args.file_load_path),map_location=device)…
```
-
We should have that example.
Based on Transformer #53 and self attention #52.
Maybe similar to Conformer #54.
-
Hi, thanks for your great work!
I wonder if you have tried using positional encoding or sine activation schemes in the coordinate-MLP network? It is a very popular method to make the coordinate-MLP f…
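For reference, this is the kind of sine-activation scheme I mean: a minimal SIREN-style layer sketch (`omega_0 = 30` and the initialization follow Sitzmann et al., 2020; the dimensions are illustrative):
```python
import math
import torch
import torch.nn as nn

class SineLayer(nn.Module):
    """Linear layer followed by sin(omega_0 * x), with SIREN's initialization."""
    def __init__(self, in_features, out_features, omega_0=30.0, is_first=False):
        super().__init__()
        self.omega_0 = omega_0
        self.linear = nn.Linear(in_features, out_features)
        with torch.no_grad():
            if is_first:
                bound = 1.0 / in_features
            else:
                bound = math.sqrt(6.0 / in_features) / omega_0
            self.linear.weight.uniform_(-bound, bound)

    def forward(self, x):
        return torch.sin(self.omega_0 * self.linear(x))

coords = torch.rand(1024, 3) * 2 - 1          # assumed: 3D coordinates in [-1, 1]
features = SineLayer(3, 256, is_first=True)(coords)
print(features.shape)                         # torch.Size([1024, 256])
```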