-
Hello, I'm interested in applying the MLP layer of your Monarch Mixer in my research.
I'm unsure which versions of the packages were used in the Monarch Mixer implementation, including PyTorch, CUDA, …
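For reference, a minimal probe of the locally installed stack (this only reports what is present in your own environment; it does not tell you which versions the Monarch Mixer authors actually used):

```python
import torch

print("PyTorch:", torch.__version__)              # e.g. 2.1.0
print("CUDA (torch build):", torch.version.cuda)  # CUDA version this PyTorch build targets
print("cuDNN:", torch.backends.cudnn.version())
print("GPU available:", torch.cuda.is_available())
```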
-
### Prerequisite
- [X] I have searched [Issues](https://github.com/open-mmlab/mmengine/issues) and [Discussions](https://github.com/open-mmlab/mmengine/discussions) but cannot get the expected help.
…
-
## Approach 1
- Search for surveys on ReID (re-identification)
#### Examples:
- https://ieeexplore.ieee.org/abstract/document/9336268
- https://arxiv.org/abs/2303.11332
### Approach 1A (One embedding per trajectory …
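A minimal sketch of the "one embedding per trajectory" idea under an assumed setup: per-frame ReID feature vectors belonging to one track are mean-pooled into a single L2-normalized descriptor. The pooling choice and the shapes are assumptions for illustration, not taken from the notes above.

```python
import numpy as np

def trajectory_embedding(frame_embeddings: np.ndarray) -> np.ndarray:
    """Collapse per-frame ReID embeddings of shape (T, D) into one
    L2-normalized trajectory descriptor of shape (D,) by mean pooling."""
    traj = frame_embeddings.mean(axis=0)
    return traj / (np.linalg.norm(traj) + 1e-12)

# Hypothetical usage: a 30-frame track with 512-dim per-frame ReID features.
emb = trajectory_embedding(np.random.randn(30, 512).astype(np.float32))
print(emb.shape)  # (512,)
```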
-
Hi,
Thanks for this awesome implementation. I've been attempting to adapt your implementation of ALAE to work with sequential data. In my refactor, I replace the MLP layers with LSTMs since I'm wor…
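For context, a minimal sketch of the kind of swap described: a per-sample MLP block replaced by an LSTM that consumes a sequence and returns its last hidden state. The module names and dimensions are hypothetical, not taken from the ALAE code.

```python
import torch
import torch.nn as nn

class MLPBlock(nn.Module):
    """Original-style block: maps (batch, in_dim) to (batch, out_dim)."""
    def __init__(self, in_dim=512, out_dim=512):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(in_dim, out_dim), nn.LeakyReLU(0.2))

    def forward(self, x):
        return self.net(x)

class LSTMBlock(nn.Module):
    """Sequential replacement: maps (batch, seq_len, in_dim) to (batch, out_dim)
    by taking the last layer's final hidden state of an LSTM."""
    def __init__(self, in_dim=512, out_dim=512, num_layers=1):
        super().__init__()
        self.lstm = nn.LSTM(in_dim, out_dim, num_layers=num_layers, batch_first=True)

    def forward(self, x):
        _, (h_n, _) = self.lstm(x)
        return h_n[-1]  # (batch, out_dim)

# Hypothetical shapes: batch of 4 sequences, 16 time steps, 512-dim features.
y = LSTMBlock()(torch.randn(4, 16, 512))
print(y.shape)  # torch.Size([4, 512])
```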
-
Hello @LinB203!
I used the ‘Inference for video’ code from the README, but got
Loading checkpoint shards: 100%|███████████████████████████████████████████████████████████████████████████████████████████…
-
### System Info
CPU architecture: x86_64
Host RAM: 1TB
GPU: 8xH100 SXM
Container: manually built from Dockerfile.trt_llm_backend (TRT 9.3)
TensorRT-LLM version: 0.10.0.dev2024043000
Dr…
-
Thank you for sharing.
I have some questions about this framework.
Question 1
Can this INN framework implement an MLP network with different input and output dimensions? For example, the input d…
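Not specific to this framework, but for context: invertible maps need equal input and output sizes, so a common workaround is to zero-pad the narrower side to a shared width and slice the result afterwards. A minimal sketch with hypothetical dimensions (the square linear layer below is only a stand-in for an invertible block):

```python
import torch
import torch.nn as nn

in_dim, out_dim = 3, 5            # hypothetical mismatched sizes
width = max(in_dim, out_dim)      # shared width the square map operates on

# Stand-in for one invertible block; a real INN would use coupling layers etc.
square_map = nn.Linear(width, width)

x = torch.randn(8, in_dim)
x_padded = torch.nn.functional.pad(x, (0, width - in_dim))  # zero-pad input to `width`
y_full = square_map(x_padded)
y = y_full[:, :out_dim]           # keep the first out_dim coordinates as the output
print(y.shape)  # torch.Size([8, 5])
```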
-
Hello, I load pre-trained llava-llama3 SFT weights and fine-tune using LoRA, but get an error when merging weights:
**scripts:**
Training:
```
deepspeed --master_port=$((RANDOM + 10000)) --inclu…
```
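In case it helps narrow things down, here is a minimal, generic sketch of merging LoRA adapters into a base model with the peft library. The paths are placeholders and the AutoModelForCausalLM class is an assumption; the project's own merge script may do this differently.

```python
import torch
from transformers import AutoModelForCausalLM
from peft import PeftModel

# Placeholder paths; substitute the actual SFT base and LoRA output directories.
# AutoModelForCausalLM is an assumption; llava-style models may require a different class.
base = AutoModelForCausalLM.from_pretrained("path/to/llava-llama3-sft", torch_dtype=torch.float16)
model = PeftModel.from_pretrained(base, "path/to/lora-checkpoint")
merged = model.merge_and_unload()              # fold the LoRA deltas into the base weights
merged.save_pretrained("path/to/merged-model")
```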
-
I was wondering how much VRAM is required to use this model. An NVIDIA GeForce RTX 3090 GPU with 24 GB VRAM seems to be insufficient when using the default settings on a clean install of Lubuntu 18.04…
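As a side note, a small probe for checking how much of the 24 GB is actually free before a run and what a run peaks at (generic PyTorch calls on a CUDA device, not specific to this model):

```python
import torch

free, total = torch.cuda.mem_get_info()  # bytes on the current CUDA device
print(f"free {free / 1e9:.1f} GB / total {total / 1e9:.1f} GB")

# After (or during) a run: high-water mark of memory allocated by PyTorch.
print(f"peak allocated: {torch.cuda.max_memory_allocated() / 1e9:.1f} GB")
```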
-
Hi. I read your paper with great interest.
I would like to run two experiments.
1. Multivariate regression
You said "The NBEATSx model offers a solution to the multivariate regression problem" in your paper.
…