-
Hi everyone,
I am currently trying to run the "run_decimer_save_results.py" script to reproduce the benchmarks. While doing so, I am encountering errors in several files when importing packages.
An example:
…
-
Hello!
First of all, I would like to thank you for the great work! But it seems that EfficientNetV2 doesn't work. I tried this:
python tools/train.py models/efficientnetv2/efficientnetv2_xl_512.py
And r…
-
for example:
the max values of `dropout_rate` and `ram` are 0.2 and 10 in `effnetv2_configs.py`, but 0.3 and 15 in the paper, and `dropout_rate` does not seem to change progressively across the 4 training stages
```pyth…
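The EfficientNetV2 paper describes progressive learning, where regularization strength grows together with image size from stage to stage. Below is a minimal sketch of such a per-stage schedule, assuming 4 stages and linear interpolation; the helper `progressive_value` and the minimum values are illustrative, not the repo's actual code:
```python
# Illustrative progressive schedule: linearly interpolate each regularizer
# from an assumed minimum up to its maximum over the training stages.
# `dropout_rate` and `ram` follow the names in effnetv2_configs.py; the
# minimums (0.1 and 5) are assumptions, the maximums are the paper's values.

def progressive_value(stage, num_stages, v_min, v_max):
    """Linearly interpolate a regularization value for the given stage."""
    if num_stages == 1:
        return v_max
    return v_min + (v_max - v_min) * stage / (num_stages - 1)

NUM_STAGES = 4
for stage in range(NUM_STAGES):
    dropout_rate = progressive_value(stage, NUM_STAGES, 0.1, 0.3)
    ram = progressive_value(stage, NUM_STAGES, 5, 15)
    print(f"stage {stage}: dropout_rate={dropout_rate:.2f}, ram={ram:.1f}")
```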
-
Model: A1 ~ A6 (6-CLASS), EfficientNetV2_s
Preprocessing: images with resolution above 224x224 excluded; box images include background beyond the affected area
batch_size: 128
optimizer = Adam
learning rate: 1e-4
criterion = CrossEntropyLoss
Doing a resize…
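A minimal PyTorch sketch of the setup listed above (EfficientNetV2-S via timm, Adam at 1e-4, CrossEntropyLoss, batch size 128); the dummy tensors stand in for the unspecified dataset:
```python
import timm
import torch
from torch import nn
from torch.utils.data import DataLoader, TensorDataset

# Dummy stand-in data (the actual dataset is not specified): 224x224 RGB, 6 classes.
train_dataset = TensorDataset(torch.randn(256, 3, 224, 224),
                              torch.randint(0, 6, (256,)))
model = timm.create_model("tf_efficientnetv2_s", pretrained=False, num_classes=6)
loader = DataLoader(train_dataset, batch_size=128, shuffle=True)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
criterion = nn.CrossEntropyLoss()

model.train()
for images, labels in loader:
    optimizer.zero_grad()
    loss = criterion(model(images), labels)
    loss.backward()
    optimizer.step()
```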
-
- Title: EfficientNetV2 model reproduction does not converge; not sure which step went wrong
- Version and environment info:
****************************************
Paddle version: 2.2.2
Paddle With CUDA: True
OS: Ubuntu 16.04
Python version: 3.7.4
CUDA version: 10.1.…
-
Model 1: A1 ~ A3 (3-CLASS), EfficientNetV2_s
Model 2: A4 ~ A6 (3-CLASS), EfficientNetV2_s
Preprocessing: ratio-and-zero-padding, interpolation: LANCZOS4
batch_size: 64, 128
optimizer = Adam
learn…
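A sketch of what "ratio-and-zero-padding" with LANCZOS4 interpolation could look like in OpenCV; the 224x224 target size and the centered placement are assumptions, not details given in the post:
```python
import cv2
import numpy as np

def resize_with_ratio_and_zero_padding(img, size=224):
    """Scale the longer side to `size` (LANCZOS4), then zero-pad to a square.

    Assumes a 3-channel HxWxC image; centering the content is an assumption.
    """
    h, w = img.shape[:2]
    scale = size / max(h, w)
    new_w, new_h = int(round(w * scale)), int(round(h * scale))
    resized = cv2.resize(img, (new_w, new_h), interpolation=cv2.INTER_LANCZOS4)
    canvas = np.zeros((size, size, img.shape[2]), dtype=img.dtype)
    top, left = (size - new_h) // 2, (size - new_w) // 2
    canvas[top:top + new_h, left:left + new_w] = resized
    return canvas
```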
-
The official repo made some changes that are different from the paper.
For example, the paper claims that v2-s uses 272 channels in the last stage, but they changed it to 256 in their code, and the auth…
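One quick way to see the resulting widths, assuming the timm port mirrors the official repo, is to print the per-stage feature channels:
```python
import timm

# Per-stage feature channels of timm's tf_efficientnetv2_s port; the last
# entry shows the 256-channel final stage rather than the paper's 272.
model = timm.create_model("tf_efficientnetv2_s", features_only=True, pretrained=False)
print(model.feature_info.channels())   # output channels per feature stage
print(model.feature_info.reduction())  # corresponding downsampling strides
```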
-
This should follow the example of #1412 and #1401, including
- Docstring updates
- Colab demonstrating that weight loading works (if pre-trained weights exist) and that numerics are identical; a minimal comparison sketch follows below
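A minimal sketch of such a numerics check: run the same input through the reference and the ported model and compare outputs elementwise. Both model variables are placeholders (the reference is reused as a stand-in so the snippet runs):
```python
import timm
import torch

# Placeholders: in a real PR, `ported_model` would be the new implementation
# with converted weights; here the reference is reused so the snippet runs.
reference_model = timm.create_model("tf_efficientnetv2_s", pretrained=False).eval()
ported_model = reference_model

x = torch.randn(1, 3, 224, 224)
with torch.no_grad():
    ref_out = reference_model(x)
    new_out = ported_model(x)
print("max abs diff:", (ref_out - new_out).abs().max().item())
assert torch.allclose(ref_out, new_out, atol=1e-5)
```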
-
**Describe the bug**
I am working on quantization of a few timm models using Torch FX Graph Mode Quantization. Specifically, I am looking into post-training static quantization. For static models like …
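For reference, a minimal FX graph mode post-training static quantization sketch; the model choice (resnet18) and the random calibration data are placeholders, not the models from the report:
```python
import timm
import torch
from torch.ao.quantization import get_default_qconfig_mapping
from torch.ao.quantization.quantize_fx import convert_fx, prepare_fx

# Post-training static quantization via FX graph mode on a traceable timm model.
model = timm.create_model("resnet18", pretrained=False).eval()
example_inputs = (torch.randn(1, 3, 224, 224),)
qconfig_mapping = get_default_qconfig_mapping("fbgemm")

prepared = prepare_fx(model, qconfig_mapping, example_inputs)
with torch.no_grad():
    for _ in range(8):  # calibration with placeholder data
        prepared(torch.randn(1, 3, 224, 224))
quantized = convert_fx(prepared)
print(quantized(example_inputs[0]).shape)
```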
-
Model 1: A1 ~ A6 (6-CLASS), EfficientNetV2_s
Preprocessing: ratio-and-zero-padding, interpolation: LANCZOS4
batch_size: 64, 128
optimizer = Adam
learning rate: 1e-4
criterion = CrossEntropyLoss
…