-
### Branch
main branch (1.x version, such as `v1.0.0`, or `dev-1.x` branch)
### Prerequisite
- [X] I have searched [Issues](https://github.com/open-mmlab/mmaction2/issues) and [Discussions](https:/…
-
Following the suggestion in #5, we released the MambaOut-Kobe model, a Kobe Memorial version with 24 Gated CNN blocks. MambaOut-Kobe achieves really competitive performance, surpassing ResNet-50 and ViT-S with much …
-
(.venv) (base) coty@P16:~/OneDrive/LLM/repo/exo$ ^C
(.venv) (base) coty@P16:~/OneDrive/LLM/repo/exo$ ^C
(.venv) (base) coty@P16:~/OneDrive/LLM/repo/exo$ DEBUG=9 python3 main.py
None of PyTorch, Ten…
-
Hey, it's a nice tool.
However, I am wondering whether the return value of `get_model_complexity_info` is correct.
Let's assume all calculations are in floating point:
1 MAC = 2 OPs
MAC = Mult + Add
FLOP…
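Assuming the convention stated above (1 MAC = 1 multiply + 1 add = 2 floating-point OPs), the arithmetic being questioned can be sketched with a hypothetical layer shape (the function and shapes below are illustrative, not the library's code):

```python
def conv2d_macs(c_in, c_out, k, h_out, w_out):
    """MACs for a square-kernel Conv2d: each output element needs
    c_in * k * k multiply-accumulates (bias ignored for simplicity)."""
    return c_in * k * k * h_out * w_out * c_out

# Hypothetical first layer of a ResNet-style network:
# 3 -> 64 channels, 7x7 kernel, 112x112 output feature map.
macs = conv2d_macs(3, 64, 7, 112, 112)
flops = 2 * macs  # 1 MAC = 1 mult + 1 add = 2 OPs
print(macs, flops)
```

Whether a tool reports MACs or FLOPs under the name "flops" is exactly the factor-of-two ambiguity raised here.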
-
Dear all,
Thanks for releasing the code!
I noticed some problems in the model FLOPs calculation.
1. The class names `Conv2d` and `ConvTranspose2d` both contain 'Conv', so their flops are counted i…
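The substring-matching pitfall described in point 1 can be reproduced with a minimal sketch (the dispatch functions below are hypothetical illustrations, not the library's actual code):

```python
def matches_conv_by_substring(cls_name: str) -> bool:
    # Buggy dispatch: any class whose name contains 'Conv' is treated as a
    # plain convolution, so ConvTranspose2d falls into the Conv2d handler.
    return 'Conv' in cls_name

def matches_conv_exact(cls_name: str) -> bool:
    # Safer dispatch: compare the full class name.
    return cls_name == 'Conv2d'

print(matches_conv_by_substring('ConvTranspose2d'))  # True -> miscounted
print(matches_conv_exact('ConvTranspose2d'))         # False
```

Exact class-name comparison (or `isinstance` checks against the concrete type) avoids double or wrong counting for transposed convolutions.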
-
### Required prerequisites
- [X] I have read the documentation .
- [X] I have searched the [Issue Tracker](https://github.com/baichuan-inc/baichuan-7B/issues) and [Discussions](https://github.com/bai…
-
Hi, I am getting decent enough results with the ViT-S architecture model for metric depth, which has around 22-25 million parameters. I want to make the model even faster/lighter; I could think o…
-
When computing the MACs/params using the `get_model_complexity_info` function, I get the following message with Torch 2.2.1:
`Op Flatten is not supported at now, set FLOPs of it to zero`
Natural…
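For what it's worth, zero is arguably the right count for `Flatten`: it only reshapes the tensor view and executes no arithmetic, so the warning is benign rather than a measurement gap. A minimal sketch of per-op handlers (hypothetical names, not the library's API) makes the contrast explicit:

```python
def flatten_flops(input_shape):
    # Flatten is a pure reshape: no multiplies or adds are performed,
    # so zero FLOPs is the correct count, not a missing value.
    return 0

def linear_flops(in_features, out_features):
    # For contrast: a Linear layer does real arithmetic.
    return 2 * in_features * out_features  # 1 MAC = 2 FLOPs

print(flatten_flops((64, 512, 7, 7)))   # 0
print(linear_flops(512 * 7 * 7, 1000))
```

So the reported totals should be unaffected by the unsupported-op message in this particular case.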
-
**Excerpt from summary.py**

```python
def main():
    args = parse_args()
    cfg, model_name = _trim(get_config(args.config, show=False), args)
    print(f"Building model({model_name})...")
    model = build_model…
```
-
They claim lookahead decoding provides a 1.5~2x decoding speedup without a speculative model.
Blog post: https://lmsys.org/blog/2023-11-21-lookahead-decoding/
Twitter thread: https://twitter.com/l…