-
> \>dump_image test1.bin 0xffffffff80003e3e 1048576
> dumped 1048576 bytes in 42.666645s (24.000 KiB/s)
> \>load_image test1.bin 0xffffffff80003e3e
> downloaded 1048576 bytes in 10.778667s (95.002 Ki…
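As a quick sanity check, the reported rates are just bytes over seconds; a minimal sketch in Python (nothing OpenOCD-specific is assumed):

```python
# Verify the throughput figures reported by dump_image / load_image above.
for label, nbytes, secs in [("dump_image", 1048576, 42.666645),
                            ("load_image", 1048576, 10.778667)]:
    print(f"{label}: {nbytes / secs / 1024:.3f} KiB/s")
# dump_image: 24.000 KiB/s, load_image: 95.002 KiB/s -- matching the log.
```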
-
I am attempting to run fine-tuning with my custom dataset; however, the training progress stays at 0% and does not increase at all, even after 20 h of running time:
```
Train: 0%| …
```
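A common culprit is a data loader that never yields its first batch. A minimal sketch of that check, with a hypothetical stand-in loader in place of the real dataset:

```python
import itertools
import time

import torch
from torch.utils.data import DataLoader, TensorDataset

# Hypothetical stand-in: replace with the actual training DataLoader.
train_loader = DataLoader(TensorDataset(torch.randn(64, 3, 224, 224)),
                          batch_size=8, num_workers=2)

start = time.time()
for i, batch in enumerate(itertools.islice(train_loader, 3)):
    print(f"batch {i} arrived after {time.time() - start:.1f}s")
# If nothing prints for a long time, the stall is in data loading, not training.
```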
-
My inference size is 640 × 480, tested on a 3090. When I set if_local to False, the pipe time is 1.22 s and the memory usage is as high as 22 GB; however, when I set if_local to True, the pipe time is 2 s and the memory…
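For reference, here is how I measure the pipe time; a minimal sketch where `pipe` is a stand-in for the actual pipeline call (not its real signature):

```python
import time
import torch

def timed_call(fn, *args, **kwargs):
    # Synchronize before and after so the wall-clock time covers the GPU work.
    torch.cuda.synchronize()
    start = time.time()
    out = fn(*args, **kwargs)
    torch.cuda.synchronize()
    print(f"pipe time: {time.time() - start:.2f}s, "
          f"peak memory: {torch.cuda.max_memory_allocated() / 2**30:.1f} GiB")
    return out
```

Calling e.g. `timed_call(pipe, image)` once with `if_local=False` and once with `if_local=True` makes the two numbers directly comparable.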
-
I find your paper on Vision Mamba very interesting. However, when using your code, I encountered a problem (which may well be normal behavior). When analyzing GPU memory consumption and FPS for Vim ve…
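In case the measurement method is the issue, this is roughly how I collected FPS and peak memory; a minimal sketch with a hypothetical stand-in model, not your benchmarking code:

```python
import time
import torch

model = torch.nn.Linear(1024, 1024).cuda()  # hypothetical stand-in for Vim
x = torch.randn(16, 1024, device="cuda")    # hypothetical input batch

torch.cuda.reset_peak_memory_stats()
iters = 100
torch.cuda.synchronize()
start = time.time()
with torch.no_grad():
    for _ in range(iters):
        model(x)
torch.cuda.synchronize()
elapsed = time.time() - start
print(f"FPS: {iters * x.shape[0] / elapsed:.1f}")
print(f"peak memory: {torch.cuda.max_memory_allocated() / 2**30:.2f} GiB")
```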
-
I am fine-tuning MiniCPM-V2.6 with my own dataset on 4090s. At step 8285/11043, loss=0 and grad_norm=NaN appear. I checked the data carefully and replaced the samples that looked problematic, but the issue persists. I then ran the same job on 8×3090s and there was no problem at all. Is this an adaptation issue with the 4090?
My command line:
CUDA_VISIBLE_DEVICES=0,1,2,3,4,5,6,7 NP…
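To localize the bad step, I added a gradient check; a minimal sketch assuming a plain PyTorch loop (the repo's trainer may expose this differently):

```python
import torch

def assert_finite_grads(model, step):
    """Raise at the first step where any parameter gradient is NaN/Inf."""
    for name, p in model.named_parameters():
        if p.grad is not None and not torch.isfinite(p.grad).all():
            raise RuntimeError(f"non-finite grad in {name} at step {step}")

# Call after loss.backward() and before optimizer.step() in the training loop.
```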
-
In the README it says: "The module eeprom must be loaded to display info about your currently installed memory. Load with modprobe eeprom and refresh the module screen."
-
Presumably Nevergrad performs well on computationally expensive objective functions, because it is good at choosing an informative next iterate. On the other hand, it is sometimes slow at choos…
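For concreteness, the cost I mean is the time Nevergrad spends inside `ask()` choosing the next point; a minimal sketch of the ask/tell loop on a toy objective (the quadratic stands in for an expensive function):

```python
import nevergrad as ng

def objective(x):
    # Stand-in for an expensive objective (imagine minutes per evaluation).
    return float((x ** 2).sum())

optimizer = ng.optimizers.NGOpt(parametrization=ng.p.Array(shape=(10,)), budget=50)
for _ in range(optimizer.budget):
    candidate = optimizer.ask()   # optimizer overhead: picking the next point
    optimizer.tell(candidate, objective(candidate.value))
print(optimizer.provide_recommendation().value)
```

When each objective evaluation dwarfs the time spent in `ask()`, that per-iteration overhead pays for itself.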
-
It would be great if we could set up a speed-up or speed-down "flag" in the CW message memories/macros.
For example, say I have my key set to 26 WPM and the following macro:
`CQ CQ CQ DE K`
so to …
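To make the request concrete, here is one possible shape for it; an entirely hypothetical `+N`/`-N` flag syntax and parser, not a feature of any existing keyer (the callsign is made up):

```python
def expand_macro(macro: str, base_wpm: int):
    """Split a macro into (wpm, text) segments using hypothetical +N/-N speed flags."""
    wpm, segments, words = base_wpm, [], []
    for token in macro.split():
        if token[0] in "+-" and token[1:].isdigit():
            if words:
                segments.append((wpm, " ".join(words)))
                words = []
            wpm = base_wpm + int(token)   # int("+4") == 4, int("-4") == -4
        else:
            words.append(token)
    if words:
        segments.append((wpm, " ".join(words)))
    return segments

print(expand_macro("CQ CQ CQ DE +4 K1ABC K1ABC -4 K", base_wpm=26))
# [(26, 'CQ CQ CQ DE'), (30, 'K1ABC K1ABC'), (26, 'K')]
```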
-
I have tested the inference speed and memory usage of Qwen1.5-14b on my machine using the example in ipex-llm. The peak CPU usage to load Qwen1.5-14b in 4-bit is about 24 GB. The peak GPU usage is abou…
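For reproducibility, the loading pattern I used follows the ipex-llm examples; a minimal sketch (the model id is a placeholder, and the example script may set additional options):

```python
from ipex_llm.transformers import AutoModelForCausalLM
from transformers import AutoTokenizer

model_path = "Qwen/Qwen1.5-14B-Chat"  # placeholder model id

# 4-bit weight-only quantization at load time, then move to the Intel GPU.
model = AutoModelForCausalLM.from_pretrained(model_path,
                                             load_in_4bit=True,
                                             trust_remote_code=True)
model = model.to("xpu")
tokenizer = AutoTokenizer.from_pretrained(model_path, trust_remote_code=True)
```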
-
When I super-resolve a 1024 × 1024 image to 4096 × 4096, it uses 70 GB of GPU memory and takes 18 minutes, which seems to contradict the advantages stated in the paper. I wonder if this is normal?
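For scale, a quick arithmetic check on why a 4096 × 4096 output is so much heavier than the input (a rough estimate, not model-specific):

```python
def feature_map_gib(h, w, channels, dtype_bytes=4):
    """Memory of one fp32 feature map at the given resolution, in GiB."""
    return h * w * channels * dtype_bytes / 2**30

print((4096 * 4096) / (1024 * 1024))                # 16.0x the input's pixels
print(f"{feature_map_gib(4096, 4096, 64):.1f} GiB") # 4.0 GiB per 64-channel map
```

At full resolution, even a handful of such feature maps held at once can plausibly account for tens of GB.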