ridgerchu / matmulfreellm

Implementation for MatMul-free LM.
Apache License 2.0

No reduction in VRAM usage #17

Open radna0 opened 3 months ago

radna0 commented 3 months ago

I tried running the following code, with just the `ridger/MMfreeLM-1.3B` model initialized:

```
root@r4-0:~/matmulfreellm# python
>>> import os
>>> os.environ["TOKENIZERS_PARALLELISM"] = "false"
>>> import mmfreelm
>>> from transformers import AutoModelForCausalLM, AutoTokenizer
>>> # Change here to our open-sourced model
>>> name = "ridger/MMfreeLM-1.3B"
>>> tokenizer = AutoTokenizer.from_pretrained(name)
>>> model = AutoModelForCausalLM.from_pretrained(name).cuda().half()
```
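For reference, the resident allocation can also be read from inside the same Python session with standard `torch.cuda` calls (a rough sanity check; `rocm-smi` usually reports somewhat more because the runtime reserves extra memory):

```
>>> import torch
>>> torch.cuda.memory_allocated() / 1024**3   # tensors actually allocated, in GiB
>>> torch.cuda.memory_reserved() / 1024**3    # memory held by the caching allocator, in GiB
```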

Another terminal running `watch rocm-smi` shows 68% VRAM usage, i.e. about 5.5 GB:

```
Every 2.0s: rocm-smi                                                                                        r4-0: Wed Jun 12 12:16:17 2024

======================================== ROCm System Management Interface ========================================
================================================== Concise Info ==================================================
Device  [Model : Revision]    Temp    Power     Partitions      SCLK    MCLK    Fan    Perf  PwrCap  VRAM%  GPU%
        Name (20 chars)       (Edge)  (Socket)  (Mem, Compute)
==================================================================================================================
0       [RX Vega64 : 0xc1]    30.0°C  11.0W     N/A, N/A        852Mhz  167Mhz  9.41%  auto  220.0W   68%   0%
        Vega 10 XL/XT [Radeo
==================================================================================================================
============================================== End of ROCm SMI Log ===============================================
```

Doesn't this contradict what was said in the paper?

image

ridgerchu commented 3 months ago

Hi, we highlighted in the paper that we used BitBLAS for those experiments. However, BitBLAS can be challenging to install and is only compatible with NVIDIA GPUs; in fact, we even had to recompile it during our own installation. For those reasons, we haven't merged it into this repo yet. Additionally, due to the different way FuseBitLinear stores weights, there is still some compatibility work that needs to be completed.

Screenshot 2024-06-12 at 8 50 24 AM

We are also working on merging MatmulFreeLLM into the BitBLAS examples. In the meantime, you can try BitNet's example to achieve a similar level of VRAM reduction, which should be comparable to our model.
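For anyone who wants to experiment before that integration lands, BitBLAS's public GEMM interface (as described in its quickstart) looks roughly like the sketch below. The shapes, dtypes, and ternary packing here are illustrative assumptions, not the configuration behind the paper's numbers.

```python
import torch
import bitblas

# Low-bit GEMM through BitBLAS (based on its public quickstart).
config = bitblas.MatmulConfig(
    M=1,               # one token at a time (decoding)
    N=2048,            # output features
    K=2048,            # input features
    A_dtype="float16",
    W_dtype="int2",    # compressed weight storage is where the VRAM saving comes from
    accum_dtype="float16",
    out_dtype="float16",
    layout="nt",
    with_bias=False,
)
matmul = bitblas.Matmul(config=config)

# Pack a ternary-valued weight matrix into BitBLAS's compressed layout.
# (Exact value-range/packing conventions for int2 may differ; see the BitBLAS docs.)
w = torch.randint(-1, 2, (2048, 2048), dtype=torch.int8).cuda()
w_packed = matmul.transform_weight(w)

x = torch.randn(1, 2048, dtype=torch.float16).cuda()
y = matmul(x, w_packed)
print(y.shape)  # torch.Size([1, 2048])
```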

radna0 commented 3 months ago

I see, so we still have to wait for the repo to fully work with BitBLAS, and until then we can neither reproduce the results from the paper nor do training, right?

ridgerchu commented 3 months ago

For training it is okay: we have integrated Triton into the current repo, so you can still enjoy accelerated training. For inference, maybe not…
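In other words, a standard Hugging Face causal-LM training step against this checkpoint should already go through the Triton kernels. A minimal single-step sketch (not the repo's official training script; the dataset, precision, and hyperparameters below are placeholders, and it assumes the checkpoint follows the usual HF causal-LM interface):

```python
import torch
import mmfreelm  # registers the MatMul-free architecture with transformers
from transformers import AutoModelForCausalLM, AutoTokenizer

name = "ridger/MMfreeLM-1.3B"
tokenizer = AutoTokenizer.from_pretrained(name)
model = AutoModelForCausalLM.from_pretrained(name, torch_dtype=torch.bfloat16).cuda()
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-5)

# Placeholder batch; a real run needs a proper dataset and enough VRAM for optimizer states.
batch = tokenizer(["some placeholder training text"], return_tensors="pt").to("cuda")

model.train()
out = model(**batch, labels=batch["input_ids"])  # HF computes the shifted LM loss
out.loss.backward()
optimizer.step()
optimizer.zero_grad()

# Evaluation uses the same forward pass: the loss on a held-out batch gives the
# validation loss, and model.generate(...) produces outputs for inspection.
model.eval()
with torch.no_grad():
    eval_out = model(**batch, labels=batch["input_ids"])
print("train loss:", out.loss.item(), "eval loss:", eval_out.loss.item())
```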

radna0 commented 3 months ago

Wait, so you could still train a model and get faster training plus VRAM reduction? It just doesn't work for inference? I might be wrong here, but how would we evaluate the model during and after training for losses and outputs?

A little bit of context: I want to train a video generative model.

ridgerchu commented 3 months ago
Screenshot 2024-06-12 at 10 46 44 AM

You can refer to (a) and (b); these two figures show how our fused BitLinear helps reduce memory usage and improve training speed (in a pure-MLP setting).
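If you want to reproduce that kind of comparison on your own hardware, a small peak-memory / step-time harness built on standard `torch.cuda` statistics is enough. The sketch below only benchmarks a plain `nn.Linear` baseline; you would swap in the repo's fused BitLinear layer of the same shape for the comparison (its exact class name and import path are not shown here).

```python
import time
import torch
import torch.nn as nn

def measure(layer, batch=8, seq=1024, dim=2048, steps=10):
    """Peak memory and step time for forward+backward through `layer`,
    in the spirit of figures (a) and (b)."""
    layer = layer.cuda()
    x = torch.randn(batch, seq, dim, device="cuda", requires_grad=True)
    torch.cuda.reset_peak_memory_stats()
    torch.cuda.synchronize()
    start = time.time()
    for _ in range(steps):
        layer(x).sum().backward()
    torch.cuda.synchronize()
    ms_per_step = (time.time() - start) / steps * 1e3
    peak_gib = torch.cuda.max_memory_allocated() / 1024**3
    return ms_per_step, peak_gib

# Dense baseline; replace with the fused BitLinear module to compare.
ms, gib = measure(nn.Linear(2048, 2048, bias=False))
print(f"nn.Linear: {ms:.1f} ms/step, peak {gib:.2f} GiB")
```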