ggerganov / ggml

Tensor library for machine learning

[Feature request] Implement GPT-JT #6

Open pablogranolabar opened 1 year ago

pablogranolabar commented 1 year ago

e.g. https://www.together.xyz/blog/releasing-v1-of-gpt-jt-powered-by-open-source-ai

ggerganov commented 1 year ago

Well this looks like the same model as GPT-J, just different weights. You should already be able to run it - just convert it to ggml format and use the gpt-j example
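
For reference, here is a minimal sketch of that workflow, assuming the weights are pulled from the togethercomputer/GPT-JT-6B-v1 repo on Hugging Face and that the convert script takes the model directory plus an FP16 flag (both the repo id and the argument form are assumptions, not verified here):

    # Hedged sketch: fetch the GPT-JT weights, then point the gpt-j convert
    # script at the downloaded directory. Repo id and script arguments are
    # assumptions; adjust them to whatever the script actually expects.
    from huggingface_hub import snapshot_download

    model_dir = snapshot_download(repo_id="togethercomputer/GPT-JT-6B-v1")
    print(model_dir)

    # Then, from the ggml source tree (argument form assumed):
    #   python examples/gpt-j/convert-h5-to-ggml.py <model_dir> 1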

pablogranolabar commented 1 year ago

OK, I converted the GPT-JT weights and it generated the model file; however, I'm getting the following error when loading:

gptj_model_load: f16     = 1
gptj_model_load: ggml ctx size = 13334.86 MB
gptj_model_load: memory_size =  1792.00 MB, n_mem = 57344
gptj_model_load: unknown tensor '       W*ƍyC$B' in model file
main: failed to load model from './ggml-model.bin'

any ideas?

ggerganov commented 1 year ago

The convert script assumes that the original weights are FP32 and converts to FP16 when necessary. However, in the new GPT-JT, the weights are in FP16 by default, so the script has to be adjusted.
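
As a quick sanity check, the checkpoint's native dtype can be inspected before converting. A minimal sketch, assuming a single-file PyTorch checkpoint (GPT-JT may ship the weights sharded, in which case the file name differs):

    # Hedged sketch: print the dtype of a few tensors from the original
    # checkpoint; for GPT-JT these should come back as torch.float16.
    # The single-file checkpoint name is an assumption.
    import torch

    state_dict = torch.load("pytorch_model.bin", map_location="cpu")
    for name, tensor in list(state_dict.items())[:5]:
        print(name, tensor.dtype)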

Try changing the following:

https://github.com/ggerganov/ggml/blob/90ee5c6358a3f33a5615256a0b229aa793ff4b49/examples/gpt-j/convert-h5-to-ggml.py#L118-L121

to:

    # ftype == 0 -> float32, ftype == 1 -> float16
    ftype = 0
    if use_f16:
        if name[-7:] == ".weight" and n_dims == 2:
            print("  Converting to float16")
            data = data.astype(np.float16)
            ftype = 1
        else:
            print("  Converting to float32")
            data = data.astype(np.float32)
            ftype = 0

ggerganov commented 1 year ago

Just tested and it works: ed09c7190ea26f68faf9adba57feb3c7f404a26d

Also fixed Unicode support for the GPT-2 and GPT-J models in general.

trholding commented 1 year ago

@pablogranolabar Is there a noticeable difference in quality of output of GPT-JT compared to GPT-J?

pablogranolabar commented 1 year ago

Yes and no. It's getting a lot of conflicting reviews because GPT-JT is fine-tuned for task-oriented stuff like chain-of-thought reasoning. So for canned general tasks like causal LM it's potentially worse in whatever you would consider precision and accuracy, but with good prompt engineering all of these additional capabilities can be teased out during inference. So the inevitable "it depends" applies here, depending on target architecture, model handler customization, and inference hyperparameters plus prompt injection and optimization during inference.

trholding commented 1 year ago

So for canned general tasks like causal LM it's potentially worse in whatever you would consider precision and accuracy, but with good prompt engineering all of these additional capabilities can be teased out during inference.

Would be awesome if you could share some sample outputs.

If there is a way to share large models, I'd be willing to convert it to ggml and share. Maybe IPFS or a torrent; I still have to figure that out. I have bandwidth caps on my server.

trholding commented 1 year ago

@pablogranolabar Thanks for sharing the great idea about using GPT-JT

@ggerganov Thanks for the fix

I uploaded the model to Hugging Face so that it's easy for people to get hold of the GPT-JT ggml model variant without eating into your hosting bills:

https://huggingface.co/trholding/GPT-JT-6B-v1-ggml

cd models
mkdir gpt-jt-6B ; cd gpt-jt-6B
wget https://huggingface.co/trholding/GPT-JT-6B-v1-ggml/resolve/main/ggml-model.bin
cd ../..

# Run the GPT-JT 6B v1 model (requires 12GB disk space and 16GB CPU RAM)
./bin/gpt-j -m models/gpt-jt-6B/ggml-model.bin -p "This is an example"

pablogranolabar commented 1 year ago

Probably best suited for a new issue, but @ggerganov what do you think about adding 8-bit inference? This would further cut model memory consumption by about 50%, with nominal loss of precision. It's a supported option now for transformers with bitsandbytes via Accelerate.
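
For context, this is roughly what the transformers-side integration looks like (a minimal sketch, assuming a CUDA GPU plus the bitsandbytes and accelerate packages installed; the exact kwargs can vary between transformers versions):

    # Hedged sketch of the transformers + bitsandbytes 8-bit path (CUDA only).
    # Not related to ggml; just shows the upstream feature being referenced.
    from transformers import AutoModelForCausalLM, AutoTokenizer

    model_id = "EleutherAI/gpt-j-6B"
    tokenizer = AutoTokenizer.from_pretrained(model_id)
    model = AutoModelForCausalLM.from_pretrained(
        model_id,
        device_map="auto",   # let accelerate place the layers
        load_in_8bit=True,   # bitsandbytes int8 weights
    )

    inputs = tokenizer("This is an example", return_tensors="pt").to(model.device)
    print(tokenizer.decode(model.generate(**inputs, max_new_tokens=20)[0]))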

ggerganov commented 1 year ago

@pablogranolabar Hm, I'm probably missing something - the referenced repo is a CUDA wrapper. I cannot find any information about Apple Accelerate supporting 8-bit precision. Can you provide any reference?

pablogranolabar commented 1 year ago

Yeah for example: https://github.com/huggingface/transformers/pull/17901

ggerganov commented 1 year ago

Again, I might be missing something, but it seems this refers to huggingface/accelerate framework which is all CUDA and does not apply to Apple Accelerate.

Unless there is a way to use the Apple framework with direct 8-bit precision support, I think 8-bit support will be a very low priority for ggml. It would mean implementing the quantization from scratch with NEON, and I'm not really sure how to do that atm. And even if I achieve it, it will very likely be less performant than the existing mixed FP16/FP32 + Accelerate approach, because we would lose the AMX coprocessor benefits that we currently have.

pablogranolabar commented 1 year ago

Ah, sorry, I was referring to the Accelerate framework used with PyTorch. Here's a decent write-up of their 8-bit quantization methods: https://huggingface.co/blog/hf-bitsandbytes-integration
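
For a concrete feel of the absmax scheme that write-up describes, here's a tiny numpy sketch (illustrative only; not how ggml or bitsandbytes actually store weights):

    # Hedged sketch: absmax (symmetric) int8 quantization of one weight row.
    import numpy as np

    w = np.random.randn(4096).astype(np.float32)   # a weight row
    scale = np.abs(w).max() / 127.0                # absmax scale factor
    q = np.round(w / scale).astype(np.int8)        # quantize to int8
    w_hat = q.astype(np.float32) * scale           # dequantize
    print("max abs error:", np.abs(w - w_hat).max())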

regstuff commented 1 year ago

@trholding - your model link gives a 404. Is the GPT-JT ggml model still available anywhere?