balisujohn / tortoise.cpp

A ggml (C++) re-implementation of tortoise-tts
MIT License

macOS: Bringing Metal support #14

Open baileyheading opened 1 month ago

baileyheading commented 1 month ago

Work is underway to get metal working for this project. Keen to get help from anyone interested.

TODO:

Existing PRs from @balisujohn are a decent example to follow for the changes already made for CUDA: https://github.com/ggerganov/ggml/pulls/balisujohn

baileyheading commented 1 month ago

That was fast! It had a problem before haha

gpt_vocab_init: loading vocab from '../models/tokenizer.json'
gpt_vocab_init: vocab size = 255
autoregressive_model_load: loading model from '../models/ggml-model.bin'
autoregressive_model_load: ggml tensor size    = 432 bytes
autoregressive_model_load: backend buffer size = 1889.29 MB
autoregressive_model_load: using Metal backend
ggml_metal_init: allocating
ggml_metal_init: found device: Apple M3 Max
ggml_metal_init: picking default device: Apple M3 Max
ggml_metal_init: using embedded metal library
ggml_metal_init: GPU name:   Apple M3 Max
ggml_metal_init: GPU family: MTLGPUFamilyApple9  (1009)
ggml_metal_init: GPU family: MTLGPUFamilyCommon3 (3003)
ggml_metal_init: GPU family: MTLGPUFamilyMetal3  (5001)
ggml_metal_init: simdgroup reduction support   = true
ggml_metal_init: simdgroup matrix mul. support = true
ggml_metal_init: hasUnifiedMemory              = true
ggml_metal_init: recommendedMaxWorkingSetSize  = 103079.22 MB
autoregressive_model_load: model size  =  1510.54 MB
ggml_metal_free: deallocating
diffusion_model_load: loading model from '../models/ggml-diffusion-model.bin'
diffusion_model_load: ggml tensor size    = 432 bytes
diffusion_model_load: backend buffer size = 689.28 MB
diffusion_model_load: using Metal backend
ggml_metal_init: allocating
ggml_metal_init: found device: Apple M3 Max
ggml_metal_init: picking default device: Apple M3 Max
ggml_metal_init: using embedded metal library
ggml_metal_init: GPU name:   Apple M3 Max
ggml_metal_init: GPU family: MTLGPUFamilyApple9  (1009)
ggml_metal_init: GPU family: MTLGPUFamilyCommon3 (3003)
ggml_metal_init: GPU family: MTLGPUFamilyMetal3  (5001)
ggml_metal_init: simdgroup reduction support   = true
ggml_metal_init: simdgroup matrix mul. support = true
ggml_metal_init: hasUnifiedMemory              = true
ggml_metal_init: recommendedMaxWorkingSetSize  = 103079.22 MB
diffusion_model_load: model size  =   688.49 MB
GGML_ASSERT: /Users/user_name/Documents/GitHub/tortoise.cpp/ggml/src/ggml-metal.m:2216: ne00 % 4 == 0
GGML_ASSERT: /Users/user_name/Documents/GitHub/tortoise.cpp/ggml/src/ggml-metal.m:2216: ne00 % 4 == 0
baileyheading commented 1 month ago

Some lldb (debug) info

GGML_ASSERT: /Users/me/Documents/GitHub/tortoise.cpp/ggml/src/ggml-metal.m:2216: ne00 % 4 == 0
GGML_ASSERT: /Users/me/Documents/GitHub/tortoise.cpp/ggml/src/ggml-metal.m:2216: ne00 % 4 == 0
GGML_ASSERT: /Users/me/Documents/GitHub/tortoise.cpp/ggml/src/ggml-metal.m:2216: ne00 % 4 == 0
GGML_ASSERT: /Users/me/Documents/GitHub/tortoise.cpp/ggml/src/ggml-metal.m:2216: ne00 % 4 == 0
Process 38817 stopped
* thread #5, queue = 'ggml-metal', stop reason = signal SIGABRT
    frame #0: 0x000000018e276a60 libsystem_kernel.dylib`__pthread_kill + 8
libsystem_kernel.dylib`:
->  0x18e276a60 <+8>:  b.lo   0x18e276a80               ; <+40>
    0x18e276a64 <+12>: pacibsp 
    0x18e276a68 <+16>: stp    x29, x30, [sp, #-0x10]!
    0x18e276a6c <+20>: mov    x29, sp
Target 0: (tortoise) stopped.
(lldb) tb
No breakpoints currently set.
(lldb) bt
* thread #5, queue = 'ggml-metal', stop reason = signal SIGABRT
  * frame #0: 0x000000018e276a60 libsystem_kernel.dylib`__pthread_kill + 8
    frame #1: 0x000000018e2aec20 libsystem_pthread.dylib`pthread_kill + 288
    frame #2: 0x000000018e1bba30 libsystem_c.dylib`abort + 180
    frame #3: 0x0000000100677378 libggml.dylib`__ggml_metal_graph_compute_block_invoke.cold.40 + 88
    frame #4: 0x000000010066287c libggml.dylib`__ggml_metal_graph_compute_block_invoke + 21672
    frame #5: 0x000000018e0fe428 libdispatch.dylib`_dispatch_client_callout2 + 20
    frame #6: 0x000000018e112850 libdispatch.dylib`_dispatch_apply_invoke3 + 336
    frame #7: 0x000000018e0fe3e8 libdispatch.dylib`_dispatch_client_callout + 20
    frame #8: 0x000000018e0ffc68 libdispatch.dylib`_dispatch_once_callout + 32
    frame #9: 0x000000018e1119b8 libdispatch.dylib`_dispatch_apply_redirect_invoke + 260
    frame #10: 0x000000018e0fe3e8 libdispatch.dylib`_dispatch_client_callout + 20
    frame #11: 0x000000018e110080 libdispatch.dylib`_dispatch_root_queue_drain + 864
    frame #12: 0x000000018e1106b8 libdispatch.dylib`_dispatch_worker_thread2 + 156
    frame #13: 0x000000018e2aafd0 libsystem_pthread.dylib`_pthread_wqthread + 228
baileyheading commented 1 month ago

The GGML_ASSERT is from ggml-metal.m, I think. I don't know if that causes the abort or something else.

EDIT: yep, the asserts cause the abort. That means there is probably an issue somewhere, but I can at least proceed to the next stages for now by skipping over them.

baileyheading commented 1 month ago

@balisujohn so this is what you were referring to about the unsupported ggml operations? It's very fast, at least up until the problems.

ggml_metal_graph_compute_block_invoke: error: unsupported op 'UPSCALE'
GGML_ASSERT: /Users/me/Documents/GitHub/tortoise.cpp/ggml/src/ggml-metal.m:918: !"unsupported op"
baileyheading commented 1 month ago

It says it's not supported, but it looks supported to me!

[screenshot]

EDIT: right, _EXT is different

balisujohn commented 1 month ago

This is actually quite suspicious, ggml_upscale_ext should use GGML_OP_UPSCALE

baileyheading commented 1 month ago

This is actually quite suspicious, ggml_upscale_ext should use GGML_OP_UPSCALE

any thoughts on where to look?

baileyheading commented 1 month ago

Actually! I'm comparing the CUDA and Metal code.

CUDA:

        case GGML_OP_UPSCALE:
            ggml_cuda_op_upscale(ctx, dst);
            break;

Metal: [screenshot]

Metal checks for support with a function that returns a bool, but I tried setting it to "true" already with no luck.

EDIT: Actually, Metal has this: [screenshot]
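
For context, a paraphrased sketch (not copied from ggml-metal.m; the function name and exact layout in the tortoise branch may differ) of the two places the Metal backend gates an op: the bool "supports op" check, and the dispatch switch in graph compute, which both need to know about any new op:

    /* Paraphrased sketch of the op gate in ggml-metal.m; names and structure
       are approximate, not taken verbatim from the repo. */
    static bool metal_supports_op(const struct ggml_tensor * op) {
        switch (op->op) {
            case GGML_OP_UPSCALE:
            case GGML_OP_PAD:
                return true;   /* the op has a Metal kernel */
            default:
                return false;  /* anything else triggers the "unsupported op" assert */
        }
    }

    /* The big switch inside graph compute also needs a matching case that
       encodes the op's Metal kernel; otherwise the default branch asserts. */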

baileyheading commented 1 month ago

I believe that PAD is also supported

ggml_metal_graph_compute_block_invoke: error: unsupported op 'PAD'
GGML_ASSERT: /Users/baileyheading/Documents/GitHub/tortoise.cpp/ggml/src/ggml-metal.m:919: !"unsupported op"
ggml_metal_graph_compute_block_invoke: error: unsupported op 'UPSCALE'
GGML_ASSERT: /Users/baileyheading/Documents/GitHub/tortoise.cpp/ggml/src/ggml-metal.m:919: !"unsupported op"
ggml_metal_graph_compute_block_invoke: error: unsupported op 'PAD'
GGML_ASSERT: /Users/baileyheading/Documents/GitHub/tortoise.cpp/ggml/src/ggml-metal.m:919: !"unsupported op"

Update: I'm removing some of the GGML_ASSERTs to see whether it's just the detection of support that is broken

baileyheading commented 1 month ago

It seems to me like these numbers are somehow wrong for the Metal implementation

        case GGML_OP_UPSCALE:
            return true;
        case 49:
            return true;

I did some printouts, and apparently the upscale op it wants is 49, which doesn't seem to be what GGML_OP_UPSCALE is defined as. I added 49 for debugging, but I'm going to swap that op out for it to try to get past the error.

Update: Yeah, it got past that, but then hit an error for a different op that also exists, so I figure there's something wrong with these numbers somewhere.

balisujohn commented 1 month ago

Possibly I made a mistake adding the other new ops. Though I'm not sure why it would work in CUDA and not Metal.

baileyheading commented 1 month ago

@balisujohn they're off by two... which seems consistent with this kind of thing

dst->op in default case: 49 (what was sent from main.cpp presumably)
dst->op in default case: 51 (Metal's definition of the number)
ggml_metal_graph_compute_block_invoke: error: unsupported op 'UPSCALE'

but if they're only defined in one place, I don't understand how this would happen

baileyheading commented 1 month ago

I numbered them. What confuses me is that Metal believes 51 is upscale, but the incoming op says it is 49 and has the name "UPSCALE". Where are they defined differently?

[screenshot]

balisujohn commented 1 month ago

These are out of order with the OP enum, possibly leading, at a minimum, to the wrong op names being printed. All of the things I linked should reference the same ops in the same order.

https://github.com/balisujohn/ggml/blob/e9ac38f82c0956cf6f4e86206d4503095c25d47c/src/ggml.c#L2701
https://github.com/balisujohn/ggml/blob/e9ac38f82c0956cf6f4e86206d4503095c25d47c/include/ggml/ggml.h#L424
https://github.com/balisujohn/ggml/blob/e9ac38f82c0956cf6f4e86206d4503095c25d47c/src/ggml.c#L2613

[screenshot]

balisujohn commented 1 month ago

(I think the op failing on metal actually isn't upscale)

baileyheading commented 1 month ago

so what would need to be reordered to see what is actually failing?

balisujohn commented 1 month ago

The other two things I linked need to be added to and reordered to have the same number of elements as https://github.com/balisujohn/ggml/blob/e9ac38f82c0956cf6f4e86206d4503095c25d47c/include/ggml/ggml.h#L424, and in the same order. A very silly mistake on my end to not have these matching already.
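
As a minimal illustration (not the actual ggml tables; the names below are made up) of why the parallel tables must match the enum exactly, in both length and order:

    /* Illustrative only: an op enum and a parallel name table that must stay in
       sync. If a new op is inserted into the enum but not into the table, every
       op after the insertion point shifts, so numeric values and printed names
       disagree (like the 49 vs 51 mismatch above). */
    #include <stdio.h>

    enum my_op {
        MY_OP_PAD,
        MY_OP_PAD_REFLECT_1D,  /* new op added here...                       */
        MY_OP_UPSCALE,
        MY_OP_COUNT,
    };

    static const char * MY_OP_NAME[] = {
        "PAD",
        "PAD_REFLECT_1D",      /* ...must also be added here, same position  */
        "UPSCALE",
    };

    /* Fails to compile if the table length drifts from the enum length. */
    _Static_assert(sizeof(MY_OP_NAME)/sizeof(MY_OP_NAME[0]) == MY_OP_COUNT,
                   "MY_OP_NAME out of sync with my_op");

    int main(void) {
        printf("%d -> %s\n", MY_OP_UPSCALE, MY_OP_NAME[MY_OP_UPSCALE]);
        return 0;
    }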

baileyheading commented 1 month ago

Should be adding these two I think

[screenshot]

@balisujohn are they familiar?

balisujohn commented 1 month ago

yeah that's right, assuming the order is correct those are the two new ops (yet to be upstreamed to ggml proper)

baileyheading commented 1 month ago

yeah that's right, assuming the order is correct those are the two new ops (yet to be upstreamed to ggml proper)

and I've confirmed they are not part of the Metal code, of course... so these are the ones I need to add. Which other ones did you add?

balisujohn commented 1 month ago

I added a CUDA kernel for conv_transpose_1d; I don't know if there's a Metal implementation, but there was an existing CPU implementation. I modified the ggml_pad backend and added ggml_pad_ext, so the Metal kernel may need to be modified, assuming there is an existing one. I also modified the ggml_get_rows CUDA kernel to fix a bug; it's unlikely the same bug exists on Metal.

balisujohn commented 1 month ago

https://github.com/ggerganov/ggml/pulls/balisujohn

Useful for reference, though missing the get_rows change. The tortoise branch is all of these PRs squashed together plus the get_rows fix.

baileyheading commented 1 month ago

I've started a TODO list at the top of the issue. Is there a better part of the repo to have such a thing?

balisujohn commented 1 month ago

I think the top of the issue is fine. Here's a link to join the Discord; a while back someone in the Discord expressed interest in Metal, so they might be willing to help: https://discord.gg/q9zufuhc

baileyheading commented 1 month ago

Conversion without understanding is an interesting task haha. Technically, I can run this on the CPU to get expected outputs for different inputs, then translate the code as best I can and test the ops one by one. Have you got a more refined approach you could recommend, @balisujohn, given you've just done this for CUDA?

EDIT: I did find the tests finally! I guess that's something.

baileyheading commented 1 month ago

This is hardly going to get the desired result, but I do at least know where to make the modifications.

                case GGML_OP_PAD_REFLECT_1D:
                {
                    fprintf(stderr, "%s", "GGML_OP_PAD_REFLECT_1D\n");
                }
                break;
                case GGML_OP_CONV_TRANSPOSE_1D:
                {
                    fprintf(stderr, "%s", "GGML_OP_CONV_TRANSPOSE_1D\n");
                }
                break;
                case GGML_OP_UNFOLD_1D:
                {
                    fprintf(stderr, "%s", "GGML_OP_UNFOLD_1D\n");
                }
                break;

It ran through to the end and made the .wav file (albeit static noise only haha). Obviously it fails the tests.

GGML_OP_PAD_REFLECT_1D
GGML_OP_CONV_TRANSPOSE_1D
GGML_OP_UNFOLD_1D
GGML_OP_UNFOLD_1D
GGML_OP_UNFOLD_1D
GGML_OP_UNFOLD_1D
GGML_OP_UNFOLD_1D
GGML_OP_UNFOLD_1D
GGML_OP_UNFOLD_1D
GGML_OP_UNFOLD_1D
GGML_OP_UNFOLD_1D
GGML_OP_UNFOLD_1D
GGML_OP_UNFOLD_1D
GGML_OP_UNFOLD_1D
GGML_OP_UNFOLD_1D
GGML_OP_UNFOLD_1D
GGML_OP_UNFOLD_1D
GGML_OP_UNFOLD_1D
GGML_OP_UNFOLD_1D
GGML_OP_UNFOLD_1D
GGML_OP_CONV_TRANSPOSE_1D
GGML_OP_UNFOLD_1D
GGML_OP_UNFOLD_1D
GGML_OP_UNFOLD_1D
GGML_OP_CONV_TRANSPOSE_1D
GGML_OP_UNFOLD_1D
GGML_OP_UNFOLD_1D
GGML_OP_UNFOLD_1D
GGML_OP_UNFOLD_1D
GGML_OP_UNFOLD_1D
GGML_OP_UNFOLD_1D
GGML_OP_UNFOLD_1D
GGML_OP_UNFOLD_1D
GGML_OP_UNFOLD_1D
GGML_OP_UNFOLD_1D
GGML_OP_UNFOLD_1D
GGML_OP_UNFOLD_1D
GGML_OP_UNFOLD_1D
GGML_OP_UNFOLD_1D
GGML_OP_UNFOLD_1D

I see these run a few times. I wonder how far off the time for this run (with those parts of the code missing) is from the time it would take when it's done. It took ~26 seconds on M3 Max.

If this were to run on a server, would some of this just need to load once? Or do all these steps need to run each time? And is it possible to stream, or is everything done all at once?

balisujohn commented 1 month ago

You could avoid repeating the model loads if streaming, but you will still need to run the autoregressive model, diffusion model, and vocoder for each generation. I am working on making tortoise.cpp faster.

balisujohn commented 1 month ago

Wrt how to get the ops working: each op has tests in the tests folder. You should be able to define a flag at the top of a particular test file to make it build with Metal, then run that test from the ggml/build/bin folder after calling make in ggml/build. See for example https://github.com/balisujohn/ggml/blob/tortoise/tests/test-conv-transpose-1d.cpp, which is currently set to build with the CUDA backend.
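
I haven't verified the exact macro used in that test file, so treat the names below as placeholders, but the general shape of switching an op test between backends with the ggml-backend API looks roughly like this:

    /* Sketch only: choosing a backend for a ggml op test. USE_METAL / USE_CUDA
       are hypothetical toggles; the real flag at the top of
       test-conv-transpose-1d.cpp may be named differently. */
    #include "ggml.h"
    #include "ggml-backend.h"
    #if defined(USE_METAL)
    #include "ggml-metal.h"
    #elif defined(USE_CUDA)
    #include "ggml-cuda.h"
    #endif

    static ggml_backend_t init_test_backend(void) {
    #if defined(USE_METAL)
        return ggml_backend_metal_init();  /* Apple GPU via Metal   */
    #elif defined(USE_CUDA)
        return ggml_backend_cuda_init(0);  /* CUDA device 0         */
    #else
        return ggml_backend_cpu_init();    /* CPU reference backend */
    #endif
    }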

baileyheading commented 1 month ago

I'll get up to speed with those tests next time I work on the operations.

What do you have planned to make tortoise faster?

balisujohn commented 1 month ago

There are unnecessary ops which can be removed, in particular some copies from float32 to float16 that can go once ggml_conv_1d supports float32 weight tensors. Also probably some unnecessary transposes, since the weights can be saved already transposed.

baileyheading commented 1 month ago

On a related note, how fast is it already on CUDA?

balisujohn commented 1 month ago

[screenshot]

This was a measurement I got on a 1070ti for the full process including model loading.

baileyheading commented 1 month ago

I got to 98% of diffusion before I hit the unsupported op on M3 Max, in about 26 seconds for "./tortoise". How much further to go from that point, do you think?

balisujohn commented 1 month ago

About 5-8 more seconds probably. The vocoder usually is pretty fast.

balisujohn commented 1 month ago

That's a weird place for it to fail; I'd recommend checking with print statements to see if it fails before or after the last call to the diffusion graph.

baileyheading commented 1 month ago

Well I know the first unsupported op is pad reflect 1D, so which part of the code is that?

balisujohn commented 1 month ago

Oh, I was getting "illegal instruction" and "unsupported op" confused; pad_reflect_1d is in the vocoder only.

baileyheading commented 1 month ago

It must surely have been very close to the end then 😅. Do we really need those ops? 🤣

balisujohn commented 1 month ago

You can save any tensor you want to wav, but I think without those ops it won't sound very good lol

balisujohn commented 1 month ago

It would be an interesting ablation to try replacing pad_reflect with a normal pad, with all the other ops left the same, to see if it totally ruins the generation.

baileyheading commented 1 month ago

lol. I'm the kind of programmer who would try that just out of curiosity. I'm currently using Piper for tts and it's very fast, but sounds so fake. Maybe it would still sound better 🤣

baileyheading commented 1 month ago

In all seriousness though, are these ops similar? That's potentially where I should start for seeing how to write metal (which I've zero experience with)

balisujohn commented 1 month ago

lmao it actually still works fine on CUDA if you replace it with the below, and the tests also pass. Apparently pad_reflect_1d is pretty inconsequential, but I think I'll still keep it to retain consistency with tortoise-tts.

  ggml_tensor * cur = ggml_pad_ext(ctx0, vocoder_noise, 3, 3, 0, 0, 0, 0, 0, 0);

Yeah, I'd definitely look at the pad implementation for Metal, because pad reflect is just padding that mirrors the elements of the tensor, whereas normal pad is just padding with a constant value. I'd start by trying to make pad_reflect_1d a working clone of pad, then change the behavior to do a reflect pad instead. You can check https://pytorch.org/docs/stable/generated/torch.nn.ReflectionPad1d.html for a description of the intended behavior.
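
A tiny CPU reference for the reflect behavior (my own sketch, not code from the repo) can help sanity-check a Metal kernel later; it mirrors indices around the edges without repeating the edge element, per the ReflectionPad1d docs above:

    #include <stdio.h>

    /* Reflection-pad a 1D signal by p0 elements on the left and p1 on the right.
       Mirrors around the edges without repeating the edge value itself, so
       [1,2,3,4] with p0 = p1 = 2 becomes [3,2,1,2,3,4,3,2]. Requires p0 < n and
       p1 < n, as with torch.nn.ReflectionPad1d. */
    static void pad_reflect_1d_ref(const float * src, int n, int p0, int p1, float * dst) {
        for (int i = 0; i < n + p0 + p1; i++) {
            int j = i - p0;                         /* index into the unpadded signal */
            if (j < 0)        j = -j;               /* reflect off the left edge      */
            else if (j >= n)  j = 2*(n - 1) - j;    /* reflect off the right edge     */
            dst[i] = src[j];
        }
    }

    int main(void) {
        const float src[4] = {1, 2, 3, 4};
        float dst[8];
        pad_reflect_1d_ref(src, 4, 2, 2, dst);
        for (int i = 0; i < 8; i++) printf("%g ", dst[i]);  /* prints: 3 2 1 2 3 4 3 2 */
        printf("\n");
        return 0;
    }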

baileyheading commented 1 month ago

these are the unsupported operations as per my earlier comments

GGML_OP_PAD_REFLECT_1D
GGML_OP_CONV_TRANSPOSE_1D
GGML_OP_UNFOLD_1D

Do the other two have evil twins (like the reflect pad) or are they quite unique compared to the existing ggml ops? Regardless, I didn't think to look at the PyTorch docs, thanks; I've only ever used PyTorch to run repos.

balisujohn commented 1 month ago

No equivalent as far as I know, so those are totally unavoidable for a working Metal forward pass. Also, the Metal kernel for pad may be out of date; ggerganov patched it in the PR, but I'm not sure if that's in the tortoise branch.

baileyheading commented 1 month ago

I'm doing a bit of a "can ChatGPT write Metal" experiment. At the very least I've now got kernel functions and things set up, so I'm in a position to work the problem. Things like "stride" and "block_size"... I see that for CUDA you set block_size to 256? Is that a performance thing, or is that the value required to get the right result?

With reference to https://pytorch.org/docs/stable/generated/torch.nn.Unfold.html, of course.

[screenshot]

PS: This code could be way, way off. It's better than nothing at this stage though.

EDIT: my conv_transpose_1d code currently never completes, so I can't tell if unfold gets the right result (it's just static, but skipping conv_transpose_1d could perhaps cause that).
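
For comparison while debugging (my own sketch; the fork's GGML_OP_UNFOLD_1D may order or parameterize things differently), a plain-C version of the sliding-window extraction that torch.nn.Unfold / Tensor.unfold describe, with window length k and stride s:

    /* Sketch: copy sliding windows of length k, taken every s elements, from a
       1D signal of length n into a window-major output of n_windows * k floats.
       Only a reference for checking shapes/values; not the repo's unfold_1d. */
    int unfold_1d_ref(const float * src, int n, int k, int s, float * dst) {
        int n_windows = (n - k) / s + 1;
        for (int w = 0; w < n_windows; w++) {
            for (int j = 0; j < k; j++) {
                dst[w * k + j] = src[w * s + j];
            }
        }
        return n_windows;  /* number of windows written */
    }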

balisujohn commented 1 month ago

So the way CUDA works, you get some number of blocks, all with the same number of threads per block. Generally you assign a thread to each index of the output tensor, so the computation can be massively parallelized. You will have some extra threads if your output tensor size is not a multiple of the CUDA block size; for those threads you check whether the output index is out of bounds, and if it is, you do nothing. I don't know how much of that transfers to Metal.
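
Spelled out as a sketch (illustrative only; on CUDA the block/thread pair comes from blockIdx/threadIdx, and on Metal a thread_position_in_grid index plays the same role, with the loops below standing in for the hardware scheduling):

    /* One "thread" handles exactly one flat output index; threads in the last
       block that fall past the end of the tensor just return without writing. */
    void fake_kernel_one_element(const float * src, float * dst, int n_elements,
                                 int block_idx, int thread_idx, int block_size) {
        int i = block_idx * block_size + thread_idx;  /* flat output index */
        if (i >= n_elements) {
            return;  /* out-of-bounds thread: do nothing */
        }
        dst[i] = src[i] * 2.0f;  /* stand-in for the real per-element work */
    }

    /* The launcher rounds the element count up to a whole number of blocks. */
    void fake_launch(const float * src, float * dst, int n_elements, int block_size) {
        int n_blocks = (n_elements + block_size - 1) / block_size;
        for (int b = 0; b < n_blocks; b++) {
            for (int t = 0; t < block_size; t++) {
                fake_kernel_one_element(src, dst, n_elements, b, t, block_size);
            }
        }
    }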

balisujohn commented 1 month ago

You can just use whatever block size the other ggml metal kernels use. I think it can affect performance but that is pretty secondary here.

baileyheading commented 1 month ago

You can just use whatever block size the other ggml metal kernels use. I think it can affect performance but that is pretty secondary here.

Thanks for the insights. I now see that various block sizes are defined as constants in the .metal code. I'll look into how CUDA translates to Metal. At some point I can perhaps find someone who actually knows what they're doing (as opposed to clueless me) to have a look at my progress and provide some feedback.

balisujohn commented 1 month ago

There is some code in main.cpp for saving tensors to and loading them from disk, and also for comparing tensors to one another. This code might be helpful for confirming that the ops work the same way on Metal as on CUDA. You can fetch intermediate result tensors out of the computational graph after the vocoder computation using ggml_graph_get_tensor, after first tagging them in the graph function with ggml_set_name.
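
A minimal sketch of that tagging pattern with the standard ggml API ("cur" and "gf" stand for the graph-building tensor and the computed graph; the tensor name is made up for illustration):

    /* In the graph-building function: tag the node you want to inspect. */
    ggml_set_name(cur, "vocoder_intermediate");

    /* ...after the graph has been computed... */

    /* Fetch the tagged node back out and read its data, e.g. to compare against
       a CPU or CUDA run with the save/compare helpers in main.cpp. */
    struct ggml_tensor * t = ggml_graph_get_tensor(gf, "vocoder_intermediate");
    if (t != NULL) {
        /* For GPU backends, copy the data out with ggml_backend_tensor_get
           before inspecting it. */
    }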