saraalrawi opened this issue 11 months ago
I can help you. First of all, the way you are allocating memory for the context is wrong: ctx0 is only for building the computation graph, so it only needs memory for tensor metadata. The tensor data should be stored in a backend buffer using ggml-alloc; you can see a simple example here: simple-ggml.
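Roughly, the pattern from that example looks like this (a minimal sketch assuming a recent ggml with the ggml_gallocr API; header layout and function names have changed across ggml versions, and the tensor names and sizes below are just placeholders):

```cpp
#include "ggml.h"
#include "ggml-alloc.h"
#include "ggml-backend.h"

int main(void) {
    // context sized for metadata only: no_alloc = true means no tensor data lives here
    struct ggml_init_params params = {
        /*.mem_size   =*/ ggml_tensor_overhead() * 128 + ggml_graph_overhead(),
        /*.mem_buffer =*/ NULL,
        /*.no_alloc   =*/ true,
    };
    struct ggml_context * ctx = ggml_init(params);

    struct ggml_tensor * a = ggml_new_tensor_2d(ctx, GGML_TYPE_F32, 4, 2);
    struct ggml_tensor * b = ggml_new_tensor_2d(ctx, GGML_TYPE_F32, 4, 3);
    ggml_set_input(a);
    ggml_set_input(b);
    struct ggml_tensor * c = ggml_mul_mat(ctx, a, b);
    ggml_set_output(c);

    struct ggml_cgraph * gf = ggml_new_graph(ctx);
    ggml_build_forward_expand(gf, c);

    // the tensor data goes into a backend buffer managed by ggml-alloc, not into ctx
    ggml_backend_t backend = ggml_backend_cpu_init();
    ggml_gallocr_t galloc  = ggml_gallocr_new(ggml_backend_get_default_buffer_type(backend));
    ggml_gallocr_alloc_graph(galloc, gf);

    // ... fill a and b with ggml_backend_tensor_set(), then:
    ggml_backend_graph_compute(backend, gf);

    ggml_gallocr_free(galloc);
    ggml_backend_free(backend);
    ggml_free(ctx);
    return 0;
}
```

The key point is that the context only ever holds tensor and graph metadata; the actual data lives in the buffer that ggml_gallocr_alloc_graph allocates on the backend.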
@FSSRepo Thank you very much for your reply!
The example you provided works as WebAssembly (with a little adaptation). I will now try to apply the same concept to training a neural network. :-)
Let's keep the issue open until I am done, so I can provide an example for others to use :-)
Happy coding with ggml
I built a simple autoencoder model using ggml.
In the forward function, after the matrix multiplications and activation functions, I run:
ggml_build_forward_expand(gf, out_layer_1);
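Roughly, the forward pass looks like this (a simplified sketch rather than my exact code; the model struct, layer names and the ReLU activation are just placeholders):

```cpp
#include "ggml.h"

struct my_model {
    struct ggml_tensor * w1; // encoder weights
    struct ggml_tensor * w2; // decoder weights
};

static struct ggml_tensor * forward(struct my_model * model,
                                    struct ggml_context * ctx0,
                                    struct ggml_cgraph * gf,
                                    struct ggml_tensor * input) {
    // encoder: W1 * x followed by a non-linearity
    struct ggml_tensor * hidden = ggml_relu(ctx0, ggml_mul_mat(ctx0, model->w1, input));
    // decoder: reconstruct the input from the hidden code
    struct ggml_tensor * out_layer_1 = ggml_mul_mat(ctx0, model->w2, hidden);

    // record out_layer_1 (and every node it depends on) in the graph
    ggml_build_forward_expand(gf, out_layer_1);
    return out_layer_1;
}
```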
The forward function is called like this:
struct ggml_tensor *logits = forward(&model, ctx0, &gf, real_data);
where ctx0 is the context initialised with the following parameters:
and real_data is a tensor of size 200x400 filled with float numbers.
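For illustration, the setup looks roughly like this (a simplified sketch; the buffer size and helper functions are placeholders, not my exact parameters):

```cpp
#include "ggml.h"
#include <cstring>
#include <vector>

static struct ggml_context * make_graph_ctx(void) {
    struct ggml_init_params params = {
        /*.mem_size   =*/ 128u * 1024 * 1024, // placeholder scratch size
        /*.mem_buffer =*/ NULL,               // let ggml allocate the buffer itself
        /*.no_alloc   =*/ false,              // tensor data also lives in this buffer
    };
    return ggml_init(params);
}

// real_data: 200x400 tensor of floats; values must contain 200*400 entries
static struct ggml_tensor * make_input(struct ggml_context * ctx0,
                                       const std::vector<float> & values) {
    struct ggml_tensor * real_data = ggml_new_tensor_2d(ctx0, GGML_TYPE_F32, 200, 400);
    std::memcpy(real_data->data, values.data(), ggml_nbytes(real_data));
    return real_data;
}
```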
I can train the model and run inference when I compile and run the code natively as C++.
Later, I compiled the code to WebAssembly with MAXIMUM_MEMORY=4GB.
My issue now: I receive the following error, which is (I think) an assertion that gets triggered when a new object is created.
It happens once the program reaches:
ggml_build_forward_expand(gf, out_layer_1);
Can anyone help? Is it because of the memory size? Should I allocate memory differently?
My hardware is a MacBook Pro with an Apple M1 and 64 GB of RAM.