dschonholtz / Neuralink


[DISCUSSION] Autoencoder #1

Open Jerry-Master opened 1 month ago

Jerry-Master commented 1 month ago

I thought about overfitting an autoencoder per wav file. Do you think it makes sense? What I mean is: train a very small model to predict the whole signal, then compute the error. Now you only need to transmit the network, the latent vector, and the differences. Have you tried that instead of training a model that generalizes? Encoding the differences may lead to better compression ratios than encoding the signal itself, and it also opens a path to lossy compression by quantizing them. The main limitation, however, is how to train that in under 1 ms and under 10 mW.
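
A minimal sketch of what I have in mind, in PyTorch. Every shape, hyperparameter, and name below is an illustrative assumption on my part, not something from this repo:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Stand-in for one 5-second mono recording (a real file would be loaded
# from disk; the length here is an arbitrary round number).
x = torch.randn(1, 1, 98304)

class TinyConvAE(nn.Module):
    """A deliberately tiny autoencoder (~270 parameters) overfit to ONE file,
    so transmitting its quantized weights stays cheap."""
    def __init__(self, ch: int = 8, stride: int = 16):
        super().__init__()
        self.enc = nn.Conv1d(1, ch, kernel_size=stride, stride=stride)
        self.dec = nn.ConvTranspose1d(ch, 1, kernel_size=stride, stride=stride)

    def forward(self, x):
        z = self.enc(x)            # latent: (1, ch, T / stride)
        return self.dec(z), z

model = TinyConvAE()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
for _ in range(2000):              # overfitting is the point: one file = dataset
    recon, _ = model(x)
    loss = F.mse_loss(recon, x)
    opt.zero_grad()
    loss.backward()
    opt.step()

with torch.no_grad():
    recon, z = model(x)
    residual = x - recon           # payload = quantized weights + z + residual
```

Lossless reconstruction would mean sending the residual exactly; the lossy path is quantizing (or dropping) it instead, since small, peaky residuals entropy-code well.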

dschonholtz commented 1 month ago

There are a lot of questions here. For right now, I am only focused on compressing a single wav file at a time, so one channel for 5 seconds. I think the autoencoder stuff is a neat idea, but it probably will not be feasible to run on a chip for the full 1024-channel signal.

I talk a bit about how I would approach this in my README, though.

I have not tried training any model for this. Right now it is all bit manipulation, and there is a lot more alpha there before I'd throw in the towel and switch to ML.
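
To give a flavor of the bit-level direction, here is a generic sketch of one common approach (an assumption on my part, not necessarily what this repo actually does): delta-encode the samples, then zigzag-map the deltas, which leaves mostly-small unsigned ints that bit-pack or entropy-code far better than the raw values.

```python
import numpy as np

def zigzag(v: np.ndarray) -> np.ndarray:
    # Maps 0, -1, 1, -2, 2, ... to 0, 1, 2, 3, 4, ... so small signed
    # deltas become small unsigned codes.
    return ((v << 1) ^ (v >> 31)).astype(np.uint32)

# Stand-in samples; real wav data would be read with e.g. scipy.io.wavfile.
samples = np.random.randint(-2000, 2000, size=98_304).astype(np.int16)

deltas = np.diff(samples.astype(np.int32), prepend=0)  # neighbors correlate
codes = zigzag(deltas)                                 # mostly small values
# Decoding reverses this: un-zigzag, then np.cumsum to recover the samples.
```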

In general, I am not attempting to train a model because that would take a long time, and I do not have enough data from Neuralink to do it reliably.

As for training in under 1 ms / 10 mW: training would definitely happen elsewhere, and the model would then be quantized and loaded onto the chip. I would have to look at the hardware specs, and whether there is a DPU or some equivalent on the Neuralink chip, to know whether the latency is possible to overcome.
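
For the "quantize and load" step, per-tensor symmetric int8 post-training quantization is the kind of thing I mean. Again, this is a generic sketch, not a spec of any Neuralink hardware:

```python
import numpy as np

def quantize_int8(w: np.ndarray) -> tuple[np.ndarray, float]:
    # Symmetric per-tensor quantization: one float scale plus int8 weights.
    scale = float(np.abs(w).max()) / 127.0
    if scale == 0.0:
        scale = 1.0                      # all-zero tensor: any scale works
    q = np.clip(np.round(w / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q: np.ndarray, scale: float) -> np.ndarray:
    return q.astype(np.float32) * scale

w = np.random.randn(8, 1, 16).astype(np.float32)  # e.g. a small conv kernel
q, scale = quantize_int8(w)
max_err = np.abs(dequantize(q, scale) - w).max()  # bounded by scale / 2
```

On-chip, inference would then run in int8 arithmetic; whether that fits the 1 ms budget depends entirely on what compute the implant actually exposes.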