Fixed README instructions; you don't need to download ALL of the Hugging Face dataset
Switched from conda env to venv (in line with Stability HPC guidance)
Tidied up src directory, moving extra notebooks into a folder and reverting to a simple design of a base notebook + models.py + utils.py
MLPMixer seq_len mechanism now working properly
Rather than feeding the voxels averaged across same-image repeats through the model once, validation now feeds each single-image sample through the model separately and averages across those outputs. This is important: without it, the model completely fails to use "memory" (aka seq_len > 1)
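A minimal sketch of the difference, using NumPy and a stand-in nonlinear model (all names and shapes here are illustrative assumptions, not the repo's actual API):

```python
import numpy as np

rng = np.random.default_rng(0)
W = rng.standard_normal((16, 4))        # fake weights: 16 voxels -> 4 outputs
repeats = rng.standard_normal((3, 16))  # 3 same-image repeats of voxel data

def model(voxels):
    # stand-in for the real model; nonlinearity is what makes the two
    # averaging strategies differ
    return np.tanh(voxels @ W)

# old behavior: average voxels across repeats, then one forward pass
old_out = model(repeats.mean(axis=0))

# new behavior: one forward pass per repeat, then average the outputs
new_out = np.stack([model(v) for v in repeats]).mean(axis=0)
```

For any nonlinear model the two results differ; averaging at the output also lets each repeat carry its own seq_len > 1 context through the model.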
Removed depth estimations for now to keep things simpler for onboarding
Using SDXL VAE for blurry reconstructions
Can now set num_sessions and seq_len as argparse arguments
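The new flags can be sketched with argparse like so (flag names match the changelog; defaults and help text are assumptions for illustration):

```python
import argparse

parser = argparse.ArgumentParser()
parser.add_argument("--num_sessions", type=int, default=1,
                    help="number of fMRI scan sessions to train on")
parser.add_argument("--seq_len", type=int, default=1,
                    help="number of samples fed to the model as 'memory'")

# example invocation: python Train.py --num_sessions 4 --seq_len 3
args = parser.parse_args(["--num_sessions", "4", "--seq_len", "3"])
```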
The accel.slurm sbatch example now converts the Train.ipynb file to .py automatically
Removed deepspeed for now to keep things simpler for onboarding
Removed cells after the model training to keep things simpler for onboarding (can still reference the notebooks in more_notebooks/ folder)