Open anhhuyalex opened 3 months ago
to do: vary the fraction of input patches fed to the encoder (from 25% of the full set up to close to 100%), since we want the encoder to be robust to the number of patches it receives as input
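A minimal sketch of what this could look like, assuming patches arrive as a `(batch, num_patches, dim)` tensor; the function name and signature are hypothetical, not taken from the repo:

```python
import torch

def sample_visible_patches(patches: torch.Tensor,
                           min_keep: float = 0.25,
                           max_keep: float = 1.0):
    """Keep a randomly sized subset of input patches.

    A keep ratio is drawn uniformly in [min_keep, max_keep] per call,
    so over training the encoder sees anywhere from 25% to ~100% of
    the patches. (Hypothetical helper; exact API is an assumption.)
    """
    b, n, d = patches.shape
    keep_ratio = torch.empty(1).uniform_(min_keep, max_keep).item()
    n_keep = max(1, int(n * keep_ratio))
    # Independent random permutation per sample; keep the first n_keep.
    noise = torch.rand(b, n)
    ids_shuffle = torch.argsort(noise, dim=1)
    ids_keep = ids_shuffle[:, :n_keep]
    visible = torch.gather(patches, 1,
                           ids_keep.unsqueeze(-1).expand(-1, -1, d))
    return visible, ids_keep
```

Drawing the ratio per call (rather than fixing it per run) is one way to get the robustness above without launching a separate run per ratio.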
failed run: /weka/home-alexnguyen/mamba_fmri/ckpts/mar24_encoder32_decoder1_mamba64_lr3e-4/last.pth
working run: apr22_encoder32_decoder32_equalseqlen_learnableposemb_encoder_outdim_512_lr4e-4
random tube mask: apr19_encoder32_decoder1_randomtubemask_fixdecodemask_learnableposemb_encoder_outdim_512_lr4e-6
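For reference, a sketch of tube masking in the style of VideoMAE, assuming the volume is patchified into `(num_frames, num_patches)` tokens; this is an illustration, not the repo's implementation:

```python
import torch

def random_tube_mask(num_frames: int, num_patches: int,
                     mask_ratio: float = 0.75) -> torch.Tensor:
    """Tube masking: draw one spatial mask and repeat it across time.

    Returns a boolean mask of shape (num_frames, num_patches) where
    True marks masked patches. The same spatial locations are hidden
    in every frame, so the model cannot trivially copy a patch from a
    neighboring timestep. (Hypothetical helper for illustration.)
    """
    n_mask = int(num_patches * mask_ratio)
    perm = torch.randperm(num_patches)
    spatial_mask = torch.zeros(num_patches, dtype=torch.bool)
    spatial_mask[perm[:n_mask]] = True
    # Broadcast the single spatial mask over all frames ("tubes").
    return spatial_mask.unsqueeze(0).expand(num_frames, -1)
```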
downstream evals: try scaling up the task, e.g. reconstructing the full volume