matsengrp / vampire

🧛 Deep generative models for TCR sequences 🧛
Apache License 2.0

Simplify running #78

Closed: matsen closed this issue 5 years ago

matsen commented 5 years ago

Right now, to run on a new data set, one needs to:

python util.py split-repertoires --ncols 17 --out-prefix _ignore/deneuter-$(date -I) --test-size 0.333333333333333 $(fd .tsv /fh/fast/matsen_e/data/adaptive-deneuter)
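As a sanity check, the two command substitutions above can be previewed in isolation. A minimal sketch, assuming GNU `date`; the `find` equivalent is noted in case `fd` is not installed:

```shell
# Preview the datestamped out-prefix: $(date -I) emits an ISO-8601 date,
# e.g. 2018-12-31, so each split is tagged with the day it was made.
prefix="_ignore/deneuter-$(date -I)"
echo "$prefix"
# $(fd .tsv DIR) expands to the repertoire .tsv files under DIR.
# Equivalent with plain find, if fd is not installed:
#   find /fh/fast/matsen_e/data/adaptive-deneuter -name '*.tsv'
```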

Make a test-with-extras.txt file so that we can compare test and train. Add the relevant targets to the SConscript. Run like scons --data=deneuter-train --test=deneuter-test-extras -j 100 --clusters=beagle

Now on laptop:

scp stoat:/home/matsen/re/vampire/vampire/pipe_main/_output_deneuter-train/summarized.agg.csv $(date -I)-deneuter-extras.csv
scp stoat:/home/matsen/re/vampire/vampire/_ignore/deneuter-2018-12-31.json .

Run like scons --data=deneuter-train --test=deneuter-test -j 100 --clusters=beagle so we only aggregate the test samples. Then

rsync -avz --exclude='.out' --exclude='.err' --exclude='*log*' 'stoat:/home/matsen/re/vampire/vampire/pipe_main/_output_deneuter-train/*' 2019-01-01-deneuter/
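One note on the exclude flags: rsync matches a slash-free pattern against each file name, so the literal pattern '.out' only skips a file named exactly `.out`, while '*.out' skips any file with that suffix. A throwaway-directory sketch of the intended behavior (assuming rsync is installed):

```shell
# Minimal sketch of rsync exclude behavior in throwaway directories.
src=$(mktemp -d); dst=$(mktemp -d)
touch "$src/summarized.agg.csv" "$src/run.out" "$src/train.log"
# '*.out' skips any .out file ('.out' alone would only skip a file named
# exactly ".out"); '*log*' skips anything with "log" in its name.
rsync -a --exclude='*.out' --exclude='*.err' --exclude='*log*' "$src/" "$dst/"
ls "$dst"   # only summarized.agg.csv is copied
```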

What a mess.

matsen commented 5 years ago

Actually, test-extras.txt is made by the split-repertoires utility.

matsen commented 5 years ago

Updated protocol:

python util.py split-repertoires --out-prefix $PWD/_ignore/deneuter-$(date -I) --test-size 0.333333333333333 $(fd .tsv /fh/fast/matsen_e/data/adaptive-deneuter)
scons --data=$PWD/_ignore/deneuter-2019-02-07.json -j 75 --clusters=beagle
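Since the point of this issue is simplifying the run, the two steps above could live in one small wrapper. This is a dry-run sketch that only echoes the commands; the data directory and the assumption that split-repertoires writes `<out-prefix>.json` are inferred from the commands in this thread, not confirmed:

```shell
# Dry-run sketch: echo the protocol instead of executing it.
# DATA_DIR and the "<out-prefix>.json" naming are assumptions.
DATA_DIR=/fh/fast/matsen_e/data/adaptive-deneuter
PREFIX="$PWD/_ignore/deneuter-$(date -I)"
echo "python util.py split-repertoires --out-prefix $PREFIX --test-size 0.333333333333333 \$(fd .tsv $DATA_DIR)"
echo "scons --data=$PREFIX.json -j 75 --clusters=beagle"
```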