Hi all,

Thanks for doing this project; it's really interesting.
I am hoping to run the encoder and decoder with structural context, but I'm running into three issues.
(1) The line:

As a prerequisite, you must have SE(3)-Transformers installed to use this repository.

is rather ambiguous, as the SE(3)-Transformers repository provides a Dockerfile for training the transformer itself. Could you please provide the few commands needed to install the SE(3)-Transformer into the conda environment?
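For what it's worth, this is roughly what I imagined the setup would look like. The repository URL and the editable install step are my guesses, not anything documented here:

```shell
# Assumption on my part: "SE(3)-Transformers" refers to the implementation at
# https://github.com/FabianFuchsML/se3-transformer-public, and it should be
# pip-installed into an existing conda environment named ProMEP.
conda activate ProMEP
git clone https://github.com/FabianFuchsML/se3-transformer-public
cd se3-transformer-public
# Editable install, assuming the repo ships a setup.py; this makes the
# package importable from inside the ProMEP environment.
pip install -e .
```

If the intended source is instead the NVIDIA DeepLearningExamples version (which is Docker-centric), the steps would differ, which is exactly why I'm asking.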
(2) The requirements file pins the exact versions of every package and its dependencies for that particular machine, so running the conda command to build the environment from it will almost surely fail. Could you please report the output of the following commands:
conda activate ProMEP
conda env export --from-history > environment.yaml
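For reference, `--from-history` exports only the packages that were explicitly requested when the environment was built, without machine-specific build pins, so I would expect something roughly like this (the package names below are placeholders, not a claim about what ProMEP actually needs):

```yaml
# Illustrative --from-history export; contents are hypothetical.
name: ProMEP
channels:
  - pytorch
  - conda-forge
dependencies:
  - python=3.8
  - pytorch
  - numpy
```

A file in this form would let others rebuild the environment on different hardware, which the fully pinned export does not.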
(3) Is there a way to run the decoder given an input embedding? What code should I run to do this?
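Concretely, I am imagining an interface along these lines. Every name here is hypothetical; the stand-in decoder below is only meant to show the shape of the call I am looking for, not the repository's actual API:

```python
# Hypothetical sketch: class and attribute names are my assumptions, not
# ProMEP's real interface. The ToyDecoder stands in for a pretrained
# checkpoint that maps per-residue embeddings back to amino-acid logits.
import torch

class ToyDecoder(torch.nn.Module):
    def __init__(self, embed_dim: int = 1280, vocab_size: int = 20):
        super().__init__()
        # Single projection from embedding space to the 20 amino acids.
        self.proj = torch.nn.Linear(embed_dim, vocab_size)

    def forward(self, embedding: torch.Tensor) -> torch.Tensor:
        # embedding: (batch, seq_len, embed_dim) -> logits: (batch, seq_len, vocab)
        return self.proj(embedding)

decoder = ToyDecoder()
embedding = torch.randn(1, 128, 1280)  # one sequence of 128 residues
with torch.no_grad():
    logits = decoder(embedding)
    tokens = logits.argmax(dim=-1)     # most likely residue at each position
print(tokens.shape)  # torch.Size([1, 128])
```

If the real decoder can be invoked this way on a precomputed embedding, a pointer to the relevant module and checkpoint would be enough for me to proceed.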
Take care, Bryce Johnson