Closed HarshaSatyavardhan closed 9 months ago
When running 0_run.py in Validity_rate_ucRNN__Success_rate_cRNN I am getting this error. What is this temp.json file? Is it a placeholder? Or is it the cifs.json you are referring to, in place of temp.json?
Which 0_run.py? I guess it is the 0_run.py in "SLICES\benchmark\Validity_rate_ucRNN__Success_rate_cRNN\0_get_json\1_element_filter\", right? Could you post a screen shot of your error? Did you run the test under the docker image provided?
Yes, as you can see in this image, when I try to run the file it reports that temp.json is not found. Where is this file generated?
[unable to download the prior and transfer learning datasets] and when I run
cd SLICES/HTS/0_get_json_mp_api
python 0_prior_model_dataset.py
I am getting this error
python 0_prior_model_dataset.py
Accessing summary data through MPRester.summary is deprecated. Please use MPRester.materials.summary instead.
Traceback (most recent call last):
File "/scratch/harsha.vasamsetti/SLICES/HTS/0_get_json_mp_api/0_prior_model_dataset.py", line 15, in <module>
docs = mpr.summary.search(exclude_elements=['Fr' , 'Ra','Ac','Th','Pa','U','Np',\
File "/home2/harsha.vasamsetti/miniconda3/envs/slices/lib/python3.9/site-packages/mp_api/client/routes/materials/summary.py", line 283, in search
return super()._search(
File "/home2/harsha.vasamsetti/miniconda3/envs/slices/lib/python3.9/site-packages/mp_api/client/core/client.py", line 1072, in _search
return self._get_all_documents(
File "/home2/harsha.vasamsetti/miniconda3/envs/slices/lib/python3.9/site-packages/mp_api/client/core/client.py", line 1119, in _get_all_documents
results = self._query_resource(
File "/home2/harsha.vasamsetti/miniconda3/envs/slices/lib/python3.9/site-packages/mp_api/client/core/client.py", line 411, in _query_resource
data = self._submit_requests(
File "/home2/harsha.vasamsetti/miniconda3/envs/slices/lib/python3.9/site-packages/mp_api/client/core/client.py", line 552, in _submit_requests
initial_data_tuples = self._multi_thread(
File "/home2/harsha.vasamsetti/miniconda3/envs/slices/lib/python3.9/site-packages/mp_api/client/core/client.py", line 766, in _multi_thread
data, subtotal = future.result()
File "/home2/harsha.vasamsetti/miniconda3/envs/slices/lib/python3.9/concurrent/futures/_base.py", line 439, in result
return self.__get_result()
File "/home2/harsha.vasamsetti/miniconda3/envs/slices/lib/python3.9/concurrent/futures/_base.py", line 391, in __get_result
raise self._exception
File "/home2/harsha.vasamsetti/miniconda3/envs/slices/lib/python3.9/concurrent/futures/thread.py", line 58, in run
result = self.fn(*self.args, **self.kwargs)
File "/home2/harsha.vasamsetti/miniconda3/envs/slices/lib/python3.9/site-packages/mp_api/client/core/client.py", line 873, in _submit_request_and_process
raise MPRestError(
mp_api.client.core.client.MPRestError: REST query returned with error status code 422 on URL https://api.materialsproject.org/materials/summary/?formation_energy_per_atom_min=-10000&formation_energy_per_atom_max=0&nsites_min=1&nsites_max=10&exclude_elements=Fr%2CRa%2CAc%2CTh%2CPa%2CU%2CNp%2CPu%2CAm%2CCm%2CBk%2CCf%2CEs%2CFm%2CMd%2CNo%2CLr%2CRf%2CDb%2CSg%2CBh%2CHs%2CMt%2CDs%2CRg%2CCn%2CNh%2CFl%2CMc%2CLv%2CTs%2COg&_limit=1000&_fields=material_id with message:
exclude_elements - String should have at most 15 characters
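The 422 error indicates the MP API server now rejects long exclude_elements strings (at most 15 characters once the symbols are comma-joined). Before the repo was patched, one possible workaround is to split the exclusion list into short batches and issue one query per batch. The helper below is a hypothetical sketch of that batching logic only (it is not part of the repo, and the actual API calls are omitted):

```python
def batch_elements(elements, max_len=15):
    """Split element symbols into groups whose comma-joined
    string stays within the API's length limit."""
    batches, current = [], []
    for el in elements:
        candidate = current + [el]
        if len(",".join(candidate)) <= max_len:
            current = candidate
        else:
            batches.append(current)
            current = [el]
    if current:
        batches.append(current)
    return batches

# Each batch would then go into a separate mpr.summary.search(...) call;
# since every query excludes only part of the list, the material_id sets
# returned per batch must be intersected to exclude all elements at once.
batches = batch_elements(["Fr", "Ra", "Ac", "Th", "Pa", "U", "Np", "Pu"])
```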
Please refer to the tutorial in the Readme of this github repo. You need to run python 1_splitrun.py to run the test, rather than going into the workflow folder and running python 0_run.py directly.
Thank you so much for answering those questions.
What is export XTB_MOD_PATH=/opt/xtb in each .pbs file? Is it referring to some package present in the repository? As I am running locally, I am willing to modify the .pbs files.
This bug is induced by the update of the server rules of MP API. I have updated relevant files to fix this problem (download the latest github repo).
/opt/xtb is actually https://github.com/xiaohang007/SLICES/blob/main/invcryrep/xtb_noring_nooutput_nostdout_noCN. The pip package of SLICES has a built-in script in the installation routine to copy this file to /opt/xtb. This file is the modified xtb package with GFN-FF. The detailed info for this binary is in https://github.com/xiaohang007/xtb.
Is export XTB_MOD_PATH=/opt/xtb necessary when processing the SLICES? I installed SLICES with pip in a conda env, and I don't have anything at this path.
Also, I see that you updated the code; thanks for replying quickly. I see that you removed the exclude-element list from the MP API call. I had already implemented a workaround by keeping the excluded elements under the limit and looping over all of them.
Should I still keep the exclusion list, or is it OK to use the latest commit?
I am sorry for the mistake in my previous replies. "export XTB_MOD_PATH=/opt/xtb" is not necessary when processing the SLICES. I actually included os.environ["XTB_MOD_PATH"] = os.path.abspath(os.path.dirname(__file__)) + "/xtb_noring_nooutput_nostdout_noCN" in invcryrep.py, which means "export XTB_MOD_PATH=/opt/xtb" will always be overridden. "export XTB_MOD_PATH=/opt/xtb" used to be how I set the path for the modified xtb, but the new method is more streamlined. I will delete "export XTB_MOD_PATH=/opt/xtb" from the scripts. Thank you for the heads-up.
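The override described above can be sketched as follows. This is a minimal illustration, assuming the same os.environ pattern as in invcryrep.py; the /opt/xtb path only simulates the shell-level export from the .pbs file:

```python
import os

# Simulate the shell-level `export XTB_MOD_PATH=/opt/xtb` from the .pbs file:
os.environ["XTB_MOD_PATH"] = "/opt/xtb"

# invcryrep.py then sets the variable in-process, relative to its own
# location, so any earlier shell export is always overridden:
module_dir = os.path.abspath(os.path.dirname(__file__))
os.environ["XTB_MOD_PATH"] = module_dir + "/xtb_noring_nooutput_nostdout_noCN"
```

Because the in-process assignment happens after the shell has set up the environment, the exported value can never take effect, which is why the export line in the scripts is redundant.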
Regarding "should I still keep the exclusion list or is it OK to use the latest commit": I think both should be fine. Maybe you can check whether your result is the same as mine.
Thank you so much.
What is the difference between HTS/2_train_sample/transfer_userinput.py and HTS/2_train_sample/transfer.py? I see you train on the augmented SLICES and use the prior model to do transfer learning on a specific dataset, cano_acceptors_smi.csv, which is the default in transfer_userinput.py, but I could not find it in the repository; and for transfer.py you used refined_smii_cano.csv. So what is the difference? Do I need to generate these dataset files?
You can follow the tutorial. In fact, transfer.py is not used in our task, and "refined_smii_cano.csv" is not needed. I have deleted transfer.py to prevent confusion.