Closed mayisme closed 8 months ago
My own data looks like this:

data_Al(SiO3)2
_symmetry_space_group_name_H-M   'P 1'
_cell_length_a   5.14000000
_cell_length_b   8.90000000
_cell_length_c   18.55000000
_cell_angle_alpha   90.00000000
_cell_angle_beta   99.92000000
_cell_angle_gamma   90.00000000
_symmetry_Int_Tables_number   1
_chemical_formula_structural   Al(SiO3)2
_chemical_formula_sum   'Al8 Si16 O48'
_cell_volume   835.90126980
_cell_formula_units_Z   8
loop_
 _symmetry_equiv_pos_site_id
 _symmetry_equiv_pos_as_xyz
  1  'x, y, z'
loop_
 _atom_site_type_symbol
 _atom_site_label
 _atom_site_symmetry_multiplicity
 _atom_site_fract_x
 _atom_site_fract_y
 _atom_site_fract_z
 _atom_site_occupancy
  Al  Al0  1  0.00000000  0.33300000  0.00000000  1.0
  Al  Al1  1  0.50000000  0.83300000  0.00000000  1.0
  Al  Al2  1  0.00000000  0.66700000  0.50000000  1.0
  Al  Al3  1  0.50000000  0.16700000  0.50000000  1.0
  Al  Al4  1  0.00000000  0.33300000  0.50000000  1.0
  Al  Al5  1  0.50000000  0.83300000  0.50000000  1.0
  Al  Al6  1  0.00000000  0.66700000  0.00000000  1.0
  Al  Al7  1  0.50000000  0.16700000  0.00000000  1.0
  Si  Si1  1  0.76100000  0.00000000  0.14300000  1.0
  Si  Si1  1  0.26100000  0.50000000  0.14300000  1.0
  Si  Si1  1  0.76100000  0.00000000  0.64300000  1.0
  Si  Si1  1  0.26100000  0.50000000  0.64300000  1.0
  Si  Si1  1  0.23900000  0.00000000  0.35700000  1.0
  Si  Si1  1  0.73900000  0.50000000  0.35700000  1.0
  Si  Si1  1  0.23900000  0.00000000  0.85700000  1.0
  Si  Si1  1  0.73900000  0.50000000  0.85700000  1.0
  Si  Si2  1  0.26100000  0.16700000  0.14300000  1.0
  Si  Si2  1  0.76100000  0.66700000  0.14300000  1.0
  Si  Si2  1  0.26100000  0.83300000  0.64300000  1.0
  Si  Si2  1  0.76100000  0.33300000  0.64300000  1.0
  Si  Si2  1  0.73900000  0.16700000  0.35700000  1.0
  Si  Si2  1  0.23900000  0.66700000  0.35700000  1.0
  Si  Si2  1  0.73900000  0.83300000  0.85700000  1.0
  Si  Si2  1  0.23900000  0.33300000  0.85700000  1.0
  O  O1  1  0.20300000  0.50000000  0.05800000  1.0
  O  O1  1  0.70300000  0.00000000  0.05800000  1.0
  O  O1  1  0.20300000  0.50000000  0.55800000  1.0
  O  O1  1  0.70300000  0.00000000  0.55800000  1.0
  O  O1  1  0.79700000  0.50000000  0.44200000  1.0
  O  O1  1  0.29700000  0.00000000  0.44200000  1.0
  O  O1  1  0.79700000  0.50000000  0.94200000  1.0
  O  O1  1  0.29700000  0.00000000  0.94200000  1.0
  O  O2  1  0.20300000  0.16700000  0.05800000  1.0
  O  O2  1  0.70300000  0.66700000  0.05800000  1.0
  O  O2  1  0.20300000  0.83300000  0.55800000  1.0
  O  O2  1  0.70300000  0.33300000  0.55800000  1.0
  O  O2  1  0.79700000  0.16700000  0.44200000  1.0
  O  O2  1  0.29700000  0.66700000  0.44200000  1.0
  O  O2  1  0.79700000  0.83300000  0.94200000  1.0
  O  O2  1  0.29700000  0.33300000  0.94200000  1.0
  O  O-H1  1  0.20300000  0.83300000  0.05800000  1.0
  O  O-H1  1  0.70300000  0.33300000  0.05800000  1.0
  O  O-H1  1  0.20300000  0.16700000  0.55800000  1.0
  O  O-H1  1  0.70300000  0.66700000  0.55800000  1.0
  O  O-H1  1  0.79700000  0.83300000  0.44200000  1.0
  O  O-H1  1  0.29700000  0.33300000  0.44200000  1.0
  O  O-H1  1  0.79700000  0.16700000  0.94200000  1.0
  O  O-H1  1  0.29700000  0.66700000  0.94200000  1.0
  O  O3  1  0.02500000  0.08300000  0.17600000  1.0
  O  O3  1  0.52500000  0.58300000  0.17600000  1.0
  O  O3  1  0.02500000  0.91700000  0.67600000  1.0
  O  O3  1  0.52500000  0.41700000  0.67600000  1.0
  O  O3  1  0.97500000  0.08300000  0.32400000  1.0
  O  O3  1  0.47500000  0.58300000  0.32400000  1.0
  O  O3  1  0.97500000  0.91700000  0.82400000  1.0
  O  O3  1  0.47500000  0.41700000  0.82400000  1.0
  O  O4  1  0.52500000  0.08300000  0.17600000  1.0
  O  O4  1  0.02500000  0.58300000  0.17600000  1.0
  O  O4  1  0.52500000  0.91700000  0.67600000  1.0
  O  O4  1  0.02500000  0.41700000  0.67600000  1.0
  O  O4  1  0.47500000  0.08300000  0.32400000  1.0
  O  O4  1  0.97500000  0.58300000  0.32400000  1.0
  O  O4  1  0.47500000  0.91700000  0.82400000  1.0
  O  O4  1  0.97500000  0.41700000  0.82400000  1.0
  O  O5  1  0.27500000  0.33300000  0.17600000  1.0
  O  O5  1  0.77500000  0.83300000  0.17600000  1.0
  O  O5  1  0.27500000  0.66700000  0.67600000  1.0
  O  O5  1  0.77500000  0.16700000  0.67600000  1.0
  O  O5  1  0.72500000  0.33300000  0.32400000  1.0
  O  O5  1  0.22500000  0.83300000  0.32400000  1.0
  O  O5  1  0.72500000  0.66700000  0.82400000  1.0
  O  O5  1  0.22500000  0.16700000  0.82400000  1.0
Can you provide me with the full error/output? At what line in the code does this ValueError occur?
Hi, the full error is as follows:
Traceback (most recent call last):
File "/Users/xiaoyf/Documents/Jupyterlab/XRD-AutoAnalyzer/Novel-Space/construct_model.py", line 107, in
Hi, when I modified def y(self) as below, the problem was solved:

@property
def y(self):
    """
    Target property to predict (one-hot encoded vectors associated with the reference phases)
    """
    n_phases = len(self.xrd)  # assume self.xrd is the collection of all distinct phases
    phase_indices = self.phase_indices
    one_hot_vectors = []
    for index in phase_indices:
        one_hot_vector = [0] * n_phases
        one_hot_vector[index] = 1
        one_hot_vectors.append(one_hot_vector)
    return np.array(one_hot_vectors)
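For reference, the same one-hot construction can be written more compactly with NumPy fancy indexing. This is only a sketch: `n_phases` and `phase_indices` below are placeholder values standing in for `len(self.xrd)` and `self.phase_indices`, not the library's actual data.

```python
import numpy as np

# Placeholder values standing in for len(self.xrd) and self.phase_indices.
n_phases = 5
phase_indices = [0, 3, 3, 1]

# np.eye(n)[labels] selects one row of the identity matrix per label,
# which is exactly a one-hot encoding of the labels.
one_hot = np.eye(n_phases, dtype=int)[phase_indices]
print(one_hot)
```

This avoids the explicit Python loop and returns the same array as the loop-based version above.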
However, when I try to execute run_CNN.py with the newly trained Model.h5, a new issue comes up. The full error is as follows:

Traceback (most recent call last):
  File "/Users/xiaoyf/Library/jupyterlab-desktop/jlab_server/lib/python3.12/site-packages/keras/src/ops/operation.py", line 196, in from_config
    return cls(**config)
           ^^^^^^^^^^^^^
  File "/Users/xiaoyf/Library/jupyterlab-desktop/jlab_server/lib/python3.12/site-packages/keras/src/layers/core/dense.py", line 87, in __init__
    self.activation = activations.get(activation)
                      ^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/xiaoyf/Library/jupyterlab-desktop/jlab_server/lib/python3.12/site-packages/keras/src/activations/__init__.py", line 104, in get
    raise ValueError(
ValueError: Could not interpret activation function identifier: {'module': 'builtins', 'class_name': 'function', 'config': 'softmax_v2', 'registered_name': 'function'}
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/Users/xiaoyf/Documents/Jupyterlab/XRD-AutoAnalyzer/Novel-Space/run_CNN1.py", line 33, in
Exception encountered: Could not interpret activation function identifier: {'module': 'builtins', 'class_name': 'function', 'config': 'softmax_v2', 'registered_name': 'function'}
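For what it's worth, this class of error typically appears when a model was built with a raw activation function object (e.g. `tf.nn.softmax`, whose internal name is `softmax_v2`) rather than the string identifier `'softmax'`: the function object is baked into the saved config and a newer Keras cannot map it back to an activation. The underlying serialization problem can be sketched without Keras at all, using `json` and `math.exp` as a stand-in function:

```python
import json
import math

# A config that stores the activation as a plain string round-trips cleanly.
good_config = {"units": 92, "activation": "softmax"}
restored = json.loads(json.dumps(good_config))
print(restored["activation"])  # softmax

# Storing the function object itself cannot be serialized -- this is the
# analogue of the 'softmax_v2' function identifier Keras failed to interpret.
bad_config = {"units": 92, "activation": math.exp}  # math.exp is a stand-in
try:
    json.dumps(bad_config)
except TypeError as exc:
    print("serialization failed:", exc)
```

The practical takeaway (an assumption about this repo's code, worth verifying): prefer string activation names like `activation='softmax'` in layer definitions so saved models reload cleanly across Keras versions.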
Can you provide me with a zipped version of your run folder? This would help me determine what's causing the issue.
My training accuracy is 0. How can I improve the model?
Training data shape: (74, 4501, 1)
Training labels shape: (74, 92)
Epoch 1/50
2/2 ━━━━━━━━━━━━━━━━━━━━ 3s 1s/step - categorical_accuracy: 0.0000e+00 - loss: 7.0845 - val_categorical_accuracy: 0.0000e+00 - val_loss: 14.5905
Epoch 2/50
2/2 ━━━━━━━━━━━━━━━━━━━━ 1s 400ms/step - categorical_accuracy: 0.0217 - loss: 7.3922 - val_categorical_accuracy: 0.1333 - val_loss: 46.1377
Epoch 3/50
2/2 ━━━━━━━━━━━━━━━━━━━━ 1s 404ms/step - categorical_accuracy: 0.0434 - loss: 6.9763 - val_categorical_accuracy: 0.0667 - val_loss: 73.0654
Epoch 4/50
2/2 ━━━━━━━━━━━━━━━━━━━━ 1s 414ms/step - categorical_accuracy: 0.0547 - loss: 7.3809 - val_categorical_accuracy: 0.0000e+00 - val_loss: 90.2825
Epoch 5/50
2/2 ━━━━━━━━━━━━━━━━━━━━ 1s 412ms/step - categorical_accuracy: 0.0113 - loss: 6.6131 - val_categorical_accuracy: 0.0000e+00 - val_loss: 93.7274
Epoch 6/50
2/2 ━━━━━━━━━━━━━━━━━━━━ 1s 419ms/step - categorical_accuracy: 0.0217 - loss: 6.9738 - val_categorical_accuracy: 0.0000e+00 - val_loss: 79.6283
Epoch 7/50
2/2 ━━━━━━━━━━━━━━━━━━━━ 1s 409ms/step - categorical_accuracy: 0.0226 - loss: 7.0533 - val_categorical_accuracy: 0.0000e+00 - val_loss: 76.9755
Epoch 8/50
2/2 ━━━━━━━━━━━━━━━━━━━━ 1s 407ms/step - categorical_accuracy: 0.0000e+00 - loss: 6.7895 - val_categorical_accuracy: 0.0000e+00 - val_loss: 64.8061
Epoch 9/50
2/2 ━━━━━━━━━━━━━━━━━━━━ 1s 404ms/step - categorical_accuracy: 0.0000e+00 - loss: 7.0489 - val_categorical_accuracy: 0.0000e+00 - val_loss: 59.4355
Epoch 10/50
2/2 ━━━━━━━━━━━━━━━━━━━━ 1s 398ms/step - categorical_accuracy: 0.0547 - loss: 6.4233 - val_categorical_accuracy: 0.0000e+00 - val_loss: 66.9305
Epoch 11/50
2/2 ━━━━━━━━━━━━━━━━━━━━ 1s 410ms/step - categorical_accuracy: 0.0217 - loss: 6.1931 - val_categorical_accuracy: 0.0000e+00 - val_loss: 54.7749
Epoch 12/50
2/2 ━━━━━━━━━━━━━━━━━━━━ 1s 421ms/step - categorical_accuracy: 0.0651 - loss: 5.7802 - val_categorical_accuracy: 0.0000e+00 - val_loss: 57.8014
Epoch 13/50
2/2 ━━━━━━━━━━━━━━━━━━━━ 1s 410ms/step - categorical_accuracy: 0.0000e+00 - loss: 5.9988 - val_categorical_accuracy: 0.0667 - val_loss: 52.5752
Epoch 14/50
2/2 ━━━━━━━━━━━━━━━━━━━━ 1s 408ms/step - categorical_accuracy: 0.0443 - loss: 6.3365 - val_categorical_accuracy: 0.0000e+00 - val_loss: 69.5286
Epoch 15/50
2/2 ━━━━━━━━━━━━━━━━━━━━ 1s 416ms/step - categorical_accuracy: 0.0764 - loss: 5.9680 - val_categorical_accuracy: 0.0667 - val_loss: 62.5044
Epoch 16/50
2/2 ━━━━━━━━━━━━━━━━━━━━ 1s 430ms/step - categorical_accuracy: 0.0990 - loss: 4.8516 - val_categorical_accuracy: 0.0667 - val_loss: 50.8224
Epoch 17/50
2/2 ━━━━━━━━━━━━━━━━━━━━ 1s 413ms/step - categorical_accuracy: 0.0660 - loss: 5.5496 - val_categorical_accuracy: 0.0000e+00 - val_loss: 48.6654
Epoch 18/50
2/2 ━━━━━━━━━━━━━━━━━━━━ 1s 404ms/step - categorical_accuracy: 0.1199 - loss: 4.8782 - val_categorical_accuracy: 0.0000e+00 - val_loss: 45.4010
Epoch 19/50
2/2 ━━━━━━━━━━━━━━━━━━━━ 1s 441ms/step - categorical_accuracy: 0.1642 - loss: 5.0994 - val_categorical_accuracy: 0.0000e+00 - val_loss: 48.2345
Epoch 20/50
2/2 ━━━━━━━━━━━━━━━━━━━━ 1s 407ms/step - categorical_accuracy: 0.1103 - loss: 5.6798 - val_categorical_accuracy: 0.0000e+00 - val_loss: 41.5763
Epoch 21/50
2/2 ━━━━━━━━━━━━━━━━━━━━ 1s 397ms/step - categorical_accuracy: 0.1086 - loss: 4.6253 - val_categorical_accuracy: 0.0000e+00 - val_loss: 44.2621
Epoch 22/50
2/2 ━━━━━━━━━━━━━━━━━━━━ 1s 404ms/step - categorical_accuracy: 0.1642 - loss: 4.8132 - val_categorical_accuracy: 0.0000e+00 - val_loss: 40.7891
Epoch 23/50
2/2 ━━━━━━━━━━━━━━━━━━━━ 1s 420ms/step - categorical_accuracy: 0.0660 - loss: 4.8918 - val_categorical_accuracy: 0.0000e+00 - val_loss: 42.6678
Epoch 24/50
2/2 ━━━━━━━━━━━━━━━━━━━━ 1s 402ms/step - categorical_accuracy: 0.1199 - loss: 4.9616 - val_categorical_accuracy: 0.0000e+00 - val_loss: 38.0936
Epoch 25/50
2/2 ━━━━━━━━━━━━━━━━━━━━ 1s 406ms/step - categorical_accuracy: 0.1651 - loss: 4.3112 - val_categorical_accuracy: 0.0000e+00 - val_loss: 43.2405
Epoch 26/50
2/2 ━━━━━━━━━━━━━━━━━━━━ 1s 425ms/step - categorical_accuracy: 0.1755 - loss: 4.8295 - val_categorical_accuracy: 0.0000e+00 - val_loss: 38.3020
Epoch 27/50
2/2 ━━━━━━━━━━━━━━━━━━━━ 1s 416ms/step - categorical_accuracy: 0.1416 - loss: 4.7973 - val_categorical_accuracy: 0.0000e+00 - val_loss: 37.0159
Epoch 28/50
2/2 ━━━━━━━━━━━━━━━━━━━━ 1s 412ms/step - categorical_accuracy: 0.1199 - loss: 4.9667 - val_categorical_accuracy: 0.0000e+00 - val_loss: 40.4837
Epoch 29/50
2/2 ━━━━━━━━━━━━━━━━━━━━ 1s 410ms/step - categorical_accuracy: 0.0877 - loss: 4.5363 - val_categorical_accuracy: 0.0000e+00 - val_loss: 35.5006
Epoch 30/50
2/2 ━━━━━━━━━━━━━━━━━━━━ 1s 412ms/step - categorical_accuracy: 0.1329 - loss: 4.2568 - val_categorical_accuracy: 0.0000e+00 - val_loss: 37.3278
Epoch 31/50
2/2 ━━━━━━━━━━━━━━━━━━━━ 1s 409ms/step - categorical_accuracy: 0.1095 - loss: 4.6527 - val_categorical_accuracy: 0.0000e+00 - val_loss: 32.4564
Epoch 32/50
2/2 ━━━━━━━━━━━━━━━━━━━━ 1s 403ms/step - categorical_accuracy: 0.1529 - loss: 4.0403 - val_categorical_accuracy: 0.0667 - val_loss: 29.2192
Epoch 33/50
2/2 ━━━━━━━━━━━━━━━━━━━━ 1s 411ms/step - categorical_accuracy: 0.1538 - loss: 4.0359 - val_categorical_accuracy: 0.0000e+00 - val_loss: 25.5436
Epoch 34/50
2/2 ━━━━━━━━━━━━━━━━━━━━ 1s 419ms/step - categorical_accuracy: 0.2198 - loss: 3.7955 - val_categorical_accuracy: 0.0000e+00 - val_loss: 23.9804
Epoch 35/50
2/2 ━━━━━━━━━━━━━━━━━━━━ 1s 419ms/step - categorical_accuracy: 0.1972 - loss: 4.4588 - val_categorical_accuracy: 0.0000e+00 - val_loss: 20.5819
Epoch 36/50
2/2 ━━━━━━━━━━━━━━━━━━━━ 1s 410ms/step - categorical_accuracy: 0.1981 - loss: 3.6381 - val_categorical_accuracy: 0.0667 - val_loss: 16.0094
Epoch 37/50
2/2 ━━━━━━━━━━━━━━━━━━━━ 1s 416ms/step - categorical_accuracy: 0.1886 - loss: 3.6445 - val_categorical_accuracy: 0.0000e+00 - val_loss: 17.8915
Epoch 38/50
2/2 ━━━━━━━━━━━━━━━━━━━━ 1s 413ms/step - categorical_accuracy: 0.1651 - loss: 4.0391 - val_categorical_accuracy: 0.0000e+00 - val_loss: 16.6869
Epoch 39/50
2/2 ━━━━━━━━━━━━━━━━━━━━ 1s 410ms/step - categorical_accuracy: 0.2311 - loss: 3.7243 - val_categorical_accuracy: 0.0000e+00 - val_loss: 19.8661
Epoch 40/50
2/2 ━━━━━━━━━━━━━━━━━━━━ 1s 419ms/step - categorical_accuracy: 0.2198 - loss: 3.5134 - val_categorical_accuracy: 0.0000e+00 - val_loss: 23.7789
Epoch 41/50
2/2 ━━━━━━━━━━━━━━━━━━━━ 1s 408ms/step - categorical_accuracy: 0.0782 - loss: 4.0584 - val_categorical_accuracy: 0.0000e+00 - val_loss: 22.0357
Epoch 42/50
2/2 ━━━━━━━━━━━━━━━━━━━━ 1s 418ms/step - categorical_accuracy: 0.2841 - loss: 3.4871 - val_categorical_accuracy: 0.0000e+00 - val_loss: 25.1013
Epoch 43/50
2/2 ━━━━━━━━━━━━━━━━━━━━ 1s 412ms/step - categorical_accuracy: 0.1538 - loss: 3.7292 - val_categorical_accuracy: 0.0667 - val_loss: 19.0040
Epoch 44/50
2/2 ━━━━━━━━━━━━━━━━━━━━ 1s 417ms/step - categorical_accuracy: 0.3076 - loss: 3.2854 - val_categorical_accuracy: 0.0000e+00 - val_loss: 18.3936
Epoch 45/50
2/2 ━━━━━━━━━━━━━━━━━━━━ 1s 410ms/step - categorical_accuracy: 0.3293 - loss: 3.1678 - val_categorical_accuracy: 0.0000e+00 - val_loss: 15.6123
Epoch 46/50
2/2 ━━━━━━━━━━━━━━━━━━━━ 1s 407ms/step - categorical_accuracy: 0.2311 - loss: 3.1164 - val_categorical_accuracy: 0.0000e+00 - val_loss: 18.2801
Epoch 47/50
2/2 ━━━━━━━━━━━━━━━━━━━━ 1s 410ms/step - categorical_accuracy: 0.2754 - loss: 3.1783 - val_categorical_accuracy: 0.0000e+00 - val_loss: 16.1246
Epoch 48/50
2/2 ━━━━━━━━━━━━━━━━━━━━ 1s 415ms/step - categorical_accuracy: 0.1547 - loss: 3.7555 - val_categorical_accuracy: 0.0000e+00 - val_loss: 14.5329
Epoch 49/50
2/2 ━━━━━━━━━━━━━━━━━━━━ 1s 438ms/step - categorical_accuracy: 0.1999 - loss: 3.7122 - val_categorical_accuracy: 0.0000e+00 - val_loss: 11.7346
Epoch 50/50
2/2 ━━━━━━━━━━━━━━━━━━━━ 1s 416ms/step - categorical_accuracy: 0.1981 - loss: 3.4944 - val_categorical_accuracy: 0.0000e+00 - val_loss: 12.3622
WARNING:absl:You are saving your model as an HDF5 file via model.save() or keras.saving.save_model(model). This file format is considered legacy. We recommend using instead the native Keras format, e.g. model.save('my_model.keras') or keras.saving.save_model(model, 'my_model.keras').
1/1 ━━━━━━━━━━━━━━━━━━━━ 0s 80ms/step - categorical_accuracy: 0.0000e+00 - loss: 12.1257
Test Accuracy: 0.0%
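One likely factor here (an observation from the shapes printed above, not a confirmed diagnosis): with only 74 training samples spread over 92 classes, at least 18 classes have no training examples at all, and chance-level accuracy is already near 1%, so near-zero accuracy after 50 epochs is not surprising on its own. A quick back-of-the-envelope check:

```python
# Shapes reported above: X is (74, 4501, 1), y is (74, 92).
n_samples, n_classes = 74, 92

# Accuracy of uniform random guessing over 92 classes:
chance_accuracy = 1.0 / n_classes
print(f"chance accuracy ~ {chance_accuracy:.3%}")  # ~1.087%

# By pigeonhole, at least this many classes have zero training examples:
empty_classes = n_classes - n_samples
print(f"at least {empty_classes} classes have no training samples")
```

Generating more simulated patterns per reference phase (so every class is represented many times) would be a natural first step.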
This is likely due to your modification of the y() function. If you can provide me with a zipped version of your run folder, I'd be happy to help debug.
Hi Nathan, please find the folder attached. Thanks for your time! Much appreciated!
Hi, Novel-space.zip is my training folder, and autoXRD.zip is the folder from site-packages. Thanks for your time!
I was able to reproduce your errors after upgrading tensorflow to the latest version. Turns out this modifies how the input should be shaped before passing it to the CNN. I've modified the relevant parts of the package to accommodate these changes. Please try downloading the latest version and re-training your model.
Thanks for bringing this issue to my attention! Let me know if you encounter any other problems.
Great! It works! Thank you very much!
Hello,
I hope this message finds you well. I'm reaching out for assistance with an issue I've encountered while running construct_model.py using the autoXRD library.
Initially, I used my own CIF files sourced from the Crystallography Open Database, converting them with pymatgen to match the format of the example CIF files provided by your library. During this process, I encountered a ValueError:
ValueError: Cannot squeeze dim[1], expected a dimension of 1, got 9 for '{{node Squeeze}} = Squeeze[T=DT_INT32, squeeze_dims=[-1]]' with input shapes: [32,9].

This error suggests there is an issue with the tf.squeeze operation: it expects a size of 1 in the last dimension but encounters a size of 9. The tensor shape causing the issue is [32, 9].
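The failing operation can be reproduced outside TensorFlow with NumPy's equivalent `squeeze`. This is a minimal repro under the assumption that the offending tensor is a batch of label vectors with the reported shape [32, 9]:

```python
import numpy as np

# A batch of 32 label vectors with 9 entries each -- matching the
# [32, 9] input shape reported in the error message.
labels = np.zeros((32, 9), dtype=np.int32)

# Squeezing the last axis only works when that axis has size 1;
# here it has size 9, so NumPy raises the analogous ValueError.
try:
    np.squeeze(labels, axis=-1)
except ValueError as exc:
    print("squeeze failed:", exc)

# A [32, 1] tensor squeezes fine:
ok = np.squeeze(np.zeros((32, 1)), axis=-1)
print(ok.shape)  # (32,)
```

In other words, the model (or loss) is being handed labels that still carry a class dimension of 9 where it expects either a single integer label or a trailing axis of size 1.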
To rule out the possibility of the error being related to my data, I replaced my CIF files with some of the example CIF files provided by your library. Unfortunately, the same issue occurred.
I have verified that the shapes of the input data conform to the model’s expected input shape, yet the error still occurs. Could you provide some guidance on what might be causing this error, or suggest any debugging steps I could take?
I'm happy to provide additional details or the full stack trace if it would be helpful for diagnosing the problem.
I appreciate any help you can give me on this matter.
Thank you for your time and assistance.
Kind regards