Open PrettyJerry opened 4 years ago
Hey there Jerry,
I hope you are doing well. I would like to ask for your help on this. I had the same error as you here, and I could not figure out how to fix it. Did you find a solution? If so, could you share it with me, please?
Thanks!
@MouadAouni you only have to remove data_df from the return statement in dataloader.py --> return (x_train, None), (x_test, None), data_df. That's it.
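The mismatch can be reproduced in isolation. This is only an illustrative sketch (the function names below are stand-ins, not the actual loglizer code): a loader that returns three items cannot be unpacked into the two-tuple pattern the demo script uses.

```python
# Illustrative stand-in for a loader whose return statement still
# includes the extra data_df, as in:
#   return (x_train, None), (x_test, None), data_df
def load_data_with_df():
    return ([1, 2], None), ([3, 4], None), "data_df"

# After removing data_df from the return statement, the demo's
# two-tuple unpack works again.
def load_data():
    return ([1, 2], None), ([3, 4], None)

try:
    (x_train, y_train), (x_test, y_test) = load_data_with_df()
except ValueError as e:
    print(e)  # too many values to unpack (expected 2)

(x_train, y_train), (x_test, y_test) = load_data()
print(x_train, x_test)
```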
@errorhandlerst thank you for the reply. It helped me get past this problem, only to run into another one, lol.
Here it is:
Train phase:
====== Model summary ======
Traceback (most recent call last):
File "PCA_demo_without_labels.py", line 34, in
What do you think? Thanks again.
EDIT: I'm using my own data here; it isn't the same as theirs, and I have just 4 columns:
Content | EventId | EventTemplate | ParameterList
@MouadAouni here you have to indent the line by one level so that n_components = i + 1 is set inside the for loop in PCA.py.
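For context, the loop in question selects how many principal components to keep. The sketch below is only an assumption about its shape (variable and function names are illustrative, not copied from PCA.py): it picks the smallest number of components whose cumulative explained variance crosses a threshold, and `i + 1` must be assigned inside the loop because `i` is a 0-based index.

```python
import numpy as np

def choose_n_components(singular_values, threshold=0.95):
    """Pick the smallest component count whose cumulative explained
    variance reaches `threshold` (illustrative sketch)."""
    variances = singular_values ** 2
    total = variances.sum()
    cumulative = 0.0
    n_components = 0
    for i, var in enumerate(variances):
        cumulative += var
        if cumulative / total >= threshold:
            # The indentation fix discussed above: this assignment must
            # live inside the loop, and the count is i + 1, not i,
            # because enumerate() starts at 0.
            n_components = i + 1
            break
    return n_components

print(choose_n_components(np.array([5.0, 2.0, 1.0, 0.5])))
```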
@errorhandlerst I already tried this but got this error:
Train phase:
====== Model summary ======
Traceback (most recent call last):
File "PCA_demo_without_labels.py", line 34, in
and then made this change: n_components = num_events + 1, and got this:
====== Input data summary ======
Loading ../data/HDFS/mover.txt_structured.csv
Total: 0 instances, train: 0 instances, test: 0 instances
====== Transformed train data summary ======
..\loglizer\preprocessing.py:102: RuntimeWarning: Mean of empty slice.
  mean_vec = X.mean(axis=0)
Train data shape: 0-by-0
Train phase:
====== Model summary ======
n_components: 1
Project matrix shape: 0-by-0
..\loglizer\models\PCA.py:83: RuntimeWarning: invalid value encountered in double_scalars
  h0 = 1.0 - 2 * phi[0] * phi[2] / (3.0 * phi[1] * phi[1])
SPE threshold: nan
Test phase:
====== Input data summary ======
Loading ../data/HDFS/mover.txt_structured.csv
Total: 0 instances, train: 0 instances, test: 0 instances
====== Transformed test data summary ======
Test data shape: 0-by-0
@MouadAouni Did you adapt the load_HDFS data loading to your data? If not, you should, because the load_HDFS method uses session windows and you might require fixed windows. (First check that PCA works with the default HDFS data; if that works fine, then the problem is likely related to your data.)
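This would explain the "Total: 0 instances" output above: loglizer's HDFS loader groups events into sessions keyed on the HDFS block id (blk_...) found in each log line, so data without such ids produces no sessions at all. The sketch below only mimics that grouping under this assumption; it is not the actual dataloader code.

```python
import re

def group_by_block_id(contents):
    """Group log lines into sessions by HDFS block id (illustrative
    sketch of session windowing, not the real loglizer dataloader)."""
    sessions = {}
    for line in contents:
        ids = re.findall(r"(blk_-?\d+)", line)
        for blk in set(ids):
            sessions.setdefault(blk, []).append(line)
    return sessions

hdfs_like = ["Receiving block blk_-123 src: ...", "PacketResponder for blk_-123"]
custom = ["Mover started", "Mover finished"]  # no block ids in the lines

print(len(group_by_block_id(hdfs_like)))  # 1 session
print(len(group_by_block_id(custom)))     # 0 sessions -> "0 instances"
```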
@errorhandlerst Do we have any resolution for this error yet?
ValueError: too many values to unpack (expected 2)
Hi, when I execute the command from the URL https://github.com/logpai/loglizer/blob/master/docs/demo.md
python PCA_demo_without_labels.py
it returns:
(base) localhost:demo guojie$ python PCA_demo_without_labels.py
====== Input data summary ======
Loading ../data/HDFS/HDFS_100k.log_structured.csv
Total: 7940 instances, train: 3970 instances, test: 3970 instances
Traceback (most recent call last):
  File "PCA_demo_without_labels.py", line 24, in
    split_type='sequential', save_csv=True)
ValueError: too many values to unpack (expected 2)
Solution:
train_test_tuple = dataloader.load_HDFS(..........)
(x_train, y_train), (x_test, y_test) = train_test_tuple[0], train_test_tuple[1]
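The workaround above can be sketched in isolation. The loader name below is an illustrative stand-in for dataloader.load_HDFS: either index into the returned tuple, or unpack all three values and discard the last.

```python
# Illustrative stand-in for dataloader.load_HDFS, which returns an
# extra data_df alongside the train/test pairs.
def load_HDFS_like():
    return ([1, 2], None), ([3, 4], None), "data_df"

# Option 1: index into the returned tuple, as in the workaround above.
result = load_HDFS_like()
(x_train, y_train), (x_test, y_test) = result[0], result[1]

# Option 2: unpack all three values and ignore the trailing data_df.
(x_train, y_train), (x_test, y_test), _ = load_HDFS_like()

print(x_train, x_test)
```

Option 2 avoids the intermediate variable, but either form keeps the demo script working without editing dataloader.py itself.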
@huhui, will this without-labels demo code work on Android log file datasets?