MingjunZhong / seq2point-nilm

Sequence-to-point learning for non-intrusive load monitoring

error #1

Closed Hessen525 closed 3 years ago

Hessen525 commented 3 years ago

Hi, @MingjunZhong This code is very clear and easy to understand; thanks for sharing it. However, I cannot run it successfully. The steps are as follows:

  1. I ran create_trainset_redd.py first to produce the processed datasets; it successfully produced the train, validation, and test sets.
  2. I set the paths of the train and validation sets in train_main.py and then ran it. The error is shown below:

2020-07-10 18:32:07.111307: I tensorflow/core/platform/cpu_feature_guard.cc:142] Your CPU supports instructions that this TensorFlow binary was not compiled to use: AVX2 FMA
2020-07-10 18:32:07.125057: I tensorflow/compiler/xla/service/service.cc:168] XLA service 0x7ff3429c8050 executing computations on platform Host. Devices:
2020-07-10 18:32:07.125071: I tensorflow/compiler/xla/service/service.cc:175] StreamExecutor device (0): Host, Default Version
Importing training file...
Traceback (most recent call last):
  File "/Users//Desktop/seq2point-nilm-master/train_main.py", line 32, in <module>
    trainer.train_model()
  File "/Users//Desktop/seq2point-nilm-master/seq2point_train.py", line 112, in train_model
    training_history = self.default_train(model, callbacks, steps_per_training_epoch)
  File "/Users/e/Desktop/seq2point-nilm-master/seq2point_train.py", line 155, in default_train
    validation_steps=self.__validation_steps)
  File "/Users//opt/anaconda3/envs/nilmtk-env/lib/python3.6/site-packages/tensorflow_core/python/keras/engine/training.py", line 728, in fit
    use_multiprocessing=use_multiprocessing)
  File "/Users//opt/anaconda3/envs/nilmtk-env/lib/python3.6/site-packages/tensorflow_core/python/keras/engine/training_v2.py", line 224, in fit
    distribution_strategy=strategy)
  File "/Users//opt/anaconda3/envs/nilmtk-env/lib/python3.6/site-packages/tensorflow_core/python/keras/engine/training_v2.py", line 547, in _process_training_inputs
    use_multiprocessing=use_multiprocessing)
  File "/Users//opt/anaconda3/envs/nilmtk-env/lib/python3.6/site-packages/tensorflow_core/python/keras/engine/training_v2.py", line 606, in _process_inputs
    use_multiprocessing=use_multiprocessing)
  File "/Users//opt/anaconda3/envs/nilmtk-env/lib/python3.6/site-packages/tensorflow_core/python/keras/engine/data_adapter.py", line 556, in __init__
    peek = next(x)
  File "/Users//Desktop/seq2point-nilm-master/data_feeder.py", line 88, in load_dataset
    self.check_if_chunking()
  File "/Users//Desktop/seq2point-nilm-master/data_feeder.py", line 63, in check_if_chunking
    skiprows=self.skip_rows)
  File "/Users//opt/anaconda3/envs/nilmtk-env/lib/python3.6/site-packages/pandas/io/parsers.py", line 676, in parser_f
    return _read(filepath_or_buffer, kwds)
  File "/Users//opt/anaconda3/envs/nilmtk-env/lib/python3.6/site-packages/pandas/io/parsers.py", line 448, in _read
    parser = TextFileReader(fp_or_buf, **kwds)
  File "/Users/***/opt/anaconda3/envs/nilmtk-env/lib/python3.6/site-packages/pandas/io/parsers.py", line 880, in __init__
    self._make_engine(self.engine)
  File "/Users/*/opt/anaconda3/envs/nilmtk-env/lib/python3.6/site-packages/pandas/io/parsers.py", line 1114, in _make_engine
    self._engine = CParserWrapper(self.f, *self.options)
  File "/Users//opt/anaconda3/envs/nilmtk-env/lib/python3.6/site-packages/pandas/io/parsers.py", line 1891, in __init__
    self._reader = parsers.TextReader(src, **kwds)
  File "pandas/_libs/parsers.pyx", line 532, in pandas._libs.parsers.TextReader.__cinit__
pandas.errors.EmptyDataError: No columns to parse from file

MingjunZhong commented 3 years ago

@Hessen525 Sorry for the delay. It seems to me the error comes from the nilmtk-env environment. Which versions of Keras and TensorFlow are you using? It is likely the Keras version.

Hessen525 commented 3 years ago

@MingjunZhong Hi, I am using Keras 2.3.1 and TensorFlow 2.0.0.

MingjunZhong commented 3 years ago

@Hessen525 That looks correct. So the error seems to come from your data files. Can you check the file directory? Also open the CSV files to see if the data columns are correct.

Hessen525 commented 3 years ago

@MingjunZhong I changed the default directory as follows: (screenshot attached)

If I change the directory name to a wrong one, it prompts an error like "directory doesn't exist", so the paths are being found.

MingjunZhong commented 3 years ago

@Hessen525 So then I suspect the data: can you open both *.csv files to see if there are two columns? The reported error was pandas.errors.EmptyDataError: No columns to parse from file.
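A quick way to run this check programmatically rather than opening the files by hand. This is a sketch: the file name and column names below are made up, and in practice you would point read_csv at one of the CSVs produced by create_trainset_redd.py.

```python
import pandas as pd

# Stand-in for a processed CSV (hypothetical column names); in practice,
# read the file that create_trainset_redd.py actually produced.
sample = "aggregate,kettle\n100.0,0.0\n250.0,120.0\n"
with open("sample_training.csv", "w") as f:
    f.write(sample)

# A healthy processed file should report exactly two columns.
df = pd.read_csv("sample_training.csv", header=0, nrows=5)
print(df.shape[1])  # 2
print(df.head())
```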

Hessen525 commented 3 years ago

@MingjunZhong Yes, two columns, and the same data works in the transferNILM project.

ZhuzengChu commented 3 years ago

Hello, I also encountered the same problem. Have you solved it yet?

MingjunZhong commented 3 years ago

@Hessen525 @ZhuzengChu I did not encounter these errors myself. I still think the problem comes from loading the data. Can you try a unit test: call the load_dataset() method of the TrainSlidingWindowGenerator class in data_feeder.py directly to load the data and see if it raises the same error.

ZhuzengChu commented 3 years ago

I found that the program runs normally after changing lines 63 and 94 of data_feeder.py to skiprows=0.

ZhuzengChu commented 3 years ago

@MingjunZhong Will my modification cause any problems? Thanks for your nice paper and replies; any advice will be much appreciated!

MingjunZhong commented 3 years ago

@ZhuzengChu @Hessen525 Setting skiprows=0 is fine, as skiprows is only useful when the data is too big. The class is initialised with skiprows=0, and I thought skiprows=0 was the default.
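For reference, the reported error can be reproduced in isolation: pandas raises EmptyDataError whenever skiprows skips past everything in the file, which is what happens if skip_rows ends up with a stale non-zero value. A minimal sketch (the CSV content and column names are made up):

```python
import io

import pandas as pd
from pandas.errors import EmptyDataError

# A tiny stand-in CSV: a header row plus three data rows.
csv_text = "aggregate,kettle\n100.0,0.0\n250.0,120.0\n101.5,4.9\n"

# skiprows=0 reads the whole file, matching the fix suggested above.
ok = pd.read_csv(io.StringIO(csv_text), skiprows=0)
print(ok.shape)  # (3, 2)

# Skipping more rows than the file contains leaves pandas nothing to
# parse and reproduces the reported error.
try:
    pd.read_csv(io.StringIO(csv_text), skiprows=10)
    raised = False
except EmptyDataError as err:
    raised = True
    print(err)  # No columns to parse from file
```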

MingjunZhong commented 3 years ago

@Hessen525 Is the problem still there?

Hessen525 commented 3 years ago

@MingjunZhong Sorry for the late reply. Following ZhuzengChu's proposal, the problem is solved.

RengarWang commented 3 years ago

I found that the program runs normally after changing lines 63 and 94 of data_feeder.py to skiprows=0.

Thanks, it works!

SCXCLY commented 2 years ago

I found that the program runs normally after changing lines 63 and 94 of data_feeder.py to skiprows=0.

Thanks, it works!