Hi,
Yes, you should get results that match the ones in the paper (or at least comparable, given the randomness in weight initialisation and the training process). I cannot pinpoint the problem from these numbers alone, but it looks like the AE did not train properly.
I could not reproduce the issue: I cloned the repo just now, ran the main.py script, and attached the results to this reply.
Could you check which Python and Keras versions you are using? Also, please confirm that you did not change any of the learning parameters.
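For example, a quick snippet like this (nothing repo-specific, just the standard version attributes) prints everything relevant:

import sys
import tensorflow as tf
import keras

print("Python:", sys.version.split()[0])
print("TensorFlow:", tf.__version__)
print("Keras:", keras.__version__)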
Hi, thanks for your response. I reran main.py and I agree that there is a problem with training at my end. In the logs you attached, training and validation accuracy is around 98%, whereas in my runs it hovers around 30%. I am attaching the epoch-wise loss and accuracy values for reference.
I am using Python 3.6.8, TensorFlow (TF) 2.4.1, and Keras 2.4.0 (the Keras bundled with TF). What environment are you using?
I did not change any of the learning parameters; again, the results are attached for reference. However, I had to make one change in autoencoder.py. I changed the line:

self.model.compile(loss='mean_squared_error', optimizer='adadelta', metrics=[self.accuracy])

to

self.model.compile(loss='mean_squared_error', optimizer='adadelta', metrics=["accuracy"])

because self.accuracy did not appear to be defined.
04_04_2021__21_32_Results.txt 04_04_2021__21_32_epoch_wise_results.txt
Hi again,
Here are the versions I use: Python 3.7.3, Keras 2.2.5, TF 1.14.0. But I doubt this is the problem.
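If you want to try recreating this environment, something along these lines should work in a fresh virtual environment (assuming a Python version that still supports TF 1.x, e.g. 3.7):

pip install tensorflow==1.14.0 keras==2.2.5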
Changing this line in autoencoder.py will affect the results, as the model will use the generic accuracy function instead of the one defined in the script, which is responsible for handling the reconstruction error and the threshold. If self.accuracy is not defined, you might want to check the indentation of the file; the accuracy function is defined on line 53.
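For context, a threshold-based metric of that kind typically looks something like the sketch below. This is only an illustration, not the code from autoencoder.py: the threshold value is a placeholder and the exact formulation may differ (the K.ones(K.shape(...)) pattern mirrors the line the traceback in the next comment points at):

from keras import backend as K

THRESHOLD = 0.1  # placeholder; the real value is set in autoencoder.py

def accuracy(y_true, y_pred):
    # Per-sample reconstruction error
    mse = K.mean(K.square(y_true - y_pred), axis=-1)
    # Count a sample as "correct" when its reconstruction error is under the threshold
    ones = K.ones(K.shape(mse))
    zeros = K.zeros(K.shape(mse))
    return K.mean(K.switch(K.less(mse, THRESHOLD), ones, zeros))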
I hope this helps
Hi. Thanks again for the quick response. I will try my luck with TF 1.14.0 just in case.
Yes, the accuracy function is defined after all; not sure how I missed it. When I ran the code with TF 2.4.1, I encountered the following issue, for which I could not find a solution. Replacing the custom metric with the generic accuracy function was not sensible (and it explains the difference in results), but at least it allowed me to train the autoencoder.
2021-04-01 20:37:15.877150: I tensorflow/compiler/jit/xla_cpu_device.cc:41] Not creating XLA devices, tf_xla_enable_xla_devices not set
2021-04-01 20:37:15.877575: I tensorflow/core/platform/cpu_feature_guard.cc:142] This TensorFlow binary is optimized with oneAPI Deep Neural Network Library (oneDNN) to use the following CPU instructions in performance-critical operations: AVX2 FMA
To enable them in other operations, rebuild TensorFlow with the appropriate compiler flags.
Model: "model"
_________________________________________________________________
Layer (type) Output Shape Param #
=================================================================
input_2 (InputLayer) [(None, 18)] 0
_________________________________________________________________
dense (Dense) (None, 15) 285
_________________________________________________________________
dense_1 (Dense) (None, 9) 144
_________________________________________________________________
dense_2 (Dense) (None, 15) 150
_________________________________________________________________
dense_3 (Dense) (None, 18) 288
=================================================================
Total params: 867
Trainable params: 867
Non-trainable params: 0
_________________________________________________________________
2021-04-01 20:37:16.696389: I tensorflow/compiler/mlir/mlir_graph_optimization_pass.cc:116] None of the MLIR optimization passes are enabled (registered 2)
Epoch 1/50
Traceback (most recent call last):
File "/Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/contextlib.py", line 99, in __exit__
self.gen.throw(type, value, traceback)
File "/Users/tejas/pyenvs/bolt_dl_368/lib/python3.6/site-packages/tensorflow/python/ops/variable_scope.py", line 2825, in variable_creator_scope
yield
File "/Users/tejas/pyenvs/bolt_dl_368/lib/python3.6/site-packages/tensorflow/python/keras/engine/training.py", line 1100, in fit
tmp_logs = self.train_function(iterator)
File "/Users/tejas/pyenvs/bolt_dl_368/lib/python3.6/site-packages/tensorflow/python/eager/def_function.py", line 828, in __call__
result = self._call(*args, **kwds)
File "/Users/tejas/pyenvs/bolt_dl_368/lib/python3.6/site-packages/tensorflow/python/eager/def_function.py", line 871, in _call
self._initialize(args, kwds, add_initializers_to=initializers)
File "/Users/tejas/pyenvs/bolt_dl_368/lib/python3.6/site-packages/tensorflow/python/eager/def_function.py", line 726, in _initialize
*args, **kwds))
File "/Users/tejas/pyenvs/bolt_dl_368/lib/python3.6/site-packages/tensorflow/python/eager/function.py", line 2969, in _get_concrete_function_internal_garbage_collected
graph_function, _ = self._maybe_define_function(args, kwargs)
File "/Users/tejas/pyenvs/bolt_dl_368/lib/python3.6/site-packages/tensorflow/python/eager/function.py", line 3361, in _maybe_define_function
graph_function = self._create_graph_function(args, kwargs)
File "/Users/tejas/pyenvs/bolt_dl_368/lib/python3.6/site-packages/tensorflow/python/eager/function.py", line 3206, in _create_graph_function
capture_by_value=self._capture_by_value),
File "/Users/tejas/pyenvs/bolt_dl_368/lib/python3.6/site-packages/tensorflow/python/framework/func_graph.py", line 990, in func_graph_from_py_func
func_outputs = python_func(*func_args, **func_kwargs)
File "/Users/tejas/pyenvs/bolt_dl_368/lib/python3.6/site-packages/tensorflow/python/eager/def_function.py", line 634, in wrapped_fn
out = weak_wrapped_fn().__wrapped__(*args, **kwds)
File "/Users/tejas/pyenvs/bolt_dl_368/lib/python3.6/site-packages/tensorflow/python/framework/func_graph.py", line 977, in wrapper
raise e.ag_error_metadata.to_exception(e)
TypeError: in user code:
/Users/tejas/pyenvs/bolt_dl_368/lib/python3.6/site-packages/tensorflow/python/keras/engine/training.py:805 train_function *
return step_function(self, iterator)
/Users/tejas/tejas/code/NNModels_release/mini_projects/zero_day_detection/autoencoder.py:58 accuracy *
temp = K.ones(K.shape(mse))
/Users/tejas/pyenvs/bolt_dl_368/lib/python3.6/site-packages/tensorflow/python/util/dispatch.py:201 wrapper **
return target(*args, **kwargs)
/Users/tejas/pyenvs/bolt_dl_368/lib/python3.6/site-packages/tensorflow/python/keras/backend.py:1519 ones
v = array_ops.ones(shape=shape, dtype=tf_dtype, name=name)
/Users/tejas/pyenvs/bolt_dl_368/lib/python3.6/site-packages/tensorflow/python/util/dispatch.py:201 wrapper
return target(*args, **kwargs)
/Users/tejas/pyenvs/bolt_dl_368/lib/python3.6/site-packages/tensorflow/python/ops/array_ops.py:3132 ones
output = fill(shape, constant(one, dtype=dtype), name=name)
/Users/tejas/pyenvs/bolt_dl_368/lib/python3.6/site-packages/tensorflow/python/util/dispatch.py:201 wrapper
return target(*args, **kwargs)
/Users/tejas/pyenvs/bolt_dl_368/lib/python3.6/site-packages/tensorflow/python/ops/array_ops.py:239 fill
result = gen_array_ops.fill(dims, value, name=name)
/Users/tejas/pyenvs/bolt_dl_368/lib/python3.6/site-packages/tensorflow/python/ops/gen_array_ops.py:3353 fill
dims, value, name=name, ctx=_ctx)
/Users/tejas/pyenvs/bolt_dl_368/lib/python3.6/site-packages/tensorflow/python/ops/gen_array_ops.py:3378 fill_eager_fallback
ctx=ctx, name=name)
/Users/tejas/pyenvs/bolt_dl_368/lib/python3.6/site-packages/tensorflow/python/eager/execute.py:75 quick_execute
raise e
/Users/tejas/pyenvs/bolt_dl_368/lib/python3.6/site-packages/tensorflow/python/eager/execute.py:60 quick_execute
inputs, attrs, num_outputs)
TypeError: An op outside of the function building code is being passed
a "Graph" tensor. It is possible to have Graph tensors
leak out of the function building context by including a
tf.init_scope in your function building code.
For example, the following function will fail:
@tf.function
def has_init_scope():
my_constant = tf.constant(1.)
with tf.init_scope():
added = my_constant * 2
The graph tensor has name: Shape_1:0
Process finished with exit code 1
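For what it's worth, the failure seems to come from K.ones(K.shape(mse)): under TF 2.x the metric is traced inside a tf.function, and the symbolic Shape tensor leaks out of the function-building context (hence the Shape_1:0 in the message). Below is a sketch of a TF2-friendly rewrite that keeps the same threshold logic; the threshold, activations, and layer choices are assumptions, reconstructed from the model summary above, not the actual code from the repo:

import tensorflow as tf
from tensorflow.keras import layers, models
from tensorflow.keras import backend as K

THRESHOLD = 0.1  # placeholder; use the value from autoencoder.py

def accuracy(y_true, y_pred):
    # Per-sample reconstruction error
    mse = K.mean(K.square(y_true - y_pred), axis=-1)
    # ones_like/zeros_like take their shape from the tensor itself, so no
    # symbolic Shape tensor escapes the tf.function building context
    return K.mean(tf.where(mse < THRESHOLD, tf.ones_like(mse), tf.zeros_like(mse)))

# Same layer sizes as the summary above (18 -> 15 -> 9 -> 15 -> 18);
# the activation functions are guesses
inputs = layers.Input(shape=(18,))
x = layers.Dense(15, activation='relu')(inputs)
x = layers.Dense(9, activation='relu')(x)
x = layers.Dense(15, activation='relu')(x)
outputs = layers.Dense(18, activation='sigmoid')(x)
model = models.Model(inputs, outputs)
model.compile(loss='mean_squared_error', optimizer='adadelta', metrics=[accuracy])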
Hi! Many thanks for making the code available. It was helpful to a beginner like me in understanding your work better.
I have a quick question about matching the result I get with the one published (CIC-IDS2017 autoencoder). I cloned the repository and ran
main.py
with the default arguments. If I understand the code correctly, the results I get should match the results you have shown in section 6.1 (CICIDS2017 Autoencoder results) of your paper. Is that correct? If yes, the results I get (shown below) seem different. Can you please help me understand what I am doing wrong and how I can reproduce the results? Thanks... :)