faucetsdn / poseidon

Poseidon is a Python-based application that leverages software-defined networking (SDN) to acquire network traffic and feed it to a number of machine learning techniques. The machine learning algorithms classify and predict the type of device.
Apache License 2.0

pcap_to_node_pcap and poseidonml plugin don't run with poseidon #473

Closed alshaboti closed 6 years ago

alshaboti commented 6 years ago

I followed the instructions in the Poseidon README with the Faucet controller. After running ./helper/run, the vent main menu shows that only one plugin is running, poseidon; both pcap_to_node_pcap and poseidonml are not.

When I try to restart them all again through the vent menu -> plugins -> stop, then start plugins, I get this error:

2018-01-04T05:08:24+00:00 172.17.0.1 plugin[1571]: DEBUG:__main__:Could not get address info beacuse list index out of range
2018-01-04T05:08:24+00:00 172.17.0.1 plugin[1571]: DEBUG:__main__:Defaulting to inferring IP address from
2018-01-04T05:08:24+00:00 172.17.0.1 plugin[1571]: Traceback (most recent call last):
2018-01-04T05:08:24+00:00 172.17.0.1 plugin[1571]:   File "eval_OneLayer.py", line 351, in <module>
2018-01-04T05:08:24+00:00 172.17.0.1 plugin[1571]:     with open(load_path, 'rb') as handle:
2018-01-04T05:08:24+00:00 172.17.0.1 plugin[1571]: FileNotFoundError: [Errno 2] No such file or directory: '/models/model.pickle'

Does the model need to be trained offline? I can't find that instruction in the README. Or am I missing something?

Thank you,

cglewis commented 6 years ago

Ah, good catch. I've opened an issue for this: https://github.com/Lab41/PoseidonML/issues/49

cglewis commented 6 years ago

As an aside, the other 2 plugins (pcap_to_node_pcap and poseidonml) run on-demand when there are new files to be processed, whereas poseidon continuously runs. So that is why you only see the one running - that is expected until you either drop new PCAPs into the file_drop directory, or have the TAP running (which Poseidon would trigger via the NIC you specified).
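Conceptually, that on-demand behavior is just a watcher that queues a job for each new capture file. Here is a minimal sketch of the idea in Python (the polling loop and the queue_job helper are illustrative, not Vent's actual implementation):

import os
import time

def watch_file_drop(path, poll_seconds=5):
    # Poll the drop directory and yield newly added pcap files, oldest first.
    seen = set(os.listdir(path))
    while True:
        current = set(os.listdir(path))
        for name in sorted(current - seen):
            if name.endswith('.pcap'):
                yield os.path.join(path, name)
        seen = current
        time.sleep(poll_seconds)

# Illustrative use: each new pcap would be handed to the on-demand plugins,
# e.g. for pcap_path in watch_file_drop('/opt/vent_files'): queue_job(pcap_path)
# (queue_job is hypothetical; the real pipeline queues RQ jobs, as seen in the logs below)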

alshaboti commented 6 years ago

That's clear now. However, I'm unclear on when Poseidon is going to mirror traffic for monitoring. I connected two Pis and Poseidon to a switch (following this blog), then tried to generate traffic between the Pis (pinging/nmap scanning) to trigger Poseidon to mirror the traffic, but nothing happens (I use docker logs -f cyberreboot-vent-syslog-master to see what's going on).

Then I tried to put a pcap file into the /opt/vent_files dir (captured using tcpdump on one Pi while it was sending its temperature to the cloud with Node-RED: tcpdump -i eth0 -w IOT-260m-ibmwatsoniot.pcap), and I got an error. The output of both attempts is reported below.

The things that are not clear yet are:
1. What triggers Poseidon to mirror, or stop mirroring, the traffic of a particular host? Where can I see what Poseidon is doing?
2. Does Poseidon assume that the PoseidonML plugin has a trained model, or does it use the mirrored traffic to train one online? Note: I tried to train a model offline but faced some issues, which I reported in Class error

Thank you very much

#docker logs -f cyberreboot-vent-syslog-master
[2018-01-05T21:26:09.128996] WARNING: Configuration file format is too old, syslog-ng is running in compatibility mode Please update it to use the syslog-ng 3.9 format at your time of convenience, compatibility mode can operate less efficiently in some cases. To upgrade the configuration, please review the warnings about incompatible changes printed by syslog-ng, and once completed change the @version header at the top of the configuration file.;
2018-01-06T10:26:13+00:00 172.17.0.1 core[1261]: 1:C 05 Jan 21:26:13.438 # oO0OoO0OoO0Oo Redis is starting oO0OoO0OoO0Oo
2018-01-06T10:26:13+00:00 172.17.0.1 core[1261]: 1:C 05 Jan 21:26:13.438 # Redis version=4.0.6, bits=64, commit=00000000, modified=0, pid=1, just started
2018-01-06T10:26:13+00:00 172.17.0.1 core[1261]: 1:C 05 Jan 21:26:13.438 # Warning: no config file specified, using the default config. In order to specify a config file use redis-server /path/to/redis.conf
2018-01-06T10:26:13+00:00 172.17.0.1 core[1261]: 1:M 05 Jan 21:26:13.441 * Running mode=standalone, port=6379.
2018-01-06T10:26:13+00:00 172.17.0.1 core[1261]: 1:M 05 Jan 21:26:13.441 # WARNING: The TCP backlog setting of 511 cannot be enforced because /proc/sys/net/core/somaxconn is set to the lower value of 128.
2018-01-06T10:26:13+00:00 172.17.0.1 core[1261]: 1:M 05 Jan 21:26:13.441 # Server initialized
2018-01-06T10:26:13+00:00 172.17.0.1 core[1261]: 1:M 05 Jan 21:26:13.441 # WARNING overcommit_memory is set to 0! Background save may fail under low memory condition. To fix this issue add 'vm.overcommit_memory = 1' to /etc/sysctl.conf and then reboot or run the command 'sysctl vm.overcommit_memory=1' for this to take effect.
2018-01-06T10:26:13+00:00 172.17.0.1 core[1261]: 1:M 05 Jan 21:26:13.441 # WARNING you have Transparent Huge Pages (THP) support enabled in your kernel. This will create latency and memory usage issues with Redis. To fix this issue run the command 'echo never > /sys/kernel/mm/transparent_hugepage/enabled' as root, and add it to your /etc/rc.local in order to retain the setting after a reboot. Redis must be restarted after THP is disabled.
2018-01-06T10:26:13+00:00 172.17.0.1 core[1261]: 1:M 05 Jan 21:26:13.441 * Ready to accept connections
2018-01-06T10:26:17+00:00 172.17.0.1 core[1261]: + [[ -z redis ]]
2018-01-06T10:26:17+00:00 172.17.0.1 core[1261]: + [[ -z  ]]
2018-01-06T10:26:17+00:00 172.17.0.1 core[1261]: + export REMOTE_REDIS_PORT=6379
2018-01-06T10:26:17+00:00 172.17.0.1 core[1261]: REMOTE_REDIS_PSWD not set. Please SET
2018-01-06T10:26:17+00:00 172.17.0.1 core[1261]: + [[ -z  ]]
2018-01-06T10:26:17+00:00 172.17.0.1 core[1261]: + echo REMOTE_REDIS_PSWD not set. Please SET
2018-01-06T10:26:17+00:00 172.17.0.1 core[1261]: + export REMOTE_REDIS_PSWD=
2018-01-06T10:26:17+00:00 172.17.0.1 core[1261]: + [[ -z  ]]
2018-01-06T10:26:17+00:00 172.17.0.1 core[1261]: REMOTE_REDIS_HOST=redis REMOTE_REDIS_PORT=6379
2018-01-06T10:26:17+00:00 172.17.0.1 core[1261]: + export DASH_PREFIX=/rq
2018-01-06T10:26:17+00:00 172.17.0.1 core[1261]: + export RQ_DASHBOARD_SETTINGS=/rq_dash_settings.py
2018-01-06T10:26:17+00:00 172.17.0.1 core[1261]: + echo REMOTE_REDIS_HOST=redis REMOTE_REDIS_PORT=6379
2018-01-06T10:26:17+00:00 172.17.0.1 core[1261]: + rq-dashboard
2018-01-06T10:26:21+00:00 172.17.0.1 core[1261]: RQ Dashboard, version 0.3.3
2018-01-06T10:26:21+00:00 172.17.0.1 core[1261]:  * Running on http://0.0.0.0:9181/ (Press CTRL+C to quit)
2018-01-06T10:26:23+00:00 172.17.0.1 core[1261]: 21:26:23 RQ worker u'rq:worker:c30e0008ec99.1' started, version 0.9.2
2018-01-06T10:26:23+00:00 172.17.0.1 core[1261]: 21:26:23 Cleaning registries for queue: default
2018-01-06T10:26:23+00:00 172.17.0.1 core[1261]: 21:26:23
2018-01-06T10:26:23+00:00 172.17.0.1 core[1261]: 21:26:23 *** Listening on default...
2018-01-06T10:26:29+00:00 172.17.0.1 core[1261]: [2018-01-05 21:26:29,957][WARN ][bootstrap                ] unable to install syscall filter: seccomp unavailable: your kernel is buggy and you should upgrade
2018-01-06T10:26:30+00:00 172.17.0.1 core[1261]: [2018-01-05 21:26:30,753][INFO ][node                     ] [Ox] version[2.4.6], pid[1], build[5376dca/2017-07-18T12:17:44Z]
2018-01-06T10:26:30+00:00 172.17.0.1 core[1261]: [2018-01-05 21:26:30,753][INFO ][node                     ] [Ox] initializing ...
2018-01-06T10:26:34+00:00 172.17.0.1 core[1261]: [2018-01-05 21:26:34,324][INFO ][plugins                  ] [Ox] modules [reindex, lang-expression, lang-groovy], plugins [head], sites [head]
2018-01-06T10:26:34+00:00 172.17.0.1 core[1261]: [2018-01-05 21:26:34,842][INFO ][env                      ] [Ox] using [1] data paths, mounts [[/usr/share/elasticsearch/data (/dev/sda1)]], net usable_space [411.2gb], net total_space [457.4gb], spins? [possibly], types [ext4]
2018-01-06T10:26:34+00:00 172.17.0.1 core[1261]: [2018-01-05 21:26:34,842][INFO ][env                      ] [Ox] heap size [990.7mb], compressed ordinary object pointers [true]
2018-01-06T10:26:37+00:00 172.17.0.1 core[1261]: http://0.0.0.0:8080/
2018-01-06T10:26:41+00:00 172.17.0.1 core[1261]: connected to rabbitmq...
2018-01-06T10:26:41+00:00 172.17.0.1 core[1261]:  [*] Waiting for logs. To exit press CTRL+C
2018-01-06T10:26:42+00:00 172.17.0.1 core[1261]: [2018-01-05 21:26:42,287][INFO ][node                     ] [Ox] initialized
2018-01-06T10:26:42+00:00 172.17.0.1 core[1261]: [2018-01-05 21:26:42,287][INFO ][node                     ] [Ox] starting ...
2018-01-06T10:26:42+00:00 172.17.0.1 core[1261]: [2018-01-05 21:26:42,509][INFO ][transport                ] [Ox] publish_address {172.17.0.8:9300}, bound_addresses {0.0.0.0:9300}
2018-01-06T10:26:42+00:00 172.17.0.1 core[1261]: [2018-01-05 21:26:42,517][INFO ][discovery                ] [Ox] elasticsearch/UtvHF1dgQ2u9fyLY1E8xcw
2018-01-06T10:26:45+00:00 172.17.0.1 core[1261]: [2018-01-05 21:26:45,573][INFO ][cluster.service          ] [Ox] new_master {Ox}{UtvHF1dgQ2u9fyLY1E8xcw}{172.17.0.8}{172.17.0.8:9300}, reason: zen-disco-join(elected_as_master, [0] joins received)
2018-01-06T10:26:45+00:00 172.17.0.1 core[1261]: [2018-01-05 21:26:45,587][INFO ][http                     ] [Ox] publish_address {172.17.0.8:9200}, bound_addresses {0.0.0.0:9200}
2018-01-06T10:26:45+00:00 172.17.0.1 core[1261]: [2018-01-05 21:26:45,587][INFO ][node                     ] [Ox] started
2018-01-06T10:26:46+00:00 172.17.0.1 core[1261]: [2018-01-05 21:26:46,384][INFO ][gateway                  ] [Ox] recovered [0] indices into cluster_state
2018-01-06T10:26:47+00:00 172.17.0.1 plugin[1261]: 2018-01-05 21:26:47,462 - INFO - Config:51  - From the Environment
2018-01-06T10:26:47+00:00 172.17.0.1 plugin[1261]: 2018-01-05 21:26:47,474 - DEBUG - Monitor_Helper_Base:45  - set_owner = Config
2018-01-06T10:26:47+00:00 172.17.0.1 plugin[1261]: 2018-01-05 21:26:47,474 - DEBUG - Monitor_Helper_Base:45  - set_owner = Config
2018-01-06T10:26:47+00:00 172.17.0.1 plugin[1261]: 2018-01-05 21:26:47,474 - DEBUG - Monitor_Helper_Base:45  - set_owner = Config
2018-01-06T10:26:48+00:00 172.17.0.1 plugin[1261]: 2018-01-05 21:26:48,363 - DEBUG - Monitor_Helper_Base:45  - set_owner = NorthBoundControllerAbstraction
2018-01-06T10:26:48+00:00 172.17.0.1 plugin[1261]: 2018-01-05 21:26:48,364 - DEBUG - poseidonMonitor:218 - Monitor:config:{'config': 'True', 'logging_file': '/tmp/poseidonWork/logging.json', 'logger_level': 'INFO', 'reinvestigation_frequency': '900', 'max_concurrent_reinvestigations': '1', 'scan_frequency': '5', 'rabbit_server': 'RABBIT_SERVER', 'rabbit_port': '5672', 'collector_nic': 'enp5s0', 'collector_interval': '900', 'vent_ip': 'vent_ip', 'vent_port': '8080'}
2018-01-06T10:26:48+00:00 172.17.0.1 plugin[1261]: 2018-01-05 21:26:48,365 - INFO - Config:61  - Config:configure
2018-01-06T10:26:48+00:00 172.17.0.1 plugin[1261]: 2018-01-05 21:26:48,365 - INFO - Monitor_Helper_Base:55  - Handle_SectionConfig configure()
2018-01-06T10:26:48+00:00 172.17.0.1 plugin[1261]: Handle_SectionConfig configure()
2018-01-06T10:26:48+00:00 172.17.0.1 plugin[1261]: 2018-01-05 21:26:48,366 - INFO - Monitor_Helper_Base:55  - Handle_FieldConfig configure()
2018-01-06T10:26:48+00:00 172.17.0.1 plugin[1261]: Handle_FieldConfig configure()
2018-01-06T10:26:48+00:00 172.17.0.1 plugin[1261]: 2018-01-05 21:26:48,367 - INFO - Monitor_Helper_Base:55  - Handle_FullConfig configure()
2018-01-06T10:26:48+00:00 172.17.0.1 plugin[1261]: Handle_FullConfig configure()
2018-01-06T10:26:48+00:00 172.17.0.1 plugin[1261]: 2018-01-05 21:26:48,368 - INFO - Monitor_Helper_Base:55  - Update_Switch_State configure()
2018-01-06T10:26:48+00:00 172.17.0.1 plugin[1261]: Update_Switch_State configure()
2018-01-06T10:26:48+00:00 172.17.0.1 plugin[1261]: Update_Switch_State configure()
2018-01-06T10:26:48+00:00 172.17.0.1 plugin[1261]: Update_Switch_State configure()
2018-01-06T10:26:48+00:00 172.17.0.1 plugin[1261]: Update_Switch_State configure()
2018-01-06T10:26:48+00:00 172.17.0.1 plugin[1261]: Update_Switch_State configure()
2018-01-06T10:26:48+00:00 172.17.0.1 plugin[1261]: Update_Switch_State configure()
2018-01-06T10:26:48+00:00 172.17.0.1 plugin[1261]: Update_Switch_State configure()
2018-01-06T10:26:48+00:00 172.17.0.1 plugin[1261]: Update_Switch_State configure()
2018-01-06T10:26:54+00:00 172.17.0.1 plugin[1261]: 2018-01-05 21:26:54,273 - INFO - EndpointWrapper:86  - ====START
2018-01-06T10:26:54+00:00 172.17.0.1 plugin[1261]: 2018-01-05 21:26:54,273 - INFO - EndpointWrapper:68  - *******KNOWN*********
2018-01-06T10:26:54+00:00 172.17.0.1 plugin[1261]: 2018-01-05 21:26:54,274 - INFO - EndpointWrapper:81  - None
2018-01-06T10:26:54+00:00 172.17.0.1 plugin[1261]: 2018-01-05 21:26:54,274 - INFO - EndpointWrapper:68  - *******UNKNOWN*********
2018-01-06T10:26:54+00:00 172.17.0.1 plugin[1261]: 2018-01-05 21:26:54,274 - INFO - EndpointWrapper:81  - None
2018-01-06T10:26:54+00:00 172.17.0.1 plugin[1261]: 2018-01-05 21:26:54,274 - INFO - EndpointWrapper:68  - *******MIRRORING*********
2018-01-06T10:26:54+00:00 172.17.0.1 plugin[1261]: 2018-01-05 21:26:54,274 - INFO - EndpointWrapper:81  - None
2018-01-06T10:26:54+00:00 172.17.0.1 plugin[1261]: 2018-01-05 21:26:54,274 - INFO - EndpointWrapper:68  - *******SHUTDOWN*********
2018-01-06T10:26:54+00:00 172.17.0.1 plugin[1261]: 2018-01-05 21:26:54,275 - INFO - EndpointWrapper:81  - None
2018-01-06T10:26:54+00:00 172.17.0.1 plugin[1261]: 2018-01-05 21:26:54,275 - INFO - EndpointWrapper:68  - *******REINVESTIGATING*********
2018-01-06T10:26:54+00:00 172.17.0.1 plugin[1261]: 2018-01-05 21:26:54,275 - INFO - EndpointWrapper:81  - None
2018-01-06T10:26:54+00:00 172.17.0.1 plugin[1261]: 2018-01-05 21:26:54,275 - INFO - EndpointWrapper:90  - ****************
2018-01-06T10:26:54+00:00 172.17.0.1 plugin[1261]: 2018-01-05 21:26:54,275 - INFO - EndpointWrapper:91  - ====STOP
2018-01-06T10:27:13+00:00 172.17.0.1 core[1261]: [2018-01-05 21:27:13,822][INFO ][cluster.metadata         ] [Ox] [core] creating index, cause [auto(index api)], templates [], shards [5]/[1], mappings []
2018-01-06T10:27:15+00:00 172.17.0.1 core[1261]: [2018-01-05 21:27:15,111][INFO ][cluster.routing.allocation] [Ox] Cluster health status changed from [RED] to [YELLOW] (reason: [shards started [[core][4]] ...]).
2018-01-06T10:27:15+00:00 172.17.0.1 core[1261]: [2018-01-05 21:27:15,331][INFO ][cluster.metadata         ] [Ox] [core] create_mapping [core]
2018-01-06T10:27:18+00:00 172.17.0.1 core[1261]: [2018-01-05 21:27:18,625][INFO ][cluster.metadata         ] [Ox] [plugin] creating index, cause [auto(index api)], templates [], shards [5]/[1], mappings []
2018-01-06T10:27:19+00:00 172.17.0.1 core[1261]: [2018-01-05 21:27:19,895][INFO ][cluster.routing.allocation] [Ox] Cluster health status changed from [RED] to [YELLOW] (reason: [shards started [[plugin][4]] ...]).
2018-01-06T10:27:19+00:00 172.17.0.1 core[1261]: [2018-01-05 21:27:19,994][INFO ][cluster.metadata         ] [Ox] [plugin] create_mapping [plugin]
2018-01-06T10:34:01+00:00 172.17.0.1 core[1261]: c353ad85-e106-42ce-8a7b-4e4eaf95cbb2 started /files/IOT-260m-ibmwatsoniot.pcap
2018-01-06T10:34:01+00:00 172.17.0.1 core[1261]: c353ad85-e106-42ce-8a7b-4e4eaf95cbb2 let's queue it /files/IOT-260m-ibmwatsoniot.pcap
2018-01-06T10:34:01+00:00 172.17.0.1 core[1261]: c353ad85-e106-42ce-8a7b-4e4eaf95cbb2 end /files/IOT-260m-ibmwatsoniot.pcap
2018-01-06T10:34:01+00:00 172.17.0.1 core[1261]: 21:34:01 default: watch.file_queue('6156144f50f1_/files/IOT-260m-ibmwatsoniot.pcap') (c31c4deb-3ed4-4c5e-b084-ddeeb38d4725)
2018-01-06T10:34:02+00:00 172.17.0.1 core[1261]: Path to manifest: /vent/plugin_manifest.cfg
2018-01-06T10:34:02+00:00 172.17.0.1 core[1261]: (True, [])
2018-01-06T10:34:02+00:00 172.17.0.1 core[1261]: 21:34:02 default: Job OK (c31c4deb-3ed4-4c5e-b084-ddeeb38d4725)
2018-01-06T10:34:02+00:00 172.17.0.1 core[1261]: 21:34:02 Result is kept for 500 seconds
2018-01-06T10:34:02+00:00 172.17.0.1 core[1261]: 21:34:02
2018-01-06T10:34:02+00:00 172.17.0.1 core[1261]: 21:34:02 *** Listening on default...
2018-01-06T10:34:02+00:00 172.17.0.1 core[1261]: [2018-01-05 21:34:02,536][DEBUG][action.index             ] [Ox] failed to execute [index {[core][core][syslog.core.6156144f50f1.1515234842.bf0042be-3a32-41b7-9eab-a5a99bbd53ce], source[_na_]}] on [[core][4]]
2018-01-06T10:34:02+00:00 172.17.0.1 core[1261]: MapperParsingException[failed to parse]; nested: NotXContentException[Compressor detection can only be called on some xcontent bytes or compressed xcontent bytes];
2018-01-06T10:34:02+00:00 172.17.0.1 core[1261]:        at org.elasticsearch.index.mapper.DocumentParser.parseDocument(DocumentParser.java:156)
2018-01-06T10:34:02+00:00 172.17.0.1 core[1261]:        at org.elasticsearch.index.mapper.DocumentMapper.parse(DocumentMapper.java:309)
2018-01-06T10:34:02+00:00 172.17.0.1 core[1261]:        at org.elasticsearch.index.shard.IndexShard.prepareIndex(IndexShard.java:584)
2018-01-06T10:34:02+00:00 172.17.0.1 core[1261]:        at org.elasticsearch.index.shard.IndexShard.prepareIndexOnPrimary(IndexShard.java:563)
2018-01-06T10:34:02+00:00 172.17.0.1 core[1261]:        at org.elasticsearch.action.index.TransportIndexAction.prepareIndexOperationOnPrimary(TransportIndexAction.java:211)
2018-01-06T10:34:02+00:00 172.17.0.1 core[1261]:        at org.elasticsearch.action.index.TransportIndexAction.executeIndexRequestOnPrimary(TransportIndexAction.java:223)
2018-01-06T10:34:02+00:00 172.17.0.1 core[1261]:        at org.elasticsearch.action.index.TransportIndexAction.shardOperationOnPrimary(TransportIndexAction.java:157)
2018-01-06T10:34:02+00:00 172.17.0.1 core[1261]:        at org.elasticsearch.action.index.TransportIndexAction.shardOperationOnPrimary(TransportIndexAction.java:66)
2018-01-06T10:34:02+00:00 172.17.0.1 core[1261]:        at org.elasticsearch.action.support.replication.TransportReplicationAction$PrimaryPhase.doRun(TransportReplicationAction.java:657)
2018-01-06T10:34:02+00:00 172.17.0.1 core[1261]:        at org.elasticsearch.common.util.concurrent.AbstractRunnable.run(AbstractRunnable.java:37)
2018-01-06T10:34:02+00:00 172.17.0.1 core[1261]:        at org.elasticsearch.action.support.replication.TransportReplicationAction$PrimaryOperationTransportHandler.messageReceived(TransportReplicationAction.java:287)
2018-01-06T10:34:02+00:00 172.17.0.1 core[1261]:        at org.elasticsearch.action.support.replication.TransportReplicationAction$PrimaryOperationTransportHandler.messageReceived(TransportReplicationAction.java:279)
2018-01-06T10:34:02+00:00 172.17.0.1 core[1261]:        at org.elasticsearch.transport.RequestHandlerRegistry.processMessageReceived(RequestHandlerRegistry.java:77)
2018-01-06T10:34:02+00:00 172.17.0.1 core[1261]:        at org.elasticsearch.transport.TransportService$4.doRun(TransportService.java:378)
2018-01-06T10:34:02+00:00 172.17.0.1 core[1261]:        at org.elasticsearch.common.util.concurrent.AbstractRunnable.run(AbstractRunnable.java:37)
2018-01-06T10:34:02+00:00 172.17.0.1 core[1261]:        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
2018-01-06T10:34:02+00:00 172.17.0.1 core[1261]:        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
2018-01-06T10:34:02+00:00 172.17.0.1 core[1261]:        at java.lang.Thread.run(Thread.java:748)
2018-01-06T10:34:02+00:00 172.17.0.1 core[1261]: Caused by: org.elasticsearch.common.compress.NotXContentException: Compressor detection can only be called on some xcontent bytes or compressed xcontent bytes
2018-01-06T10:34:02+00:00 172.17.0.1 core[1261]:        at org.elasticsearch.common.compress.CompressorFactory.compressor(CompressorFactory.java:85)
2018-01-06T10:34:02+00:00 172.17.0.1 core[1261]:        at org.elasticsearch.common.xcontent.XContentHelper.createParser(XContentHelper.java:50)
2018-01-06T10:34:02+00:00 172.17.0.1 core[1261]:        at org.elasticsearch.index.mapper.DocumentParser.parseDocument(DocumentParser.java:92)
2018-01-06T10:34:02+00:00 172.17.0.1 core[1261]:        ... 17 more

cglewis commented 6 years ago

@alshaboti a couple things:

The error at the end of the log is actually just Elasticsearch complaining about a parsing mismatch (not anything to worry about, but I need to document that in https://github.com/Cyberreboot/vent, so I'll open an issue)

You can see just before that error that the queue picked up your file for processing, but it looks like nothing happened. What plugins does Vent say you have installed? You'd need a plugin installed that knows how to process pcaps (e.g. https://github.com/CyberReboot/vent-plugins/blob/master/pcap_to_node_pcap/vent.template)

Currently Poseidon doesn't do online training, so yes, it expects that you have an offline-trained model as a pickle file (as you've already discovered, it looks like).
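For illustration, a minimal sketch of that offline step with a stand-in scikit-learn classifier, writing to the /models/model.pickle path from the earlier traceback (the features here are placeholders; the real training pipeline lives in the PoseidonML repo):

import os
import pickle

import numpy as np
from sklearn.linear_model import LogisticRegression

# Placeholder features/labels standing in for PoseidonML's real
# pcap-derived feature vectors (illustrative only).
train_features = np.random.rand(100, 10)
train_labels = np.random.randint(0, 2, size=100)

model = LogisticRegression().fit(train_features, train_labels)

# Serialize to the path the earlier traceback shows eval_OneLayer.py opening.
os.makedirs('/models', exist_ok=True)
with open('/models/model.pickle', 'wb') as handle:
    pickle.dump(model, handle)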

For the tap trigger: there's a NIC that Vent is set to listen on, to which Poseidon expects the traffic to be mirrored (via FAUCET or BCF). That NIC has to be specified either by changing the configuration file or through an environment variable (collector_nic). In your configuration the logs say your NIC is enp5s0; is that the one you're mirroring traffic to, and one that Vent also has access to?
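As a rough sketch of that precedence (environment variable first, config file as a fallback), assuming a hypothetical config path and a Monitor section like the config dump in the logs above:

import configparser
import os

def get_collector_nic(config_path='poseidon.config'):
    # Prefer the collector_nic environment variable, fall back to the config file.
    nic = os.environ.get('collector_nic')
    if nic:
        return nic
    parser = configparser.ConfigParser()
    parser.read(config_path)
    return parser.get('Monitor', 'collector_nic', fallback=None)

print(get_collector_nic())  # 'enp5s0' in the configuration dumped below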

alshaboti commented 6 years ago

Thanks @cglewis. The Elasticsearch error and pcap file issues are clear now.

For the tap trigger, yes enp5s0 is the NIC where the switch can mirror traffic to Poseidon.

However, Faucet is not configured to mirror traffic to Poseidon. I expected Poseidon to configure Faucet (over ssh, by editing faucet.yaml) to mirror the traffic to its port, but it does not. Is that because it can't run PoseidonML (since the model is missing)? Here is my faucet.yaml:

dps:
  opwnwrt:
    dp_id: 0x000014cc20be86a9
    hardware: "Open vSwitch"
    proactive_learn: true
    interfaces:
      1:
        native_vlan: demo
      2:
        native_vlan: demo
      3:
        native_vlan: mirror
vlans:
  demo:
    vid: 300
  mirror:
    vid: 101
    max_hosts: 0

vent environment variables:

nawal@nawal-pc:~$ docker exec -it vent sh
/ # printenv
HOSTNAME=6156144f50f1
SHLVL=1
HOME=/root
controller_mirror_ports={"openwrt":3}
controller_config_file=/etc/ryu/faucet/faucet.yaml
controller_pass=w?i.zN.P1
controller_log_file=/var/log/ryu/faucet/faucet.log
TERM=xterm
controller_user=root
PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
controller_type=faucet
max_concurrent_reinvestigations=1
controller_uri=192.168.2.10
PWD=/
VENT_CONTAINERIZED=true
collector_nic=enp5s0
cglewis commented 6 years ago

Ah OK, then yeah, the tap should work as you have it. Yes, you're correct that Poseidon will configure FAUCET (that's what controller_mirror_ports is for). It takes a few minutes for Poseidon to trigger collection on a fresh start; I'd let it run for about 5 minutes and see if there's any activity in the logs.
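For context, FAUCET expresses mirroring by giving the mirror interface a mirror key naming the port whose traffic should be copied. A hedged sketch of how a controller could add that key programmatically with PyYAML (the exact keys and flow Poseidon uses may differ):

import yaml  # PyYAML

def add_mirror(config_path, dp_name, mirror_port, target_port):
    # Rewrite faucet.yaml so the mirror interface copies target_port's traffic.
    with open(config_path) as f:
        config = yaml.safe_load(f)
    interfaces = config['dps'][dp_name]['interfaces']
    interfaces[mirror_port]['mirror'] = target_port
    with open(config_path, 'w') as f:
        yaml.safe_dump(config, f, default_flow_style=False)

# e.g. copy port 1's traffic out of the mirror port (3) on the switch above:
# add_mirror('/etc/ryu/faucet/faucet.yaml', 'opwnwrt', 3, 1)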

alshaboti commented 6 years ago

Awesome, I think I was not patient enough to wait for 5 minutes lol.

One last question before trying that, if it's a valid question: what is Poseidon going to do with the traffic if PoseidonML is not working?

cglewis commented 6 years ago

It should still capture the pcaps and will go into a mirroring state, but it will never leave that mirroring state until PoseidonML comes back and decides that the traffic is normal and doesn't need to be reinvestigated. So it doesn't hurt anything, but it won't be useful, because the mechanism (PoseidonML) that tells Poseidon what to do with that traffic isn't reporting back to Poseidon.
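A hedged sketch of that behavior as a toy state machine, using the EndpointWrapper state names from the logs above (the transition logic is illustrative, not Poseidon's actual code):

# Endpoint states matching the EndpointWrapper categories in the logs above.
UNKNOWN, KNOWN, MIRRORING, REINVESTIGATING, SHUTDOWN = (
    'UNKNOWN', 'KNOWN', 'MIRRORING', 'REINVESTIGATING', 'SHUTDOWN')

def next_state(state, ml_verdict):
    # ml_verdict is PoseidonML's report: 'normal', 'abnormal', or None
    # when PoseidonML never reports back (the situation described above).
    if state == UNKNOWN:
        return MIRRORING                # new endpoint: start capturing
    if state in (MIRRORING, REINVESTIGATING):
        if ml_verdict == 'normal':
            return KNOWN                # classified; mirroring can stop
        if ml_verdict == 'abnormal':
            return REINVESTIGATING      # keep watching this endpoint
        return state                    # no verdict: stuck in this state forever
    return state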

alshaboti commented 6 years ago

I trained a model offline for PoseidonML and placed it into /tmp/models/. However, when PoseidonML runs, it triggers this error:

DEBUG:__main__:Could not get address info beacuse list index out of range
DEBUG:__main__:Defaulting to inferring IP address from pcap/
DEBUG:__main__:Loaded model from model.pickle
Traceback (most recent call last):
  File "eval_OneLayer.py", line 362, in <module>
    mean=False
  File "/app/NodeClassifier/utils/OneLayer.py", line 227, in get_representation
    source_ip=source_ip,
ValueError: not enough values to unpack (expected 4, got 3)

I got the same error with PoseidonML alone when I evaluated the model after resetting the Redis service. It seems PoseidonML expects a pcap file rather than mirrored packets.

My question is: how do Poseidon and PoseidonML interact?

alshaboti commented 6 years ago

This error is solved if the model is created with the correct pcap file naming structure (I'm not sure why). See https://github.com/Lab41/PoseidonML/issues/50
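For anyone hitting the same ValueError: "expected 4, got 3" suggests the evaluation code derives metadata by splitting the pcap filename and expects a fixed number of fields. A hedged sketch of that failure mode with a hypothetical 4-field convention (the real convention is defined in PoseidonML, not here):

def parse_pcap_name(filename):
    # Hypothetical 4-field convention: <node>-<date>-<label>-<site>.pcap;
    # the actual convention is documented in the PoseidonML repo.
    stem = filename.rsplit('.pcap', 1)[0]
    node, date, label, site = stem.split('-')  # ValueError if there aren't 4 fields
    return node, date, label, site

print(parse_pcap_name('pi1-20180106-iot-lab.pcap'))  # 4 fields: parses fine
try:
    parse_pcap_name('IOT-260m-ibmwatsoniot.pcap')    # only 3 fields
except ValueError as err:
    print(err)  # not enough values to unpack (expected 4, got 3)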

Thank you,