Closed — joel858 closed this issue 4 years ago
It's most likely something to do with the file type. If Malcolm sees files it doesn't recognize, it discards them. Are you on a Windows or Linux box? If you're on Linux, can you run the file command on it, something like this:
$ file -i http2-16-ssl.pcap
http2-16-ssl.pcap: application/vnd.tcpdump.pcap; charset=binary
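If you have a whole directory of captures, you can sanity-check all of them at once; a recognized pcap should report a MIME type like application/vnd.tcpdump.pcap as above (a sketch; the ./captures path is illustrative):

```shell
# Illustrative: report the MIME type of every capture in a directory.
# Anything that doesn't look like a tcpdump/pcapng type is a candidate
# for the "unhandled file type" rejection described below.
for f in ./captures/*.pcap; do
  [ -e "$f" ] || continue   # skip if the glob matched nothing
  printf '%s: %s\n' "$f" "$(file -b --mime-type "$f")"
done
```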
The other thing is to look at the logs from the Docker container(s) during the upload. If you don't have the logs open, run ./scripts/logs.sh; then, when you upload the file, you should see something like this:
nginx-proxy_1 | 172.20.0.1 - tlacuache [09/Jan/2020:17:31:13 +0000] "POST /upload/server/php/ HTTP/1.1" 200 317 "https://localhost/upload/" "Mozilla/5.0 (X11; Linux x86_64; rv:72.0) Gecko/20100101 Firefox/72.0"
pcap-monitor_1 | renamed '/pcap/upload/email-troubles.pcap' -> '/pcap/processed/email-troubles.pcap'
filebeat_1 | '/data/zeek//upload/email-troubles.pcap-email_troubles-1578591074941120.tar.gz' -> '/data/zeek/email-troubles.pcap-email_troubles-1578591074941120.tar.gz'
...
filebeat_1 | 2020-01-09T17:32:01.873Z INFO log/harvester.go:251 Harvester started for file: /data/zeek/current/known_services(email,troubles,pcap,1578591121406720716).log
filebeat_1 | 2020-01-09T17:32:01.874Z INFO log/harvester.go:251 Harvester started for file: /data/zeek/current/packet_filter(email,troubles,pcap,1578591121440846028).log
filebeat_1 | 2020-01-09T17:32:01.897Z INFO log/harvester.go:251 Harvester started for file: /data/zeek/current/conn(email,troubles,pcap,1578591121330261413,ZEEKFLDx00x03FFFFFF).log
filebeat_1 | 2020-01-09T17:32:01.916Z INFO log/harvester.go:251 Harvester started for file: /data/zeek/current/dhcp(email,troubles,pcap,1578591121367450021).log
filebeat_1 | 2020-01-09T17:32:01.974Z INFO log/harvester.go:274 End of file reached: /data/zeek/current/known_services(email,troubles,pcap,1578591121406720716).log. Closing because close_eof is enabled.
filebeat_1 | 2020-01-09T17:32:01.974Z INFO log/harvester.go:274 End of file reached: /data/zeek/current/packet_filter(email,troubles,pcap,1578591121440846028).log. Closing because close_eof is enabled.
filebeat_1 | 2020-01-09T17:32:01.974Z INFO log/harvester.go:274 End of file reached: /data/zeek/current/conn(email,troubles,pcap,1578591121330261413,ZEEKFLDx00x03FFFFFF).log. Closing because close_eof is enabled.
filebeat_1 | 2020-01-09T17:32:01.975Z INFO log/harvester.go:274 End of file reached: /data/zeek/current/dhcp(email,troubles,pcap,1578591121367450021).log. Closing because close_eof is enabled.
filebeat_1 | 2020-01-09T17:32:02.974Z INFO pipeline/output.go:95 Connecting to backoff(async(tcp://logstash:5044))
filebeat_1 | 2020-01-09T17:32:02.974Z WARN tlscommon/tls_config.go:79 SSL/TLS verifications disabled.
filebeat_1 | 2020-01-09T17:32:03.289Z INFO pipeline/output.go:105 Connection to backoff(async(tcp://logstash:5044)) established
Compare that to when I upload a file type Malcolm doesn't recognize:
nginx-proxy_1 | 172.20.0.1 - tlacuache [09/Jan/2020:17:32:32 +0000] "POST /upload/server/php/ HTTP/1.1" 200 308 "https://localhost/upload/" "Mozilla/5.0 (X11; Linux x86_64; rv:72.0) Gecko/20100101 Firefox/72.0"
pcap-monitor_1 | Removed "/pcap/upload/Invoice - 1701.pdf", unhandled file type "application/pdf"
Are you seeing that "unhandled file type" message?
filebeat_1 isn't in the logs, and I'm not seeing the unhandled message. I do get a moloch_1 renamed 'filepath' -> 'filepath' message in the logs. This is when I upload on port 8443.
In that case, not all of the Docker containers that make up Malcolm are running. Run:
$ docker ps -a | grep malcolm
and see if any of them show an "exited" status.
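docker ps can also do the filtering for you with `docker ps -a --filter "status=exited"`; and if you've saved a listing to a file, a grep does the same job (the sample listing below is made up for illustration):

```shell
# Sample (hypothetical) 'docker ps -a' output saved for illustration:
cat > /tmp/ps.txt <<'EOF'
abc123  malcolmnetsec/filebeat-oss:1.8.1  Exited (1) 2 minutes ago  malcolm_filebeat_1
def456  malcolmnetsec/zeek:1.8.1          Up 5 minutes              malcolm_zeek_1
EOF
# Pull out only the containers that have exited:
grep -i 'exited' /tmp/ps.txt
```

On a live system, `docker ps -a --filter "status=exited"` avoids the intermediate file entirely.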
Then let's try to get the logs from filebeat and see why it's not running. Run:
$ docker-compose logs filebeat
Attaching to malcolm_filebeat_1
...
and let's see what errors are in there.
No exited containers. I have the following running:
nginx-proxy file-upload kibana-oss curator moloch elastalert logstash-oss elasticsearch-oss pcap-capture file-monitor htadmin
I'm tailing filebeat, but nothing is logged when I upload a file.
Can you tell me where the .pcap files go when I upload through the :8443 UI? I had an issue where I had to change permissions on the htadmin file for the user management UI to see it.
They're spooled into "pcap/upload" as they are written, then moved to "pcap/processed" once the inotify event shows the file is closed for writing, and they are processed there.
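A rough sketch of that spool-then-move flow (Malcolm reacts to inotify close-for-write events; this polling loop and the /tmp paths are only illustrative):

```shell
# Illustrative spool-then-move pattern; Malcolm's pcap-monitor does this
# in response to inotify IN_CLOSE_WRITE events rather than by polling.
UPLOAD=/tmp/pcap/upload
PROCESSED=/tmp/pcap/processed
mkdir -p "$UPLOAD" "$PROCESSED"
: > "$UPLOAD/demo.pcap"        # stand-in for a freshly uploaded capture
for f in "$UPLOAD"/*.pcap; do
  [ -e "$f" ] || continue
  # mirrors the log line: renamed '/pcap/upload/x.pcap' -> '/pcap/processed/x.pcap'
  mv -v "$f" "$PROCESSED/"
done
```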
Here's the list of containers that should be running, as of the latest release:
$ docker-compose ps
Name Command State Ports
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
malcolm_curator_1 /usr/local/bin/cron_env_deb.sh Up
malcolm_elastalert_1 /usr/local/bin/elastalert- ... Up (healthy) 3030/tcp, 3333/tcp
malcolm_elasticsearch_1 /usr/local/bin/docker-entr ... Up (healthy) 9200/tcp, 9300/tcp
malcolm_file-monitor_1 /usr/local/bin/supervisord ... Up 3310/tcp
malcolm_filebeat_1 /usr/local/bin/docker-entr ... Up
malcolm_htadmin_1 /usr/bin/supervisord -c /s ... Up 80/tcp
malcolm_kibana_1 /usr/local/bin/dumb-init - ... Up (healthy) 28991/tcp, 5601/tcp
malcolm_logstash_1 /usr/local/bin/logstash-st ... Up (healthy) 5000/tcp, 0.0.0.0:5044->5044/tcp, 9600/tcp
malcolm_moloch_1 /usr/bin/supervisord -c /e ... Up 8000/tcp, 8005/tcp, 8081/tcp
malcolm_nginx-proxy_1 /usr/local/bin/docker_entr ... Up 0.0.0.0:28991->28991/tcp, 0.0.0.0:3030->3030/tcp, 0.0.0.0:443->443/tcp, 0.0.0.0:488->488/tcp, 0.0.0.0:5601->5601/tcp, 80/tcp, 0.0.0.0:8443->8443/tcp, 0.0.0.0:9200->9200/tcp,
0.0.0.0:9600->9600/tcp
malcolm_pcap-capture_1 /usr/local/bin/supervisor.sh Up
malcolm_pcap-monitor_1 /usr/bin/supervisord -c /e ... Up 30441/tcp
malcolm_upload_1 /docker-entrypoint.sh /usr ... Up 127.0.0.1:8022->22/tcp, 80/tcp
malcolm_zeek_1 /usr/bin/supervisord -c /e ... Up
In other words, one for each section under the services section in the docker-compose.yml file:
elasticsearch
kibana
elastalert
curator
logstash
filebeat
moloch
zeek
file-monitor
pcap-capture
pcap-monitor
upload
htadmin
nginx-proxy
Can you clean everything (do a ./scripts/wipe.sh), then start everything up, and after a few moments attach the entire contents of the logs here? Maybe there are other weird permissions issues going on in your setup. What is your host platform/OS?
Platform is RHEL 7; I was given an offline installation of the platform. Looks like I don't have pcap-monitor or zeek included in the tarball, but they also aren't in the docker-compose. Unfortunately I won't be able to get logs here. Are there things I can look for?
Hmm, yeah, then this is for sure an older release, from before I split those services out into their own containers. But filebeat should have been there since the beginning. I'd run wipe.sh to clean everything off, then start everything back up, watch all of the logs, and see why some containers (at least filebeat) are exiting prematurely, based on the log messages the containers emit as they start up.
I don't see any containers exiting. Is there a more recent offline package available? The one I have is 1.6
Offline packages aren't published anywhere, but you can create your own using the following method from another box connected to the internet:
$ git clone --depth=1 --single-branch --branch master "https://github.com/idaholab/Malcolm"
Cloning into 'Malcolm'...
remote: Enumerating objects: 770, done.
remote: Counting objects: 100% (770/770), done.
remote: Compressing objects: 100% (636/636), done.
remote: Total 770 (delta 135), reused 548 (delta 45), pack-reused 0
Receiving objects: 100% (770/770), 15.27 MiB | 24.59 MiB/s, done.
Resolving deltas: 100% (135/135), done.
$ cd Malcolm
$ touch auth.env
$ docker-compose pull
Pulling elasticsearch ... done
Pulling kibana ... done
Pulling elastalert ... done
Pulling curator ... done
Pulling logstash ... done
Pulling filebeat ... done
Pulling moloch ... done
Pulling zeek ... done
Pulling file-monitor ... done
Pulling pcap-capture ... done
Pulling pcap-monitor ... done
Pulling upload ... done
Pulling htadmin ... done
Pulling nginx-proxy ... done
$ mkdir offline
$ cd offline
$ ../scripts/malcolm_appliance_packager.sh
You must set an administrator username and password for Malcolm, and self-signed X.509 certificates will be generated
Administrator username: admin
admin password:
admin password (again):
(Re)generate self-signed certificates for HTTPS access [Y/n]?
(Re)generate self-signed certificates for a remote log forwarder [Y/n]?
Store username/password for forwarding Logstash events to a secondary, external Elasticsearch instance [y/N]?
Packaged Malcolm to "/home/user/Malcolm/offline/malcolm_20191218_125942_33f7598.tar.gz"
Do you need to package docker images also [y/N]? y
This might take a few minutes...
Packaged Malcolm docker images to "/home/user/Malcolm/offline/malcolm_20191218_125942_33f7598_images.tar.gz"
To install Malcolm:
1. Run install.py
2. Follow the prompts
To start, stop, restart, etc. Malcolm:
Use the control scripts in the "scripts/" directory:
- start.sh (start Malcolm)
- stop.sh (stop Malcolm)
- restart.sh (restart Malcolm)
- logs.sh (monitor Malcolm logs)
- wipe.sh (stop Malcolm and clear its database)
- auth_setup.sh (change authentication-related settings)
A minute or so after starting Malcolm, the following services will be accessible:
- Moloch: https://localhost/
- Kibana: https://localhost/kibana/
- PCAP Upload (web): https://localhost/upload/
- PCAP Upload (sftp): sftp://USERNAME@127.0.0.1:8022/files/
- Account management: https://localhost:488/
So I tried the offline script; for some reason it's pulling 1.6 images and docker-compose.
From: SG notifications@github.com Sent: Thursday, January 9, 2020 2:29 PM To: idaholab/Malcolm Malcolm@noreply.github.com Cc: Stein, Joshua P. Joshua.Stein@jhuapl.edu; Author author@noreply.github.com Subject: [EXTERNAL] Re: [idaholab/Malcolm] File upload not Working (#98)
I'm not sure what to tell you without more information. If you followed my instructions, cloning the master branch of this project certainly doesn't reference 1.6 (see https://github.com/idaholab/Malcolm/blob/master/docker-compose.yml#L125), and doing a docker-compose pull would result in the 1.8.0 images being pulled.
I went through last night and re-tested the tutorial at the bottom of the documentation on a CentOS 7 box. Everything works fine. I'd recommend reading the documentation and my examples more carefully, and/or see if you can find somebody you work with who's more familiar with docker and docker-compose to see what's going wrong. If it's an offline installation, there's only so much I can do to help you troubleshoot.
Good luck!
Good Afternoon,
I upgraded to the newest version, 1.8, and I'm still having issues with the file upload. Could you tell me all of the bind mounts and correct permissions? I think my system umask could be interfering with the installation. A lot of the directories on the bind mounts are created with 755. I also noticed that when I restart the application, the htpasswd file keeps getting reverted to 755, and the container won't start until I add the write bit.
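For reference, the 755 directory modes follow directly from a 022 umask; what a given umask produces can be confirmed like this (temp paths, purely illustrative):

```shell
# Directories are created with mode 777 minus the umask:
# umask 022 -> 755, umask 002 -> 775.
d=$(mktemp -d)
(umask 022 && mkdir "$d/u022" && stat -c '%a' "$d/u022")   # prints 755
(umask 002 && mkdir "$d/u002" && stat -c '%a' "$d/u002")   # prints 775
```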
I set up a CentOS 7 virtual machine with Vagrant and went through an entire installation from scratch. Maybe some of this will help you.
[vagrant@localhost vagrant_shared]$ sudo python install.py
install.py requires the requests module under Python 2.7.5 (/bin/python)
System-wide installation varies by platform and Python configuration. Please consult platform-specific documentation for installing Python modules.
You *may* be able to install requests manually via: sudo yum install python-requests
[vagrant@localhost vagrant_shared]$ sudo yum install python-requests
Loaded plugins: fastestmirror
Loading mirror speeds from cached hostfile
* base: mirrors.syringanetworks.net
* extras: mirrors.syringanetworks.net
* updates: mirrors.syringanetworks.net
Resolving Dependencies
--> Running transaction check
---> Package python-requests.noarch 0:2.6.0-8.el7_7 will be installed
--> Processing Dependency: python-urllib3 >= 1.10.2-1 for package: python-requests-2.6.0-8.el7_7.noarch
--> Running transaction check
---> Package python-urllib3.noarch 0:1.10.2-7.el7 will be installed
--> Processing Dependency: python-six for package: python-urllib3-1.10.2-7.el7.noarch
--> Processing Dependency: python-ipaddress for package: python-urllib3-1.10.2-7.el7.noarch
--> Processing Dependency: python-backports-ssl_match_hostname for package: python-urllib3-1.10.2-7.el7.noarch
--> Running transaction check
---> Package python-backports-ssl_match_hostname.noarch 0:3.5.0.1-1.el7 will be installed
--> Processing Dependency: python-backports for package: python-backports-ssl_match_hostname-3.5.0.1-1.el7.noarch
---> Package python-ipaddress.noarch 0:1.0.16-2.el7 will be installed
---> Package python-six.noarch 0:1.9.0-2.el7 will be installed
--> Running transaction check
---> Package python-backports.x86_64 0:1.0-8.el7 will be installed
--> Finished Dependency Resolution
Dependencies Resolved
================================================================================
Package Arch Version Repository
Size
================================================================================
Installing:
python-requests noarch 2.6.0-8.el7_7 updates 95 k
Installing for dependencies:
python-backports x86_64 1.0-8.el7 base 5.8 k
python-backports-ssl_match_hostname noarch 3.5.0.1-1.el7 base 13 k
python-ipaddress noarch 1.0.16-2.el7 base 34 k
python-six noarch 1.9.0-2.el7 base 29 k
python-urllib3 noarch 1.10.2-7.el7 base 103 k
Transaction Summary
================================================================================
Install 1 Package (+5 Dependent packages)
Total download size: 279 k
Installed size: 1.0 M
Is this ok [y/d/N]: Downloading packages:
--------------------------------------------------------------------------------
Total 559 kB/s | 279 kB 00:00
Running transaction check
Running transaction test
Transaction test succeeded
Running transaction
Installing : python-ipaddress-1.0.16-2.el7.noarch 1/6
Installing : python-six-1.9.0-2.el7.noarch 2/6
Installing : python-backports-1.0-8.el7.x86_64 3/6
Installing : python-backports-ssl_match_hostname-3.5.0.1-1.el7.noarch 4/6
Installing : python-urllib3-1.10.2-7.el7.noarch 5/6
Installing : python-requests-2.6.0-8.el7_7.noarch 6/6
Verifying : python-urllib3-1.10.2-7.el7.noarch 1/6
Verifying : python-backports-ssl_match_hostname-3.5.0.1-1.el7.noarch 2/6
Verifying : python-requests-2.6.0-8.el7_7.noarch 3/6
Verifying : python-backports-1.0-8.el7.x86_64 4/6
Verifying : python-ipaddress-1.0.16-2.el7.noarch 5/6
Verifying : python-six-1.9.0-2.el7.noarch 6/6
Installed:
python-requests.noarch 0:2.6.0-8.el7_7
Dependency Installed:
python-backports.x86_64 0:1.0-8.el7
python-backports-ssl_match_hostname.noarch 0:3.5.0.1-1.el7
python-ipaddress.noarch 0:1.0.16-2.el7
python-six.noarch 0:1.9.0-2.el7
python-urllib3.noarch 0:1.10.2-7.el7
Complete!
[vagrant@localhost vagrant_shared]$ sudo python install.py
Installing required packages: ['httpd-tools', 'make', 'openssl']
"docker info" failed, attempt to install Docker? (Y/n): y
Attempt to install Docker using official repositories? (Y/n): y
Installing required packages: ['yum-utils', 'device-mapper-persistent-data', 'lvm2']
Installing docker packages: ['docker-ce', 'docker-ce-cli', 'containerd.io']
Installation of docker packages apparently succeeded
Add a non-root user to the "docker" group? (y/n): y
Enter user account: vagrant
Add another non-root user to the "docker" group? (y/n): n
"docker-compose version" failed, attempt to install docker-compose? (Y/n): y
Install docker-compose directly from docker github? (Y/n): y
Download and installation of docker-compose apparently succeeded
fs.file-max increases allowed maximum for file handles
fs.file-max= appears to be missing from /etc/sysctl.conf, append it? (Y/n): y
fs.inotify.max_user_watches increases allowed maximum for monitored files
fs.inotify.max_user_watches= appears to be missing from /etc/sysctl.conf, append it? (Y/n): y
fs.inotify.max_queued_events increases queue size for monitored files
fs.inotify.max_queued_events= appears to be missing from /etc/sysctl.conf, append it? (Y/n): y
fs.inotify.max_user_instances increases allowed maximum monitor file watchers
fs.inotify.max_user_instances= appears to be missing from /etc/sysctl.conf, append it? (Y/n): y
vm.max_map_count increases allowed maximum for memory segments
vm.max_map_count= appears to be missing from /etc/sysctl.conf, append it? (Y/n): y
net.core.somaxconn increases allowed maximum for socket connections
net.core.somaxconn= appears to be missing from /etc/sysctl.conf, append it? (Y/n): y
vm.swappiness adjusts the preference of the system to swap vs. drop runtime memory pages
vm.swappiness= appears to be missing from /etc/sysctl.conf, append it? (Y/n): y
vm.dirty_background_ratio defines the percentage of system memory fillable with "dirty" pages before flushing
vm.dirty_background_ratio= appears to be missing from /etc/sysctl.conf, append it? (Y/n): y
vm.dirty_ratio defines the maximum percentage of dirty system memory before committing everything
vm.dirty_ratio= appears to be missing from /etc/sysctl.conf, append it? (Y/n): y
/etc/systemd/system.conf.d/limits.conf increases the allowed maximums for file handles and memlocked segments
/etc/systemd/system.conf.d/limits.conf does not exist, create it? (Y/n): y
The "haveged" utility may help improve Malcolm startup times by providing entropy for the Linux kernel. Install haveged? (y/N): y
Installing haveged packages: ['haveged', 'https://dl.fedoraproject.org/pub/epel/epel-release-latest-7.noarch.rpm']
Installation of haveged packages apparently succeeded
Load Malcolm Docker images from /home/vagrant/vagrant_shared/malcolm_20200110_113308_2d09b51_images.tar.gz (Y/n): y
Extract Malcolm runtime files from /home/vagrant/vagrant_shared/malcolm_20200110_113308_2d09b51.tar.gz (Y/n): y
Enter installation path for Malcolm [/home/vagrant/vagrant_shared/malcolm]: /home/vagrant/malcolm
Malcolm runtime files extracted to /home/vagrant/malcolm
Detected only 8.0 GiB of memory; performance will be suboptimal
Setting 4g for Elasticsearch and 2500m for Logstash. Is this OK? (Y/n): y
Restart Malcolm upon system or Docker daemon restart? (y/N): y
Select Malcolm restart behavior ('no', 'on-failure', 'always', 'unless-stopped'): unless-stopped
Authenticate against Lightweight Directory Access Protocol (LDAP) server? (y/N):
Create daily snapshots (backups) of Elasticsearch indices? (y/N):
Periodically close old Elasticsearch indices? (y/N):
Periodically delete old Elasticsearch indices? (y/N):
Periodically delete the oldest Elasticsearch indices when the database exceeds a certain size? (y/N):
Automatically analyze all PCAP files with Zeek? (Y/n): y
Perform reverse DNS lookup locally for source and destination IP addresses in Zeek logs? (y/N):
Perform hardware vendor OUI lookups for MAC addresses? (Y/n):
Expose Logstash port to external hosts? (y/N):
Forward Logstash logs to external Elasticstack instance? (y/N):
Enable file extraction with Zeek? (y/N): y
Select file extraction behavior ('none', 'known', 'mapped', 'all', 'interesting'): interesting
Select file preservation behavior ('quarantined', 'all', 'none'): quarantined
Scan extracted files with ClamAV? (y/N): y
Download updated ClamAV virus signatures periodically? (Y/n): y
Should Malcolm capture network traffic to PCAP files? (y/N):
Malcolm has been installed to /home/vagrant/malcolm. See README.md for more information.
Scripts for starting and stopping Malcolm and changing authentication-related settings can be found in /home/vagrant/malcolm/scripts.
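If you ever need to apply the kernel settings the installer prompted for by hand on an existing host, an idempotent append looks like this (writing to a temp file here for safety; on a real host you'd target /etc/sysctl.conf and run sysctl -p as root; vm.max_map_count=262144 is the usual Elasticsearch minimum, and the other value is illustrative since the exact numbers install.py writes aren't shown above):

```shell
SYSCTL_CONF=/tmp/sysctl.conf   # use /etc/sysctl.conf on a real host
: > "$SYSCTL_CONF"             # start fresh for this demo

add_setting() {
  # append key=value only if the key is not already present
  grep -q "^$1=" "$SYSCTL_CONF" || echo "$1=$2" >> "$SYSCTL_CONF"
}

add_setting vm.max_map_count 262144          # Elasticsearch minimum
add_setting fs.inotify.max_user_watches 131072   # illustrative value
add_setting vm.max_map_count 262144          # repeat call is a no-op
# on a real host, follow with: sudo sysctl -p
cat "$SYSCTL_CONF"
```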
[vagrant@localhost ~]$ docker images
REPOSITORY TAG IMAGE ID CREATED SIZE
malcolmnetsec/moloch 1.8.1 aa8ef0ad01d4 3 days ago 651MB
malcolmnetsec/logstash-oss 1.8.1 e5aee2b0b827 3 days ago 1.06GB
malcolmnetsec/zeek 1.8.1 453268f192c4 3 days ago 231MB
malcolmnetsec/htadmin 1.8.1 26db6703ca44 3 days ago 255MB
malcolmnetsec/curator 1.8.1 71e8f03e5e0e 3 days ago 242MB
malcolmnetsec/pcap-capture 1.8.1 999dee6b8e2c 3 days ago 111MB
malcolmnetsec/kibana-oss 1.8.1 9efec06370f7 3 days ago 765MB
malcolmnetsec/filebeat-oss 1.8.1 0d670ba1ced1 3 days ago 472MB
malcolmnetsec/elastalert 1.8.1 f6a01ef0d467 3 days ago 392MB
malcolmnetsec/file-upload 1.8.1 2bb3de1b644a 3 days ago 198MB
malcolmnetsec/nginx-proxy 1.8.1 e793725c1023 3 days ago 126MB
malcolmnetsec/file-monitor 1.8.1 0d52fef86963 3 days ago 371MB
malcolmnetsec/pcap-monitor 1.8.1 882911b11b5e 3 days ago 156MB
docker.elastic.co/elasticsearch/elasticsearch-oss 7.5.1 9d2c1bab36fa 3 weeks ago 682MB
[vagrant@localhost malcolm]$ ./scripts/auth_setup.sh
Administrator username: admin
admin password:
admin password (again):
(Re)generate self-signed certificates for HTTPS access [Y/n]?
(Re)generate self-signed certificates for a remote log forwarder [Y/n]?
[vagrant@localhost malcolm]$ ./scripts/start.sh
Creating network "malcolm_default" with the default driver
Creating malcolm_file-monitor_1 ... done
Creating malcolm_htadmin_1 ... done
Creating malcolm_pcap-capture_1 ... done
Creating malcolm_elasticsearch_1 ... done
Creating malcolm_logstash_1 ... done
Creating malcolm_kibana_1 ... done
Creating malcolm_pcap-monitor_1 ... done
Creating malcolm_moloch_1 ... done
Creating malcolm_curator_1 ... done
Creating malcolm_zeek_1 ... done
Creating malcolm_elastalert_1 ... done
Creating malcolm_filebeat_1 ... done
Creating malcolm_upload_1 ... done
Creating malcolm_nginx-proxy_1 ... done
In a few minutes, Malcolm services will be accessible via the following URLs:
------------------------------------------------------------------------------
- Moloch: https://localhost/
- Kibana: https://localhost/kibana/
- PCAP Upload (web): https://localhost/upload/
- PCAP Upload (sftp): sftp://username@127.0.0.1:8022/files/
- Account management: https://localhost:488/
Name Command State Ports
-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
malcolm_curator_1 /usr/local/bin/cron_env_deb.sh Up
malcolm_elastalert_1 /usr/local/bin/elastalert- ... Up (health: starting) 3030/tcp, 3333/tcp
malcolm_elasticsearch_1 /usr/local/bin/docker-entr ... Up (health: starting) 9200/tcp, 9300/tcp
malcolm_file-monitor_1 /usr/local/bin/supervisord ... Up 3310/tcp
malcolm_filebeat_1 /usr/local/bin/docker-entr ... Up
malcolm_htadmin_1 /usr/bin/supervisord -c /s ... Up 80/tcp
malcolm_kibana_1 /usr/local/bin/dumb-init - ... Up (health: starting) 28991/tcp, 5601/tcp
malcolm_logstash_1 /usr/local/bin/logstash-st ... Up (health: starting) 5000/tcp, 5044/tcp, 9600/tcp
malcolm_moloch_1 /usr/bin/supervisord -c /e ... Up 8000/tcp, 8005/tcp, 8081/tcp
malcolm_nginx-proxy_1 /usr/local/bin/docker_entr ... Up 0.0.0.0:28991->28991/tcp, 0.0.0.0:3030->3030/tcp, 0.0.0.0:443->443/tcp, 0.0.0.0:488->488/tcp, 0.0.0.0:5601->5601/tcp, 80/tcp, 0.0.0.0:8443->8443/tcp,
0.0.0.0:9200->9200/tcp, 0.0.0.0:9600->9600/tcp
malcolm_pcap-capture_1 /usr/local/bin/supervisor.sh Up
malcolm_pcap-monitor_1 /usr/bin/supervisord -c /e ... Up 30441/tcp
malcolm_upload_1 /docker-entrypoint.sh /usr ... Up 127.0.0.1:8022->22/tcp, 80/tcp
malcolm_zeek_1 /usr/bin/supervisord -c /e ... Up
...
logstash_1 | [2020-01-13T19:26:13,969][INFO ][logstash.agent ] Pipelines running {:count=>4, :running_pipelines=>[:"malcolm-enrichment", :"malcolm-input", :"malcolm-output", :"malcolm-zeek"], :non_running_pipelines=>[]}
logstash_1 | [2020-01-13T19:26:14,229][INFO ][logstash.agent ] Successfully started Logstash API endpoint {:port=>9600}
debug log output:
nginx-proxy_1 | 10.0.2.2 - admin [13/Jan/2020:19:26:45 +0000] "POST /upload/server/php/ HTTP/1.1" 200 401 "https://localhost:21443/upload/" "Mozilla/5.0 (X11; Linux x86_64; rv:72.0) Gecko/20100101 Firefox/72.0"
pcap-monitor_1 | renamed '/pcap/upload/AUTOZEEK,AUTOCARVEinteresting,quic-only.pcap' -> '/pcap/processed/AUTOZEEK,AUTOCARVEinteresting,quic-only.pcap'
filebeat_1 | '/data/zeek//upload/AUTOZEEK,AUTOCARVEinteresting,quic-only.pcap-quic_only-1578943606180532.tar.gz' -> '/data/zeek/AUTOZEEK,AUTOCARVEinteresting,quic-only.pcap-quic_only-1578943606180532.tar.gz'
filebeat_1 | 2020-01-13T19:27:06.638Z INFO log/harvester.go:251 Harvester started for file: /data/zeek/current/gquic(quic,only,pcap,1578943622366520588).log
filebeat_1 | 2020-01-13T19:27:06.639Z INFO log/harvester.go:251 Harvester started for file: /data/zeek/current/known_services(quic,only,pcap,1578943622401216514).log
filebeat_1 | 2020-01-13T19:27:06.648Z INFO log/harvester.go:251 Harvester started for file: /data/zeek/current/packet_filter(quic,only,pcap,1578943622443171240).log
filebeat_1 | 2020-01-13T19:27:06.648Z INFO log/harvester.go:251 Harvester started for file: /data/zeek/current/conn(quic,only,pcap,1578943622320130693,ZEEKFLDx00x03FFFFFF).log
filebeat_1 | 2020-01-13T19:27:06.713Z INFO log/harvester.go:274 End of file reached: /data/zeek/current/packet_filter(quic,only,pcap,1578943622443171240).log. Closing because close_eof is enabled.
filebeat_1 | 2020-01-13T19:27:06.713Z INFO log/harvester.go:274 End of file reached: /data/zeek/current/known_services(quic,only,pcap,1578943622401216514).log. Closing because close_eof is enabled.
filebeat_1 | 2020-01-13T19:27:06.714Z INFO log/harvester.go:274 End of file reached: /data/zeek/current/conn(quic,only,pcap,1578943622320130693,ZEEKFLDx00x03FFFFFF).log. Closing because close_eof is enabled.
filebeat_1 | 2020-01-13T19:27:06.714Z INFO log/harvester.go:274 End of file reached: /data/zeek/current/gquic(quic,only,pcap,1578943622366520588).log. Closing because close_eof is enabled.
filebeat_1 | 2020-01-13T19:27:07.711Z INFO pipeline/output.go:95 Connecting to backoff(async(tcp://logstash:5044))
filebeat_1 | 2020-01-13T19:27:07.751Z INFO pipeline/output.go:105 Connection to backoff(async(tcp://logstash:5044)) established
Here are all of the file permissions at the end of the scenario I showed above:
[vagrant@localhost malcolm]$ find . -path ./elasticsearch -prune -o -type f -exec ls -l "{}" \;
-rw-r--r--. 1 vagrant vagrant 423 Jan 10 18:55 ./cidr-map.txt
-rw-r--r--. 1 vagrant vagrant 1599 Jan 10 18:55 ./elastalert/sample-rules/notice-email.yaml
-rw-r--r--. 1 vagrant vagrant 1518 Jan 10 18:55 ./elastalert/config/elastalert.yaml
-rw-r--r--. 1 vagrant vagrant 465 Jan 10 18:55 ./elastalert/config/config.json
-rw-------. 1 vagrant vagrant 75 Jan 10 18:55 ./elastalert/config/smtp-auth.yaml
-rwxr-xr-x. 1 vagrant vagrant 308 Jan 10 18:55 ./nginx/certs/gen_self_signed_certs.sh
-rw-rw-r--. 1 vagrant vagrant 424 Jan 13 19:20 ./nginx/certs/dhparam.pem
-rw-rw-r--. 1 vagrant vagrant 3272 Jan 13 19:20 ./nginx/certs/key.pem
-rw-rw-r--. 1 vagrant vagrant 1789 Jan 13 19:20 ./nginx/certs/cert.pem
-rw-r--r--. 1 vagrant vagrant 67 Jan 13 19:19 ./nginx/htpasswd
-rw-r--r--. 1 vagrant vagrant 671 Jan 10 18:56 ./nginx/nginx_ldap.conf
-rwxr-xr-x. 1 vagrant vagrant 1963 Jan 10 18:55 ./scripts/wipe.sh
-rwxr-xr-x. 1 vagrant vagrant 3589 Jan 10 18:55 ./scripts/start.sh
-rwxr-xr-x. 1 vagrant vagrant 1307 Jan 10 18:55 ./scripts/restart.sh
-rwxr-xr-x. 1 vagrant vagrant 74308 Jan 10 18:55 ./scripts/install.py
-rwxr-xr-x. 1 vagrant vagrant 1856 Jan 10 18:55 ./scripts/logs.sh
-rwxr-xr-x. 1 vagrant vagrant 1649 Jan 10 18:55 ./scripts/stop.sh
-rwxr-xr-x. 1 vagrant vagrant 6254 Jan 10 18:55 ./scripts/auth_setup.sh
-rw-r--r--. 1 vagrant vagrant 314 Jan 13 19:21 ./zeek-logs/current/signatures(_carved).log
-rw-r--r--. 1 vagrant vagrant 445 Jan 13 19:19 ./htadmin/config.ini
-rw-r--r--. 1 vagrant vagrant 0 Jan 13 19:19 ./htadmin/metadata
-rw-r--r--. 1 vagrant vagrant 441 Jan 10 18:55 ./host-map.txt
-rw-r--r--. 1 vagrant vagrant 507 Jan 10 18:55 ./logstash/certs/server.conf
-rw-r--r--. 1 vagrant vagrant 902 Jan 10 18:55 ./logstash/certs/client.conf
-rw-r--r--. 1 vagrant vagrant 1046 Jan 10 18:55 ./logstash/certs/Makefile
-rw-rw-r--. 1 vagrant vagrant 1675 Jan 13 19:20 ./logstash/certs/ca.key
-rw-rw-r--. 1 vagrant vagrant 1192 Jan 13 19:20 ./logstash/certs/ca.crt
-rw-rw-r--. 1 vagrant vagrant 1139 Jan 13 19:20 ./logstash/certs/server.crt
-rw-rw-r--. 1 vagrant vagrant 1704 Jan 13 19:20 ./logstash/certs/server.key
-rw-r--r--. 1 vagrant vagrant 166962 Jan 10 18:55 ./README.md
-rw-r--r--. 1 root root 2764 Jan 13 19:25 ./moloch-logs/wise.log
-rw-r--r--. 1 vagrant vagrant 209 Jan 13 19:22 ./moloch-logs/viewer.log
-rw-------. 1 vagrant vagrant 189 Jan 13 19:19 ./auth.env
-rw-rw-r--. 1 vagrant vagrant 1192 Jan 13 19:20 ./filebeat/certs/ca.crt
-rw-rw-r--. 1 vagrant vagrant 1675 Jan 13 19:20 ./filebeat/certs/client.key
-rw-rw-r--. 1 vagrant vagrant 1464 Jan 13 19:20 ./filebeat/certs/client.crt
-rw-r--r--. 1 root root 78 Jan 13 19:13 ./install_source.txt
-rw-r--r--. 1 vagrant vagrant 12014 Jan 13 19:14 ./docker-compose.yml
Browsing to the web interface for Moloch/Kibana, I see my data has been inserted.
I think I figured it out; I had to re-permission some of the bind mounts. Your log output was very helpful in troubleshooting. I re-permissioned the bind mounts for pcap-monitor and filebeat; the original permissions on those dirs were 755, and I changed them to 766.
I also had to change the permissions on the htadmin bind mounts to 666 for the web UI to work.
I think the issue is resolved for now.
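For anyone hitting the same thing, the repair described above amounts to something like this (the paths are hypothetical and depend on your install directory; group ownership with 775/664 would be a tighter alternative to 766/666):

```shell
# Hypothetical layout; the mkdir/touch lines exist only to make this
# sketch self-contained -- on a real install the paths already exist.
MALCOLM=${MALCOLM:-/tmp/malcolm-demo}   # point at your Malcolm install dir
mkdir -p "$MALCOLM/pcap/upload" "$MALCOLM/pcap/processed" "$MALCOLM/htadmin"
touch "$MALCOLM/htadmin/config.ini" "$MALCOLM/htadmin/metadata"

# dirs the pcap-monitor/filebeat containers need to write into
chmod 766 "$MALCOLM/pcap/upload" "$MALCOLM/pcap/processed"
# files the htadmin container needs to write
chmod 666 "$MALCOLM/htadmin/config.ini" "$MALCOLM/htadmin/metadata"
```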
Seems the file upload feature isn't working correctly. I've tried using both the https:///upload
and https://:8443 interfaces. Still can't see pcap files in Moloch.
When I upload a 40 MB file into https:///upload/ I receive an nginx 413 error.
When I try the :8443 interface, I can see the file makes it to the bind mount, but it doesn't seem to get distributed past there.