maliceio / malice

VirusTotal Wanna Be - Now with 100% more Hipster
Apache License 2.0

WEBUI not starting properly #78

Closed changemenemo closed 5 years ago

changemenemo commented 5 years ago

Describe the bug

I get the Elasticsearch welcome page and nothing else.

To Reproduce

Steps to reproduce the behavior: run malice elk on macOS, with malice 0.3.24.

Expected behavior

The page shown in the screenshot on the wiki page.

Environment (please complete the following information):

Output of docker version:

Client:
 Version:           18.06.1-ce
 API version:       1.38
 Go version:        go1.10.3
 Git commit:        e68fc7a
 Built:             Tue Aug 21 17:21:31 2018
 OS/Arch:           darwin/amd64
 Experimental:      false

Server:
 Engine:
  Version:          18.06.1-ce
  API version:      1.38 (minimum version 1.12)
  Go version:       go1.10.3
  Git commit:       e68fc7a
  Built:            Tue Aug 21 17:29:02 2018
  OS/Arch:          linux/amd64
  Experimental:     true

Output of docker info:

Containers: 3
 Running: 3
 Paused: 0
 Stopped: 0
Images: 11
Server Version: 18.06.1-ce
Storage Driver: overlay2
 Backing Filesystem: extfs
 Supports d_type: true
 Native Overlay Diff: true
Logging Driver: json-file
Cgroup Driver: cgroupfs
Plugins:
 Volume: local
 Network: bridge host ipvlan macvlan null overlay
 Log: awslogs fluentd gcplogs gelf journald json-file logentries splunk syslog
Swarm: inactive
Runtimes: runc
Default Runtime: runc
Init Binary: docker-init
containerd version: 468a545b9edcd5932818eb9de8e72413e616e86e
runc version: 69663f0bd4b60df09991c08812a60108003fa340
init version: fec3683
Security Options:
 seccomp
  Profile: default
Kernel Version: 4.9.93-linuxkit-aufs
Operating System: Docker for Mac
OSType: linux
Architecture: x86_64
CPUs: 4
Total Memory: 1.952GiB
Name: linuxkit-025000000001
ID: 33IC:TYMG:FLK6:NHUZ:QJCU:RD7R:5OOJ:NIBY:PSBG:MAPS:GEGL:3RDE
Docker Root Dir: /var/lib/docker
Debug Mode (client): false
Debug Mode (server): true
 File Descriptors: 46
 Goroutines: 74
 System Time: 2018-11-17T15:42:50.598099279Z
 EventsListeners: 2
HTTP Proxy: gateway.docker.internal:3128
HTTPS Proxy: gateway.docker.internal:3129
Registry: https://index.docker.io/v1/
Labels:
Experimental: true
Insecure Registries:
 127.0.0.0/8
Live Restore Enabled: false

Additional environment details (AWS, VirtualBox, physical, Docker For Mac, Docker Toolbox, docker-machine, etc.): Docker for Mac; VirtualBox is also present.

Additional context: screenshot attached (screen shot 2018-11-17 at 16 39 19)

blacktop commented 5 years ago

What is in your config.toml (~/.malice/config/config.toml)?

blacktop commented 5 years ago

I did a lot of fixing/reworking of the ELK images this weekend. If you do a

docker pull malice/elasticsearch:6.4
docker pull malice/kibana:6.4

It should fix it for you.

Make sure you remove those old running containers (note: you will lose any data in them, I think)

docker rm -f malice-elastic malice-kibana
changemenemo commented 5 years ago

Okay, so I've upgraded it. Something is taking way more CPU than before when creating malice-elastic:

FATA[0023] failed to start to database: connecting to elasticsearch timed out after 20 seconds: failed to ping elasticsearch: Get http://localhost:9200/: EOF

I always need to start it twice. To answer your previous question: I didn't change anything in the configuration file (I didn't know that I had to?).


#######################################################################
# MALICE Configuration ################################################
#######################################################################

title = "Malice Runtime Configuration"
version = "v0.3.24"

[author]
  name = "blacktop"
  organization = "MaliceIO"

[web]
  url = "0.0.0.0:80"
  admin_url = "127.0.0.1:3333"

[email]
  host = "smtp.example.com"
  port = 25
  user = "username"
  pass = "password"

[database]
  name = "malice-elastic"
  image = "malice/elasticsearch:6.4"
  url = "http://localhost:9200"
  # url = "http://elasticsearch:9200"
  username = ""
  password = ""
  ports = [9200]
  timeout = 20
  enabled = true

[ui]
  name = "malice-kibana"
  image = "malice/kibana:6.4"
  server = "localhost"
  ports = [443]
  enabled = true

[environment]
  run = "development"

[docker]
  machine-name = "malice"
  endpoint = "tcp://localhost:2376"
  timeout = 120
  binds = "malice:/malware:ro"
  links = "malice-elastic:elasticsearch"
  cpu = 500000000
  memory = 524288000

[logger]
  filename = "malice.log"
  maxsize = 10
  maxage = 30
  maxbackups = 7
  localtime = false

[proxy]
  enable = false
  http = ""
  https = ""

I still get the same screen on the web UI as before.
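One thing worth checking against the 20-second timeout error reported above: the [database] section of this config caps the Elasticsearch ping wait at timeout = 20, which matches the figure in the error message. On a slower Docker for Mac VM it may help to raise it; a sketch of the change (assuming malice honors this field, which the matching 20-second figure suggests but the thread does not confirm):

```toml
[database]
  name = "malice-elastic"
  image = "malice/elasticsearch:6.4"
  url = "http://localhost:9200"
  # raise the ping timeout from 20s; Elasticsearch can take longer than 20s
  # to come up inside a memory-constrained Docker for Mac VM
  timeout = 60
```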

blacktop commented 5 years ago

did you remove the running containers?

blacktop commented 5 years ago

also what does this cmd output:

$ docker logs -f malice-elastic
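Alongside docker logs, the startup race behind the "timed out after 20 seconds" error can be checked from the host with a small poll loop. This is a sketch, not part of malice; wait_for_es is a hypothetical helper name, and it assumes Elasticsearch is published on localhost:9200 as in the config above:

```shell
#!/bin/sh
# wait_for_es: poll an Elasticsearch URL until it answers or attempts run out.
# Usage: wait_for_es [url] [attempts]  (defaults: http://localhost:9200, 20)
wait_for_es() {
  url="${1:-http://localhost:9200}"
  attempts="${2:-20}"
  i=1
  while [ "$i" -le "$attempts" ]; do
    # -f: fail on HTTP errors, -sS: silent but still print real errors
    if curl -fsS "$url" >/dev/null 2>&1; then
      echo "elasticsearch is up at $url"
      return 0
    fi
    sleep 1
    i=$((i + 1))
  done
  echo "timed out waiting for elasticsearch at $url" >&2
  return 1
}
```

Running wait_for_es right after malice elk would show whether the container simply needs more than 20 seconds to start answering on port 9200.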
blacktop commented 5 years ago

I was also able to reproduce problems with elasticsearch:6.4, so I have updated the config to use 6.5 now. Can you let me know if that fixes it for you?

blacktop commented 5 years ago

If the new release doesn't fix your issue, please re-open. Thank you for your contribution! 👍

https://github.com/maliceio/malice/releases

changemenemo commented 5 years ago
[2018-11-19T17:10:58,331][INFO ][o.e.n.Node               ] [] initializing ...
[2018-11-19T17:10:58,409][INFO ][o.e.e.NodeEnvironment    ] [E2F-SSQ] using [1] data paths, mounts [[/usr/share/elasticsearch/data (/dev/sda1)]], net usable_space [48.1gb], net total_space [58.4gb], types [ext4]
[2018-11-19T17:10:58,409][INFO ][o.e.e.NodeEnvironment    ] [E2F-SSQ] heap size [990.7mb], compressed ordinary object pointers [true]
[2018-11-19T17:10:58,411][INFO ][o.e.n.Node               ] [E2F-SSQ] node name derived from node ID [E2F-SSQcSgip1JUxcYTW6g]; set [node.name] to override
[2018-11-19T17:10:58,411][INFO ][o.e.n.Node               ] [E2F-SSQ] version[6.4.0], pid[1], build[oss/tar/595516e/2018-08-17T23:18:47.308994Z], OS[Linux/4.9.125-linuxkit/amd64], JVM[Oracle Corporation/OpenJDK 64-Bit Server VM/1.8.0_171/25.171-b11]
[2018-11-19T17:10:58,411][INFO ][o.e.n.Node               ] [E2F-SSQ] JVM arguments [-Xms1g, -Xmx1g, -XX:+UseConcMarkSweepGC, -XX:CMSInitiatingOccupancyFraction=75, -XX:+UseCMSInitiatingOccupancyOnly, -XX:+AlwaysPreTouch, -Xss1m, -Djava.awt.headless=true, -Dfile.encoding=UTF-8, -Djna.nosys=true, -XX:-OmitStackTraceInFastThrow, -Dio.netty.noUnsafe=true, -Dio.netty.noKeySetOptimization=true, -Dio.netty.recycler.maxCapacityPerThread=0, -Dlog4j.shutdownHookEnabled=false, -Dlog4j2.disable.jmx=true, -Djava.io.tmpdir=/usr/share/elasticsearch/tmp, -XX:+HeapDumpOnOutOfMemoryError, -XX:HeapDumpPath=data, -XX:ErrorFile=logs/hs_err_pid%p.log, -XX:+PrintGCDetails, -XX:+PrintGCDateStamps, -XX:+PrintTenuringDistribution, -XX:+PrintGCApplicationStoppedTime, -Xloggc:logs/gc.log, -XX:+UseGCLogFileRotation, -XX:NumberOfGCLogFiles=32, -XX:GCLogFileSize=64m, -Des.cgroups.hierarchy.override=/, -Des.path.home=/usr/share/elasticsearch, -Des.path.conf=/usr/share/elasticsearch/config, -Des.distribution.flavor=oss, -Des.distribution.type=tar]
[2018-11-19T17:10:59,328][INFO ][o.e.p.PluginsService     ] [E2F-SSQ] loaded module [aggs-matrix-stats]
[2018-11-19T17:10:59,328][INFO ][o.e.p.PluginsService     ] [E2F-SSQ] loaded module [analysis-common]
[2018-11-19T17:10:59,328][INFO ][o.e.p.PluginsService     ] [E2F-SSQ] loaded module [ingest-common]
[2018-11-19T17:10:59,328][INFO ][o.e.p.PluginsService     ] [E2F-SSQ] loaded module [lang-expression]
[2018-11-19T17:10:59,329][INFO ][o.e.p.PluginsService     ] [E2F-SSQ] loaded module [lang-mustache]
[2018-11-19T17:10:59,329][INFO ][o.e.p.PluginsService     ] [E2F-SSQ] loaded module [lang-painless]
[2018-11-19T17:10:59,329][INFO ][o.e.p.PluginsService     ] [E2F-SSQ] loaded module [mapper-extras]
[2018-11-19T17:10:59,329][INFO ][o.e.p.PluginsService     ] [E2F-SSQ] loaded module [parent-join]
[2018-11-19T17:10:59,329][INFO ][o.e.p.PluginsService     ] [E2F-SSQ] loaded module [percolator]
[2018-11-19T17:10:59,329][INFO ][o.e.p.PluginsService     ] [E2F-SSQ] loaded module [rank-eval]
[2018-11-19T17:10:59,329][INFO ][o.e.p.PluginsService     ] [E2F-SSQ] loaded module [reindex]
[2018-11-19T17:10:59,329][INFO ][o.e.p.PluginsService     ] [E2F-SSQ] loaded module [repository-url]
[2018-11-19T17:10:59,329][INFO ][o.e.p.PluginsService     ] [E2F-SSQ] loaded module [transport-netty4]
[2018-11-19T17:10:59,329][INFO ][o.e.p.PluginsService     ] [E2F-SSQ] loaded module [tribe]
[2018-11-19T17:10:59,330][INFO ][o.e.p.PluginsService     ] [E2F-SSQ] no plugins loaded
[2018-11-19T17:11:01,622][WARN ][o.e.d.s.ScriptModule     ] Script: returning default values for missing document values is deprecated. Set system property '-Des.scripting.exception_for_missing_value=true' to make behaviour compatible with future major versions.
[2018-11-19T17:11:03,839][INFO ][o.e.d.DiscoveryModule    ] [E2F-SSQ] using discovery type [zen]
[2018-11-19T17:11:04,289][INFO ][o.e.n.Node               ] [E2F-SSQ] initialized
[2018-11-19T17:11:04,289][INFO ][o.e.n.Node               ] [E2F-SSQ] starting ...
[2018-11-19T17:11:04,468][INFO ][o.e.t.TransportService   ] [E2F-SSQ] publish_address {172.17.0.3:9300}, bound_addresses {0.0.0.0:9300}
[2018-11-19T17:11:04,476][INFO ][o.e.b.BootstrapChecks    ] [E2F-SSQ] bound or publishing to a non-loopback address, enforcing bootstrap checks
[2018-11-19T17:11:07,539][INFO ][o.e.c.s.MasterService    ] [E2F-SSQ] zen-disco-elected-as-master ([0] nodes joined)[, ], reason: new_master {E2F-SSQ}{E2F-SSQcSgip1JUxcYTW6g}{uwiCs4OzS8esfx20ln2kug}{172.17.0.3}{172.17.0.3:9300}
[2018-11-19T17:11:07,544][INFO ][o.e.c.s.ClusterApplierService] [E2F-SSQ] new_master {E2F-SSQ}{E2F-SSQcSgip1JUxcYTW6g}{uwiCs4OzS8esfx20ln2kug}{172.17.0.3}{172.17.0.3:9300}, reason: apply cluster state (from master [master {E2F-SSQ}{E2F-SSQcSgip1JUxcYTW6g}{uwiCs4OzS8esfx20ln2kug}{172.17.0.3}{172.17.0.3:9300} committed version [1] source [zen-disco-elected-as-master ([0] nodes joined)[, ]]])
[2018-11-19T17:11:07,597][INFO ][o.e.h.n.Netty4HttpServerTransport] [E2F-SSQ] publish_address {172.17.0.3:9200}, bound_addresses {0.0.0.0:9200}
[2018-11-19T17:11:07,597][INFO ][o.e.n.Node               ] [E2F-SSQ] started
[2018-11-19T17:11:07,609][INFO ][o.e.g.GatewayService     ] [E2F-SSQ] recovered [0] indices into cluster_state
[2018-11-20T02:12:42,412][WARN ][o.e.m.j.JvmGcMonitorService] [E2F-SSQ] [gc][young][32377][14] duration [1.2s], collections [1]/[1.8s], total [1.2s]/[2.2s], memory [428.7mb]->[179.7mb]/[990.7mb], all_pools {[young] [266.2mb]->[745.8kb]/[266.2mb]}{[survivor] [45.1kb]->[16.7mb]/[33.2mb]}{[old] [162.4mb]->[162.4mb]/[691.2mb]}
[2018-11-20T02:12:42,585][WARN ][o.e.m.j.JvmGcMonitorService] [E2F-SSQ] [gc][32377] overhead, spent [1.2s] collecting in the last [1.8s]
[2018-11-20T03:08:26,660][INFO ][o.e.m.j.JvmGcMonitorService] [E2F-SSQ] [gc][35714] overhead, spent [405ms] collecting in the last [1s]
[2018-11-20T05:00:47,885][INFO ][o.e.m.j.JvmGcMonitorService] [E2F-SSQ] [gc][42437] overhead, spent [442ms] collecting in the last [1s]
[2018-11-20T05:56:51,780][INFO ][o.e.m.j.JvmGcMonitorService] [E2F-SSQ] [gc][45794] overhead, spent [344ms] collecting in the last [1s]
[2018-11-20T17:18:31,069][INFO ][o.e.m.j.JvmGcMonitorService] [E2F-SSQ] [gc][86425] overhead, spent [286ms] collecting in the last [1s]
[2018-11-21T13:56:19,841][INFO ][o.e.m.j.JvmGcMonitorService] [E2F-SSQ] [gc][160248] overhead, spent [465ms] collecting in the last [1s]
[2018-11-21T14:52:41,323][INFO ][o.e.m.j.JvmGcMonitorService] [E2F-SSQ] [gc][163620] overhead, spent [345ms] collecting in the last [1.3s]
[2018-11-24T12:03:27,912][INFO ][o.e.m.j.JvmGcMonitorService] [E2F-SSQ] [gc][412031] overhead, spent [287ms] collecting in the last [1s]
blacktop commented 5 years ago

If you upgrade malice, it should ask you to "upgrade" your local config (~/.malice/config/config.toml) too, but I haven't tested that feature a lot, so it might not auto-replace it properly. If it doesn't, you can just rm -rf ~/.malice to remove the whole config folder and try again.
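Since rm -rf ~/.malice wipes any local settings, it can be worth copying the config aside first. A minimal sketch (backup_config is a hypothetical helper, not a malice command):

```shell
#!/bin/sh
# backup_config: copy a config file aside before a destructive reset such as
# "rm -rf ~/.malice", so the old settings can still be consulted afterwards.
# Prints the path of the backup copy on success.
backup_config() {
  cfg="$1"
  if [ ! -f "$cfg" ]; then
    echo "no config at $cfg" >&2
    return 1
  fi
  dest="${cfg}.bak"
  cp "$cfg" "$dest" && printf '%s\n' "$dest"
}
```

For example, backup_config ~/.malice/config/config.toml before the rm -rf, then let the next malice run regenerate a fresh default config.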

changemenemo commented 5 years ago

Okay, yeah, the 6.5 version was the solution. Thanks!