SpecterOps / Nemesis

An offensive data enrichment pipeline
https://specterops.github.io/Nemesis/

The Job "kibana-init-job" is invalid #11

Closed · phughesion closed this issue 1 year ago

phughesion commented 1 year ago

Still working on startup. Different problem this time:

 - nemesis-dotnet -> nemesis-dotnet:10470eb238e7aec743cc3fc6bf1e38b7656166252c28c0f818be1825e6b46c01
 - nemesis-nlp -> nemesis-nlp:6c0759a8213f2588ed46ed55232c2c16502064763df4d6b59881adff0ea77d65
 - nemesis-passwordcracker -> nemesis-passwordcracker:229065b28335f812549d98b92156354f817472268edd15e7cb0217fccf0a609c
 - tensorflow-serving -> tensorflow-serving:7448c93455e4df4c8c67b00b6350af5229b15e450150469a6bb91983045e8963
 - enrichment -> enrichment:bc679e824163732d729733adfa67a231ab1dbe929f36097ffc41bc2fceb15359
 - dashboard -> dashboard:498e4e386f0c376524da23f127ebc2cfddbcd8405d719fef004c8ad9c8d94852
Checking cache...
 - nemesis-dotnet: Found. Tagging
 - nemesis-nlp: Found. Tagging
 - nemesis-passwordcracker: Found. Tagging
 - tensorflow-serving: Found. Tagging
 - enrichment: Found. Tagging
 - dashboard: Found. Tagging
Starting test...
Tags used in deployment:
 - nemesis-dotnet -> nemesis-dotnet:180062b33f54b8634cfa41264774b1df17e0bd46be83188cdc09cda9be2f64cb
 - nemesis-nlp -> nemesis-nlp:03931c0c9ff847e782a6e7f71fafe20ac8069091f68dbcdf706c42f695c4957d
 - nemesis-passwordcracker -> nemesis-passwordcracker:55964289062d11cceafeab9fbded76d7b3ad63bd84e22669fe5829cb91d1b54f
 - tensorflow-serving -> tensorflow-serving:2151cd9bc22cf56b2b77ddf97f92a44f7a260dd3bf562be7997ecdd5c86ea461
 - enrichment -> enrichment:b42b5f6b55e021d6b9896e39c4974a30da752bac0411cfbb122fdcfe1458a797
 - dashboard -> dashboard:22c5b33b6edea81f123938cb5aa5cb4bab5ddf2bfc5b02cbfb2070788f518051
Starting deploy...
 - elasticsearch.elasticsearch.k8s.elastic.co/nemesis unchanged
 - persistentvolume/elasticsearch-data-pv unchanged
 - persistentvolumeclaim/elasticsearch-data-pvc unchanged
 - ingress.networking.k8s.io/kibana-ingress unchanged
 - ingress.networking.k8s.io/elastic-ingress unchanged
 - kibana.kibana.k8s.elastic.co/nemesis unchanged
 - configmap/kibana unchanged
 - The Job "kibana-init-job" is invalid: spec.template: Invalid value: core.PodTemplateSpec{ObjectMeta:v1.ObjectMeta{Name:"", GenerateName:"", Namespace:"", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"batch.kubernetes.io/controller-uid":"6d6059a7-aa97-48cd-9c7a-59111b118778", "batch.kubernetes.io/job-name":"kibana-init-job", "controller-uid":"6d6059a7-aa97-48cd-9c7a-59111b118778", "job-name":"kibana-init-job"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:core.PodSpec{Volumes:[]core.Volume{core.Volume{Name:"config", VolumeSource:core.VolumeSource{HostPath:(*core.HostPathVolumeSource)(nil), EmptyDir:(*core.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*core.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*core.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*core.GitRepoVolumeSource)(nil), Secret:(*core.SecretVolumeSource)(nil), NFS:(*core.NFSVolumeSource)(nil), ISCSI:(*core.ISCSIVolumeSource)(nil), Glusterfs:(*core.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*core.PersistentVolumeClaimVolumeSource)(nil), RBD:(*core.RBDVolumeSource)(nil), Quobyte:(*core.QuobyteVolumeSource)(nil), FlexVolume:(*core.FlexVolumeSource)(nil), Cinder:(*core.CinderVolumeSource)(nil), CephFS:(*core.CephFSVolumeSource)(nil), Flocker:(*core.FlockerVolumeSource)(nil), DownwardAPI:(*core.DownwardAPIVolumeSource)(nil), FC:(*core.FCVolumeSource)(nil), AzureFile:(*core.AzureFileVolumeSource)(nil), ConfigMap:(*core.ConfigMapVolumeSource)(0xc0010a31c0), VsphereVolume:(*core.VsphereVirtualDiskVolumeSource)(nil), AzureDisk:(*core.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*core.PhotonPersistentDiskVolumeSource)(nil), Projected:(*core.ProjectedVolumeSource)(nil), PortworxVolume:(*core.PortworxVolumeSource)(nil), ScaleIO:(*core.ScaleIOVolumeSource)(nil), StorageOS:(*core.StorageOSVolumeSource)(nil), CSI:(*core.CSIVolumeSource)(nil), Ephemeral:(*core.EphemeralVolumeSource)(nil)}}}, InitContainers:[]core.Container{core.Container{Name:"kibana-waiter", Image:"alpine:3.17.0", Command:[]string{"/bin/sh", "-c"}, Args:[]string{"apk --update add jq curl\n\necho \"Waiting for Kibana...\"\n\nCOUNTER=1\nURL=\"${KIBANA_URL}/api/status\"\nuntil curl --silent --user \"${ELASTICSEARCH_USER}:${ELASTICSEARCH_PASSWORD}\" --insecure --max-time 5 \\\n  $URL \\\n  | jq '.status.overall.level' \\\n  | grep \"available\" > /dev/null; do\n  echo \"Retry ${COUNTER}: Waiting for Kibana at ${URL}...\"\n  COUNTER=`expr ${COUNTER} + 1`\n  sleep 5\ndone\n\necho \"Kibana available.\"\n"}, WorkingDir:"", Ports:[]core.ContainerPort(nil), EnvFrom:[]core.EnvFromSource(nil), Env:[]core.EnvVar{core.EnvVar{Name:"ELASTICSEARCH_USER", Value:"", ValueFrom:(*core.EnvVarSource)(0xc005599b80)}, core.EnvVar{Name:"ELASTICSEARCH_PASSWORD", Value:"", ValueFrom:(*core.EnvVarSource)(0xc005599bc0)}, core.EnvVar{Name:"KIBANA_URL", Value:"http://nemesis-kb-http.default.svc.cluster.local:5601/", ValueFrom:(*core.EnvVarSource)(nil)}}, Resources:core.ResourceRequirements{Limits:core.ResourceList(nil), Requests:core.ResourceList(nil), Claims:[]core.ResourceClaim(nil)}, ResizePolicy:[]core.ContainerResizePolicy(nil), VolumeMounts:[]core.VolumeMount(nil), VolumeDevices:[]core.VolumeDevice(nil), LivenessProbe:(*core.Probe)(nil), ReadinessProbe:(*core.Probe)(nil), 
StartupProbe:(*core.Probe)(nil), Lifecycle:(*core.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*core.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, Containers:[]core.Container{core.Container{Name:"kibana-init", Image:"alpine:3.17.0", Command:[]string{"/bin/sh", "-c"}, Args:[]string{"apk --update add curl\n\ncurl \"${KIBANA_URL}/api/saved_objects/_import?overwrite=true\" \\\n    --form file=@/etc/kibana-config/saved-objects.ndjson \\\n    --insecure \\\n    --silent \\\n    --fail \\\n    --show-error \\\n    -H \"kbn-xsrf: true\" \\\n    -o /dev/null \\\n    --user \"${ELASTICSEARCH_USER}:${ELASTICSEARCH_PASSWORD}\"\n\ncurl -X PUT \"${ELASTIC_URL}/file_data_plaintext\" -H \"Content-Type: application/json\" -d'\n    {\n      \"settings\": {\n        \"index\" : {\n          \"highlight.max_analyzed_offset\" : 10000000\n        }\n      }\n    }\n    '\n\ncurl -X PUT \"${ELASTIC_URL}/_cluster/settings\" -H \"Content-Type: application/json\" -d'\n    {\n      \"persistent\": {\n        \"search.max_async_search_response_size\": \"50mb\"\n      }\n    }\n    '\n"}, WorkingDir:"", Ports:[]core.ContainerPort(nil), EnvFrom:[]core.EnvFromSource(nil), Env:[]core.EnvVar{core.EnvVar{Name:"ELASTIC_URL", Value:"http://nemesis-es-http.default.svc.cluster.local:9200/", ValueFrom:(*core.EnvVarSource)(nil)}, core.EnvVar{Name:"KIBANA_URL", Value:"http://nemesis-kb-http.default.svc.cluster.local:5601/", ValueFrom:(*core.EnvVarSource)(nil)}, core.EnvVar{Name:"ELASTICSEARCH_USER", Value:"", ValueFrom:(*core.EnvVarSource)(0xc005599cc0)}, core.EnvVar{Name:"ELASTICSEARCH_PASSWORD", Value:"", ValueFrom:(*core.EnvVarSource)(0xc005599ce0)}}, Resources:core.ResourceRequirements{Limits:core.ResourceList(nil), Requests:core.ResourceList(nil), Claims:[]core.ResourceClaim(nil)}, ResizePolicy:[]core.ContainerResizePolicy(nil), VolumeMounts:[]core.VolumeMount{core.VolumeMount{Name:"config", ReadOnly:false, MountPath:"/etc/kibana-config", SubPath:"", MountPropagation:(*core.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]core.VolumeDevice(nil), LivenessProbe:(*core.Probe)(nil), ReadinessProbe:(*core.Probe)(nil), StartupProbe:(*core.Probe)(nil), Lifecycle:(*core.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*core.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, EphemeralContainers:[]core.EphemeralContainer(nil), RestartPolicy:"Never", TerminationGracePeriodSeconds:(*int64)(0xc00b464c50), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string(nil), ServiceAccountName:"", AutomountServiceAccountToken:(*bool)(nil), NodeName:"", SecurityContext:(*core.PodSecurityContext)(0xc00a933440), ImagePullSecrets:[]core.LocalObjectReference(nil), Hostname:"", Subdomain:"", SetHostnameAsFQDN:(*bool)(nil), Affinity:(*core.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]core.Toleration(nil), HostAliases:[]core.HostAlias(nil), PriorityClassName:"", Priority:(*int32)(nil), PreemptionPolicy:(*core.PreemptionPolicy)(nil), DNSConfig:(*core.PodDNSConfig)(nil), ReadinessGates:[]core.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), Overhead:core.ResourceList(nil), EnableServiceLinks:(*bool)(nil), TopologySpreadConstraints:[]core.TopologySpreadConstraint(nil), OS:(*core.PodOS)(nil), SchedulingGates:[]core.PodSchedulingGate(nil), 
ResourceClaims:[]core.PodResourceClaim(nil)}}: field is immutable
kubectl apply: exit status 1
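The important part of that wall of output is the final `field is immutable` message: a Kubernetes Job's spec.template cannot be modified once the Job exists, so re-applying a manifest whose pod template differs from the already-created kibana-init-job fails exactly like this. A generic Kubernetes workaround (not a documented Nemesis step) is to delete the stale Job so the next deploy can recreate it:

```
# The existing Job's pod template can't be updated in place; removing the
# Job lets kubectl/skaffold recreate it with the new template on the next run.
kubectl delete job kibana-init-job
skaffold run --port-forward
```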

Let me know if there is more info required to properly debug. Thanks.

leechristensen commented 1 year ago

Did you run nemesis-cli.py before you ran into that?

Also, what Kubernetes flavor are you running? Minikube?

phughesion commented 1 year ago

> Did you run nemesis-cli.py before you ran into that?
>
> Also, what Kubernetes flavor are you running? Minikube?

Yes, I am running minikube. I have just been following setup.md.

I ran minikube stop && minikube start after running nemesis-cli.py. I am not using Slack and am just trying to get the most minimal setup possible at the moment.

The only Nemesis config step that was not clear to me was nemesis_http_server, which I just set to http://127.0.0.1:5555. Is that just the interface/port you want to access Nemesis from?

leechristensen commented 1 year ago

nemesis_http_server is the frontend URL you access Nemesis from. If you're running it on a VM, this will be the VM's IP. The port must match the ingress-nginx-controller service's port in skaffold.yaml (port 8080 by default). FWIW, I just added an example config here.
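As a rough illustration (the address below is made up, and the exact key syntax depends on how nemesis-cli.py stores its config), the value should point at whatever address you reach the minikube host on, using the local port skaffold forwards for the ingress controller:

```
# Hypothetical example: if you browse to the VM at 192.168.49.2 and
# skaffold.yaml forwards the ingress-nginx-controller service to local
# port 8080 (the default), then nemesis_http_server would be:
#   http://192.168.49.2:8080/
#
# Double-check which port is forwarded (assuming a portForward stanza):
grep -n -A6 "portForward" skaffold.yaml
```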

If you want to post your Nemesis config (redact whatever you feel is necessary) I can take a look as well.

If that doesn't work, I'd start with a fresh minikube and try the following:

minikube delete               # delete your current cluster
minikube start                # start up minikube again

./nemesis-cli.py              # set up the Nemesis configuration again

./scripts/pull_images.sh      # avoid potential skaffold timeouts from slow image pulls
skaffold build                # manually build everything

skaffold run --port-forward   # kick things off
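If the redeploy goes through cleanly, a quick way to confirm that the Job from the original error actually completed this time (plain kubectl, nothing Nemesis-specific):

```
# Watch the pods come up, then check the init Job and its output
kubectl get pods -A
kubectl get jobs
kubectl logs job/kibana-init-job
```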

If that doesn't work, I'll need some more details:

phughesion commented 1 year ago

> minikube delete               # delete your current cluster
> minikube start                # start up minikube again
>
> ./nemesis-cli.py              # set up the Nemesis configuration again

That did the trick. Thank you.

> ./scripts/pull_images.sh      # avoid potential skaffold timeouts from slow image pulls
> skaffold build                # manually build everything
>
> skaffold run --port-forward   # kick things off

leechristensen commented 1 year ago

Excellent! Working on updating the instructions to hopefully make things a bit clearer and to add some troubleshooting steps one can take. Thanks for the feedback!