[Open] jayramr opened this issue 1 month ago
This ELK stack is for demo / training purposes - please don't use it for anything other than playing / learning.
You are correct about the port mapping - you can see how we stand it up in slides 7-10... https://slides.com/irods/ugm2024-getting-started/#/7
The audit plugin repository itself has some configuration information - but I'm noticing it might need a little update... https://github.com/irods/irods_rule_engine_plugin_audit_amqp?tab=readme-ov-file#configuration
The getting started slides in the first link have been updated in the last couple days as we prepare for UGM2024 at the end of this month.
Thanks @trel, I tried setting this up and saw no errors.
But the dashboard shows no graphs - just "No results found".
I initiated iput/iget operations, waited 30 minutes, and manually refreshed, but still no luck.
Please advise.
Make sure you are sending the right PEPs:
"pep_regex_to_match" : "audit_pep_(api|resource)_.*"
Yes @trel, this is what I followed.
I'm getting the messages in the "Discover" tab, but the iRODS Dashboard shows "No results found".
So iRODS is configured correctly - and the messages are being captured by the broker - and retrieved and inserted into Elasticsearch... but Kibana dashboard isn't showing them correctly?
Try slides 7-10 from my link above again... they have been tested in the last 24h to populate the dashboard.
Yes, I cross-checked again - same configuration - and I'm still not sure why Kibana isn't updating.
I fetched the kibana.log from the container for your reference. Maybe this helps.
Not sure... the only errors I see in there do mention a parser, though:
{"service":{"node":{"roles":["background_tasks","ui"]}},"ecs":{"version":"8.11.0"},"@timestamp":"2024-05-10T17:50:58.257+00:00","message":"Failed to initialize service: parse_exception\n\tRoot causes:\n\t\tparse_exception: No processor type exists with name [inference]","log":{"level":"ERROR","logger":"plugins.observabilityAIAssistant.service"},"process":{"pid":959,"uptime":30.26180538},"trace":{"id":"a45ef6e5f828c039e192265a8c1f4aaf"},"transaction":{"id":"6cc401df8bb5ead1"}}
{"service":{"node":{"roles":["background_tasks","ui"]}},"ecs":{"version":"8.11.0"},"@timestamp":"2024-05-10T17:50:58.258+00:00","message":"Could not index 7 entries because of an initialisation error","log":{"level":"ERROR","logger":"plugins.observabilityAIAssistant.service"},"process":{"pid":959,"uptime":30.262464175},"trace":{"id":"a45ef6e5f828c039e192265a8c1f4aaf"},"transaction":{"id":"6cc401df8bb5ead1"}}
{"service":{"node":{"roles":["background_tasks","ui"]}},"ecs":{"version":"8.11.0"},"@timestamp":"2024-05-10T17:50:58.258+00:00","message":"parse_exception\n\tRoot causes:\n\t\tparse_exception: No processor type exists with name [inference]","error":{"message":"parse_exception\n\tRoot causes:\n\t\tparse_exception: No processor type exists with name [inference]","type":"ResponseError","stack_trace":"ResponseError: parse_exception\n\tRoot causes:\n\t\tparse_exception: No processor type exists with name [inference]\n at KibanaTransport.request (/usr/share/kibana/node_modules/@elastic/transport/lib/Transport.js:479:27)\n at processTicksAndRejections (node:internal/process/task_queues:95:5)\n at KibanaTransport.request (/usr/share/kibana/node_modules/@kbn/core-elasticsearch-client-server-internal/src/create_transport.js:51:16)\n at Ingest.putPipeline (/usr/share/kibana/node_modules/@elastic/elasticsearch/lib/api/api/ingest.js:139:16)\n at ObservabilityAIAssistantService.<anonymous> (/usr/share/kibana/node_modules/@kbn/observability-ai-assistant-plugin/server/service/index.js:120:9)"},"log":{"level":"ERROR","logger":"plugins.observabilityAIAssistant.service"},"process":{"pid":959,"uptime":30.264763753},"trace":{"id":"a45ef6e5f828c039e192265a8c1f4aaf"},"transaction":{"id":"6cc401df8bb5ead1"}}
Hmm, no worries. I will try to research - maybe it's a local issue.
I saw this as well, and it had to do with the not-logstash layer needing a tweak, which has now been merged. To fix the issue, I had to remove the Docker image and all of its constituent layers up to and including the not-logstash layer so that the layers are pulled down correctly. Or, if you are building it yourself, make sure to build with --no-cache to ensure that the layer is rebuilt.
I saw this work after those changes were applied.
Specifically, this is the one I had success with: https://hub.docker.com/layers/swooshycueb/irods-elk-stack/test1/images/sha256-404571d679fbfbea5bfa4fe59e96581351c521a91360304a56dcf0cd49ca5ae
Hi @alanking, thanks for the update. I tried the image you advised, but there are still no results in the iRODS dashboard.
Okay, can you please share the contents of /etc/irods/server_config.json? I'm specifically interested in the /plugin_configuration/rule_engines stanza.
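For reference, in a typical audit setup that stanza includes an entry for the audit plugin. The sketch below is illustrative only - the field names follow the plugin README linked earlier, but the values are placeholders, so verify them against your installed version:

```json
"rule_engines": [
    {
        "instance_name": "irods_rule_engine_plugin-audit_amqp-instance",
        "plugin_name": "irods_rule_engine_plugin-audit_amqp",
        "plugin_specific_configuration": {
            "pep_regex_to_match": "audit_pep_(api|resource)_.*",
            "amqp_topic": "audit_messages",
            "amqp_location": "localhost:5672",
            "amqp_options": ""
        }
    }
]
```

The native rule language engine entry that ships with the server should remain in the list as well; the audit entry is typically added before it.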
Hi @alanking, I uploaded the entire config for your review. [Uploading server_config.json…]()
Hmm... that link goes back to this issue rather than to a config file. Please try again.
I tried again with the same image https://hub.docker.com/layers/swooshycueb/irods-elk-stack/test1/images/sha256-404571d679fbfbea5bfa4fe59e96581351c521a91360304a56dcf0cd49ca5ae - still no luck.
Sorry, by "try again" I was referring to the uploading of the server_config.json file. The link provided in your previous post appears to just link back to this issue.
So, please try uploading the server_config.json file again so we can see the configuration and maybe that can help.
Please find the server_config.json file attached.
We've now updated the pep_regex_to_match and removed the need for the audit_ namespace.
The slides are up to date and are working for us.
And the configuration presented looks correct based on the previous approach, so I'm not sure why it wouldn't be working.
In the original post, you mentioned mapping the ports. What exactly are you running? For ease of reference, here's what the slides do:
docker run -d -p 8080:15672 -p 5672:5672 -p 80:5601 -p 9201:9200 irods/irods_audit_elk_stack
Port 5672 is important in order for the audit REP to communicate with the message broker.
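Since the audit REP on the iRODS server talks to the broker over that port, a quick way to confirm reachability from the iRODS host is a plain TCP connect. This is a generic sketch, not an iRODS-specific tool - substitute whatever host and port your deployment uses:

```python
import socket

def port_open(host: str, port: int, timeout: float = 2.0) -> bool:
    """Return True if a TCP connection to host:port succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Check the AMQP port published by the ELK stack container.
print("5672 reachable:", port_open("localhost", 5672))
```

If this prints False on the iRODS host, the audit messages never reach the broker, and the dashboard will stay empty regardless of the Kibana configuration.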
Hi @alanking, I'm actually following your latest slides only. I suspect the dashboard is not updating because I use iRODS only for keeping file/directory metadata, without uploading the real data into the iRODS data grid.
I tested as below:
docker run -d -p 8080:15672 -p 5672:5672 -p 80:5601 -p 9201:9200 --name irods_elk irods/irods_audit_elk_stack
Thanks, Jay
Okay, please try re-pulling (fresh, if possible) and running again. The image you listed (irods/irods_audit_elk_stack) was known to be not working until images were pushed 2 days ago. So, if you don't have the latest, correct image, it makes sense why this wasn't working.
The slides and image are pretty much where they are going to be as they have been demonstrated to be working in our training environment.
Note: We won't be able to look into this for a while as UGM is next week.
Hi @alanking, I tested with the latest image and retried again - still no luck.
Hello,
I tried to follow https://github.com/irods/contrib/tree/main/irods_audit_elk_stack and built the container, but unfortunately I'm unable to find the steps for how to run this Docker image.
If I simply run the image, it starts all the services, but they are only exposed within the container. I actually need to map the ports to the host.
I'm also looking for the steps to configure the audit plugin in /etc/irods/server_config.json. I installed the "irods-rule-engine-plugin-audit-amqp" package on the iRODS server, which is running 4.3.1.
Or, for the iRODS audit configuration, do I need to follow the steps from this article: https://slides.com/irods/ugm2018-getting-started ?
Please let me know.
Thanks, Jay