Pinging @elastic/ingest-management (Team:Ingest Management)
@manishgupta-qasource Please review
Reviewed & Assigned to @EricDavisX
Moved this to Integrations repo as it might be an issue with the naming of these fields in the Linux integration.
Pinging @elastic/integrations (Team:Integrations)
Fix merged, closing.
Hi @EricDavisX
We have re-validated this issue on the 8.0.0 Snapshot Kibana Cloud environment. Build details are as follows:
Build: 39872
Commit: 0fe7b9e080c67c43aefdb7ea25d5e90a80cb4ade
Observations: While revalidating, we have observed that the issue is partially fixed. Our observations are as follows:
Fixed:
Not Fixed:
Screenshots:
Hence we are reopening this issue. Please let us know if anything else is required.
Thanks QAS
@dikshachauhan-qasource @manishgupta-qasource for any Integration-side issue, please post the version of the Integration Package you tested with. The version of Kibana when deployed as a Cloud snapshot has an indirect association with the package repo used, but not the package version itself. The package version is usually the most important item for a package issue. This is the version seen in the Fleet UI, like 0.10.8 for System, for example.
I will summarize some nuances of our package development process shortly, tho with the version info we can know for sure if we expected to see the fix or not.
About the availability of the fixes in this case, the PR above was merged 7 days ago, but I don't see a promotion PR by Alex, so maybe it was done (by someone else) and maybe not. Let us compare versions; since it is a docs concern it is easy to assess and fix if needed. @fearful-symmetry ball is in your court. :)
My 'summary' of the dev and testing process is below. I am also pushing it out via email and a training doc; this is just for reference to make sure we have laid out expectations for the whole tester team.
1) A dev checks in a fix to the 'integrations' repo (it is not yet visible in cloud deploys at this point). You can look in the Integrations repo for this check-in, for example: https://github.com/elastic/integrations/pull/554
2) The collection of changes in the Integrations repo is pushed into 'package-storage', where it is consumed by more devs, automated tests, and eventually by test teams doing manual validation - see a merge of the 'Linux' package 2 days back: https://github.com/elastic/package-storage/pull/817
3) The fix will eventually be 'promoted' to the next storage phase by a dev, and the physical package storage container (that the Elastic Registry uses) needs to be rebuilt so it will have the latest changes.
An example PR of promoting from snapshot storage to staging storage is here: https://github.com/elastic/package-storage/pull/821
So, when the package is promoted (to staging or prod) and the given storage environment (the container) is re-built, it can easily be tested by a cloud deploy of the master stack (currently 8.0) - this relates to the 'staging' package repo. When a package is vetted further it can and will be promoted to 'prod', which means it is available to all deployed shipped versions (like 7.10.x, and the 7.11 BCs we are testing) as well as the coming minor dot releases of the stack (7.12, currently).
More info is here, but it is a developer guide and a longer read, and NOT everything is fully finished here... it is still a bit in-progress, to be fair: https://github.com/elastic/obs-dc-team/blob/master/release/integrations.md#deployment-environments
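For reference, one quick way to see which Linux package version each registry environment currently serves is to hit the /search endpoint mentioned above and filter for the package. This is only a sketch: it assumes `jq` is installed, that the prod registry lives at epr.elastic.co, and that the /search response is a JSON array of packages with "name" and "version" fields.

```sh
# Sketch: compare the Linux package version served by each EPR environment.
for epr in https://epr-snapshot.elastic.co https://epr-staging.elastic.co https://epr.elastic.co; do
  echo "== $epr =="
  # Filter the package list down to the 'linux' package and print its version.
  curl -s "$epr/search" | jq -r '.[] | select(.name == "linux") | .version'
done
```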
So, all the changes should be on the SNAPSHOT EPR, assuming nothing needs to be rebuilt. I didn't want to push some of the simple doc fixes to prod, since I didn't want to be cramming in updates this late before the release. That being said, we can still push to prod if needed, but we should be able to test against the SNAPSHOT EPR, which is 0.10.8. @EricDavisX
I have been chatting with Alex and we are syncing up. The 0.10.8 changes are pushed to 'staging' and are deployed, so they can be accessed at least by an 8.0 cloud snapshot. Let us try again to test these and close out other tickets, etc. If the cloud snapshot isn't working we can try to find an alternate server.
Hi @EricDavisX
We have validated these changes in an 8.0 snapshot Kibana Cloud build with Linux integration version 0.3.4 and found it not fixed. Build details are as follows:
BUILD 39998
COMMIT 841ab704b8e50986730a32e68f9afc3ac28b92cd
Observations:
Scroll down to the Entropy section and go through the table content provided there. Observe that the table field names also start with 'system'. This happens for all table content on the Linux integration page and hence needs to be updated.
QUERY: Further, @fearful-symmetry, could you please confirm whether the 'linux.socket' dataset is generated only once when the agent is installed? Also, is there any way to update the data streaming for it on the Data Stream page?
Please let us know if anything else is required.
Thanks QAS
So, some of this is stuff we can't fix. The sub-headings (like 'System Entropy metrics') can probably be changed, but due to https://github.com/elastic/package-spec/issues/51, we need two different groupings for the linux metrics, so one of them is called 'system metrics'. If anyone has any other naming ideas, I'm open to suggestions.
As far as the field names go--we can't really change those. They were always system.*, so swapping them over would be a breaking change, which we can't do for a while.
As far as the socket metricset goes, it'll report an event for every new socket, so it may not regularly report events.
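On the socket point, a hedged way to force fresh data while validating is to open a new connection on the monitored host and then look for recent documents. The data stream name, dataset field, credentials, and local Elasticsearch address below are assumptions for a local test setup, not values confirmed in this thread.

```sh
# Sketch: create a new outbound socket so the linux.socket metricset has something to report,
# then query the (assumed) data stream for recent documents.
curl -s https://www.elastic.co >/dev/null      # opens a new TCP connection
sleep 30                                       # give the agent time to collect and ship the event
curl -s -u elastic:changeme \
  "http://localhost:9200/metrics-linux.socket-*/_search?q=data_stream.dataset:linux.socket&size=1" \
  | jq '.hits.total'
```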
Hi @fearful-symmetry
Thanks for the feedback.
However, could you please confirm for us the exact system metrics that can't be fixed on the Linux integration documentation page and on the add/edit Linux integration page?
Further, we will attempt to trigger the socket dataset again; however, we are currently blocked from testing on 8.0 due to #74077.
Thanks QAS
The 'Collect system metrics from Linux instances' is the one that's correct.
@dikshachauhan-qasource hi - If we need to use a given version of the package storage to test changes, we will likely need to use local snapshot setups. I realize that our process of testing on 8.0 is ok to start, but we should ideally test first (and most importantly) with the current shipped or shipping stack version (in this case 7.11). We will need to change the package registry that is set in Kibana to be the registry that points to 'snapshot', by setting the startup value in configs/kibana.yml to: xpack.fleet.registryUrl: "https://epr-staging.elastic.co/"
Can you try this out with a 7.11 snapshot local deploy and confirm it works for you, and that you see the versions of packages that are in https://epr-snapshot.elastic.co/search? Thank you much.
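As a concrete sketch of the change above (assuming the default Kibana layout where the file lives at config/kibana.yml; adjust the path and registry host to whichever environment you are targeting):

```sh
# Sketch: point a local Kibana at the staging package registry, then restart Kibana.
cat >> config/kibana.yml <<'EOF'
xpack.fleet.registryUrl: "https://epr-staging.elastic.co"
EOF
```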
Hi @EricDavisX
We have attempted to validate the above fixes on a 7.11 snapshot local deploy by setting the startup value in configs/kibana.yml to: xpack.fleet.registryUrl: "https://epr-staging.elastic.co/search".
However, we found that the Linux integration package is 3.3 when accessed through Kibana, and observed that the xpack entry in kibana.yml didn't make any difference. Though, while accessing the URL "https://epr-staging.elastic.co/search", the Linux integration package shows up as 3.4.
Please have a look at the screenshot below.
Build details:
Artifact link: https://snapshots.elastic.co/7.11.0-84214581/downloads/beats/elastic-agent/elastic-agent-7.11.0-SNAPSHOT-windows-x86_64.zip
BUILD 37869
COMMIT 61f4c54124e2ca8b3dfbcfad3ff74942f519dad9
Thanks QAS
Okay, as far as what's relevant to this issue, the PR here will promote the fixes to staging: https://github.com/elastic/package-storage/pull/844
@dikshachauhan-qasource @EricDavisX I'm still not clear why we're testing against anything other than 7.11-SNAPSHOT/epr-staging here. Are there still issues with the right packages not being available on 7.11-SNAPSHOT or are we deliberately testing master/8.0?
Alex, thanks for pushing that PR up. We should test on 7.11 and 7.10 for sure, and we should use the 'staging' epr branch. The process needed to be updated on my end, and there are some logistics to make it even easier / quicker / less error-prone, as seen here: https://github.com/elastic/kibana/issues/90131
@dikshachauhan-qasource we can track that Kibana PR! To your problem today, I think the '/search' on the end of the epr value you put in the kibana.yml is the problem; it shouldn't have the /search endpoint added on, and it may have failed 'silently' and been pointing to the prod repo (or the packages were not there because the latest had not been merged yet and re-rolled out by the Integrations team). Regardless, we'll have it ready to test again tomorrow.
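To make the '/search' point concrete, here is a sketch of stripping the stray path from the value that was set earlier (file path assumed to be config/kibana.yml; back up the file first if unsure):

```sh
# Sketch: remove the /search suffix so the setting points at the registry root, not an API endpoint.
sed -i 's|"https://epr-staging.elastic.co/search"|"https://epr-staging.elastic.co"|' config/kibana.yml
```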
Hi @EricDavisX
We have revalidated this issue on a 7.11 snapshot local deploy, setting xpack.fleet.registryUrl: "https://epr-staging.elastic.co" in configs/kibana.yml.
However, we found the same result: the Linux integration package shows v3.3 when accessed in Kibana. Please refer to the screenshot below:
It looks like the deploy/roll-out job for the staging storage registry was not run. @fearful-symmetry after a merge, when we know testing is in progress, do you want to immediately run the roll-out Jenkins job? I can take ownership to help coordinate the testing side if you like; we just need to figure out what process works best for us. I have run it now.
@dikshachauhan-qasource You should see the integration there if you refresh Kibana
@fearful-symmetry after a merge, when we know testing is in progress, do you want to immediately run the roll-out Jenkins job?
I would be fine with that, but no idea how it works.
I spoke with Alex and we confirmed the Jenkins usage and elastic-package functionality to rebuild the storage locations. It is updated and ready for final test in Staging - then we can confirm with the team that we can update staging packages to Prod (Eric will handle that, then assign out a spot re-check)
Just confirming, I also spoke with Diksha and we'll update the test content to match the product for now.
Hi @EricDavisX
As per the feedback at the comment above, we have re-run the Kibana.bat file on the locally deployed 7.11 BC Kibana environment and found no change in the Linux package version.
However, we have verified the same on the 8.0 snapshot build as well and found the changes merged.
Build details:
BUILD 40145
COMMIT 1818dd7f4a9a99df6e67c9de07f430bd33c08205
Linux package version: 3.5
Observations:
Screenshot:
Further, for the documentation changes that can't be fixed, we have raised a new ticket #668 for future reference.
Hence, closing this out.
Please let us know if anything else is required from our side.
Thanks QAS
Kibana version: 8.0 Snapshot Kibana Cloud environment
Elasticsearch version: 8.0 Snapshot Kibana Cloud environment
Host OS and Browser version: Linux centos, All
Original install method (e.g. download page, yum, from source, etc.): 8.0 Snapshot Kibana Cloud environment
Preconditions
Steps to reproduce:
Reference ticket Id: N/A
Actual Result: Toggle option headings start with 'system' naming for the Linux integration.
Expected Result: Toggle option headings should start with 'linux' naming for the Linux integration.
What's working: N/A
What's not working: N/A
Screenshots: