Closed weltenwort closed 1 year ago
:information_source: WIP follow-up implementation issue: https://github.com/elastic/kibana/issues/159991
(It's also worth looking at Kyle's brain dump: https://gist.github.com/kpollich/f2ed8f4db2d5c380b8cc4e5bf4537825)
The install route requires the `installPackages` permission:

```ts
fleetAuthz: {
  integrations: { installPackages: true },
}
```

The handler is `installPackageFromRegistryHandler`, which also reads the current user and the authorization header for further security hints:

```ts
const user = (await appContextService.getSecurity()?.authc.getCurrentUser(request)) || undefined;
const authorizationHeader = HTTPAuthorizationHeader.parseFromRequest(request, user?.username);
```
- If `args.installSource === 'upload'`, then `installPackageByUpload` is called.
- If `args.installSource === 'registry'` AND there is a matching bundled package, then `installPackageByUpload` is called.
- If `args.installSource === 'registry'` AND there isn't a matching bundled package, then `installPackageFromRegistry` is called.
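The branching above can be sketched as follows. The names (`InstallArgs`, `resolveInstallHandler`, the bundled-package lookup) are assumptions for illustration, not Fleet's real signatures:

```typescript
type InstallSource = 'upload' | 'registry';

interface InstallArgs {
  installSource: InstallSource;
  pkgName: string;
}

// bundledPackages stands in for the bundled-package lookup. Registry installs
// reuse the upload code path when a matching bundled package ships with Kibana.
function resolveInstallHandler(args: InstallArgs, bundledPackages: Set<string>): string {
  if (args.installSource === 'upload') {
    return 'installPackageByUpload';
  }
  if (bundledPackages.has(args.pkgName)) {
    return 'installPackageByUpload';
  }
  return 'installPackageFromRegistry';
}
```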
`installPackageFromRegistry` contains various chunks of logic with regards to versions, e.g. whether out-of-date versions can be installed. The main touchpoints here include:

- `getInstallationObject`, which gets the installed version (or `undefined`).
- `Registry.fetchFindLatestPackageOrThrow`, which fetches the latest version of the package from the registry (or throws).
- `Registry.getPackage`, which gets the requested version of the package (pulled from cache if available).

Once it is determined that the package can be installed (the version is okay, or `force=true`, etc.), `installPackageCommon` is called.
`installOutOfDateVersionOk` is determined (bar the `force` option) based on the `installType`.
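One plausible shape of that determination is sketched below; the exact set of install types that permit an out-of-date install is an assumption, not a quote of Fleet's source:

```typescript
type InstallType = 'install' | 'update' | 'reinstall' | 'reupdate' | 'rollback';

// Re-installing the current version or rolling back legitimately targets a
// version older than the registry's latest, so those types are allowed.
function isOutOfDateVersionOk(installType: InstallType): boolean {
  return ['reinstall', 'reupdate', 'rollback'].includes(installType);
}
```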
Inside `installPackageCommon`, this logic then determines whether new assets should be installed, or whether to return the assets that already exist on the installed SavedObject. This is where the logic starts to use helpers and methods that are more relevant to our needs.
There is a check to see if the license matches the required subscription level (if any, from package info)
```ts
const savedObjectsImporter = appContextService
  .getSavedObjects()
  .createImporter(savedObjectsClient, { importSizeLimit: 15_000 });

const savedObjectTagAssignmentService = appContextService
  .getSavedObjectsTagging()
  .createInternalAssignmentService({ client: savedObjectsClient });

const savedObjectTagClient = appContextService
  .getSavedObjectsTagging()
  .createTagClient({ client: savedObjectsClient });
```
Note: the internal assignment service is only supposed to be used on behalf of the internal Kibana user
`_installPackage` is then called with these resources. This is where the bulk of the actual asset manipulation happens.
From here, `_installPackage` performs various jobs:

- `createInstallation` is called. "Installation" here refers to the base SavedObject. This will have the `epm-packages` type, and the ID will be the package name.
- `installKibanaAssetsAndReferences` is called with the `paths`. The paths are derived from the package archive (pulled from the registry, and later stored in cache).
- `installILMPolicy` (again, the main consideration for reuse here would be the `paths`).
- `installIlmForDataStream`.
- `installMlModel`, for machine learning models (we wouldn't need this).
- `installIndexTemplatesAndPipelines`.
- `removeLegacyTemplates`.
- `updateCurrentWriteIndices` is called to update the current backing indices of each data stream, if they already exist (the data streams are derived from the index templates).
- `saveArchiveEntries` creates asset saved objects (of type `epm-packages-assets`) and updates the base installation SO's (of type `epm-packages`) references to include the asset references.
- If `keep_policies_up_to_date` is set, the associated package policies are updated.

Many of these helpers also expect the `packageInfo` object. For our custom integrations we would need to construct `paths` and cache our assets as buffers (even though we aren't using an archive) using the correct cache naming convention, and could then call `installPackageCommon`, which ultimately calls `_installPackage` and handles everything.

I've put a section here on `paths` specifically, as it's very critical to the overall way installation works, and almost every helper function expects these. Because our "custom" integrations won't have a set of paths and assets that are pulled from the registry archive (or from the zip archive in the upload scenario), we need to consider how best to create interoperability. For our basic custom integrations we'll know upfront which index templates and data streams we want to create, so we can more or less hardcode these with a few variables that get changed in a template / interpolation manner.
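As a sketch of that "hardcode with a few variables" idea: the paths for a minimal custom integration could be generated by interpolating the package name, version, and data stream names into a fixed layout. The layout and helper name below are assumptions for illustration, not Fleet code:

```typescript
interface CustomDataStream {
  name: string; // e.g. 'test.test'
  type: 'logs' | 'metrics';
}

// Hypothetical helper: derive archive-style paths for a custom integration so
// that downstream install helpers can treat it like a registry package.
function generatePaths(pkgName: string, pkgVersion: string, dataStreams: CustomDataStream[]): string[] {
  const prefix = `${pkgName}-${pkgVersion}`;
  const paths = [`${prefix}/manifest.yml`];
  for (const ds of dataStreams) {
    paths.push(`${prefix}/data_stream/${ds.name}/manifest.yml`);
    paths.push(`${prefix}/data_stream/${ds.name}/fields/base-fields.yml`);
  }
  return paths;
}
```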
How these work in a "from registry" scenario:

- The `archiveBuffer` will be fetched from an `archivePath` of the form `/epr/${pkgName}/${pkgName}-${pkgVersion}.zip`.
- The buffer is unpacked via `unpackBufferEntries`. The resulting entries are keyed by path, and the value is the asset buffer.

This is an example of `paths`:
```json
[
  "elastic_agent-1.8.0/LICENSE.txt",
  "elastic_agent-1.8.0/changelog.yml",
  "elastic_agent-1.8.0/data_stream/apm_server_logs/fields/agent.yml",
  "elastic_agent-1.8.0/data_stream/apm_server_logs/fields/base-fields.yml",
  "elastic_agent-1.8.0/data_stream/apm_server_logs/fields/ecs.yml",
  "elastic_agent-1.8.0/data_stream/apm_server_logs/fields/fields.yml",
  "elastic_agent-1.8.0/data_stream/apm_server_logs/manifest.yml",
  "elastic_agent-1.8.0/data_stream/apm_server_metrics/fields/agent.yml",
  "elastic_agent-1.8.0/data_stream/apm_server_metrics/fields/base-fields.yml",
  "elastic_agent-1.8.0/data_stream/apm_server_metrics/fields/beat-fields.yml",
  "elastic_agent-1.8.0/data_stream/apm_server_metrics/fields/ecs.yml"
]
```
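The entries produced by unpacking the archive can then be collected into a map keyed by path, which is roughly what "keyed by path, value is the asset buffer" amounts to. The types and helper below are illustrative, not Fleet's actual ones:

```typescript
interface ArchiveEntry {
  path: string;
  buffer?: Buffer;
}

// Collect unpacked entries into a path -> buffer lookup; directory entries
// (which have no buffer) are skipped.
function toAssetsMap(entries: ArchiveEntry[]): Map<string, Buffer> {
  const assetsMap = new Map<string, Buffer>();
  for (const { path, buffer } of entries) {
    if (buffer) {
      assetsMap.set(path, buffer);
    }
  }
  return assetsMap;
}
```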
This is an example of `packageInfo`:
```json
{
  "name": "elastic_agent",
  "version": "1.8.0",
  "description": "Collect logs and metrics from Elastic Agents.",
  "title": "Elastic Agent",
  "format_version": "1.0.0",
  "owner": {
    "github": "elastic/elastic-agent"
  },
  "license": "basic",
  "type": "integration",
  "categories": ["elastic_stack"],
  "conditions": {
    "kibana.version": "^8.7.1"
  },
  "screenshots": [
    {
      "src": "/img/elastic_agent_overview.png",
      "title": "Elastic Agent Overview",
      "size": "2560×1234",
      "type": "image/png"
    },
    {
      "src": "/img/elastic_agent_metrics.png",
      "title": "Elastic Agent Metrics",
      "size": "2560×1234",
      "type": "image/png"
    },
    {
      "src": "/img/elastic_agent_info.png",
      "title": "Elastic Agent Information",
      "size": "2560×1234",
      "type": "image/png"
    },
    {
      "src": "/img/elastic_agent_integrations.png",
      "title": "Elastic Agent Integrations",
      "size": "2560×1234",
      "type": "image/png"
    }
  ],
  "icons": [
    {
      "src": "/img/logo_elastic_agent.svg",
      "title": "logo Elastic Agent",
      "size": "64x64",
      "type": "image/svg+xml"
    }
  ],
  "data_streams": [
    {
      "title": "Elastic Agent",
      "release": "ga",
      "type": "logs",
      "package": "elastic_agent",
      "dataset": "elastic_agent.apm_server",
      "path": "apm_server_logs",
      "elasticsearch": {
        "index_template.mappings": {
          "dynamic": false
        }
      }
    },
    {
      "title": "Elastic Agent",
      "release": "ga",
      "type": "metrics",
      "package": "elastic_agent",
      "dataset": "elastic_agent.apm_server",
      "path": "apm_server_metrics",
      "elasticsearch": {
        "index_template.mappings": {
          "dynamic": false
        }
      }
    }
  ],
  "policy_templates": [],
  "readme": "/package/elastic_agent/1.8.0/docs/README.md",
  "release": "ga"
}
```
Draft PR with rough prototype code: https://github.com/elastic/kibana/pull/160003
You should be able to use a CURL command similar to the following:
```shell
curl -XPOST -u 'elastic:changeme' \
  -H 'kbn-xsrf: something' \
  -H 'Content-Type: application/json' \
  -d '{
    "name": "my_integration",
    "dataset": "test.test",
    "type": "logs",
    "description": "Custom integration via the API"
  }' 'http://localhost:5601/<BASE_PATH>/api/fleet/epm/custom_integrations'
```
Hitting the API will produce the following `epm-packages` installation Saved Object:
```json
{
  "_index": ".kibana_ingest_8.9.0_001",
  "_id": "epm-packages:my_integration",
  "_score": 3.2696838,
  "_source": {
    "epm-packages": {
      "installed_kibana": [],
      "installed_kibana_space_id": "default",
      "installed_es": [
        { "id": "logs-test.test-1.0.0", "type": "ingest_pipeline" },
        { "id": "logs-test.test", "type": "index_template" },
        { "id": "logs-test.test@package", "type": "component_template" },
        { "id": "logs-test.test@custom", "type": "component_template" }
      ],
      "package_assets": [
        { "id": "ab9e295a-520d-599d-a469-49a81aed1842", "type": "epm-packages-assets" },
        { "id": "ac57f446-bde4-5a17-a10f-0ee4384e8c59", "type": "epm-packages-assets" },
        { "id": "0fae5f7e-ea49-516e-8126-8fc36f247182", "type": "epm-packages-assets" }
      ],
      "es_index_patterns": {
        "test.test": "logs-test.test-*"
      },
      "name": "my_integration",
      "version": "1.0.0",
      "install_version": "1.0.0",
      "install_status": "installed",
      "install_started_at": "2023-06-20T11:32:09.777Z",
      "install_source": "custom",
      "install_format_schema_version": "1.0.0",
      "verification_status": "unknown"
    },
    "type": "epm-packages",
    "references": [],
    "managed": false,
    "coreMigrationVersion": "8.8.0",
    "typeMigrationVersion": "8.6.0",
    "updated_at": "2023-06-20T11:32:10.603Z",
    "created_at": "2023-06-20T11:32:09.777Z"
  }
}
```
It will also produce our minimal assets:
And show up on the installed integrations page (however, with a verification warning at the moment):
It will also be returned from the `fleet/epm/packages/installed` endpoint, e.g.:
```json
{
  "name": "my_integration",
  "version": "1.0.0",
  "status": "installed",
  "dataStreams": [
    {
      "name": "logs-test.test-*",
      "title": "test.test"
    }
  ]
}
```
Closing as we've finished the prototype work and investigations.
:notebook: Summary
While we're still figuring out the exact specification of the initial integration creation API, we could already learn about the library functions available in fleet by writing a prototype of the API.
:heavy_check_mark: Acceptance criteria