elastic / kibana

Your window into the Elastic Stack
https://www.elastic.co/products/kibana

[DISCUSS] Kibana index version migrations #15100

Closed stacey-gammon closed 6 years ago

stacey-gammon commented 6 years ago

Proposal

Introduce a testable migration process that allows developers to incrementally add complex migration steps throughout the development of several minor releases.

Goals

Testable The process should be easily testable so that at any point a failure to account for a required migration step will be captured. Ex: I almost submitted a PR that removed a field from the kibana mapping. Without a manual migration step, that data would have been lost. This would have gone unnoticed until late in the 7.0 index upgrade testing (if at all).

Incrementally add migration steps We don't want to push the entire migration burden to the last minute. We should be able to incrementally add migration steps and tests to catch issues early and prevent major releases from being pushed back due to last minute bugs being found.

Flexibility Right now we assume the entire re-indexing will fit into a painless script, but this won't hold us over for long. As a specific example, I'd like to migrate some data stored as JSON in a text field. Manipulating JSON from painless is not possible currently. I'd bet even more complicated scenarios are right around the corner. Our approach should be flexible to easily accommodate migration steps that won't fit into painless.

Make it easy for developers Our approach should be un-intimidating so all developers on the team can easily add their own migration steps without requiring too much specific knowledge of the migration code. Making this a simple process will encourage us to fix issues that rely on .kibana index changes, which can help clean up our code. There have been outstanding issues for many months that don't get addressed because they require mapping changes. A very small change (the same PR mentioned above) that incidentally removes the need for a field on the kibana mapping, plus a pretty straightforward (albeit still not possible in painless) conversion, should be easy to throw into the migration.

Single source of conversion truth There are potentially different areas of the code that need to know how to handle a bwc breaking change. For example, I'd like to introduce a change in a minor release which removes the need for a field on the .kibana index. In order to support backward compatibility, I need to do three things:

I'd have to think more about how/whether all three spots could take advantage of the same code.

Pluggable? We might want to make this pluggable, so saved object types we are unaware of can register their own migration steps.

Questions

Implementation details

TODO: have not thought this far ahead yet.

cc @epixa @archanid @tsullivan @tylersmalley

jbudz commented 6 years ago

Had a short meeting with @tylersmalley and @chrisronline where some of this came up.

Proposal: One of the recurring themes of the 5.x to 6.x migration is that we don't have absolute control of the kibana index, but we expect it to behave in a certain way. We need to be more aggressive and lock down what we can - the kibana server. We know exactly what types are expected, and if we don't enforce that, there is a huge number of states the index can get into that can cause errors.

Thoughts on transformations:

This is a quick writeup, I want to think more about it still but thought I'd see if there's any feedback.

chrisronline commented 6 years ago

I did a little investigation/thinking about this concept and have some very basic code in a working state. We could use this as a platform for more discussion about how to do this.

https://github.com/elastic/kibana/compare/master...chrisronline:enhancement/fix_kibana_index

chrisdavies commented 6 years ago

I like @jbudz's idea of detecting invalid mappings and giving a clear error.

I've used a number of migration tools (in Ruby, Clojure, .NET, and JavaScript). And a long time ago, I wrote a SQL migration tool for .NET because I didn't like EF. I'd be happy to contribute to this effort.

My thought is that we'd do something similar to Rails db migrations:


// migrations/20180306161135-advanced-bar-chart-color-support.js or somewhere well-defined
export const key = "bar-chart"; // The type of data being migrated

// This gets called for each instance of "bar-chart" data that's in the Kibana index.
// If that would be too slow (dunno how big our index is), we can enforce that migrations
// are all done via painless. But I'd personally rather start with a full-fledged language like JS,
// and use painless only as an optimization technique. This is also nicely testable.
// (dissoc is assumed to be a small helper that removes the given keys, like lodash's omit.)
export function up(oldState) {
  return {
    ...dissoc(oldState, ["defaultColor"]),
    style: {
      background: oldState.defaultColor,
      borderColor: oldState.defaultColor,
    },
  };
}

// We may not need to bother with this. Most places I've worked, we only really wrote
// up migrations, and if they caused an issue, we wrote further up migrations to address those.
export function down(newState) {
  return {
    ...dissoc(newState, ["style"]),
    defaultColor: newState.style.background,
  };
}
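
One nice property of plain-function migrations like this is that they're trivially unit-testable. A hypothetical Jest test for the transform above (assuming dissoc behaves like lodash's omit) might look like:

// Hypothetical test, not part of any existing suite. It exercises the pure
// up() transform from the migration file above.
import { up } from './20180306161135-advanced-bar-chart-color-support';

describe('bar-chart color migration', () => {
  it('moves defaultColor into style', () => {
    const migrated = up({ defaultColor: 'red', title: 'My chart' });
    expect(migrated).toEqual({
      title: 'My chart',
      style: { background: 'red', borderColor: 'red' },
    });
  });
});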

chrisdavies commented 6 years ago

Hm. Thinking about it a bit more...

An example of one of these scenarios is the way the Kibana dashboard stores its plugin data. Right now, there are two records for every widget you see on the dashboard. There's the wrapper/container's record and there's the wrapped visualization's record. We are considering unifying these two records.

A pseudocode migration for this might look something like this:


// migrations/20180306161135-combine-panel-and-child-records.js
// conn is an interface to the Kibana elastic store
export async function up(conn) {
  return conn.each({type: "dashboard-panel"}, async (panel) => {
    const childState = await conn.find({id: panel.childId});
    await conn.upsert({
      ...panel,
      childState,
    });
    await conn.delete(childState);
  });
}

stacey-gammon commented 6 years ago

To clarify a bit for @chrisdavies - dashboard will need the ability to parse fields, then separate and/or combine into more or less fields, on the document. Essentially this issue: https://github.com/elastic/kibana/issues/14754

I don't think we currently have any instance of needing to combine documents, though it's an interesting thought. If we did ever have that need, how could it be worked into a migration process? If we ever flattened the dashboard data structure (put all visualizations into a single dashboard object to ease security issues with nested objects) we might need it! But I'm pretty sure we won't ever do that, and we are just going to address security issues assuming our nested object environment.

About the transactional part - we'll need to handle this manually ourselves, since ES doesn't support transactions. Copy the old kibana index and migrate into a new one. Maybe even leave the old one around just in case anything goes wrong, rather than deleting it right away upon success. We've had bugs in the migration process, and retaining that old index is pretty important to avoid data loss in those unexpected cases.

tylersmalley commented 6 years ago

I like the way you're thinking about this @chrisdavies.

One part I have been thinking about is how we handle this from a user's point of view. If we detect on start-up that a migration is required, probably due to an upgrade, we should force the user to a page explaining the migration. This gives the user an opportunity to backup/snapshot their Kibana index. The page would provide a button which would perform the migration, re-indexing into a new index and updating the alias. Currently, we don't use aliases for the Kibana index, so this would be new, but it would allow users to revert back (last resort). The Cloud team would need this migration to be exposed as an API call, since they need to avoid presenting this to the user when they upgrade the cluster.

Having this UI would help prevent issues where someone stands up a newer version of Kibana and points it to an existing Kibana index. This way, the user knows they will be performing a migration and would need to have the permissions to do so. If we automatically did this migration, the existing Kibana index would no longer be compatible with the newer schema.

I am not sure that we need a down function considering they would have a way to revert the entire index. The migration could first create a new index with the current mappings of all the plugins. You can see how we do this in the mappings mixin here, which is used to create the index template. In the migration script, we could use the scroll API to iterate over the objects, transform them, then bulk insert them. I believe the free-form nature would allow us to combine documents if needed down the road. If they are simple transformations, we could utilize the _reindex API and do it with Painless like we did for the 6.0 upgrade.
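
A rough sketch of that flow, assuming the legacy elasticsearch JS client is available as client and using hypothetical index/alias names (not an actual implementation):

// Create the new index, copy documents across with a free-form transform,
// then point the alias at the new index. The old index is kept for rollback.
async function migrateKibanaIndex(client, { sourceIndex, destIndex, alias, mappings, transformDoc }) {
  // 1. New index with the mappings currently registered by all plugins.
  await client.indices.create({ index: destIndex, body: { mappings } });

  // 2. Scroll over the existing objects, transform each one, and bulk insert.
  let response = await client.search({ index: sourceIndex, scroll: '1m', size: 100 });
  while (response.hits.hits.length > 0) {
    const body = [];
    for (const hit of response.hits.hits) {
      body.push({ index: { _index: destIndex, _type: hit._type, _id: hit._id } });
      body.push(transformDoc(hit._source));
    }
    await client.bulk({ body });
    response = await client.scroll({ scrollId: response._scroll_id, scroll: '1m' });
  }

  // 3. Repoint the alias; the old index stays around as a last-resort revert.
  await client.indices.updateAliases({
    body: { actions: [{ add: { index: destIndex, alias } }] },
  });
}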

I am thinking that there would only be a single migration possible, per minor release. This should greatly simplify things and the new index would be the kibana.index setting with the minor version appended (ex: .kibana-6.3)

chrisdavies commented 6 years ago

@tylersmalley Makes total sense.

I've only just been introduced to painless (and Elastic, for that matter), but it makes sense that we'd have to support it.

The only thing I'm not sure about is the single migration per minor release, though. That seems a little limiting, unless you mean a single migration per type + minor release? In which case, I think we could safely say that.

chrisdavies commented 6 years ago

After chatting with a few folks, here's the game plan so far:

Migration functions

Checking if migration is required

Migrating

Notes:

Why aren't we using reindex + painless?

Why aren't we storing a list of applied migrations, and then simply applying any migrations not in that list?

Why don't we delete the old index?

Why a checksum instead of just comparing the latest migration id in the index with the latest migration id in Kibana source?

trevan commented 6 years ago

Will plugin authors be able to hook into the migration system? Or do plugin authors need to maintain their own migrations?

epixa commented 6 years ago

I have so many thoughts about this but not enough time today to weigh in. For now, I'll just weigh in on what @trevan mentioned and say that it is critical that migrations be pluggable from day one.

chrisdavies commented 6 years ago

Yes. Plugin authors will be able to add their own migrations. I'm not sure of the exact details yet, but my current thought is that it's similar to the way web-frameworks typically do it.

A migration would be something like ./migrations/20180308144121-hello-world.js in the Kibana code-base. Plugins such as x-pack or 3rd party plugins would have their own ./migrations folder with migration files in there. This is assuming that plugins are run with full-trust. (Is that a valid assumption?) So, x-pack might have migrations like ./kibana-extra/x-pack-kibana/migrations/20180308144121-foo.js

If we took that approach, the question is how such migrations should be run:

I'm leaning towards option 2, as it's much simpler, but it can be argued either way.

If we go with option 2-- running each folder of migrations independently-- we can get away with not having to do a migration diff in order to determine what migration(s) need to be run.

Plugins are expected to be compatible with whatever version of Kibana they are hosted in. So, plugin migrations might reasonably be expected to assume that core Kibana docs are already in the current version's format. Plus, I suspect that plugins will only be migrating their own documents and not touching core Kibana documents. (Is that true?)

Anyway, if we go with option 2, there will be a checksum and latest migration id per plugin, and the check for "Do we need to run a migration?" would be comparison of the checksums.
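
For example (names are illustrative only, not a committed API), the per-plugin checksum could just be a hash over the ordered migration ids, compared against whatever checksum is stored in the index:

// A sha256 over the ordered migration ids acts as the "has anything changed?"
// fingerprint for a plugin.
import crypto from 'crypto';

export function migrationChecksum(migrationIds) {
  return crypto.createHash('sha256').update(migrationIds.join('\n')).digest('hex');
}

export function pluginNeedsMigration(plugin, storedChecksums) {
  const current = migrationChecksum(plugin.migrations.map(m => m.id));
  return storedChecksums[plugin.id] !== current;
}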

Edit: Another question that arises when you think about plugins is: how do we want to handle the scenario where all migrations succeed, except for some migrations for a certain plugin?

My thought is, we'd probably want Kibana to still run, but just have the failed plugin(s) disabled until their indexes are properly migrated. Thoughts?

trevan commented 6 years ago

@chrisdavies, if a visualization is considered "core Kibana documents" then visualization plugins will definitely migrate core Kibana documents.

You need to make sure that a plugin can migrate without Kibana changing its version. Say you have plugin A with version 1.0 that is compatible with Kibana 7.0. Plugin A has an update to version 1.1 which is still compatible with Kibana 7.0 but it requires a migration. Updating plugin A should trigger a migration even though Kibana isn't being upgraded.

chrisdavies commented 6 years ago

@trevan Right. It won't be tied to a Kibana version for exactly that reason. If we detect any unrun migrations, we'll require migrations to be run before Kibana can be considered properly ready to run. So, if someone drops a new migration into a plugin, Kibana will no longer be considered to be initialized.

That's the current thought. The idea being that plugins are as important as core. For example, if someone has a critical security plugin, and that plugin has not been properly migrated, Kibana should not assume it's OK to run.

I have a couple of questions for anyone interested in this:

Question 1: What strategy should be used to run migrations?

I'm leaning towards the last option, as it makes migrations more predictable for migration authors.

Question 2: What should we do about this scenario:

Should this be considered an error needing manual correction? Should we just run it and assume the best (I vote no, but am open to discussion)? Should we require migrations to have both an "up" and "down" transformation, in which case we could roll back 005-bar.js, then run 002-baz.js and 005-bar.js?

epixa commented 6 years ago

One thing I'm wondering, though is what strategy should be used to run migrations?

This will create a minor delay, but if we do this all through the new platform instead of relying on the old, then we have a topological sorting of plugin dependencies, which means we can guarantee that migrations are always run in the proper order based on the plugin dependency graph. I don't think either of the options you proposed are reliable enough to be used for something this important.

Should this be considered an error needing manual correction? Should we just run it, and assume the best?

We should never even attempt to apply migrations for a system if something seems out of whack with it. If we identified any issue like this, my expectation is that this would error with an appropriate message and Kibana would fail to start. Overwhelming validation is the story of the day here.

Should we require migrations to have both an "up" and "down" transformation

Up/down migrations aren't practical in this world because we can't enforce lossless migrations, so we may not have all the necessary data to revert back to an older state. This is OK in a traditional migration system because the idea is that an intentional revert in a production database is a nuclear option, so consequences are expected. We're talking about these migrations running all the time, so losing data is unacceptable. To us, rolling back a migration means rolling back Kibana to an older index.

What should we do about this scenario: ...

Let's not rely on file names at all for this. Instead, let's require that plugins define an explicit order of their various migrations directly in their plugin code. Sort of like an entry file to migrations for that plugin. This makes it easy to create unit tests to verify migration orders and stuff without having to fall back to testing the file system. Aside from plugin loading itself, we should really avoid loading arbitrary files from the file system where possible.

This has the added benefit of making migrations just another formal extension point that can be documented alongside any other future plugin contract documentation rather than a special case.

chrisdavies commented 6 years ago

Good points, everyone. Re @epixa, agreed. Further notes based on your comments:

Migration order

We'll rely on the new platform. It's not too much risk, as the new platform is scheduled for merge fairly soon, and migrations are a good few months off, even by optimistic estimates. This solves the migration-order problem.

File system

I think you're right about this. We won't do a file-system-based approach, but here are some points in favor of a file-based migration system, which our system needs to address:

A file-system based migration system provides:

You can fairly easily unit test your migrations if done with a file-system. But a big downside to the file-system approach is that it would disallow transpilation, since it's loaded/run dynamically in production. I think that is a deal-breaker.

We can expose migrations programmatically (e.g. as part of a plugin's interface). We'll use something similar for core Kibana. My thought is that it would be essentially an ordered array of migrations, each of which is a hash/object with three keys/props:

This means we need to detect duplicate IDs and error on that, as we can't rely on the filesystem to do this for us. We should also strongly encourage a best-practices convention for authoring migrations. I think we could do this via a yarn command: yarn migration:new {name} [{plugin-name}], which would create a new migration file following a well-defined convention. The migration author then fills out the file, possibly converting it to their preferred language, and imports/exposes it in their plugin as mentioned above. This gives us (almost) the best of both worlds, and greatly reduces the odds that a migration makes it to production out of order.
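
Purely as an illustration (the exact keys/props above aren't final, so these names are hypothetical), a plugin's programmatically exposed migration list and the duplicate-id check could look like:

// Hypothetical shape of a plugin's exposed migrations: an ordered array of
// objects, each with an id, the saved object type it applies to, and a transform.
export const migrations = [
  {
    id: '20180308144121_hello_world',
    type: 'dashboard',
    transform: (source) => ({ ...source }),
  },
];

// The filesystem no longer guarantees uniqueness, so duplicate ids are an error.
export function assertUniqueIds(migrations) {
  const seen = new Set();
  for (const { id } of migrations) {
    if (seen.has(id)) {
      throw new Error(`Duplicate migration id: ${id}`);
    }
    seen.add(id);
  }
}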

Error cases

We'll error if:

chrisdavies commented 6 years ago

How do we want to handle the scenario where a plugin is added to an existing Kibana installation?

The latter is preferable for many reasons, but might not be realistic.

Can we dictate that plugin migrations should only apply to docs created by those plugins?

If not, we need to run migrations as soon as a plugin is added, and each migration needs to be intelligent about detecting the shape of the data it is transforming, as that data may be in any number of shapes depending on the version of Kibana / other plugins in the system at the time the new plugin was added.

epixa commented 6 years ago

How do we want to handle the scenario where a plugin is added to an existing Kibana installation?

I think this one is at least partly dependent on how tightly coupled this migration stuff is to the initial mapping creation stuff. Migrations will be able to change the values of stored data, but they can also modify the mappings for their objects, which is something that applies to all future objects as well. If we treat all of this as the same problem, then the initial mappings are essentially just migration zero, and in order to have the mappings properly set up for today, you must run all of the available migrations.

If you can figure out a nice way to handle this that doesn't require running migrations unnecessarily, that would certainly be ideal.

Can we dictate that plugin migrations should only apply to docs created by those plugins?

I think yes, we should lock this down and not allow any plugin to directly mutate the objects of another plugin. Plugins shouldn't have to worry about their own objects changing without their control. If they want to make the structure of their own objects pluggable, then they can expose a custom way for other plugins to modify them.

trevan commented 6 years ago

Can we dictate that plugin migrations should only apply to docs created by those plugins?

I think yes, we should lock this down and not allow any plugin to directly mutate the objects of another plugin. Plugins shouldn't have to worry about their own objects changing without their control. If they want to make the structure of their own objects pluggable, then they can expose a custom way for other plugins to modify them.

One example of objects that might need to be directly mutated by another plugin is visualizations. A plugin can add its own visualization as well as its own agg type. In the first case, the plugin would be migrating a visualization object that it completely owns. In the second case, a visualization from a completely different plugin might use the agg type from the first plugin, and that agg type configuration needs to be migrated. We do that where we've added a custom agg type (country code) that is used by the region map visualization.

chrisdavies commented 6 years ago

We do that where we've added a custom agg type (country code) that is used by the region map visualization.

It seems that in this scenario, plugin2 is simply a consumer of plugin1's data. It doesn't seem as if plugin2 should be mutating plugin1's data, right? I suspect nothing but dragons lie down that road.

chrisdavies commented 6 years ago

@epixa

I think this one is at least partly dependent on how tightly coupled this migration stuff is to the initial mapping creation stuff.

My initial thought was that seeding / initializing data should be considered separately from migrations, and I'd only focus on migrations here. But, I don't think that's possible, due to this scenario:

pluginA is being upgraded to 2.0, and 2.0 needs a brand new document where it stores certain settings, so it needs to seed that. And that new seed data shouldn't pass through any previous migrations, but it should pass through future migrations. In other words, it's possible (probable?) that a system will require a combination of seeding and migrating over time. And these must necessarily be consistently ordered or else we'll have an unpredictable outcome.

So, yeah. I think it would be beneficial to have this one system handle both seeding of new data and migration of existing data.

If you can figure out a nice way to handle this that doesn't require running migrations unnecessarily, that would certainly be ideal.

I think we might be able to initialize a plugin without requiring a full migration of the Kibana index. Essentially, if we have a brand new plugin and/or brand new system, we may be able to have new/seed documents pass through the migration pipeline and directly into an existing index.

I don't love modifying an existing index, but in this case, it should be relatively harmless, as the old pre-plugin system should be unaffected by any docs created by the new plugin.

trevan commented 6 years ago

It seems that in this scenario, plugin2 is simply a consumer of plugin1's data. It doesn't seem as if plugin2 should be mutating plugin1's data, right? I suspect nothing but dragons lie down that road.

I'm not sure if we are saying the same thing or different things. As an example, the plugin A has a visualization "region_map". It owns that visualization and so it should probably handle all of the migration. But plugin B adds a custom agg type which is available in the UI as a possible option in "region_map". If the user creates a region_map visualization using the custom agg type from plugin B, then the data stored in elasticsearch will be an entry that is owned by plugin A with a sub part of it (the agg type configuration) owned by plugin B. Plugin B might need to edit the visualization object that plugin A owns to migrate the agg type configuration.

chrisdavies commented 6 years ago

@trevan Ah. Thanks for the clarification. That is really complicated.

In that scenario, do we have a consistent, systematic way for plugin B to detect that plugin A's document has data that it owns?

trevan commented 6 years ago

In that scenario, do we have a consistent, systematic way for plugin B to detect that plugin A's document has data that it owns?

I know for this particular situation, plugin B can load all of the visualizations and check if each one has its agg type. I'm not sure there are many existing situations like this but I doubt there is a "consistent, systematic way". I believe you could make it consistent and systematic, though. It could be something along the lines of what @epixa said. Plugin A would expose a mechanism for plugin B to migrate the agg type data that it owns.

I just wanted to make sure that this was taken into account as it is designed.

epixa commented 6 years ago

If we make the migration system reusable, then we can pass it to each plugin to allow it to manage its own pluggable data. Sort of like a nested set of loops: the main migration system kicks off a "loop through all objects and defer to each type's owner for migration behavior", and then, inside the migration for each object of a given type, those plugins can choose to iterate further.

The visualizations plugin invokes a migration function for each object of type "visualization". In that migration function, when it detects an agg_type that it doesn't recognize, it loops through all agg type registrations from other plugins and invokes the custom migration code only on the agg_type data that was provided by the third party plugin.

A simplified completely non-functional example of this flow (do not take as a suggested implementation):

// in my plugin init
visualizations.registerAggType({
  type: 'my_agg_type',
  migrate(aggType) {
    // do stuff with aggType
  }
});

// in visualizations plugin
migrate(obj) {
  const aggType = registeredAggTypes.find(aggtype => aggtype.ownsAggType(obj));
  aggType.migrate(obj.agg_type);
}

// in global migration system
objects.forEach(obj => {
  const plugin = plugins.find(plugin => plugin.ownsType(obj));
  plugin.migrate(obj);
});

chrisdavies commented 6 years ago

@trevan Thinking about this a bit more, I'm not sure that this is a scenario that a migration system would need to directly address. Here's why:

In this scenario, if PluginB has created a breaking change to its public interface (e.g. in our example, it changes its aggregation data shape in some breaking way), it is up to consumers of PluginB to update their own code to conform to the new PluginB interface.

So, in the scenario we mentioned, if someone is upgrading their system to have PluginB 2.0, they'd also need to update any consumers of PluginB (e.g. PluginA) to work with the new version.

Obviously, in an ideal world, plugins should try to never make breaking changes to their public API, though this is not always possible.

chrisdavies commented 6 years ago

I've started working on this, and was planning on allowing for a "seed" migration type. But I now realize @epixa was referring to Elastic mappings in his previous comment.

Do you think we also need to support seeding data, or can we treat migrations as either an Elastic mapping update or a pure transform of v1 state -> v2 state?

epixa commented 6 years ago

I think we should have a seed type as well. We already hack this together in Kibana through a non-standard process with our advanced settings, which get added as a config document. It probably won't be as common as the other types.
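
Purely illustrative (none of these property names are settled), a seed might sit alongside ordinary transforms like this:

// A hypothetical "seed" migration: instead of transforming an existing document,
// it produces a brand new one (similar in spirit to how the advanced settings
// config document gets created today).
export const seedConfig = {
  id: '20180401120000_seed_config',
  type: 'seed',
  seed: () => ({
    type: 'config',
    config: {},
  }),
};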

chrisdavies commented 6 years ago

Just wanted to update this thread with the latest info: I've got a working prototype, and am hoping to get it into PR-ready shape this week.

Some scenarios still need to be worked out before merge:

epixa commented 6 years ago

What about importing of stored objects / dashboards / etc? These might be old files, containing old document-versions, right? If so, imported data might also need to be sent through a migration process prior to saving it in the index.

If we could extend the migration process to these documents, that would be awesome. Importing from older versions is an important workflow that we want to continue to support.

Kibana-index specific: If we're dealing with a brand-new Kibana installation, there will be no Kibana index, but we may want to run migrations, anyway, if there are any seed documents. In this scenario, is it OK to have migrations run as part of the Kibana bootup process (especially useful when running in dev-mode)?

When were you planning to run the migrations otherwise? I figured they would always run on startup.

chrisdavies commented 6 years ago

Regarding imports, I agree. I'll talk to the management folks and update this thread w/ the results of that discussion.

W/ regard to running migrations, I thought we had agreed to not run them automatically, but I can't seem to find that discussion / agreement anywhere. I would definitely prefer to automatically run them at startup. Is there a good reason not to run them automatically? Would this adversely affect other areas (such as cloud, maybe)?

epixa commented 6 years ago

@chrisdavies The advantages of running them automatically on startup are so numerous, I think we should go down that route and only not do it if we have to rule it out for some currently unknown reason.

One consideration though is that automatic migrations must result in a new kibana index. If we do something royally dumb that breaks Kibana post-migration, users have to be able to quickly downgrade their Kibana install to get Kibana running again.

chrisdavies commented 6 years ago

This seems right to me.

Migrations always create a new index, so we should be fairly safe to auto-migrate, I think.

chrisdavies commented 6 years ago

Talked to @chrisronline about the data import scenario. We think the migration system can fairly easily handle the happy path, but there is one hairy edge-case that needs to be worked out:

We can detect this scenario, as we'll know that the index was migrated to PluginA v3 at one point, and that PluginA is now disabled. But the question is, what should we do in this case?

chrisdavies commented 6 years ago

Thinking about this, we could take a different approach to migrations:

With this strategy, the previous problem scenario becomes this:

Changes to the saved object client

Something that @chrisronline @kobelb might want to weigh in on:

When it comes to data-imports, we have a number of choices, two of which I list here:

1. Run transforms on all docs before saving them

This is basically a validation step, which is generally advisable at an API boundary, anyway.

2. Have an optional checksum, which if passed, will transform out-of-date docs before saving them

This adds complexity to both the saved object client API and to the import/export feature, but is more performant than performing a transform before all saves.
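
A sketch of what option 2 could look like at the saved object client boundary (all helper names here are hypothetical placeholders, not the actual client API change):

// If the caller supplies a checksum that doesn't match the current one, the docs
// are assumed to be out of date and get run through the pending transforms before
// being saved. currentMigrationChecksum/runPendingTransforms are placeholders.
async function importDocs(savedObjectsClient, docs, { checksum } = {}) {
  const needsTransform = checksum !== currentMigrationChecksum();

  for (const doc of docs) {
    const attributes = needsTransform ? runPendingTransforms(doc) : doc.attributes;
    await savedObjectsClient.create(doc.type, attributes, { id: doc.id, overwrite: true });
  }
}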

chrisdavies commented 6 years ago

One last point on the enable / disable plugin scenario: Right now, if you disable or enable a plugin which has migrations defined, the index will be migrated, since the system's migration status won't match the index's migration status... Is this behavior OK? It seems sub-optimal, to say the least, so I can spend time thinking of a workaround, if we think it's worthwhile.

tylersmalley commented 6 years ago

@chrisdavies one of the reasons for having migrations is to change mapping types to change how searching works (ex: changing keyword to text). If we were only able to transform documents on save, and an index could have mixed versions, how would we accomplish this? How would we then manage the mapping if we had mixed document versions?

chrisdavies commented 6 years ago

@epixa @archanid, I discussed this w/ @tylersmalley and @kobelb, and we arrived at a possible solution:

In the following example, two plugins own different parts of this document:

// A hypothetical document
{
  _source: {
    type: 'dashboard',
    migrationState: {
      plugins: [{
        id: 'dashboard',
        migrations: 2,
      }, {
        id: 'fanci-tags',
        migrations: 3,
      }],
    },
    dashboard: {
      name: 'whatevz',
    },
    tags: ['a', 'b'],
  },
}

If this document is imported into a system that has a version of fanci-tags that has 5 migrations, the last 2 tag migrations will be applied to it prior to persisting.

If it is imported into a system that has the fanci-tags plugin disabled, it will store its data in such a way that it can be recovered if the plugin is ever re-enabled, possibly something like this:

// A hypothetical document
{
  _source: {
    type: 'dashboard',
    migrationState: {
      plugins: [{
        id: 'dashboard',
        migrations: 2,
      }, {
        id: 'fanci-tags',
        migrations: 3,
        state: "[\"a\",\"b\"]",
      }],
    },
    dashboard: {
      name: 'whatevz',
    },
  },
}

Not saying those formats are final, just some pseudo code to represent the idea. This still feels a bit complex, so we're going to give it some time to see if a better solution presents itself.
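
To illustrate how that per-document migrationState could be consumed at import time (shapes as in the pseudo-docs above, and just as non-final):

// For each plugin recorded on the document, apply only the migrations beyond the
// count the document already has. E.g. fanci-tags: the doc says 3, the system has
// 5 registered, so migrations 4 and 5 run before the doc is persisted.
// (A real implementation would also update migrationState to the new counts.)
function migrateImportedDoc(doc, plugins) {
  const applied = new Map(
    doc._source.migrationState.plugins.map(p => [p.id, p.migrations])
  );

  let source = doc._source;
  for (const plugin of plugins) {
    const alreadyApplied = applied.get(plugin.id) || 0;
    for (const migration of plugin.migrations.slice(alreadyApplied)) {
      source = migration.transform(source);
    }
  }
  return { ...doc, _source: source };
}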

tylersmalley commented 6 years ago

@epixa currently, a document is defined by a type. The change @chrisdavies described would remove that notion and allow a type, which is just a namespaced piece of data, to contain multiple types per document. I feel this would simplify the migrations and mappings when plugins are extending saved objects, allowing you to easily identify who owns what piece of an object. Thoughts?

chrisdavies commented 6 years ago

Keeping everything straight via this comment thread is getting a bit challenging. I'm going to keep this markdown file up-to-date as a sort of mini-specification:

https://gist.github.com/chrisdavies/8a15622a7b482821711aa24ed40d2efb

chrisdavies commented 6 years ago

Basically, what's new in that gist is:

trevan commented 6 years ago

Just to make sure my understanding of the verbiage is clear, when you say:

In order to properly handle this scenario, we'll lock down what properties a plugin owns on a document

  • Plugins will only be allowed to migrate properties that they own

Does that allow the case where a plugin lets other plugins modify properties that it owns? Take my case above, where the Kibana plugin owns the agg types but another plugin can create new agg types and might need to update them. In this case, the property is "agg type" and it is owned by Kibana, not by the other plugin. But the other plugin needs to migrate the "agg type" if it owns the specific type.

epixa commented 6 years ago

I haven't yet been able to think about all the new details in here, so apologies for that. I did want to touch on one particular thing that @tylersmalley mentioned:

The change @chrisdavies described would remove that notion and allow a type, which is just a namespaced piece of data, to contain multiple types per document.

I don't see how the notion of type goes away. We still need to associate saved objects with other concrete saved objects (like dashboard -> visualization), which we can only do if there is a definitive type associated with them. Even the high level example shown in the comment prior to that one still included a top level type attribute.

Another thing we're currently relying on type for, for better or for worse, is as an implicit way to establish ownership over an object. I think this notion is important, where no matter what sort of extensions are applied to any given saved object, there's only one definitive plugin that owns that object. So if we ever needed to do something like "uninstall this plugin and nuke everything associated with it", we can do it even when other plugins have extended it.

chrisdavies commented 6 years ago

Yeah. My first stab at this is more raw than we had discussed. So, the notion of types isn't affected. It's possibly too raw, but my thought was that migrations might be used to migrate saved objects from one format to another, and so, maybe they should live at a different abstraction level than saved objects.

After going back and forth, I wrote the current implementation such that transforms receive two arguments: (source, doc), where source is the raw _source value, and doc is the raw document read from Elastic, including the _id property.

Transforms can then return either a new source shape, or an object with _id and _source, the idea being that seeds might want to specify a well-known id and transforms might want to (someday) transform the way ids are being stored... This might be putting too much power into the hands of migration authors, though, and I'm not 100% satisfied with the implementation, either.

So, the PR is as much a "let's start talking details" as it is a final implementation.
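
For clarity, a minimal sketch of that transform contract as currently implemented (not a settled API):

// A transform receives (source, doc) and may return either a new _source shape
// or a full { _id, _source } object; this normalizes both cases back to a doc.
function applyTransform(transform, doc) {
  const result = transform(doc._source, doc);
  return result && result._source
    ? { ...doc, _id: result._id, _source: result._source }
    : { ...doc, _source: result };
}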

tylersmalley commented 6 years ago

@epixa I want to touch on something you mentioned in this thread which I missed regarding automatically running migrations.

My concern with the automatic migrations is they would render any existing Kibana instances pointed to the same index as inoperable. Someone bringing up a new version of Kibana would, without knowing, affect any existing instances. I think we should either make this a command which should be explicitly run, or something that can be run in the Kibana UI through an API call.

kobelb commented 6 years ago

@tylersmalley running multiple versions of Kibana against the same Elasticsearch index is something we've historically had poor support for, and we will likely have to get a lot more strict about it in the near future. We aren't consistently writing to the .kibana index in a backwards compatible manner these days, and when multiple versions of Kibana are running at the same time, we can get some rather inconsistent behavior.

epixa commented 6 years ago

Running two different versions of Kibana behind a load balancer at one time isn't something we can realistically support right now. At the very least, the way we handle config migration can cause data loss in that scenario. Non-sticky sessions (which is what we recommend) can result in random client errors due to mismatched versions. Security privileges will behave strangely.

We may want to support zero downtime rolling upgrades for Kibana one day, but realistically that’s just not safe right now.

chrisdavies commented 6 years ago

Later today, I'm going to move forward with getting the .kibana-specific migration stuff together. My plan right now is to automatically run the migrations when Kibana starts. The reason is that if we run them automatically, devs won't have to manually run migrations every time they launch Kibana. And our existing tests should just work, as migrations will run at start-up, rather than having to be explicitly run.

It's fairly easy to turn off auto-running and just make migrations an explicit script, though, if we want to do that instead.

chrisdavies commented 6 years ago

Bah. I've come around to share the opinion of @tylersmalley that we should not automatically run migrations.

The current approach to migrations (and to import/export) has scattered migration logic into places where it really doesn't feel like it belongs; although it's approaching "done" status, it feels messy. (This is particularly true of the saved object client changes.)

This feels cleaner than the current approach, though this will take more work, and involve a new UI.

Secondly...

Import/export is currently planned via a small change to the saved object client. It's a bit of a hack. It would be cleaner, I think, to make import/export part of the aforementioned migrations API instead, and modify the import/export UI to call this.

The upside is that migration logic resides in only two places: the migration engine itself, and the Kibana index-specific API.

The downside is: it's more work, more code, and it means that object-level security would need to be enforced in the saved-object client and in the migrations API.

chrisdavies commented 6 years ago

@tylersmalley @epixa

We decided to run migrations when Kibana starts.

If you have 2+ Kibana servers, one will run migrations when it starts. The other(s) will not run migrations, as migrations will already be running (in one of the other servers). So, what should these secondary servers do?

chrisdavies commented 6 years ago

What about this scenario:

I think we need to move mappings over from original indices, but trump them w/ mappings defined by current plugins.
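
Roughly, the mapping merge could look like this (the plugin-mappings argument is a placeholder, and it assumes the single doc type used by the 6.x Kibana index):

// Start from the mappings found on the original index, but let mappings defined
// by the currently installed plugins trump the copied ones.
async function buildMappings(client, originalIndex, getCurrentPluginMappings) {
  const response = await client.indices.getMapping({ index: originalIndex });
  const original = response[originalIndex].mappings.doc.properties;

  return {
    doc: {
      properties: {
        ...original,
        ...getCurrentPluginMappings(), // current plugin definitions win
      },
    },
  };
}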

Is there a better alternative?