elastic / elasticsearch-migration

This plugin will help you to check whether you can upgrade directly to the next major version of Elasticsearch, or whether you need to make changes to your data and cluster before doing so.

Warn users when reindexing .watches index using reindex helper #79

Closed ppf2 closed 7 years ago

ppf2 commented 7 years ago

Similar to https://github.com/elastic/elasticsearch-migration/issues/74, but applies to the Watcher use case.

Once .watches is reindexed using the reindex helper, Watcher loses track of all its watches because it doesn't handle .watches as an index alias (as opposed to a concrete index). In this case the problem applies to both Watcher 2.x and 5.0, and I don't think there's a way to configure Watcher to use a different index name, which means the migration reindex helper is not a good fit for reindexing the .watches index. One workaround would be to reindex .watches to .watches-something, delete .watches, and then reindex from .watches-something back to .watches, as sketched below.
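A rough outline of that round trip (a Python sketch using the requests library; the .watches-tmp name is a placeholder, and it assumes a cluster where the _reindex API is available):

```python
# Hypothetical sketch of the .watches round-trip workaround described above.
# The ".watches-tmp" index name is illustrative, not part of the plugin.
import requests

ES = "http://localhost:9200"

def reindex(src, dest):
    # Copy all documents from src to dest using the reindex API.
    body = {"source": {"index": src}, "dest": {"index": dest}}
    r = requests.post(f"{ES}/_reindex", json=body)
    r.raise_for_status()
    return r.json()

# 1. Copy the watches out of the way.
reindex(".watches", ".watches-tmp")

# 2. Delete the original index. As discussed later in this thread, this may
#    fail unless Watcher is disabled (or a special flag is supplied).
requests.delete(f"{ES}/.watches").raise_for_status()

# 3. Copy the watches back so they live under the concrete .watches name
#    that Watcher expects, rather than behind an alias.
reindex(".watches-tmp", ".watches")

# 4. Clean up the temporary index.
requests.delete(f"{ES}/.watches-tmp").raise_for_status()
```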

clintongormley commented 7 years ago

Reindexing the .watches index fails because the index can't be deleted unless you disable Watcher first. On top of that, as @ppf2 says, the aliased version of the index won't work regardless.

While we can copy all the watches to a new index, they need to be recreated in the .watches index using the Watcher API. Possibly we could provide a script to do this? Or link to a page with instructions in Python, Perl, etc. for how to do it.
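Such a script could look roughly like this (a hedged Python sketch using requests against a 5.x X-Pack cluster; the .watches-backup source index is a placeholder, and the watch _source may need metadata fields stripped before the PUT is accepted):

```python
# Rough sketch of the "recreate via the Watcher API" idea, not a finished tool.
import requests

ES = "http://localhost:9200"

def fetch_watches(source_index=".watches-backup"):
    # Pull the raw watch documents out of the copied index.
    r = requests.get(f"{ES}/{source_index}/_search", params={"size": 1000})
    r.raise_for_status()
    return r.json()["hits"]["hits"]

def recreate_watches(hits):
    # Re-register each watch through the Watcher API so Watcher owns the
    # resulting .watches index, rather than relying on a plain reindexed copy.
    # Note: status/metadata fields in _source may need to be removed first.
    for hit in hits:
        watch_id = hit["_id"]
        body = hit["_source"]
        r = requests.put(f"{ES}/_xpack/watcher/watch/{watch_id}", json=body)
        r.raise_for_status()

recreate_watches(fetch_watches())
```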

Another option is:

None of these solutions sounds great.

thoughts @eskibars @spinscale @ppf2 ?

ppf2 commented 7 years ago

How about the following, which doesn't involve modifying files on disk? (The .watches index can actually be deleted, but a special flag needs to be added.)

eskibars commented 7 years ago

Couldn't we do what @ppf2 mentioned for all the dot indices that the Elastic Stack creates/maintains? That would solve some Kibana situations as well.

eskibars commented 7 years ago

also /cc @skearns64

skearns64 commented 7 years ago

@spinscale - any ideas here?

Short of other options, I would imagine that we could write a simple script to dump the contents of the .watches index into a format we could then re-add via the Watcher API?

We do need to settle on an answer today, and see if we can set ourselves up for an easier path for users migrating to 6.0.

spinscale commented 7 years ago

@clintongormley if Watcher is stopped, you reindex into the .watches index, and then start it again, Watcher will pick up the watches as needed without requiring you to recreate everything using the Watcher API.

So while this does not seem to work with aliases, stopping Watcher, ensuring that the .watches index gets created, and then starting Watcher again should be sufficient (see the sketch below).
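A minimal sketch of that stop/reindex/start sequence, assuming the 2.x-style /_watcher/_stop and /_watcher/_start endpoints and an available _reindex API; the .watches-old source index name is a placeholder:

```python
# Hypothetical outline of: stop Watcher -> reindex into .watches -> start Watcher.
import requests

ES = "http://localhost:9200"

# 1. Stop Watcher so it releases its hold on the .watches index.
requests.put(f"{ES}/_watcher/_stop").raise_for_status()

# 2. Reindex the copied watches back into a concrete .watches index.
body = {"source": {"index": ".watches-old"}, "dest": {"index": ".watches"}}
requests.post(f"{ES}/_reindex", json=body).raise_for_status()

# 3. Start Watcher again; it should pick up the watches from .watches.
requests.put(f"{ES}/_watcher/_start").raise_for_status()
```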

clintongormley commented 7 years ago

Just chatted to @spinscale on Zoom. Alex is going to see how easy it would be to add alias support in 5.0, which would make the process:

After upgrading to 5.x, any bad watches (e.g. watches that still use filters) will throw an exception on their first execution, the same as would happen for any bad watches created in a 2.x .watches index.

clintongormley commented 7 years ago

The previous plan doesn't work. Stopping watcher doesn't remove the delete protection from the .watches index. Instead, I've added the following popup warning:

Warning: You will need to reindex the .watches index in order to upgrade to Elasticsearch 5.x. However, once you have done so, you will be unable to use Watcher in your current cluster.

The .watches index cannot be reindexed correctly unless Watcher is disabled, which you can do by adding the following to your elasticsearch.yml files and restarting your cluster:

watcher.enabled: false

Do you want to continue?

If the user continues with watcher enabled, then they will see the following error:

Failed to DELETE http://localhost:9200/.watches This endpoint is not supported for DELETE on .watches index.

At this stage they can click Reset and the new index will be deleted and everything continues as before.

I think this is the best we can do at this stage.

clintongormley commented 7 years ago

Closed by 084ce87334b7be20d9cbc95919cfeeaab63bd784