zmoog closed this 1 year ago
First, we need some credentials to access the cluster.
If you're locked out it's probably too late, but while you still have access, it's a good idea to create superuser-level credentials and store them securely in your password manager.
You need the cluster endpoint and the credentials. Export them into two environment variables:
export ELASTICSEARCH_ENDPOINT="https://whatever.bla.bla.io"
export ELASTICSEARCH_USERNAME_AND_PASSWORD="username:password"
Given your credentials, let's test that they work.
$ curl -i -u ${ELASTICSEARCH_USERNAME_AND_PASSWORD} ${ELASTICSEARCH_ENDPOINT}
HTTP/2 200
content-type: application/json
x-cloud-request-id: 123
x-elastic-product: Elasticsearch
x-found-handling-cluster: 0426c3b0-49fe-472b-9c5b-02f7ed82f2d5
x-found-handling-instance: instance-0000000004
content-length: 564
date: Wed, 29 Mar 2023 10:15:29 GMT
{
"name" : "instance-0000000004",
"cluster_name" : "0426c3b0-49fe-472b-9c5b-02f7ed82f2d5",
"cluster_uuid" : "fc8ce9a5-2739-4e63-812e-fe408d931f1a",
"version" : {
"number" : "8.5.2",
"build_flavor" : "default",
"build_type" : "docker",
"build_hash" : "fc8ce9a5-2739-4e63-812e-fe408d931f1a",
"build_date" : "2022-11-17T18:56:17.538630285Z",
"build_snapshot" : false,
"lucene_version" : "9.4.1",
"minimum_wire_compatibility_version" : "7.17.0",
"minimum_index_compatibility_version" : "7.0.0"
},
"tagline" : "You Know, for Search"
}
If the HTTP status is 200, then we're good to go.
Okay, now it's time to list the existing indices and search for good candidates:
$ curl -i -u ${ELASTICSEARCH_USERNAME_AND_PASSWORD} ${ELASTICSEARCH_ENDPOINT}/_cat/indices
HTTP/2 200
content-type: text/plain; charset=UTF-8
x-cloud-request-id: EMGQbk3cRxe7XJc0N1SoGg
x-elastic-product: Elasticsearch
x-found-handling-cluster: 752891f0-1510-4268-87dd-ac65873df947
x-found-handling-instance: instance-0000000004
content-length: 35644
date: Wed, 29 Mar 2023 10:20:26 GMT
<filtering out real indices and leaving only my testing ones>
yellow open .ds-zmoog-testing-2023.03.01-000002 kS8xF9vMSCq8tMfdi8DuiA 1 1 2 0 8.3kb 8.3kb
green open restored-.ds-zmoog-testing-2023.03.01-000001 xicFynDzTZeXpjMQyQvpeQ 1 0 1 0 4.3kb 4.3kb
Filter the list and look for indices we can drop:
curl -u ${ELASTICSEARCH_USERNAME_AND_PASSWORD} ${ELASTICSEARCH_ENDPOINT}/_cat/indices | grep gb | sort
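Instead of grepping and sorting client-side, the `_cat/indices` API can also sort server-side with its `s` parameter (for example `s=store.size:desc`). A sketch of both approaches: the live curl call is commented out because it needs a reachable cluster, and the offline part sorts a canned sample with `sort -h`, which understands human-readable size suffixes.

```shell
# Server-side sort by on-disk size, largest first (commented out:
# it requires a reachable cluster):
# curl -u ${ELASTICSEARCH_USERNAME_AND_PASSWORD} \
#   "${ELASTICSEARCH_ENDPOINT}/_cat/indices?v&s=store.size:desc"

# Offline equivalent on a canned _cat/indices sample: column 9 is
# store.size, and `sort -h -r` orders human-readable sizes descending.
sample='yellow open .ds-zmoog-testing-2023.03.01-000002 kS8xF9vMSCq8tMfdi8DuiA 1 1 2 0 8.3kb 8.3kb
green open restored-.ds-zmoog-testing-2023.03.01-000001 xicFynDzTZeXpjMQyQvpeQ 1 0 1 0 4.3kb 4.3kb'

# Name (column 3) of the biggest index in the sample:
biggest=$(printf '%s\n' "$sample" | sort -k9 -h -r | head -n1 | awk '{print $3}')
echo "$biggest"
```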
You may want to delete a single index or the whole data stream.
Do not try to delete the write index of a data stream.
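To find out which backing index is the write index, you can ask for the data stream definition with `GET _data_stream/<name>`: the last entry in its `indices` array is the current write index. A sketch, with the live call commented out and the parsing demonstrated on a canned, trimmed-down response (the data stream name and JSON below are made up for illustration):

```shell
# Live call (commented out; requires a reachable cluster):
# curl -u ${ELASTICSEARCH_USERNAME_AND_PASSWORD} \
#   "${ELASTICSEARCH_ENDPOINT}/_data_stream/data-stream-name"

# Canned, trimmed-down response for illustration:
response='{"data_streams":[{"name":"zmoog-testing","indices":[
  {"index_name":".ds-zmoog-testing-2023.03.01-000001"},
  {"index_name":".ds-zmoog-testing-2023.03.01-000002"}]}]}'

# The last entry in "indices" is the write index; pick it out with python3
# (jq would work just as well):
write_index=$(printf '%s' "$response" | python3 -c '
import json, sys
ds = json.load(sys.stdin)["data_streams"][0]
print(ds["indices"][-1]["index_name"])')
echo "$write_index"
```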
NOTE: the following HTTP requests are UNTESTED, still working on this commit.
If you want to delete the whole data stream, you can send:
curl -X DELETE -i -u ${ELASTICSEARCH_USERNAME_AND_PASSWORD} ${ELASTICSEARCH_ENDPOINT}/_data_stream/data-stream-name
If you want to delete a single index, you can run:
curl -X DELETE -i -u ${ELASTICSEARCH_USERNAME_AND_PASSWORD} ${ELASTICSEARCH_ENDPOINT}/index-name
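After deleting, it's worth checking how much disk each node has left: `_cat/allocation?v` reports `disk.used`, `disk.avail`, and `disk.percent` per node. Another sketch with the live call commented out and a canned sample parsed offline (the numbers below are made up):

```shell
# Live call (commented out; requires a reachable cluster):
# curl -u ${ELASTICSEARCH_USERNAME_AND_PASSWORD} \
#   "${ELASTICSEARCH_ENDPOINT}/_cat/allocation?v"

# Canned sample of _cat/allocation?v output for illustration:
alloc='shards disk.indices disk.used disk.avail disk.total disk.percent host ip node
42 52.1gb 55.3gb 4.7gb 60gb 92 10.0.0.4 10.0.0.4 instance-0000000004'

# Pull out disk.percent for each node (column 6 of the data rows):
pct=$(printf '%s\n' "$alloc" | awk 'NR > 1 {print $6}')
echo "disk.percent: $pct"
```

If a node crossed the flood-stage watermark (95% by default), Elasticsearch marks its indices read-only via `index.blocks.read_only_allow_delete`; deletes are still allowed, which is why the approach above can still free up space.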
You have an Elasticsearch cluster on Elastic Cloud, and you have started ingesting so much data that your cluster is almost out of disk space. In this situation, Kibana can stop working and you can't free up space.
What are our options?