Open patpatpat123 opened 1 year ago
Spring Data Elasticsearch does not support the creation of data streams. Using a name pattern like <prefix>-* to indicate that a data stream should be created is not feasible either, as such names are valid for normal Elasticsearch indices as well.
To add support for data streams, we'd need additional parameters in the @Document
annotation, and beyond that, the bulk/save methods would probably need to be extended to handle the data stream case.
Thank you @sothawo for the clear explanation.
Agreed, some sort of @Document(indexName = "myindex", datastream = true)
would be really nice.
I hope this enhancement request will see the light one day.
Good day
Update, two months later:
It is confirmed that Spring Data Elasticsearch cannot create an index of type data stream.
I did another test: even if the data stream is created beforehand, i.e. before any Spring Data Elasticsearch interaction (via the Index Management web UI, via curl, etc.), Spring Data Elasticsearch still cannot write into the existing data stream:
reactor.core.publisher.Operators : Operator called default onErrorDropped
reactor.core.Exceptions$ErrorCallbackNotImplemented: org.springframework.data.elasticsearch.BulkFailureException: Bulk operation has failures. Use ElasticsearchException.getFailedDocuments() for detailed messages [{null=only write ops with an op_type of create are allowed in data streams}]
Caused by: org.springframework.data.elasticsearch.BulkFailureException: Bulk operation has failures. Use ElasticsearchException.getFailedDocuments() for detailed messages [{null=only write ops with an op_type of create are allowed in data streams}]
at org.springframework.data.elasticsearch.client.elc.ReactiveElasticsearchTemplate.checkForBulkOperationFailure(ReactiveElasticsearchTemplate.java:261)
at reactor.core.publisher.MonoFlatMap$FlatMapMain.onNext(MonoFlatMap.java:132)
at reactor.core.publisher.FluxOnErrorResume$ResumeSubscriber.onNext(FluxOnErrorResume.java:79)
at reactor.core.publisher.MonoCompletionStage$MonoCompletionStageSubscription.apply(MonoCompletionStage.java:125)
at reactor.core.publisher.MonoCompletionStage$MonoCompletionStageSubscription.apply(MonoCompletionStage.java:71)
at java.base/java.util.concurrent.CompletableFuture.uniHandle(CompletableFuture.java:934)
at java.base/java.util.concurrent.CompletableFuture$UniHandle.tryFire(CompletableFuture.java:911)
at java.base/java.util.concurrent.CompletableFuture.postComplete(CompletableFuture.java:510)
at java.base/java.util.concurrent.CompletableFuture.complete(CompletableFuture.java:2147)
at co.elastic.clients.transport.rest_client.RestClientTransport$1.onSuccess(RestClientTransport.java:183)
at org.elasticsearch.client.RestClient$FailureTrackingResponseListener.onSuccess(RestClient.java:676)
at org.elasticsearch.client.RestClient$1.completed(RestClient.java:399)
at org.elasticsearch.client.RestClient$1.completed(RestClient.java:393)
at org.apache.http.concurrent.BasicFuture.completed(BasicFuture.java:122)
at org.apache.http.impl.nio.client.DefaultClientExchangeHandlerImpl.responseCompleted(DefaultClientExchangeHandlerImpl.java:182)
at org.apache.http.nio.protocol.HttpAsyncRequestExecutor.processResponse(HttpAsyncRequestExecutor.java:448)
at org.apache.http.nio.protocol.HttpAsyncRequestExecutor.inputReady(HttpAsyncRequestExecutor.java:338)
at org.apache.http.impl.nio.DefaultNHttpClientConnection.consumeInput(DefaultNHttpClientConnection.java:265)
at org.apache.http.impl.nio.client.InternalIODispatch.onInputReady(InternalIODispatch.java:87)
at org.apache.http.impl.nio.client.InternalIODispatch.onInputReady(InternalIODispatch.java:40)
at org.apache.http.impl.nio.reactor.AbstractIODispatch.inputReady(AbstractIODispatch.java:114)
at org.apache.http.impl.nio.reactor.BaseIOReactor.readable(BaseIOReactor.java:162)
at org.apache.http.impl.nio.reactor.AbstractIOReactor.processEvent(AbstractIOReactor.java:337)
at org.apache.http.impl.nio.reactor.AbstractIOReactor.processEvents(AbstractIOReactor.java:315)
at org.apache.http.impl.nio.reactor.AbstractIOReactor.execute(AbstractIOReactor.java:276)
at org.apache.http.impl.nio.reactor.BaseIOReactor.execute(BaseIOReactor.java:104)
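The error message points at the root cause: Elasticsearch's bulk API only accepts create actions when the target is a data stream, while the template's save methods emit index actions. As a minimal sketch of what a valid bulk request body for a data stream has to look like (plain Java, class and method names are mine, not part of any library):

```java
import java.util.List;

public class DataStreamBulkPayload {

    // Builds an NDJSON bulk body for a data stream target.
    // Data streams reject "index" actions; every action line must be "create".
    static String bulkPayload(String dataStream, List<String> jsonDocs) {
        StringBuilder body = new StringBuilder();
        for (String doc : jsonDocs) {
            // action line: op_type "create" is the only one data streams accept
            body.append("{\"create\":{\"_index\":\"").append(dataStream).append("\"}}\n");
            // source line: the document itself; data streams require an @timestamp field
            body.append(doc).append("\n");
        }
        return body.toString();
    }

    public static void main(String[] args) {
        String payload = bulkPayload("logs-foo",
                List.of("{\"@timestamp\":\"2024-01-01T00:00:00Z\",\"message\":\"hello\"}"));
        // POST this body to /_bulk with Content-Type: application/x-ndjson
        System.out.print(payload);
    }
}
```

Sending the same documents with {"index":{...}} action lines is exactly what triggers the "only write ops with an op_type of create are allowed in data streams" failure above.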
Just wanted to add this observation to this enhancement request.
Thank you
Hello team,
I wanted to reach out with an issue observed while trying to use the save() and/or saveAll() methods from Spring Data Elasticsearch, when the index name deliberately starts with logs-.
What I would like to achieve:
Using the save() and saveAll() methods from Spring Data Elasticsearch, with an index name that deliberately starts with logs-, I would expect documents of type Foo to be saved/created in a data stream (not just an index) in Elasticsearch.
Actual:
I am encountering the BulkFailureException shown above ("only write ops with an op_type of create are allowed in data streams"), and as a result the documents do not end up in a data stream.
Justification:
It would be valuable for Spring Data Elasticsearch to be able to save data into data streams. According to the official Elasticsearch documentation, data streams are well suited for metrics and logs, as well as for certain time series data.
Even apart from that, being able to save a custom LogsPojo object into a data stream would be beneficial.
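For anyone hitting this in the meantime: since data streams cannot be created through Spring Data Elasticsearch, they have to be set up directly against the REST API, in two steps: first a matching index template containing a data_stream section, then the data stream itself. A sketch that only builds the request body (plain Java; the template name and endpoint paths in the comments follow the standard Elasticsearch API, the logs-foo name is from this issue):

```java
public class DataStreamSetup {

    // Body for PUT /_index_template/logs-foo-template:
    // the (empty) "data_stream" object marks matching indices as data streams.
    static String indexTemplateBody(String pattern) {
        return "{"
                + "\"index_patterns\":[\"" + pattern + "\"],"
                + "\"data_stream\":{},"
                + "\"priority\":500"
                + "}";
    }

    public static void main(String[] args) {
        // 1) PUT /_index_template/logs-foo-template with this body
        System.out.println(indexTemplateBody("logs-foo*"));
        // 2) PUT /_data_stream/logs-foo (empty body) creates the stream explicitly;
        //    alternatively, the first bulk "create" into logs-foo auto-creates it.
    }
}
```

As the update above notes, though, even a pre-created data stream cannot currently be written to through save()/saveAll(), because those methods do not issue create-typed bulk operations.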
Issue:
Would it be possible to add support for saving into the data stream (not index) logs-foo?
Thank you