COPRS / rs-issues

This repository contains all the issues of the COPRS project (Scrum tickets, ivv bugs, epics ...)

[BUG] DISTRIBUTION refuses publication of some products due to mapper_parsing_exception #958

Closed Woljtek closed 1 year ago

Woljtek commented 1 year ago

Environment:

Traceability:

Current Behavior: Some products from S3_PUG_ZIP family are not published in PRIP because of the following errors:

Note: This error is not traced; this is issue #957.

Below is the list of products not published due to this behavior (S3):

curl -X GET -H "ApiKey:${API_KEY}" http://${API_FQDN}/api/v1/failedProcessings | jq -c '.[] | select(.topic | contains("compression-event"))' | jq '.message' | sed -e 's;\\";";g' | sed -e 's;"{";{";g' | sed -e 's;"}";"};g' | jq '.storagePath'
$  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100 77.0M    0 77.0M    0     0  15.3M      0 --:--:--  0:00:05 --:--:-- 20.2M
"s3://ops-rs-pug/S3A_SR_0_SRA____20230427T102628_20230427T111558_20230427T125301_2970_098_136______LN3_O_NR_002.SEN3"
"s3://ops-rs-s2-l1-tl/S2B_OPER_MSI_L1C_TL_REFS_20230425T212906_A032001_T60HYC_N04.00"
"s3://ops-rs-s3-l1-nrt/S3A_SL_1_RBT____20230427T185032_20230427T185532_20230427T224042_0299_098_141______LN3_D_NR_002.SEN3"
"s3://ops-rs-s3-l2-nrt/S3A_SL_2_FRP____20230427T185032_20230427T185532_20230427T225851_0299_098_141______MAR_O_NR_002.SEN3"
"s3://ops-rs-l0/S3A_SR_0_SRA____20230427T202549_20230427T203549_20230427T232256_0599_098_142______LN3_D_NR_002.SEN3"
"s3://ops-rs-s3-l1-nrt/S3A_SR_1_SRA_A__20230427T202549_20230427T203549_20230427T232524_0599_098_142______LN3_D_NR_002.SEN3"
"s3://ops-rs-s3-l2-nrt/S3A_SL_2_LST____20230427T185032_20230427T185532_20230427T232447_0299_098_141______LN3_D_NR_002.SEN3"
"s3://ops-rs-s3-l1-nrt/S3A_SR_1_LAN_RD_20230427T202549_20230427T203549_20230427T232524_0599_098_142______LN3_D_NR_002.SEN3"
"s3://ops-rs-s2-l1-tl/S2B_OPER_MSI_L1C_TL_REFS_20230425T212906_A032001_T60HYB_N04.00"
"s3://ops-rs-s2-l1-tl/S2B_OPER_MSI_L1C_TL_REFS_20230425T212906_A032001_T01HBU_N04.00"
"s3://ops-rs-s2-l1-tl/S2B_OPER_MSI_L1C_TL_REFS_20230425T212906_A032001_T01GBR_N04.00"
"s3://ops-rs-s2-l1-tl/S2B_OPER_MSI_L1C_TL_REFS_20230425T212906_A032001_T60GYA_N04.00"
"s3://ops-rs-s3-l1-nrt/S3B_OL_1_EFR____20230427T222420_20230427T222620_20230428T024055_0119_079_001______LN3_D_NR_002.SEN3"
"s3://ops-rs-s3-l2-nrt/S3B_OL_2_LFR____20230427T222420_20230427T222620_20230428T031256_0119_079_001______LN3_D_NR_002.SEN3"
"s3://ops-rs-s3-l1-nrt/S3A_OL_1_ERR____20230428T022728_20230428T022928_20230428T051911_0119_098_146______LN3_D_NR_002.SEN3"
"s3://ops-rs-s3-l1-nrt/S3A_OL_1_EFR____20230428T022728_20230428T022928_20230428T051908_0119_098_146______LN3_D_NR_002.SEN3"
"s3://ops-rs-s3-l2-nrt/S3A_OL_2_LRR____20230428T022728_20230428T022928_20230428T053814_0119_098_146______LN3_D_NR_002.SEN3"
"s3://ops-rs-s3-l2-nrt/S3A_OL_2_LFR____20230428T022728_20230428T022928_20230428T053924_0119_098_146______LN3_D_NR_002.SEN3"
"s3://ops-rs-s2-l1-tl/S2B_OPER_MSI_L1C_TL_REFS_20230425T212906_A032001_T01HBS_N04.00"
"s3://ops-rs-s2-l1-ds/S2B_OPER_MSI_L1C_DS_REFS_20230425T212906_S20230422T220623_N04.00"
"s3://ops-rs-s2-l1-tl/S2B_OPER_MSI_L1C_TL_REFS_20230425T212906_A032001_T60HYD_N04.00"
"s3://ops-rs-pug/S3A_OL_0_EFR____20230409T204847_20230409T213258_20230428T101848_2651_097_271______LN3_O_NR_002.SEN3"
"s3://ops-rs-s2-l1-tl/S2B_OPER_MSI_L1C_TL_REFS_20230425T212906_A032001_T60GYV_N04.00"
"s3://ops-rs-pug/S3B_OL_0_EFR____20230409T215032_20230409T223442_20230428T102425_2650_078_129______LN3_O_NR_002.SEN3"
"s3://ops-rs-pug/S3A_SR_0_SRA____20230428T022210_20230428T031240_20230428T115028_3029_098_146______LN3_O_NR_002.SEN3"
"s3://ops-rs-pug/S3B_SR_0_SRA____20230428T014224_20230428T023254_20230428T125421_3029_079_003______LN3_O_NR_002.SEN3"
"s3://ops-rs-s2-l1-tl/S2B_OPER_MSI_L1C_TL_REFS_20230425T212906_A032001_T01HBT_N04.00"
"s3://ops-rs-pug/S3A_OL_0_EFR____20230409T204847_20230409T213258_20230428T143417_2651_097_271______LN3_O_NR_002.SEN3"
"s3://ops-rs-s2-l1-tl/S2B_OPER_MSI_L1C_TL_REFS_20230425T212906_A032001_T01GBQ_N04.00"
"s3://ops-rs-s2-l2-ds/S2B_OPER_MSI_L2A_DS_REFS_20230428T162920_S20230422T220623_N04.00"
"s3://ops-rs-s2-l2-tl/S2B_OPER_MSI_L2A_TL_REFS_20230427T092548_A032001_T01JBH_N04.00"
"s3://ops-rs-pug/S3A_SR_0_SRA____20230428T081912_20230428T090842_20230428T235634_2970_098_149______LN3_O_NR_002.SEN3"
"s3://ops-rs-s2-l2-tl/S2B_OPER_MSI_L2A_TL_REFS_20230427T092548_A032001_T60JYN_N04.00"
"s3://ops-rs-s2-l2-tl/S2B_OPER_MSI_L2A_TL_REFS_20230428T162920_A032001_T60GYV_N04.00"

Expected Behavior: Footprints shall be valid to allow publication

Steps To Reproduce: Test 24h

Test execution artefacts (i.e. logs, screenshots…): context with the product S3A_SR_0_SRA____20230428T081912_20230428T090842_20230428T235634_2970_098_149______LN3_O_NR_002.SEN3

Whenever possible, first analysis of the root cause: The root cause of the error is a lowercase issue: image.png Once "polygon" is replaced by "Polygon", the footprint is valid: image.png The reason why some footprints are invalid is unknown.
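As a side note, GeoJSON (RFC 7946) defines the geometry type values as case-sensitive, so "polygon" is not a valid spelling of "Polygon". A minimal way to spot the wrong casing in an indexed document could look like the sketch below (host, index, document id and field name are assumptions):

# Sketch: inspect the footprint type stored for a product (names are assumptions)
curl -s "http://<es-host>:9200/prip/_doc/<product>.zip" | jq '._source.footprint.type'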

Bug Generic Definition of Ready (DoR)

Bug Generic Definition of Done (DoD)

LAQU156 commented 1 year ago

IVV_CCB_2023_w18 : Moved into "Accepted Werum" for investigation, Priority blocking, to be fixed phase 1

LAQU156 commented 1 year ago

Werum_CCB_2023_w18 : Moved into "Product Backlog" for further analysis

w-jka commented 1 year ago

@Woljtek We think the root cause of this issue might be the same as #959. Additionally, we updated the type string for the coordinates to be "Polygon" instead of "polygon"; however, we don't think that this is the cause of the Elasticsearch exception. Rather, the provided logs indicate that the same self-intersection error occurs as in #959.

suberti-ads commented 1 year ago

New occurrence for following products:

S3A_SL_2_LST____20230427T185032_20230427T185532_20230528T044842_0299_098_141______LN3_D_NR_002.SEN3
S3A_SR_1_SRA_A__20230527T204805_20230527T205805_20230527T224706_0599_099_185______LN3_D_NR_002.SEN3
S3A_SR_1_LAN_RD_20230527T204805_20230527T205805_20230527T224706_0600_099_185______LN3_D_NR_002.SEN3
S3A_SR_0_SRA____20230527T204805_20230527T205805_20230527T222633_0599_099_185______LN3_D_NR_002.SEN3
w-jka commented 1 year ago

@suberti-ads Can you provide the used version and the logs with the exception for one of the new occurrences? I would assume the logs of the distribution-worker should be enough to understand the reason behind these new occurrences.

suberti-ads commented 1 year ago

Hereafter the error logs for S3A_SR_0_SRA____20230527T204805_20230527T205805_20230527T222633_0599_099_185______LN3_D_NR_002.SEN3: DistributionErrors.log.gz

version deployed 1.13.1-rc1 :

 safescale  gw-cluster-ops  ~/suberti  kubectl get deployments.apps -n processing distribution-part1-distribution-worker-v11 -o yaml | grep image:
        image: artifactory.coprs.esa-copernicus.eu/rs-docker/rs-core-distribution-worker:1.13.1-rc1
w-fsi commented 1 year ago

This issue was tackled in V1.13.1 and thus will be contained in V2.

suberti-ads commented 1 year ago

New occurrence this week.
Version metadata catalog used: develop
Version distribution deployed: develop
Products impacted:

S3B_SR_0_SRA____20230603T202750_20230603T203750_20230603T221322_0599_080_142______LN3_D_NR_002.SEN3.zip 
S3B_SR_1_LAN_RD_20230603T202750_20230603T203750_20230603T222702_0600_080_142______LN3_D_NR_002.SEN3.zip 

Error in log

Error on publishing file S3B_SR_0_SRA____20230603T202750_20230603T203750_20230603T221322_0599_080_142______LN3_D_NR_002.SEN3 in PRIP: java.lang.RuntimeException: Error: Number of retries has exceeded while performing saving prip metadata of S3B_SR_0_SRA____20230603T202750_20230603T203750_20230603T221322_0599_080_142______LN3_D_NR_002.SEN3.zip after 4 attempts: ElasticsearchStatusException[Elasticsearch exception [type=mapper_parsing_exception, reason=failed to parse]]; nested: ElasticsearchException[Elasticsearch exception [type=invalid_shape_exception, reason=invalid_shape_exception: Self-intersection at or near point (-178.28191877747824, -67.35075825895639, NaN)]];
    at org.elasticsearch.rest.BytesRestResponse.errorFromXContent(BytesRestResponse.java:176)
    at org.elasticsearch.client.RestHighLevelClient.parseEntity(RestHighLevelClient.java:2011)
    at org.elasticsearch.client.RestHighLevelClient.parseResponseException(RestHighLevelClient.java:1988)
    at org.elasticsearch.client.RestHighLevelClient.internalPerformRequest(RestHighLevelClient.java:1745)
    at org.elasticsearch.client.RestHighLevelClient.performRequest(RestHighLevelClient.java:1702)
    at org.elasticsearch.client.RestHighLevelClient.performRequestAndParseEntity(RestHighLevelClient.java:1672)
    at org.elasticsearch.client.RestHighLevelClient.index(RestHighLevelClient.java:1029)
    at esa.s1pdgs.cpoc.prip.metadata.PripElasticSearchMetadataRepo.save(PripElasticSearchMetadataRepo.java:101)
    at esa.s1pdgs.cpoc.prip.worker.service.PripPublishingService.lambda$createAndSave$0(PripPublishingService.java:186)
    at esa.s1pdgs.cpoc.common.utils.Retries.performWithRetries(Retries.java:23)
    at esa.s1pdgs.cpoc.prip.worker.service.PripPublishingService.createAndSave(PripPublishingService.java:185)
    at esa.s1pdgs.cpoc.prip.worker.service.PripPublishingService.accept(PripPublishingService.java:104)
    at esa.s1pdgs.cpoc.prip.worker.service.PripPublishingService.accept(PripPublishingService.java:55)
    at org.springframework.cloud.function.context.catalog.SimpleFunctionRegistry$FunctionInvocationWrapper.invokeConsumer(SimpleFunctionRegistry.java:976)
    at org.springframework.cloud.function.context.catalog.SimpleFunctionRegistry$FunctionInvocationWrapper.doApply(SimpleFunctionRegistry.java:705)
    at org.springframework.cloud.function.context.catalog.SimpleFunctionRegistry$FunctionInvocationWrapper.apply(SimpleFunctionRegistry.java:551)
    at org.springframework.cloud.stream.function.PartitionAwareFunctionWrapper.apply(PartitionAwareFunctionWrapper.java:84)
    at org.springframework.cloud.stream.function.FunctionConfiguration$FunctionWrapper.apply(FunctionConfiguration.java:754)
    at org.springframework.cloud.stream.function.FunctionConfiguration$FunctionToDestinationBinder$1.handleMessageInternal(FunctionConfiguration.java:586)
    at org.springframework.integration.handler.AbstractMessageHandler.handleMessage(AbstractMessageHandler.java:56)
    at org.springframework.integration.dispatcher.AbstractDispatcher.tryOptimizedDispatch(AbstractDispatcher.java:115)
    at org.springframework.integration.dispatcher.UnicastingDispatcher.doDispatch(UnicastingDispatcher.java:133)
    at org.springframework.integration.dispatcher.UnicastingDispatcher.dispatch(UnicastingDispatcher.java:106)
    at org.springframework.integration.channel.AbstractSubscribableChannel.doSend(AbstractSubscribableChannel.java:72)
    at org.springframework.integration.channel.AbstractMessageChannel.send(AbstractMessageChannel.java:317)
    at org.springframework.integration.channel.AbstractMessageChannel.send(AbstractMessageChannel.java:272)
    at org.springframework.messaging.core.GenericMessagingTemplate.doSend(GenericMessagingTemplate.java:187)
    at org.springframework.messaging.core.GenericMessagingTemplate.doSend(GenericMessagingTemplate.java:166)
    at org.springframework.messaging.core.GenericMessagingTemplate.doSend(GenericMessagingTemplate.java:47)
    at org.springframework.messaging.core.AbstractMessageSendingTemplate.send(AbstractMessageSendingTemplate.java:109)
    at org.springframework.integration.endpoint.MessageProducerSupport.sendMessage(MessageProducerSupport.java:216)
    at org.springframework.integration.kafka.inbound.KafkaMessageDrivenChannelAdapter.sendMessageIfAny(KafkaMessageDrivenChannelAdapter.java:397)
    at org.springframework.integration.kafka.inbound.KafkaMessageDrivenChannelAdapter.access$300(KafkaMessageDrivenChannelAdapter.java:83)
    at org.springframework.integration.kafka.inbound.KafkaMessageDrivenChannelAdapter$IntegrationRecordMessageListener.onMessage(KafkaMessageDrivenChannelAdapter.java:454)
    at org.springframework.integration.kafka.inbound.KafkaMessageDrivenChannelAdapter$IntegrationRecordMessageListener.onMessage(KafkaMessageDrivenChannelAdapter.java:428)
    at org.springframework.kafka.listener.adapter.RetryingMessageListenerAdapter.lambda$onMessage$0(RetryingMessageListenerAdapter.java:125)
    at org.springframework.retry.support.RetryTemplate.doExecute(RetryTemplate.java:329)
    at org.springframework.retry.support.RetryTemplate.execute(RetryTemplate.java:255)
    at org.springframework.kafka.listener.adapter.RetryingMessageListenerAdapter.onMessage(RetryingMessageListenerAdapter.java:119)
    at org.springframework.kafka.listener.adapter.RetryingMessageListenerAdapter.onMessage(RetryingMessageListenerAdapter.java:42)
    at org.springframework.kafka.listener.KafkaMessageListenerContainer$ListenerConsumer.doInvokeOnMessage(KafkaMessageListenerContainer.java:2629)
    at org.springframework.kafka.listener.KafkaMessageListenerContainer$ListenerConsumer.invokeOnMessage(KafkaMessageListenerContainer.java:2609)
    at org.springframework.kafka.listener.KafkaMessageListenerContainer$ListenerConsumer.doInvokeRecordListener(KafkaMessageListenerContainer.java:2536)
    at org.springframework.kafka.listener.KafkaMessageListenerContainer$ListenerConsumer.doInvokeWithRecords(KafkaMessageListenerContainer.java:2427)
    at org.springframework.kafka.listener.KafkaMessageListenerContainer$ListenerConsumer.invokeRecordListener(KafkaMessageListenerContainer.java:2305)
    at org.springframework.kafka.listener.KafkaMessageListenerContainer$ListenerConsumer.invokeListener(KafkaMessageListenerContainer.java:1979)
    at org.springframework.kafka.listener.KafkaMessageListenerContainer$ListenerConsumer.invokeIfHaveRecords(KafkaMessageListenerContainer.java:1364)
    at org.springframework.kafka.listener.KafkaMessageListenerContainer$ListenerConsumer.pollAndInvoke(KafkaMessageListenerContainer.java:1355)
    at org.springframework.kafka.listener.KafkaMessageListenerContainer$ListenerConsumer.run(KafkaMessageListenerContainer.java:1247)
    at java.base/java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:515)
    at java.base/java.util.concurrent.FutureTask.run(FutureTask.java:264)
    at java.base/java.lang.Thread.run(Thread.java:829)
    Suppressed: org.elasticsearch.client.ResponseException: method [PUT], host [http://elasticsearch-processing-es-http.database.svc.cluster.local:9200], URI [/prip/_doc/S3B_SR_0_SRA____20230603T202750_20230603T203750_20230603T221322_0599_080_142______LN3_D_NR_002.SEN3.zip?timeout=1m], status line [HTTP/1.1 400 Bad Request]
{"error":{"root_cause":[{"type":"invalid_shape_exception","reason":"invalid_shape_exception: Self-intersection at or near point (-178.28191877747824, -67.35075825895639, NaN)"}],"type":"mapper_parsing_exception","reason":"failed to parse","caused_by":{"type":"invalid_shape_exception","reason":"invalid_shape_exception: Self-intersection at or near point (-178.28191877747824, -67.35075825895639, NaN)"}},"status":400}
        at org.elasticsearch.client.RestClient.convertResponse(RestClient.java:326)
        at org.elasticsearch.client.RestClient.performRequest(RestClient.java:296)
        at org.elasticsearch.client.RestClient.performRequest(RestClient.java:270)
        at org.elasticsearch.client.RestHighLevelClient.performClientRequest(RestHighLevelClient.java:2082)
        at org.elasticsearch.client.RestHighLevelClient.internalPerformRequest(RestHighLevelClient.java:1732)
        ... 48 more
Caused by: ElasticsearchException[Elasticsearch exception [type=invalid_shape_exception, reason=invalid_shape_exception: Self-intersection at or near point (-178.28191877747824, -67.35075825895639, NaN)]]
    at org.elasticsearch.ElasticsearchException.innerFromXContent(ElasticsearchException.java:485)
    at org.elasticsearch.ElasticsearchException.fromXContent(ElasticsearchException.java:396)
    at org.elasticsearch.ElasticsearchException.innerFromXContent(ElasticsearchException.java:426)
    at org.elasticsearch.ElasticsearchException.failureFromXContent(ElasticsearchException.java:592)
    at org.elasticsearch.rest.BytesRestResponse.errorFromXContent(BytesRestResponse.java:168)
    ... 51 more

    at esa.s1pdgs.cpoc.common.utils.Retries.throwRuntimeException(Retries.java:53)
    at esa.s1pdgs.cpoc.common.utils.Retries.performWithRetries(Retries.java:28)
    at esa.s1pdgs.cpoc.prip.worker.service.PripPublishingService.createAndSave(PripPublishingService.java:185)
    at esa.s1pdgs.cpoc.prip.worker.service.PripPublishingService.accept(PripPublishingService.java:104)
    at esa.s1pdgs.cpoc.prip.worker.service.PripPublishingService.accept(PripPublishingService.java:55)
    at org.springframework.cloud.function.context.catalog.SimpleFunctionRegistry$FunctionInvocationWrapper.invokeConsumer(SimpleFunctionRegistry.java:976)
    at org.springframework.cloud.function.context.catalog.SimpleFunctionRegistry$FunctionInvocationWrapper.doApply(SimpleFunctionRegistry.java:705)
    at org.springframework.cloud.function.context.catalog.SimpleFunctionRegistry$FunctionInvocationWrapper.apply(SimpleFunctionRegistry.java:551)
    at org.springframework.cloud.stream.function.PartitionAwareFunctionWrapper.apply(PartitionAwareFunctionWrapper.java:84)
    at org.springframework.cloud.stream.function.FunctionConfiguration$FunctionWrapper.apply(FunctionConfiguration.java:754)
    at org.springframework.cloud.stream.function.FunctionConfiguration$FunctionToDestinationBinder$1.handleMessageInternal(FunctionConfiguration.java:586)
    at org.springframework.integration.handler.AbstractMessageHandler.handleMessage(AbstractMessageHandler.java:56)
    at org.springframework.integration.dispatcher.AbstractDispatcher.tryOptimizedDispatch(AbstractDispatcher.java:115)
    at org.springframework.integration.dispatcher.UnicastingDispatcher.doDispatch(UnicastingDispatcher.java:133)
    at org.springframework.integration.dispatcher.UnicastingDispatcher.dispatch(UnicastingDispatcher.java:106)
    at org.springframework.integration.channel.AbstractSubscribableChannel.doSend(AbstractSubscribableChannel.java:72)
    at org.springframework.integration.channel.AbstractMessageChannel.send(AbstractMessageChannel.java:317)
    at org.springframework.integration.channel.AbstractMessageChannel.send(AbstractMessageChannel.java:272)
    at org.springframework.messaging.core.GenericMessagingTemplate.doSend(GenericMessagingTemplate.java:187)
    at org.springframework.messaging.core.GenericMessagingTemplate.doSend(GenericMessagingTemplate.java:166)
    at org.springframework.messaging.core.GenericMessagingTemplate.doSend(GenericMessagingTemplate.java:47)
    at org.springframework.messaging.core.AbstractMessageSendingTemplate.send(AbstractMessageSendingTemplate.java:109)
    at org.springframework.integration.endpoint.MessageProducerSupport.sendMessage(MessageProducerSupport.java:216)
    at org.springframework.integration.kafka.inbound.KafkaMessageDrivenChannelAdapter.sendMessageIfAny(KafkaMessageDrivenChannelAdapter.java:397)
    at org.springframework.integration.kafka.inbound.KafkaMessageDrivenChannelAdapter.access$300(KafkaMessageDrivenChannelAdapter.java:83)
    at org.springframework.integration.kafka.inbound.KafkaMessageDrivenChannelAdapter$IntegrationRecordMessageListener.onMessage(KafkaMessageDrivenChannelAdapter.java:454)
    at org.springframework.integration.kafka.inbound.KafkaMessageDrivenChannelAdapter$IntegrationRecordMessageListener.onMessage(KafkaMessageDrivenChannelAdapter.java:428)
    at org.springframework.kafka.listener.adapter.RetryingMessageListenerAdapter.lambda$onMessage$0(RetryingMessageListenerAdapter.java:125)
    at org.springframework.retry.support.RetryTemplate.doExecute(RetryTemplate.java:329)
    at org.springframework.retry.support.RetryTemplate.execute(RetryTemplate.java:255)
    at org.springframework.kafka.listener.adapter.RetryingMessageListenerAdapter.onMessage(RetryingMessageListenerAdapter.java:119)
    at org.springframework.kafka.listener.adapter.RetryingMessageListenerAdapter.onMessage(RetryingMessageListenerAdapter.java:42)
    at org.springframework.kafka.listener.KafkaMessageListenerContainer$ListenerConsumer.doInvokeOnMessage(KafkaMessageListenerContainer.java:2629)
    at org.springframework.kafka.listener.KafkaMessageListenerContainer$ListenerConsumer.invokeOnMessage(KafkaMessageListenerContainer.java:2609)
    at org.springframework.kafka.listener.KafkaMessageListenerContainer$ListenerConsumer.doInvokeRecordListener(KafkaMessageListenerContainer.java:2536)
    at org.springframework.kafka.listener.KafkaMessageListenerContainer$ListenerConsumer.doInvokeWithRecords(KafkaMessageListenerContainer.java:2427)
    at org.springframework.kafka.listener.KafkaMessageListenerContainer$ListenerConsumer.invokeRecordListener(KafkaMessageListenerContainer.java:2305)
    at org.springframework.kafka.listener.KafkaMessageListenerContainer$ListenerConsumer.invokeListener(KafkaMessageListenerContainer.java:1979)
    at org.springframework.kafka.listener.KafkaMessageListenerContainer$ListenerConsumer.invokeIfHaveRecords(KafkaMessageListenerContainer.java:1364)
    at org.springframework.kafka.listener.KafkaMessageListenerContainer$ListenerConsumer.pollAndInvoke(KafkaMessageListenerContainer.java:1355)
    at org.springframework.kafka.listener.KafkaMessageListenerContainer$ListenerConsumer.run(KafkaMessageListenerContainer.java:1247)
    at java.base/java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:515)
    at java.base/java.util.concurrent.FutureTask.run(FutureTask.java:264)
    at java.base/java.lang.Thread.run(Thread.java:829)
Caused by: ElasticsearchStatusException[Elasticsearch exception [type=mapper_parsing_exception, reason=failed to parse]]; nested: ElasticsearchException[Elasticsearch exception [type=invalid_shape_exception, reason=invalid_shape_exception: Self-intersection at or near point (-178.28191877747824, -67.35075825895639, NaN)]];
    at org.elasticsearch.rest.BytesRestResponse.errorFromXContent(BytesRestResponse.java:176)
    at org.elasticsearch.client.RestHighLevelClient.parseEntity(RestHighLevelClient.java:2011)
    at org.elasticsearch.client.RestHighLevelClient.parseResponseException(RestHighLevelClient.java:1988)
    at org.elasticsearch.client.RestHighLevelClient.internalPerformRequest(RestHighLevelClient.java:1745)
    at org.elasticsearch.client.RestHighLevelClient.performRequest(RestHighLevelClient.java:1702)
    at org.elasticsearch.client.RestHighLevelClient.performRequestAndParseEntity(RestHighLevelClient.java:1672)
    at org.elasticsearch.client.RestHighLevelClient.index(RestHighLevelClient.java:1029)
    at esa.s1pdgs.cpoc.prip.metadata.PripElasticSearchMetadataRepo.save(PripElasticSearchMetadataRepo.java:101)
    at esa.s1pdgs.cpoc.prip.worker.service.PripPublishingService.lambda$createAndSave$0(PripPublishingService.java:186)
    at esa.s1pdgs.cpoc.common.utils.Retries.performWithRetries(Retries.java:23)
    ... 42 more
    Suppressed: org.elasticsearch.client.ResponseException: method [PUT], host [http://elasticsearch-processing-es-http.database.svc.cluster.local:9200], URI [/prip/_doc/S3B_SR_0_SRA____20230603T202750_20230603T203750_20230603T221322_0599_080_142______LN3_D_NR_002.SEN3.zip?timeout=1m], status line [HTTP/1.1 400 Bad Request]
{"error":{"root_cause":[{"type":"invalid_shape_exception","reason":"invalid_shape_exception: Self-intersection at or near point (-178.28191877747824, -67.35075825895639, NaN)"}],"type":"mapper_parsing_exception","reason":"failed to parse","caused_by":{"type":"invalid_shape_exception","reason":"invalid_shape_exception: Self-intersection at or near point (-178.28191877747824, -67.35075825895639, NaN)"}},"status":400}
        at org.elasticsearch.client.RestClient.convertResponse(RestClient.java:326)
        at org.elasticsearch.client.RestClient.performRequest(RestClient.java:296)
        at org.elasticsearch.client.RestClient.performRequest(RestClient.java:270)
        at org.elasticsearch.client.RestHighLevelClient.performClientRequest(RestHighLevelClient.java:2082)
        at org.elasticsearch.client.RestHighLevelClient.internalPerformRequest(RestHighLevelClient.java:1732)
        ... 48 more
Caused by: ElasticsearchException[Elasticsearch exception [type=invalid_shape_exception, reason=invalid_shape_exception: Self-intersection at or near point (-178.28191877747824, -67.35075825895639, NaN)]]
    at org.elasticsearch.ElasticsearchException.innerFromXContent(ElasticsearchException.java:485)
    at org.elasticsearch.ElasticsearchException.fromXContent(ElasticsearchException.java:396)
    at org.elasticsearch.ElasticsearchException.innerFromXContent(ElasticsearchException.java:426)
    at org.elasticsearch.ElasticsearchException.failureFromXContent(ElasticsearchException.java:592)
    at org.elasticsearch.rest.BytesRestResponse.errorFromXContent(BytesRestResponse.java:168)
    ... 51 more

Complete logs: DistributionS3B_SR_0_SRA__20230603T202750_20230603T203750_20230603T221322_0599_080_142____LN3_D_NR_002.log.gz

w-fsi commented 1 year ago

The logs are often just snippets, which makes it difficult to identify the context. Was this product produced anew, or was it restarted from an old one? Can you please also provide the SAFE-like manifest from one of the affected products?

suberti-ads commented 1 year ago

Dear @w-fsi, to provide more context for this case: the product S3B_SR_0_SRA____20230603T202750_20230603T203750_20230603T221322_0599_080_142______LN3_D_NR_002.SEN3 has been generated last Saturday by S3-L0P with the current 3% production. image

Product has been used as input by S3-PUG-NRT/ S3-SR1-NRT

image

It has been successfully compressed: image

As we had a huge lag on the distribution chain (https://github.com/COPRS/rs-issues/issues/982), it was processed two days later by the distribution (last version deployed).

image

This product was not generated by the PUG. The PUG generates these products using it:

S3B_SR_0_SRA____20230603T202820_20230603T211750_20230605T070544_2970_080_142______LN3_O_NR_002.SEN3
S3B_SR_0_SRA____20230603T193750_20230603T202820_20230605T005520_3029_080_142______LN3_O_NR_002.SEN3
S3B_SR_0_SRA____20230603T202820_20230603T211750_20230603T225117_2970_080_142______LN3_O_NR_002.SEN3
S3B_SR_0_SRA____20230603T193750_20230603T202820_20230603T223740_3029_080_142______LN3_O_NR_002.SEN3

image

suberti-ads commented 1 year ago

Dear @w-fsi, sorry, I forgot your last question. Hereafter the xfdumanifest.xml for product S3A_SR_0_SRA____20230527T204805_20230527T205805_20230527T222633_0599_099_185______LN3_D_NR_002.SEN3: xfdumanifest.xml.gz

w-fsi commented 1 year ago

Something here is different from what would be expected. The original issue was in the metadata extraction. We were already aware that especially the L0 products contain invalid footprints that might self-intersect. Thus it was decided that if this kind of exception is raised, the footprint will be dropped. As the PUG products have basically the same footprints, they also caused issues during the extraction, as the WA was not applied there. This had been fixed in the last version.

We are now, however, observing that this issue also occurs when writing to the PRIP. The metadata for this is retrieved from the MDC, where the footprints should be fine and thus no longer contained when the Distribution retrieves the data. When did you perform the last upgrade of the MDC extraction? Are we sure that it had been deployed when the product was originally added to ES? I would assume that at the beginning of the month it had been deployed already?
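One way to narrow this down could be to check whether the MDC document for the affected product still carries a footprint, i.e. whether the extraction workaround (drop the footprint on invalid_shape_exception) was in effect when the product was indexed. A minimal sketch, assuming the index name and the relevance of the field list (the service name is taken from the procedure later in this thread):

ES_SVC="elasticsearch-processing-es-coordinating.database.svc.cluster.local"
PRODUCT="S3A_SR_0_SRA____20230527T204805_20230527T205805_20230527T222633_0599_099_185______LN3_D_NR_002.SEN3"
# List the metadata fields of the product; a footprint/coordinates field would be
# absent if the extraction workaround dropped it (index name "s3_l0" is an assumption).
curl -s "http://${ES_SVC}:9200/s3_l0/_doc/${PRODUCT}" | jq '._source | keys'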

LAQU156 commented 1 year ago

To_validate_CCB_2023_w24 : Validation in progress, @suberti-ads @w-fsi

Woljtek commented 1 year ago

This issue occurred again with this week's production: image.png

The last update of the pod metadata-catalog-part1-metadata-extraction was performed by @suberti-ads on 06/12 to deploy the WA for this bug. The WA consisted of deploying rs-core-metadata-catalog-extraction:develop.

This deployment was the 2nd WA deployment using the develop version. It is possible that the develop image was not downloaded again because an image with that tag was already present on the node. We were not able to prove this hypothesis.
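If a stale cached develop image is suspected, one way to rule it out is to force Kubernetes to pull the image on every pod start and then restart the rollout. This is only a sketch; the deployment and container names are taken from the events below and should be double-checked:

# Force the image to be pulled on every pod start, then recreate the pods
kubectl -n processing patch deployment metadata-catalog-part1-metadata-extraction-v30 \
  -p '{"spec":{"template":{"spec":{"containers":[{"name":"metadata-catalog-part1-metadata-extraction-v30","imagePullPolicy":"Always"}]}}}}'
kubectl -n processing rollout restart deployment metadata-catalog-part1-metadata-extraction-v30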

We forced the download of rs-core-metadata-catalog-extraction:develop on a new node:

kp get event | grep metadata-catalog-part1-metadata-extraction-v30-85f9b9fd97-2fmzz
61s         Normal    Scheduled                    pod/metadata-catalog-part1-metadata-extraction-v30-85f9b9fd97-2fmzz     Successfully assigned processing/metadata-catalog-part1-metadata-extraction-v30-85f9b9fd97-2fmzz to cluster-ops-node-165
60s         Normal    Pulled                       pod/metadata-catalog-part1-metadata-extraction-v30-85f9b9fd97-2fmzz     Container image "cr.l5d.io/linkerd/proxy-init:v1.4.0" already present on machine
60s         Normal    Created                      pod/metadata-catalog-part1-metadata-extraction-v30-85f9b9fd97-2fmzz     Created container linkerd-init
60s         Normal    Started                      pod/metadata-catalog-part1-metadata-extraction-v30-85f9b9fd97-2fmzz     Started container linkerd-init
59s         Normal    Pulled                       pod/metadata-catalog-part1-metadata-extraction-v30-85f9b9fd97-2fmzz     Container image "cr.l5d.io/linkerd/proxy:stable-2.11.1" already present on machine
59s         Normal    Created                      pod/metadata-catalog-part1-metadata-extraction-v30-85f9b9fd97-2fmzz     Created container linkerd-proxy
59s         Normal    Started                      pod/metadata-catalog-part1-metadata-extraction-v30-85f9b9fd97-2fmzz     Started container linkerd-proxy
57s         Normal    Pulling                      pod/metadata-catalog-part1-metadata-extraction-v30-85f9b9fd97-2fmzz     Pulling image "artifactory.coprs.esa-copernicus.eu/rs-docker/rs-core-metadata-catalog-extraction:develop"
53s         Normal    Pulled                       pod/metadata-catalog-part1-metadata-extraction-v30-85f9b9fd97-2fmzz     Successfully pulled image "artifactory.coprs.esa-copernicus.eu/rs-docker/rs-core-metadata-catalog-extraction:develop" in 4.27183692s
53s         Normal    Created                      pod/metadata-catalog-part1-metadata-extraction-v30-85f9b9fd97-2fmzz     Created container metadata-catalog-part1-metadata-extraction-v30
53s         Normal    Started                      pod/metadata-catalog-part1-metadata-extraction-v30-85f9b9fd97-2fmzz     Started container metadata-catalog-part1-metadata-extraction-v30
61s         Normal    SuccessfulCreate             replicaset/metadata-catalog-part1-metadata-extraction-v30-85f9b9fd97    Created pod: metadata-catalog-part1-metadata-extraction-v30-85f9b9fd97-2fmzz

I am going to write a procedure to resubmit these errors.

Woljtek commented 1 year ago

Below is a procedure to resubmit a DISTRIBUTION error on catalog-job, using S3B_OL_2_LFR____20230610T214432_20230610T214632_20230613T004404_0119_080_243______LN3_D_NR_002.SEN3 as an example.

Get message from catalog-job

NAMESPACE_KAFKA="infra"
KAFKA_URL="kafka-headless.${NAMESPACE_KAFKA}.svc.cluster.local:9092";
BOOTSTRAP_URL="kafka-cluster-kafka-bootstrap.infra.svc.cluster.local:9092"
POD="kafka-cluster-kafka-0"
CONTAINER="kafka";

# Set the missing product
PRODUCT=S3B_OL_2_LFR____20230610T214432_20230610T214632_20230613T004404_0119_080_243______LN3_D_NR_002.SEN3
MESSAGE=$PRODUCT"_catalog-job.msg"

kubectl -n ${NAMESPACE_KAFKA} exec -ti ${POD} -c ${CONTAINER} -- bash /opt/kafka/bin/kafka-console-consumer.sh --bootstrap-server ${BOOTSTRAP_URL} --topic catalog-job --from-beginning --timeout-ms 20000 | grep $PRODUCT > ${MESSAGE}

# Check message content
cat $MESSAGE

Clean the entry in the catalog index

ES_NAMESPACE="database";
ES_SVC="elasticsearch-processing-es-coordinating.${ES_NAMESPACE}.svc.cluster.local";
ES_PORT="9200";

# The name of the index depends on the type of the product
INDEX=s3_l2_nrt

#check existence:
curl -X GET http://${ES_SVC}:${ES_PORT}/${INDEX}/_doc/${PRODUCT} | jq
{
  "_index": "s3_l2_nrt",
  "_type": "_doc",
  "_id": "S3B_OL_2_LFR____20230610T214432_20230610T214632_20230613T004404_0119_080_243______LN3_D_NR_002.SEN3",
  "_version": 1,
  "_seq_no": 6682,
  "_primary_term": 5,
  "found": true,
  "_source": { ...

# delete
curl -X DELETE http://${ES_SVC}:${ES_PORT}/${INDEX}/_doc/${PRODUCT}
{"_index":"s3_l2_nrt","_type":"_doc","_id":"S3B_OL_2_LFR____20230610T214432_20230610T214632_20230613T004404_0119_080_243______LN3_D_NR_002.SEN3","_version":2,"result":"deleted","_shards":{"total":2,"successful":2,"failed":0},"_seq_no":6759,"_primary_term":5}

# check deletion
curl -X GET http://${ES_SVC}:${ES_PORT}/${INDEX}/_doc/${PRODUCT} | jq
{
  "_index": "s3_l2_nrt",
  "_type": "_doc",
  "_id": "S3B_OL_2_LFR____20230610T214432_20230610T214632_20230613T004404_0119_080_243______LN3_D_NR_002.SEN3",
  "found": false
}

Clean products in prip bucket

BUCKET=ops-rs-s3-l2-nrt-zip
s3cmd rm --recursive s3://$BUCKET/$PRODUCT

Republish the message in catalog-job topic:

# Copy file on pod
kubectl cp -n ${NAMESPACE_KAFKA} -c ${CONTAINER} ${MESSAGE} ${POD}:/tmp/message.txt

# Write message in kafka topic
kubectl -n ${NAMESPACE_KAFKA} exec -ti ${POD} -c ${CONTAINER} -- sh -c "/opt/kafka/bin/kafka-console-producer.sh --bootstrap-server ${BOOTSTRAP_URL} --topic catalog-job < /tmp/message.txt"

# Clean file
kubectl -n ${NAMESPACE_KAFKA} exec -ti ${POD} -c ${CONTAINER} -- rm -f /tmp/message.txt
rm -f ${MESSAGE}

# Check new existence:
curl -X GET http://${ES_SVC}:${ES_PORT}/${INDEX}/_doc/${PRODUCT} | jq

{
  "_index": "s3_l2_nrt",
  "_type": "_doc",
  "_id": "S3B_OL_2_LFR____20230610T214432_20230610T214632_20230613T004404_0119_080_243______LN3_D_NR_002.SEN3",
  "_version": 1,
  "_seq_no": 6760,
  "_primary_term": 5,
  "found": true,
  "_source": {
Woljtek commented 1 year ago

After the resubmit, I continue to observe the buggy behavior on Distribution: image.png

Error on publishing file S3B_OL_2_LFR____20230610T214432_20230610T214632_20230613T004404_0119_080_243______LN3_D_NR_002.SEN3 in PRIP: java.lang.RuntimeException: Error: Number of retries has exceeded while performing saving prip metadata of S3B_OL_2_LFR____20230610T214432_20230610T214632_20230613T004404_0119_080_243______LN3_D_NR_002.SEN3.zip after 4 attempts: ElasticsearchStatusException[Elasticsearch exception [type=mapper_parsing_exception, reason=failed to parse]]; nested: ElasticsearchException[Elasticsearch exception [type=invalid_shape_exception, reason=Self-intersection at or near point (-149.01907508964212, 87.37189749272041, NaN)]];
...

The WA doesn't work.

Woljtek commented 1 year ago

Below is the manifest file of the product S3B_OL_2_LFR____20230610T214432_20230610T214632_20230613T004404_0119_080_243______LN3_D_NR_002.SEN3: https://app.zenhub.com/files/398313496/c5d7e1dc-dcb7-4a9d-ba0b-a7f49261ed9c/download

Woljtek commented 1 year ago

@w-fsi I extracted the last 7 days of DISTRIBUTION errors: https://app.zenhub.com/files/398313496/6d7c325b-ef22-4d00-a136-bbcf1f689eb6/download I tested a sample of coordinates; all are close to the Arctic or Antarctic poles. Moreover, as said during the last CCB, these errors can be raised by both S2 and S3 products (TL, TC, DS).

w-jka commented 1 year ago

@Woljtek

We found a small difference between the metadata-extraction and the distribution-worker that might be the root cause. For footprints traversing the dateline, we introduced a patch with #280. This patch added the parameter orientation to the footprint, with the value clockwise, when this scenario happened.

Unfortunately, the MDC SearchController only returns the coordinates, so the orientation information is lost when the product gets published by the distribution-worker. We added the logic that determines the orientation of the polygon to the distribution-worker, in order to achieve behaviour similar to the metadata-extraction, where the footprints do not cause any problems.
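For reference, Elasticsearch accepts an explicit per-document orientation for polygons, which is what disambiguates footprints crossing the dateline. A sketch of such a document follows; the host, index, field name and coordinates are made up for illustration:

# Illustrative footprint with an explicit orientation (all names and values assumed)
curl -s -X PUT "http://<es-host>:9200/prip/_doc/example-product" \
  -H 'Content-Type: application/json' -d '
{
  "footprint": {
    "type": "Polygon",
    "orientation": "clockwise",
    "coordinates": [[[179.5, -67.0], [-178.3, -67.4], [-178.9, -66.2], [179.1, -66.0], [179.5, -67.0]]]
  }
}'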

We will include this fix in the next delivery, however it is already present on the develop branch if you want to test it beforehand.

Woljtek commented 1 year ago

Thanks for the fix. We plan to redeploy the MDC SearchController after the production (which is delayed this week).

Woljtek commented 1 year ago

@w-fsi This issue is in the DONE pipeline on the WERUM Maintenance workspace. Has it already been delivered?

Woljtek commented 1 year ago

WA to deploy in PR#269

w-fsi commented 1 year ago

Yes, the original issue with the polygon writing was already delivered; that is why it is in DONE. The discussion was however continued, as it was assumed not to be fixed, and basically new observations were mapped to it. These modifications are not delivered yet and are only on the develop branch. We really need to be careful not to map new observations onto existing issues, as it can be difficult to track them correctly.

Woljtek commented 1 year ago

Maybe the best solution is to open a new bug for the MDSC part of this bug? Do you agree?

The WA for the MDSC is deployed now.

w-jka commented 1 year ago

@Woljtek Please note that the fix mentioned in my last comment was not implemented in the MDSC but in the distribution-worker itself, in order to prevent different return values for other components.

So in order to test the fix, please update the distribution-worker to the develop branch.

w-fsi commented 1 year ago

Yes, maybe. Even though it is committed and it would just be possible to trace it in the changelog.

Woljtek commented 1 year ago

Despite the WA deployment, the error occurred again two times this week:

w-fsi commented 1 year ago

@Woljtek: Can you confirm that the Distribution worker was redeployed and not the Search Controller?

w-jka commented 1 year ago

@Woljtek We checked the version on the cluster and could confirm that it indeed includes our bugfix. Can you please send the logs of one error for further analysis?

Woljtek commented 1 year ago

Hi @w-jka & @w-fsi

You can find here below the logs of the two errors: https://app.zenhub.com/files/398313496/bfea93ba-8ba5-43b0-9500-e8ebd4bba5bf/download

w-jka commented 1 year ago

@Woljtek From what we can tell (from the logs, software and tests), the footprints of these products do indeed self-intersect. The metadata extraction does not have any problem with the ingestion of the original product, as the index has a different mapping for the coordinates:

Screenshot_20230703_114013.png

This mapping does not check the validity of the footprints, but only stores them in the database, without any optimizations for geo queries. If the s2_l0_ds mapping were corrected, the product would have been stored without a footprint, and the distribution would not fail.

In order to progress this issue, we have identified two different routes we could take:

  1. You could fix the mappings of the ES indices to also use geo_shape for the coordinates (similar to S1 and S3), as sketched after this list. This would remove self-intersecting footprints in the metadata extraction and prevent problems in the distribution.
  2. We could apply the same logic as the metadata extraction in the distribution worker, so that in case of self-intersection errors the footprint is removed and another save is performed. This would only be a workaround for the misconfiguration of the ES indices.
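A minimal sketch of route 1 (host, index and field names are assumptions; an existing field's type cannot be changed in place, so this would apply to a newly created index):

# Map the coordinates as geo_shape so invalid footprints are caught at extraction time
curl -X PUT "http://<es-host>:9200/s2_l0_ds" -H 'Content-Type: application/json' -d '
{
  "mappings": {
    "properties": {
      "coordinates": { "type": "geo_shape" }
    }
  }
}'
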
LAQU156 commented 1 year ago

Werum_CCB_2023_w27: @Woljtek please discuss this issue with @pcuq-ads and @SYTHIER-ADS and give quick feedback. If solution 2 is chosen, please open a new issue.

pcuq-ads commented 1 year ago

Dear @w-jka ,

I am trying to understand the change. With proposal n°1, you are asking to update the configuration of the PRIP index from: image

to

image

Is that correct?

I have an additional question: how can we configure a prip_2 index for the core services Distribution and Lifecycle?

Regards
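One possible way to set up a prip_2 index, sketched under assumptions (host, mapping and field names are illustrative; the actual mapping should come from the production-common indices documentation), is to create the new index with the corrected mapping, copy the existing documents over with the _reindex API, and then point the Distribution and Lifecycle configuration (or an index alias) at prip_2:

# Create prip_2 with the corrected mapping (field name assumed)
curl -X PUT "http://<es-host>:9200/prip_2" -H 'Content-Type: application/json' \
  -d '{ "mappings": { "properties": { "footprint": { "type": "geo_shape" } } } }'

# Copy the existing PRIP documents into the new index
curl -X POST "http://<es-host>:9200/_reindex" -H 'Content-Type: application/json' \
  -d '{ "source": { "index": "prip" }, "dest": { "index": "prip_2" } }'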

suberti-ads commented 1 year ago

New occurrence for:

S3A_SL_2_LST____20230708T191847_20230708T192347_20230708T221914_0299_101_013______LN3_O_NR_002.SEN3
S3A_SL_2_FRP____20230708T191847_20230708T192347_20230708T220207_0299_101_013______LN3_O_NR_002.SEN3
S3A_SL_1_RBT____20230708T191847_20230708T192347_20230708T212700_0299_101_013______LN3_D_NR_002.SEN3

Error:

Error on publishing file S3A_SL_2_LST____20230708T191847_20230708T192347_20230708T221914_0299_101_013______LN3_O_NR_002.SEN3 in PRIP: java.lang.RuntimeException: Error: Number of retries has exceeded while performing saving prip metadata of S3A_SL_2_LST____20230708T191847_20230708T192347_20230708T221914_0299_101_013______LN3_O_NR_002.SEN3.zip after 4 attempts: ElasticsearchStatusException[Elasticsearch exception [type=mapper_parsing_exception, reason=failed to parse]]; nested: ElasticsearchException[Elasticsearch exception [type=invalid_shape_exception, reason=invalid_shape_exception: Self-intersection at or near point (-178.72016429135144, -83.3704868206969, NaN)]]; at org.elasticsearch.rest.BytesRestResponse.errorFromXContent(BytesRestResponse.java:176) at org.elasticsearch.client.RestHighLevelClient.parseEntity(RestHighLevelClient.java:2011) at org.elasticsearch.client.RestHighLevelClient.parseResponseException(RestHighLevelClient.java:1988) at org.elasticsearch.client.RestHighLevelClient.internalPerformRequest(RestHighLevelClient.java:1745) at org.elasticsearch.client.RestHighLevelClient.performRequest(RestHighLevelClient.java:1702) at org.elasticsearch.client.RestHighLevelClient.performRequestAndParseEntity(RestHighLevelClient.java:1672) at org.elasticsearch.client.RestHighLevelClient.index(RestHighLevelClient.java:1029) at esa.s1pdgs.cpoc.prip.metadata.PripElasticSearchMetadataRepo.save(PripElasticSearchMetadataRepo.java:101) at esa.s1pdgs.cpoc.prip.worker.service.PripPublishingService.lambda$createAndSave$0(PripPublishingService.java:198) at esa.s1pdgs.cpoc.common.utils.Retries.performWithRetries(Retries.java:23) at esa.s1pdgs.cpoc.prip.worker.service.PripPublishingService.createAndSave(PripPublishingService.java:197) at esa.s1pdgs.cpoc.prip.worker.service.PripPublishingService.accept(PripPublishingService.java:105) at esa.s1pdgs.cpoc.prip.worker.service.PripPublishingService.accept(PripPublishingService.java:56) at org.springframework.cloud.function.context.catalog.SimpleFunctionRegistry$FunctionInvocationWrapper.invokeConsumer(SimpleFunctionRegistry.java:976) at org.springframework.cloud.function.context.catalog.SimpleFunctionRegistry$FunctionInvocationWrapper.doApply(SimpleFunctionRegistry.java:705) at org.springframework.cloud.function.context.catalog.SimpleFunctionRegistry$FunctionInvocationWrapper.apply(SimpleFunctionRegistry.java:551) at org.springframework.cloud.stream.function.PartitionAwareFunctionWrapper.apply(PartitionAwareFunctionWrapper.java:84) at org.springframework.cloud.stream.function.FunctionConfiguration$FunctionWrapper.apply(FunctionConfiguration.java:754) at org.springframework.cloud.stream.function.FunctionConfiguration$FunctionToDestinationBinder$1.handleMessageInternal(FunctionConfiguration.java:586) at org.springframework.integration.handler.AbstractMessageHandler.handleMessage(AbstractMessageHandler.java:56) at org.springframework.integration.dispatcher.AbstractDispatcher.tryOptimizedDispatch(AbstractDispatcher.java:115) at org.springframework.integration.dispatcher.UnicastingDispatcher.doDispatch(UnicastingDispatcher.java:133) at org.springframework.integration.dispatcher.UnicastingDispatcher.dispatch(UnicastingDispatcher.java:106) at org.springframework.integration.channel.AbstractSubscribableChannel.doSend(AbstractSubscribableChannel.java:72) at org.springframework.integration.channel.AbstractMessageChannel.send(AbstractMessageChannel.java:317) at org.springframework.integration.channel.AbstractMessageChannel.send(AbstractMessageChannel.java:272) at 
org.springframework.messaging.core.GenericMessagingTemplate.doSend(GenericMessagingTemplate.java:187) at org.springframework.messaging.core.GenericMessagingTemplate.doSend(GenericMessagingTemplate.java:166) at org.springframework.messaging.core.GenericMessagingTemplate.doSend(GenericMessagingTemplate.java:47) at org.springframework.messaging.core.AbstractMessageSendingTemplate.send(AbstractMessageSendingTemplate.java:109) at org.springframework.integration.endpoint.MessageProducerSupport.sendMessage(MessageProducerSupport.java:216) at org.springframework.integration.kafka.inbound.KafkaMessageDrivenChannelAdapter.sendMessageIfAny(KafkaMessageDrivenChannelAdapter.java:397) at org.springframework.integration.kafka.inbound.KafkaMessageDrivenChannelAdapter.access$300(KafkaMessageDrivenChannelAdapter.java:83) at org.springframework.integration.kafka.inbound.KafkaMessageDrivenChannelAdapter$IntegrationRecordMessageListener.onMessage(KafkaMessageDrivenChannelAdapter.java:454) at org.springframework.integration.kafka.inbound.KafkaMessageDrivenChannelAdapter$IntegrationRecordMessageListener.onMessage(KafkaMessageDrivenChannelAdapter.java:428) at org.springframework.kafka.listener.adapter.RetryingMessageListenerAdapter.lambda$onMessage$0(RetryingMessageListenerAdapter.java:125) at org.springframework.retry.support.RetryTemplate.doExecute(RetryTemplate.java:329) at org.springframework.retry.support.RetryTemplate.execute(RetryTemplate.java:255) at org.springframework.kafka.listener.adapter.RetryingMessageListenerAdapter.onMessage(RetryingMessageListenerAdapter.java:119) at org.springframework.kafka.listener.adapter.RetryingMessageListenerAdapter.onMessage(RetryingMessageListenerAdapter.java:42) at org.springframework.kafka.listener.KafkaMessageListenerContainer$ListenerConsumer.doInvokeOnMessage(KafkaMessageListenerContainer.java:2629) at org.springframework.kafka.listener.KafkaMessageListenerContainer$ListenerConsumer.invokeOnMessage(KafkaMessageListenerContainer.java:2609) at org.springframework.kafka.listener.KafkaMessageListenerContainer$ListenerConsumer.doInvokeRecordListener(KafkaMessageListenerContainer.java:2536) at org.springframework.kafka.listener.KafkaMessageListenerContainer$ListenerConsumer.doInvokeWithRecords(KafkaMessageListenerContainer.java:2427) at org.springframework.kafka.listener.KafkaMessageListenerContainer$ListenerConsumer.invokeRecordListener(KafkaMessageListenerContainer.java:2305) at org.springframework.kafka.listener.KafkaMessageListenerContainer$ListenerConsumer.invokeListener(KafkaMessageListenerContainer.java:1979) at org.springframework.kafka.listener.KafkaMessageListenerContainer$ListenerConsumer.invokeIfHaveRecords(KafkaMessageListenerContainer.java:1364) at org.springframework.kafka.listener.KafkaMessageListenerContainer$ListenerConsumer.pollAndInvoke(KafkaMessageListenerContainer.java:1355) at org.springframework.kafka.listener.KafkaMessageListenerContainer$ListenerConsumer.run(KafkaMessageListenerContainer.java:1247) at java.base/java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:515) at java.base/java.util.concurrent.FutureTask.run(FutureTask.java:264) at java.base/java.lang.Thread.run(Thread.java:829) Suppressed: org.elasticsearch.client.ResponseException: method [PUT], host [http://elasticsearch-processing-es-http.database.svc.cluster.local:9200], URI [/prip/_doc/S3A_SL_2_LST____20230708T191847_20230708T192347_20230708T221914_0299_101_013______LN3_O_NR_002.SEN3.zip?timeout=1m], status line [HTTP/1.1 400 Bad Request] 
{"error":{"root_cause":[{"type":"invalid_shape_exception","reason":"invalid_shape_exception: Self-intersection at or near point (-178.72016429135144, -83.3704868206969, NaN)"}],"type":"mapper_parsing_exception","reason":"failed to parse","caused_by":{"type":"invalid_shape_exception","reason":"invalid_shape_exception: Self-intersection at or near point (-178.72016429135144, -83.3704868206969, NaN)"}},"status":400} at org.elasticsearch.client.RestClient.convertResponse(RestClient.java:326) at org.elasticsearch.client.RestClient.performRequest(RestClient.java:296) at org.elasticsearch.client.RestClient.performRequest(RestClient.java:270) at org.elasticsearch.client.RestHighLevelClient.performClientRequest(RestHighLevelClient.java:2082) at org.elasticsearch.client.RestHighLevelClient.internalPerformRequest(RestHighLevelClient.java:1732) ... 48 more Caused by: ElasticsearchException[Elasticsearch exception [type=invalid_shape_exception, reason=invalid_shape_exception: Self-intersection at or near point (-178.72016429135144, -83.3704868206969, NaN)]] at org.elasticsearch.ElasticsearchException.innerFromXContent(ElasticsearchException.java:485) at org.elasticsearch.ElasticsearchException.fromXContent(ElasticsearchException.java:396) at org.elasticsearch.ElasticsearchException.innerFromXContent(ElasticsearchException.java:426) at org.elasticsearch.ElasticsearchException.failureFromXContent(ElasticsearchException.java:592) at org.elasticsearch.rest.BytesRestResponse.errorFromXContent(BytesRestResponse.java:168) ... 51 more at esa.s1pdgs.cpoc.common.utils.Retries.throwRuntimeException(Retries.java:53) at esa.s1pdgs.cpoc.common.utils.Retries.performWithRetries(Retries.java:28) at esa.s1pdgs.cpoc.prip.worker.service.PripPublishingService.createAndSave(PripPublishingService.java:197) at esa.s1pdgs.cpoc.prip.worker.service.PripPublishingService.accept(PripPublishingService.java:105) at esa.s1pdgs.cpoc.prip.worker.service.PripPublishingService.accept(PripPublishingService.java:56) at org.springframework.cloud.function.context.catalog.SimpleFunctionRegistry$FunctionInvocationWrapper.invokeConsumer(SimpleFunctionRegistry.java:976) at org.springframework.cloud.function.context.catalog.SimpleFunctionRegistry$FunctionInvocationWrapper.doApply(SimpleFunctionRegistry.java:705) at org.springframework.cloud.function.context.catalog.SimpleFunctionRegistry$FunctionInvocationWrapper.apply(SimpleFunctionRegistry.java:551) at org.springframework.cloud.stream.function.PartitionAwareFunctionWrapper.apply(PartitionAwareFunctionWrapper.java:84) at org.springframework.cloud.stream.function.FunctionConfiguration$FunctionWrapper.apply(FunctionConfiguration.java:754) at org.springframework.cloud.stream.function.FunctionConfiguration$FunctionToDestinationBinder$1.handleMessageInternal(FunctionConfiguration.java:586) at org.springframework.integration.handler.AbstractMessageHandler.handleMessage(AbstractMessageHandler.java:56) at org.springframework.integration.dispatcher.AbstractDispatcher.tryOptimizedDispatch(AbstractDispatcher.java:115) at org.springframework.integration.dispatcher.UnicastingDispatcher.doDispatch(UnicastingDispatcher.java:133) at org.springframework.integration.dispatcher.UnicastingDispatcher.dispatch(UnicastingDispatcher.java:106) at org.springframework.integration.channel.AbstractSubscribableChannel.doSend(AbstractSubscribableChannel.java:72) at org.springframework.integration.channel.AbstractMessageChannel.send(AbstractMessageChannel.java:317) at 
org.springframework.integration.channel.AbstractMessageChannel.send(AbstractMessageChannel.java:272) at org.springframework.messaging.core.GenericMessagingTemplate.doSend(GenericMessagingTemplate.java:187) at org.springframework.messaging.core.GenericMessagingTemplate.doSend(GenericMessagingTemplate.java:166) at org.springframework.messaging.core.GenericMessagingTemplate.doSend(GenericMessagingTemplate.java:47) at org.springframework.messaging.core.AbstractMessageSendingTemplate.send(AbstractMessageSendingTemplate.java:109) at org.springframework.integration.endpoint.MessageProducerSupport.sendMessage(MessageProducerSupport.java:216) at org.springframework.integration.kafka.inbound.KafkaMessageDrivenChannelAdapter.sendMessageIfAny(KafkaMessageDrivenChannelAdapter.java:397) at org.springframework.integration.kafka.inbound.KafkaMessageDrivenChannelAdapter.access$300(KafkaMessageDrivenChannelAdapter.java:83) at org.springframework.integration.kafka.inbound.KafkaMessageDrivenChannelAdapter$IntegrationRecordMessageListener.onMessage(KafkaMessageDrivenChannelAdapter.java:454) at org.springframework.integration.kafka.inbound.KafkaMessageDrivenChannelAdapter$IntegrationRecordMessageListener.onMessage(KafkaMessageDrivenChannelAdapter.java:428) at org.springframework.kafka.listener.adapter.RetryingMessageListenerAdapter.lambda$onMessage$0(RetryingMessageListenerAdapter.java:125) at org.springframework.retry.support.RetryTemplate.doExecute(RetryTemplate.java:329) at org.springframework.retry.support.RetryTemplate.execute(RetryTemplate.java:255) at org.springframework.kafka.listener.adapter.RetryingMessageListenerAdapter.onMessage(RetryingMessageListenerAdapter.java:119) at org.springframework.kafka.listener.adapter.RetryingMessageListenerAdapter.onMessage(RetryingMessageListenerAdapter.java:42) at org.springframework.kafka.listener.KafkaMessageListenerContainer$ListenerConsumer.doInvokeOnMessage(KafkaMessageListenerContainer.java:2629) at org.springframework.kafka.listener.KafkaMessageListenerContainer$ListenerConsumer.invokeOnMessage(KafkaMessageListenerContainer.java:2609) at org.springframework.kafka.listener.KafkaMessageListenerContainer$ListenerConsumer.doInvokeRecordListener(KafkaMessageListenerContainer.java:2536) at org.springframework.kafka.listener.KafkaMessageListenerContainer$ListenerConsumer.doInvokeWithRecords(KafkaMessageListenerContainer.java:2427) at org.springframework.kafka.listener.KafkaMessageListenerContainer$ListenerConsumer.invokeRecordListener(KafkaMessageListenerContainer.java:2305) at org.springframework.kafka.listener.KafkaMessageListenerContainer$ListenerConsumer.invokeListener(KafkaMessageListenerContainer.java:1979) at org.springframework.kafka.listener.KafkaMessageListenerContainer$ListenerConsumer.invokeIfHaveRecords(KafkaMessageListenerContainer.java:1364) at org.springframework.kafka.listener.KafkaMessageListenerContainer$ListenerConsumer.pollAndInvoke(KafkaMessageListenerContainer.java:1355) at org.springframework.kafka.listener.KafkaMessageListenerContainer$ListenerConsumer.run(KafkaMessageListenerContainer.java:1247) at java.base/java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:515) at java.base/java.util.concurrent.FutureTask.run(FutureTask.java:264) at java.base/java.lang.Thread.run(Thread.java:829) Caused by: ElasticsearchStatusException[Elasticsearch exception [type=mapper_parsing_exception, reason=failed to parse]]; nested: ElasticsearchException[Elasticsearch exception [type=invalid_shape_exception, reason=invalid_shape_exception: 
Self-intersection at or near point (-178.72016429135144, -83.3704868206969, NaN)]]; at org.elasticsearch.rest.BytesRestResponse.errorFromXContent(BytesRestResponse.java:176) at org.elasticsearch.client.RestHighLevelClient.parseEntity(RestHighLevelClient.java:2011) at org.elasticsearch.client.RestHighLevelClient.parseResponseException(RestHighLevelClient.java:1988) at org.elasticsearch.client.RestHighLevelClient.internalPerformRequest(RestHighLevelClient.java:1745) at org.elasticsearch.client.RestHighLevelClient.performRequest(RestHighLevelClient.java:1702) at org.elasticsearch.client.RestHighLevelClient.performRequestAndParseEntity(RestHighLevelClient.java:1672) at org.elasticsearch.client.RestHighLevelClient.index(RestHighLevelClient.java:1029) at esa.s1pdgs.cpoc.prip.metadata.PripElasticSearchMetadataRepo.save(PripElasticSearchMetadataRepo.java:101) at esa.s1pdgs.cpoc.prip.worker.service.PripPublishingService.lambda$createAndSave$0(PripPublishingService.java:198) at esa.s1pdgs.cpoc.common.utils.Retries.performWithRetries(Retries.java:23) ... 42 more Suppressed: org.elasticsearch.client.ResponseException: method [PUT], host [http://elasticsearch-processing-es-http.database.svc.cluster.local:9200], URI [/prip/_doc/S3A_SL_2_LST____20230708T191847_20230708T192347_20230708T221914_0299_101_013______LN3_O_NR_002.SEN3.zip?timeout=1m], status line [HTTP/1.1 400 Bad Request] {"error":{"root_cause":[{"type":"invalid_shape_exception","reason":"invalid_shape_exception: Self-intersection at or near point (-178.72016429135144, -83.3704868206969, NaN)"}],"type":"mapper_parsing_exception","reason":"failed to parse","caused_by":{"type":"invalid_shape_exception","reason":"invalid_shape_exception: Self-intersection at or near point (-178.72016429135144, -83.3704868206969, NaN)"}},"status":400} at org.elasticsearch.client.RestClient.convertResponse(RestClient.java:326) at org.elasticsearch.client.RestClient.performRequest(RestClient.java:296) at org.elasticsearch.client.RestClient.performRequest(RestClient.java:270) at org.elasticsearch.client.RestHighLevelClient.performClientRequest(RestHighLevelClient.java:2082) at org.elasticsearch.client.RestHighLevelClient.internalPerformRequest(RestHighLevelClient.java:1732) ... 48 more Caused by: ElasticsearchException[Elasticsearch exception [type=invalid_shape_exception, reason=invalid_shape_exception: Self-intersection at or near point (-178.72016429135144, -83.3704868206969, NaN)]] at org.elasticsearch.ElasticsearchException.innerFromXContent(ElasticsearchException.java:485) at org.elasticsearch.ElasticsearchException.fromXContent(ElasticsearchException.java:396) at org.elasticsearch.ElasticsearchException.innerFromXContent(ElasticsearchException.java:426) at org.elasticsearch.ElasticsearchException.failureFromXContent(ElasticsearchException.java:592) at org.elasticsearch.rest.BytesRestResponse.errorFromXContent(BytesRestResponse.java:168) ... 51 more
vgava-ads commented 1 year ago

Delivered in Production Common v1.14.0 (refer to https://github.com/COPRS/production-common/releases)

pcuq-ads commented 1 year ago

To be discussed during next CCB.

If a deployment from scratch creates the right index, we should not face issues #958 and #1028. To be confirmed before closing these issues.

pcuq-ads commented 1 year ago

SYS_CCB_w29: information to be parsed here: https://github.com/COPRS/production-common/blob/develop/processing-common/doc/indices.md

w-fsi commented 1 year ago

Werum_CCB_2023_w30: Original fix was delivered and working. New aspect handled in #928

suberti-ads commented 1 year ago

New occurrence for:

S3B_SL_2_LST____20230722T191715_20230722T192215_20230723T001542_0299_082_070______LN3_O_NR_002
S3B_SL_2_FRP____20230722T191715_20230722T192215_20230722T235610_0299_082_070______LN3_O_NR_002
S3B_SL_1_RBT____20230722T191715_20230722T192215_20230722T233826_0299_082_070______LN3_D_NR_002

Error:

Error on publishing file S3B_SL_2_LST____20230722T191715_20230722T192215_20230723T001542_0299_082_070______LN3_O_NR_002.SEN3 in PRIP: java.lang.RuntimeException: Error: Number of retries has exceeded while performing saving prip metadata of S3B_SL_2_LST____20230722T191715_20230722T192215_20230723T001542_0299_082_070______LN3_O_NR_002.SEN3.zip after 4 attempts: ElasticsearchStatusException[Elasticsearch exception [type=mapper_parsing_exception, reason=failed to parse]]; nested: ElasticsearchException[Elasticsearch exception [type=invalid_shape_exception, reason=invalid_shape_exception: Self-intersection at or near point (-178.58410177893722, -83.40651026732428, NaN)]];

version used image: artifactory.coprs.esa-copernicus.eu/rs-docker/rs-core-distribution-worker:develop

suberti-ads commented 1 year ago

This error should be fixed if we install the index from scratch. We will continue to live with this issue in the ops environment.
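A quick check after a from-scratch installation, assuming the PRIP footprint field is named footprint (the actual mapping is described in the production-common indices documentation linked above), could be:

# Verify that the PRIP footprint field is mapped as geo_shape (field name assumed)
curl -s "http://<es-host>:9200/prip/_mapping" | jq '.prip.mappings.properties.footprint'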