trinodb / trino

Official repository of Trino, the distributed SQL query engine for big data, formerly known as PrestoSQL (https://trino.io)
Apache License 2.0

io.trino.spi.TrinoException: Failed reading parquet data: Socket is closed by peer #9097

Closed kpocius closed 3 years ago

kpocius commented 3 years ago

After upgrading to 361, I'm facing an issue when running a fairly straightforward query:

SELECT * FROM some_table WHERE some_column = 'some_value' LIMIT 10

However, if I remove the WHERE clause, it works as expected.

I have verified that the parquet file is not corrupted in any way and is indeed readable. The same query against the same data source works as expected in v360.

Here's the full error:

io.trino.spi.TrinoException: Failed reading parquet data; source= s3://<REDACTED>; can not read class org.apache.parquet.format.PageHeader: Socket is closed by peer.
    at io.trino.plugin.hive.parquet.ParquetPageSource$ParquetBlockLoader.load(ParquetPageSource.java:230)
    at io.trino.spi.block.LazyBlock$LazyData.load(LazyBlock.java:396)
    at io.trino.spi.block.LazyBlock$LazyData.getFullyLoadedBlock(LazyBlock.java:375)
    at io.trino.spi.block.LazyBlock.getLoadedBlock(LazyBlock.java:282)
    at io.trino.operator.project.DictionaryAwarePageFilter.filter(DictionaryAwarePageFilter.java:59)
    at io.trino.operator.project.PageProcessor.createWorkProcessor(PageProcessor.java:121)
    at io.trino.operator.ScanFilterAndProjectOperator$SplitToPages.lambda$processPageSource$1(ScanFilterAndProjectOperator.java:293)
    at io.trino.operator.WorkProcessorUtils.lambda$flatMap$4(WorkProcessorUtils.java:245)
    at io.trino.operator.WorkProcessorUtils$3.process(WorkProcessorUtils.java:319)
    at io.trino.operator.WorkProcessorUtils$ProcessWorkProcessor.process(WorkProcessorUtils.java:372)
    at io.trino.operator.WorkProcessorUtils$3.process(WorkProcessorUtils.java:306)
    at io.trino.operator.WorkProcessorUtils$ProcessWorkProcessor.process(WorkProcessorUtils.java:372)
    at io.trino.operator.WorkProcessorUtils$3.process(WorkProcessorUtils.java:306)
    at io.trino.operator.WorkProcessorUtils$ProcessWorkProcessor.process(WorkProcessorUtils.java:372)
    at io.trino.operator.WorkProcessorUtils.getNextState(WorkProcessorUtils.java:221)
    at io.trino.operator.WorkProcessorUtils.lambda$processStateMonitor$2(WorkProcessorUtils.java:200)
    at io.trino.operator.WorkProcessorUtils$ProcessWorkProcessor.process(WorkProcessorUtils.java:372)
    at io.trino.operator.WorkProcessorUtils.lambda$flatten$6(WorkProcessorUtils.java:277)
    at io.trino.operator.WorkProcessorUtils$3.process(WorkProcessorUtils.java:319)
    at io.trino.operator.WorkProcessorUtils$ProcessWorkProcessor.process(WorkProcessorUtils.java:372)
    at io.trino.operator.WorkProcessorUtils$3.process(WorkProcessorUtils.java:306)
    at io.trino.operator.WorkProcessorUtils$ProcessWorkProcessor.process(WorkProcessorUtils.java:372)
    at io.trino.operator.WorkProcessorUtils.getNextState(WorkProcessorUtils.java:221)
    at io.trino.operator.WorkProcessorUtils.lambda$processStateMonitor$2(WorkProcessorUtils.java:200)
    at io.trino.operator.WorkProcessorUtils$ProcessWorkProcessor.process(WorkProcessorUtils.java:372)
    at io.trino.operator.WorkProcessorUtils.getNextState(WorkProcessorUtils.java:221)
    at io.trino.operator.WorkProcessorUtils.lambda$finishWhen$3(WorkProcessorUtils.java:215)
    at io.trino.operator.WorkProcessorUtils$ProcessWorkProcessor.process(WorkProcessorUtils.java:372)
    at io.trino.operator.WorkProcessorSourceOperatorAdapter.getOutput(WorkProcessorSourceOperatorAdapter.java:151)
    at io.trino.operator.Driver.processInternal(Driver.java:387)
    at io.trino.operator.Driver.lambda$processFor$9(Driver.java:291)
    at io.trino.operator.Driver.tryWithLock(Driver.java:683)
    at io.trino.operator.Driver.processFor(Driver.java:284)
    at io.trino.execution.SqlTaskExecution$DriverSplitRunner.processFor(SqlTaskExecution.java:1076)
    at io.trino.execution.executor.PrioritizedSplitRunner.process(PrioritizedSplitRunner.java:163)
    at io.trino.execution.executor.TaskExecutor$TaskRunner.run(TaskExecutor.java:484)
    at io.trino.$gen.Trino_361____20210901_181740_2.run(Unknown Source)
    at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)
    at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)
    at java.base/java.lang.Thread.run(Thread.java:829)
Caused by: java.io.IOException: can not read class org.apache.parquet.format.PageHeader: Socket is closed by peer.
    at org.apache.parquet.format.Util.read(Util.java:365)
    at org.apache.parquet.format.Util.readPageHeader(Util.java:132)
    at org.apache.parquet.format.Util.readPageHeader(Util.java:127)
    at io.trino.parquet.reader.ParquetColumnChunk.readPageHeader(ParquetColumnChunk.java:76)
    at io.trino.parquet.reader.ParquetColumnChunk.readAllPages(ParquetColumnChunk.java:89)
    at io.trino.parquet.reader.ParquetReader.createPageReader(ParquetReader.java:388)
    at io.trino.parquet.reader.ParquetReader.readPrimitive(ParquetReader.java:368)
    at io.trino.parquet.reader.ParquetReader.readColumnChunk(ParquetReader.java:444)
    at io.trino.parquet.reader.ParquetReader.readBlock(ParquetReader.java:427)
    at io.trino.plugin.hive.parquet.ParquetPageSource$ParquetBlockLoader.load(ParquetPageSource.java:224)
    ... 39 more
Caused by: io.trino.hive.$internal.parquet.org.apache.thrift.transport.TTransportException: Socket is closed by peer.
    at io.trino.hive.$internal.parquet.org.apache.thrift.transport.TIOStreamTransport.read(TIOStreamTransport.java:130)
    at io.trino.hive.$internal.parquet.org.apache.thrift.transport.TTransport.readAll(TTransport.java:86)
    at io.trino.hive.$internal.parquet.org.apache.thrift.protocol.TCompactProtocol.readByte(TCompactProtocol.java:635)
    at io.trino.hive.$internal.parquet.org.apache.thrift.protocol.TCompactProtocol.readFieldBegin(TCompactProtocol.java:541)
    at org.apache.parquet.format.InterningProtocol.readFieldBegin(InterningProtocol.java:155)
    at org.apache.parquet.format.PageHeader$PageHeaderStandardScheme.read(PageHeader.java:1026)
    at org.apache.parquet.format.PageHeader$PageHeaderStandardScheme.read(PageHeader.java:1019)
    at org.apache.parquet.format.PageHeader.read(PageHeader.java:896)
    at org.apache.parquet.format.Util.read(Util.java:362)
    ... 48 more

SET SESSION hive.parquet_ignore_statistics = true seems to bypass the issue; with it set, the query works as expected on v361.

findepi commented 3 years ago

$ git log 360..361 lib/trino-parquet/
550e4241d3 - Refactor code to compute Domain based on Statistics (2 weeks ago)
6eb42f2aec - Skip reading Parquet pages using Column Indexes feature of Parquet (2 weeks ago)
4312479b21 - Rename varchar/varbinary Parquet writer (4 weeks ago)

@kpocius how often is it reproducible? What is your hive.properties, or do you have any configuration in environment variables or in ~/.aws?

@JamesRTaylor please take a look

BTW I realized that while we have tests with S3 (TestHiveFileSystemS3, TestHiveFileSystemS3SelectPushdown), we don't seem to have file format coverage. cc @losipiuk @joshthoward @alexjo2144

kpocius commented 3 years ago

hive.properties:

connector.name=hive-hadoop2
hive.metastore=glue
hive.metastore.glue.aws-access-key=<REDACTED>
hive.metastore.glue.aws-secret-key=<REDACTED>
hive.s3.aws-access-key=<REDACTED>
hive.s3.aws-secret-key=<REDACTED>
hive.table-statistics-enabled=true
hive.allow-drop-table=true
hive.parquet.use-column-names=true

~/.aws is not present.

I can reproduce this 100% of the time with a particular table, though each time it shows a different parquet file as the source of error.

On top of that, I've encountered another parquet-related issue (not sure if this should be reported separately):

io.trino.spi.TrinoException: Corrupted statistics for column "<REDACTED>" in Parquet file "s3://<REDACTED>". Corrupted column index: [Boudary order: UNORDERED
                      null count  min                                       max                                     
page-0                     19984  2021-05-31                                2021-08-30                              
page-1                     19978  2021-05-08                                2021-08-30                              
page-2                     19969  2021-07-25                                2021-08-30                              
page-3                     19981  2021-07-23                                2021-08-30                              
page-4                     19965  2021-07-14                                2021-08-30                              
page-5                     19982  2021-06-23                                2021-08-30                              
page-6                      1603  <none>                                    <none>                                  
]

This issue also goes away if hive.parquet_ignore_statistics = true is set.

findepi commented 3 years ago

On top of that I've encountered another parquet related issue (not sure if this should be reported separately): ... Corrupted column index

This is surely related to 6eb42f2aeca9a07a53249ae4bd56c13c8f002335, but I don't know whether it is the same problem or a different one.

@electrum any idea what can be the reason for "Socket is closed by peer" exception?

@kpocius would you be able to share (here, or over a DM on our Slack) an example of a problematic file? Assuming you have sensitive data in these files, would you be able to create a new file that causes similar problems?

@martint @JamesRTaylor we need a kill switch for column index handling code. Currently parquet_ignore_statistics seems like the only kill switch, but it disables too much. (cc @losipiuk @joshthoward)

kpocius commented 3 years ago

@findepi, unfortunately I can't share the original files as they include some sensitive information, but I'll see if I can cook up a working sample that exhibits the same behavior. Just to confirm though: are you referring to the original issue or the second one (with corrupted column index), or both? Because those two issues were triggered on different files.

martint commented 3 years ago

we need a kill switch for column index handling code

There is already a kill switch for that feature: parquet.use-column-index
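For example, assuming a catalog named hive, the cluster-wide form goes in the catalog's properties file:

parquet.use-column-index=false

and the per-query form is the catalog-qualified session property:

SET SESSION hive.parquet_use_column_index = false;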

Insferatu commented 3 years ago

@findepi Good morning. I worked together with @kpocius on investigating this issue, and here is what I found. Originally, the table where we observe the error was created through a Trino CREATE TABLE AS SELECT query; our table metadata is managed in AWS Glue. As far as I understand, Trino puts some additional table statistics in Glue. So I copied the underlying Parquet files to another S3 location and created another table with the same columns over them, and the error no longer reproduces on this new table. So my suggestion is that this bug possibly appears at the boundary between the Glue statistics and the Parquet block statistics?
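Roughly, the experiment looked like this (hypothetical names; the external_location table property is one way to register the copied files as a new table):

CREATE TABLE hive.dwhdb.some_table_copy (
    some_column varchar,
    event_time timestamp(3)
)
WITH (
    format = 'PARQUET',
    external_location = 's3://some-bucket/copied-files/'
);

-- the query that fails on the original table succeeds here
SELECT * FROM hive.dwhdb.some_table_copy WHERE some_column = 'some_value' LIMIT 10;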

findepi commented 3 years ago

There is already a kill switch for that feature: parquet.use-column-index

Thanks, I missed that.

@martint What was the reason to have it for Hive (along with the parquet_use_column_index session property), but not add it for Iceberg? cc @phd3 @JamesRTaylor

@kpocius can you please check whether the problem is still reproducible (without any configuration changes)? Assuming it still is, can you please check whether it's reproducible after you SET SESSION parquet_use_column_index = false;?

@Insferatu stats stored in Glue affect the high-level query plan (https://trino.io/blog/2019/07/04/cbo-introduction.html), but they do not directly impact how the data is read. In particular, Glue stats should have zero impact on simple queries like the one in the issue description: SELECT * FROM some_table WHERE some_column = 'some_value' LIMIT 10.
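For reference, these are the statistics you can inspect per table:

SHOW STATS FOR some_table;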

kpocius commented 3 years ago

I can confirm that SET SESSION hive.parquet_use_column_index = false; fixes the issue.

kpocius commented 3 years ago

I'm a little confused about the use of SET SESSION hive.parquet_use_column_index = false; -- if I check SHOW SESSION; it says the default is true, but in the source code it seems like it should default to false: https://github.com/trinodb/trino/blob/361/plugin/trino-hive/src/main/java/io/trino/plugin/hive/HiveSessionProperties.java#L415

martint commented 3 years ago

The “false” in that line indicates whether the session property is hidden. The default comes from parquetReaderConfig.isUseColumnIndex().

JamesRTaylor commented 3 years ago

@kpocius - would it be possible to share the schema of your table? If not, would it be possible to get a list of types you're relying on? Any luck reproducing the issue with a subset of the table schema?

kpocius commented 3 years ago

@Insferatu, you weren't able to create a minimal reproducible parquet file, correct?

@JamesRTaylor, this is what the schema looks like for one of the tables that was having the issue:

trino:dwhdb> SHOW CREATE TABLE datalocker_ext_airflow;
                   Create Table
--------------------------------------------------
 CREATE TABLE hive.dwhdb.datalocker_ext_airflow (
    af_ad varchar,
    af_ad_id varchar,
    af_ad_type varchar,
    af_adset varchar,
    af_adset_id varchar,
    af_c_id varchar,
    af_channel varchar,
    af_siteid varchar,
    app_version varchar,
    appsflyer_id varchar,
    campaign varchar,
    city varchar,
    country_code varchar,
    event_name varchar,
    event_value varchar,
    event_time timestamp(3),
    install_time timestamp(3),
    media_source varchar,
    network_account_id varchar,
    platform varchar,
    product_id varchar,
    subscription_ltv_country double,
    subscription_ltv_average double,
    activation double,
    payment_count double,
    subscription_ltv_country_add_coeff double,
    subscription_ltv_worldwide_add_coeff double,
    subscription_ltv_country_coeff double,
    subscription_ltv_worldwide_coeff double,
    subscription_ltv_segment real,
    spend double,
    impressions bigint,
    clicks bigint,
    appsflyer_data_trials double,
    appsflyer_data_installs integer
 )
 WITH (
    format = 'PARQUET'
 )
(1 row)

Query 20210920_194653_31486_du79u, FINISHED, 1 node
Splits: 1 total, 1 done (100.00%)
0.45 [0 rows, 0B] [0 rows/s, 0B/s]
GaruGaru commented 3 years ago

Hi, we are experiencing the same issue after migrating from Trino 356 to 362; we solved it by rolling back to version 360. Trino fails to read parquet files (with column indexes) generated by a Spark job, using AWS Glue as the metastore.

File schema with metadata

creator:           parquet-mr version 1.11.1 (build 765bd5cd7fdef2af1cecd0755000694b992bfadd) 
extra:             writer.time.zone = UTC 

file schema:       hive_schema 
----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
oid:               OPTIONAL BINARY L:STRING R:0 D:1
mar:               OPTIONAL INT32 R:0 D:1
ui:                OPTIONAL INT64 R:0 D:1
nog:               OPTIONAL INT32 L:INTEGER(16,true) R:0 D:1
ref:               OPTIONAL BOOLEAN R:0 D:1
din:               OPTIONAL BOOLEAN R:0 D:1
lu:                OPTIONAL BOOLEAN R:0 D:1
brk:               OPTIONAL BOOLEAN R:0 D:1
chind:             OPTIONAL INT32 L:DATE R:0 D:1

row group 1:       RC:711710 TS:34045865 OFFSET:4 
----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
oid:                BINARY SNAPPY DO:0 FPO:4 SZ:28784309/31316571/1,09 VC:711710 ENC:BIT_PACKED,RLE,PLAIN ST:[min: <REDACTED>, max: <REDACTED>, num_nulls: 0]
mar:                INT32 SNAPPY DO:0 FPO:28784313 SZ:358774/358582/1,00 VC:711710 ENC:PLAIN_DICTIONARY,BIT_PACKED,RLE ST:[min: 0, max: 10, num_nulls: 0]
ui:                 INT64 SNAPPY DO:0 FPO:29143087 SZ:1469763/1558941/1,06 VC:711710 ENC:PLAIN_DICTIONARY,BIT_PACKED,RLE ST:[min: 1, max: 7498053, num_nulls: 0]
nog:                INT32 SNAPPY DO:0 FPO:30612850 SZ:437530/447375/1,02 VC:711710 ENC:PLAIN_DICTIONARY,BIT_PACKED,RLE ST:[min: 1, max: 35, num_nulls: 0]
ref:                BOOLEAN SNAPPY DO:0 FPO:31050380 SZ:90402/90222/1,00 VC:711710 ENC:BIT_PACKED,RLE,PLAIN ST:[min: false, max: true, num_nulls: 0]
din:                BOOLEAN SNAPPY DO:0 FPO:31140782 SZ:66058/90224/1,37 VC:711710 ENC:BIT_PACKED,RLE,PLAIN ST:[min: false, max: true, num_nulls: 0]
lu:                 BOOLEAN SNAPPY DO:0 FPO:31206840 SZ:29012/90224/3,11 VC:711710 ENC:BIT_PACKED,RLE,PLAIN ST:[min: false, max: true, num_nulls: 0]
brk:                BOOLEAN SNAPPY DO:0 FPO:31235852 SZ:90401/90221/1,00 VC:711710 ENC:BIT_PACKED,RLE,PLAIN ST:[min: false, max: true, num_nulls: 0]
chind:              INT32 SNAPPY DO:0 FPO:31326253 SZ:2956/3505/1,19 VC:711710 ENC:PLAIN_DICTIONARY,BIT_PACKED,RLE ST:[min: <REDACTED>, max: <REDACTED>, num_nulls: 0]

Glue table schema

oid       varchar(40)
mar       integer
ui        bigint
nog       smallint
ref       boolean
din       boolean
lu        boolean
brk       boolean
chind     date
dp        date (partition_key)
lhofhansl commented 3 years ago

#9382 has repro instructions.

Insferatu commented 3 years ago

@Insferatu, you weren't able to create a minimal reproducible parquet file, correct?

Unfortunately no; all my attempts to reduce the file in any way and exclude the sensitive data led to a situation where the issue stops reproducing.

findepi commented 3 years ago

#9382 has repro instructions.

Confirming that the steps in https://github.com/trinodb/trino/issues/9382#issue-1007150134 reproduce the problem locally when run against HiveQueryRunner.

Script:

CREATE TABLE test_parquet (pk1 integer,v1 real,v2 real,v3 real) WITH (format = 'PARQUET');

insert into test_parquet values (rand()*1000000000, rand(), rand(), rand());
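-- each insert-select below doubles the row count; after 20 of them the table has 2^20 = 1048576 rows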

insert into test_parquet select rand()*1000000000, rand(), rand(), rand() from test_parquet;
insert into test_parquet select rand()*1000000000, rand(), rand(), rand() from test_parquet;
insert into test_parquet select rand()*1000000000, rand(), rand(), rand() from test_parquet;
insert into test_parquet select rand()*1000000000, rand(), rand(), rand() from test_parquet;
insert into test_parquet select rand()*1000000000, rand(), rand(), rand() from test_parquet;
insert into test_parquet select rand()*1000000000, rand(), rand(), rand() from test_parquet;
insert into test_parquet select rand()*1000000000, rand(), rand(), rand() from test_parquet;
insert into test_parquet select rand()*1000000000, rand(), rand(), rand() from test_parquet;
insert into test_parquet select rand()*1000000000, rand(), rand(), rand() from test_parquet;
insert into test_parquet select rand()*1000000000, rand(), rand(), rand() from test_parquet;
insert into test_parquet select rand()*1000000000, rand(), rand(), rand() from test_parquet;
insert into test_parquet select rand()*1000000000, rand(), rand(), rand() from test_parquet;
insert into test_parquet select rand()*1000000000, rand(), rand(), rand() from test_parquet;
insert into test_parquet select rand()*1000000000, rand(), rand(), rand() from test_parquet;
insert into test_parquet select rand()*1000000000, rand(), rand(), rand() from test_parquet;
insert into test_parquet select rand()*1000000000, rand(), rand(), rand() from test_parquet;
insert into test_parquet select rand()*1000000000, rand(), rand(), rand() from test_parquet;
insert into test_parquet select rand()*1000000000, rand(), rand(), rand() from test_parquet;
insert into test_parquet select rand()*1000000000, rand(), rand(), rand() from test_parquet;
insert into test_parquet select rand()*1000000000, rand(), rand(), rand() from test_parquet;

-- now this fails:
select count(*) from test_parquet where v2 < 0.00001;

Query 20210925_172448_00046_22m7k, FAILED, 1 node
Splits: 38 total, 18 done (47.37%)
0.23 [0 rows, 0B] [0 rows/s, 0B/s]

Query 20210925_172448_00046_22m7k failed: Failed reading parquet data; source= hdfs://localhost:9000/hive/test_parquet/20210925_172144_00042_22m7k_e4ec253f-f3e4-4898-b451-a73d0772ca7f; can not read class org.apache.parquet.format.PageHeader: Socket is closed by peer.

-- strangely this works:
select count(*) from test_parquet where v2 < 0.001;
_col0
-------
1000
(1 row)

-- and this too:
select count(*) from test_parquet where v2 < 1.00001;
_col0
---------
1048576
(1 row)
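For completeness, the kill switch discussed above should make this repro pass as well (a sketch, assuming the hive catalog):

SET SESSION hive.parquet_use_column_index = false;
select count(*) from test_parquet where v2 < 0.00001;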
raunaqmorarka commented 3 years ago

https://github.com/trinodb/trino/pull/9389 attempts to fix this

isharamet commented 2 years ago

Hello there,

Just updated to 364 and still have the same issue (although I was sure it was fixed in 363):

io.trino.spi.TrinoException: Corrupted statistics for column "<REDACTED>" in Parquet file "s3://<REDACTED>". Corrupted column index: [Boudary order: UNORDERED
                      null count  min                                       max                                     
page-0                     19973  2021-09-24                                2021-10-30                              
page-1                     19980  2021-10-11                                2021-10-31                              
page-2                     19968  2021-09-13                                2021-10-30                              
page-3                     19985  2021-10-13                                2021-10-31                              
page-4                       530  <none>                                    <none>                                  
]
    at io.trino.plugin.hive.parquet.ParquetPageSourceFactory.createPageSource(ParquetPageSourceFactory.java:278)
    at io.trino.plugin.hive.parquet.ParquetPageSourceFactory.createPageSource(ParquetPageSourceFactory.java:164)
    at io.trino.plugin.hive.HivePageSourceProvider.createHivePageSource(HivePageSourceProvider.java:286)
    at io.trino.plugin.hive.HivePageSourceProvider.createPageSource(HivePageSourceProvider.java:175)
    at io.trino.plugin.base.classloader.ClassLoaderSafeConnectorPageSourceProvider.createPageSource(ClassLoaderSafeConnectorPageSourceProvider.java:49)
    at io.trino.split.PageSourceManager.createPageSource(PageSourceManager.java:68)
    at io.trino.operator.ScanFilterAndProjectOperator$SplitToPages.process(ScanFilterAndProjectOperator.java:268)
    at io.trino.operator.ScanFilterAndProjectOperator$SplitToPages.process(ScanFilterAndProjectOperator.java:196)
    at io.trino.operator.WorkProcessorUtils$3.process(WorkProcessorUtils.java:319)
    at io.trino.operator.WorkProcessorUtils$ProcessWorkProcessor.process(WorkProcessorUtils.java:372)
    at io.trino.operator.WorkProcessorUtils$3.process(WorkProcessorUtils.java:306)
    at io.trino.operator.WorkProcessorUtils$ProcessWorkProcessor.process(WorkProcessorUtils.java:372)
    at io.trino.operator.WorkProcessorUtils$3.process(WorkProcessorUtils.java:306)
    at io.trino.operator.WorkProcessorUtils$ProcessWorkProcessor.process(WorkProcessorUtils.java:372)
    at io.trino.operator.WorkProcessorUtils.getNextState(WorkProcessorUtils.java:221)
    at io.trino.operator.WorkProcessorUtils.lambda$processStateMonitor$2(WorkProcessorUtils.java:200)
    at io.trino.operator.WorkProcessorUtils$ProcessWorkProcessor.process(WorkProcessorUtils.java:372)
    at io.trino.operator.WorkProcessorUtils.getNextState(WorkProcessorUtils.java:221)
    at io.trino.operator.WorkProcessorUtils.lambda$finishWhen$3(WorkProcessorUtils.java:215)
    at io.trino.operator.WorkProcessorUtils$ProcessWorkProcessor.process(WorkProcessorUtils.java:372)
    at io.trino.operator.WorkProcessorSourceOperatorAdapter.getOutput(WorkProcessorSourceOperatorAdapter.java:151)
    at io.trino.operator.Driver.processInternal(Driver.java:388)
    at io.trino.operator.Driver.lambda$processFor$9(Driver.java:292)
    at io.trino.operator.Driver.tryWithLock(Driver.java:685)
    at io.trino.operator.Driver.processFor(Driver.java:285)
    at io.trino.execution.SqlTaskExecution$DriverSplitRunner.processFor(SqlTaskExecution.java:1078)
    at io.trino.execution.executor.PrioritizedSplitRunner.process(PrioritizedSplitRunner.java:163)
    at io.trino.execution.executor.TaskExecutor$TaskRunner.run(TaskExecutor.java:484)
    at io.trino.$gen.Trino_364____20211105_120014_2.run(Unknown Source)
    at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)
    at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)
    at java.base/java.lang.Thread.run(Thread.java:829)
findepi commented 2 years ago

@isharamet -- "Corrupted statistics for column... Corrupted column index" is a different problem. I don't see us having an issue about that already; can you create one?

isharamet commented 2 years ago

Sorry, I saw the same error mentioned in this thread and thought they were somehow related. I've created a new one. Thanks.