-
> Overview
Partitions are not dropped when a DELETE query matches entire partitions, either with literal values or with an IN clause.
```
trino> create table pso_test.par_and_buck_by1 (a var…
```
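A hedged sketch of the statements that should trigger whole-partition drops (the partition column `part_col` is an assumption, since the DDL above is truncated):

```
-- Hypothetical partition column; both predicates match entire partitions
DELETE FROM pso_test.par_and_buck_by1 WHERE part_col = 'p1';
DELETE FROM pso_test.par_and_buck_by1 WHERE part_col IN ('p1', 'p2');
```

Per the report, the matched partitions should be dropped after either statement, but they remain.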
-
This is an HDP 3.1 cluster using the Spark bundled with HDP; the firewall is disabled, and I cannot find where port 9820 is configured.
===================== Submit log ========================
{"msg_info":"2022-03-03 09:28:41:submit job get unknown error
java.net.ConnectException: Call Fro…
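For context: 9820 was the default NameNode RPC port in Apache Hadoop 3.0.0 (later reverted to 8020), so a client built against that release can fall back to 9820 when `fs.defaultFS` does not carry an explicit port. A quick way to check what the client-side configuration resolves to (a sketch; run on a cluster node):

```
# Print the NameNode address the client configuration resolves to
hdfs getconf -confKey fs.defaultFS
# On the NameNode host, check which RPC port is actually listening
netstat -tlnp | grep -E ':(8020|9820)'
```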
-
There appears to be an issue in the `trino-parquet` module when writing data.
Failing build:
https://github.com/trinodb/trino/runs/4723741729?check_suite_focus=true
Failing test:
…
-
Issue Type: Bug
Use the following code to create an external table:
```
CREATE EXTERNAL TABLE [dbo].[helloORC_HDP3_0]
([name] varchar(20)
)
WITH
( LOCATION='/',
DATA_SOURCE = Az…
```
-
Steps to reproduce:
1. Open any table in Hive
2. Choose "Create new column"
3. Choose the "Not null" constraint for the column
![2020-11-09 15_43_24-DBeaver Enterprise 7 3 0 - olympic](https://user-images…
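For reference, Hive 3.x does support NOT NULL column constraints in DDL, so the generated statement should be expressible; a minimal sketch with a made-up table (the exact DDL DBeaver generates may differ):

```
-- Hypothetical table; Hive 3 accepts NOT NULL as a column constraint
CREATE TABLE demo_constraints (
  id   INT,
  name STRING NOT NULL
);
```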
-
I'm using kyuubi-1.2.0-rc5. While testing LDAP authentication, I found that I needed to modify the code in
LdapAuthenticationProviderImpl.scala at line 68:
![image](https://user-images.githubuserconten…
-
# Why
When running a k8s cluster, advanced users need a way to health-check Ray and its components. In particular, we want to be able to health-check components of the cluster like the ray clien…
-
env:
1. presto-server-0.251
2. HDP 3.1.4.0-315
3. hdfs-connector: oci-hdfs-full-3.3.0.7.0.1.jar
logs:
[root@bigdata-hadoop-1 presto-server-0.251]# presto-cli --server bigdata-hadoop-1:8089 --…
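The log above is truncated; as a point of reference, a minimal Hive catalog sketch for this kind of setup (metastore host and config paths are assumptions), showing where the Hadoop configuration that registers the OCI HDFS filesystem would normally be wired in:

```
# etc/catalog/hive.properties (hypothetical values)
connector.name=hive-hadoop2
hive.metastore.uri=thrift://bigdata-hadoop-1:9083
# Hadoop config files on the coordinator/workers that load the oci-hdfs-full connector
hive.config.resources=/etc/hadoop/conf/core-site.xml,/etc/hadoop/conf/hdfs-site.xml
```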
-
DSS 0.9.1 and Linkis 0.9.4, compiled against HDP 3.1.4 (Hadoop 3.1, Hive 3.0, Spark 2.3); I can write and run Spark SQL scripts in Scriptis.
I created a workflow, dragged a sparksql node onto it, wrote a SQL script, and saved the workflow. After closing and reopening it, the workflow shows a blank page.
How should I go about debugging this?