Closed — czywj closed this issue 3 months ago
@azexcy Could you help to have a look?
Ok, I will check it
Migration must use the cluster mode of ShardingSphere-Proxy. Could you check whether the server.yaml config file exists?
just like this
mode:
  type: Cluster
  repository:
    type: ZooKeeper
    props:
      namespace: xxx
      server-lists: localhost:2181
      retryIntervalMilliseconds: 500
      timeToLiveSeconds: 60
      maxRetries: 3
      operationTimeoutMilliseconds: 500
The configuration is as follows:
rules:
  - !AUTHORITY
    users:
      - root@%:root
      - sharding@:sharding
    provider:
      type: ALL_PRIVILEGES_PERMITTED
  - !TRANSACTION
    defaultType: XA
    providerType: Atomikos
  - !SQL_PARSER
    sqlCommentParseEnabled: true

databaseName: sharding_db

dataSources:
  ds_0:
    url: jdbc:opengauss://10.10.10.10:30095/ds_m?serverTimezone=UTC&useSSL=false&connectTimeout=10
    username: test
    password: test
    connectionTimeoutMilliseconds: 30000
    idleTimeoutMilliseconds: 60000
    maxLifetimeMilliseconds: 1800000
    maxPoolSize: 50
    minPoolSize: 1
ShardingSphere-Proxy is used.
@azexcy
I know, please read the document https://shardingsphere.apache.org/document/current/en/user-manual/shardingsphere-proxy/migration/build/
The normal configuration file is as follows
mode:
  type: Cluster
  repository:
    type: ZooKeeper
    props:
      namespace: xxx
      server-lists: localhost:2181
      retryIntervalMilliseconds: 500
      timeToLiveSeconds: 60
      maxRetries: 3
      operationTimeoutMilliseconds: 500

authority:
  users:
    - user: root@%
      password: root
  privilege:
    type: ALL_PERMITTED

props:
  max-connections-size-per-query: 1
  kernel-executor-size: 16 # Infinite by default.
  proxy-frontend-flush-threshold: 128 # The default value is 128.
  proxy-hint-enabled: false
  sql-show: false
  check-table-metadata-enabled: false
  # Proxy backend query fetch size. A larger value may increase the memory usage of ShardingSphere-Proxy.
  # The default value is -1, which means set the minimum value for different JDBC drivers.
  proxy-backend-query-fetch-size: -1
  proxy-frontend-executor-size: 0 # Proxy frontend executor size. The default value is 0, which means let Netty decide.
  # Available options of proxy backend executor suitable: OLAP (default), OLTP. The OLTP option may reduce the time cost of writing packets to the client,
  # but it may increase the latency of SQL execution and block other clients if client connections exceed `proxy-frontend-executor-size`, especially when executing slow SQL.
  proxy-backend-executor-suitable: OLAP
  proxy-frontend-max-connections: 0 # Less than or equal to 0 means no limitation.
  # Available sql federation type: NONE (default), ORIGINAL, ADVANCED
  sql-federation-type: NONE
  # Available proxy backend driver type: JDBC (default), ExperimentalVertx
  proxy-backend-driver-type: JDBC
  proxy-mysql-default-version: 8.0.11 # In the absence of schema name, the default version will be used.
  proxy-default-port: 3307 # Proxy default port.
  proxy-netty-backlog: 1024 # Proxy netty backlog.
Can you show me the server.yaml config file?
Deployed by compiling the source code locally. server.yaml configuration:
The mode config is missing; add it and retry.
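For reference, a minimal sketch of the block to add (the namespace and server-lists values below are placeholders and must match your ZooKeeper deployment):

```yaml
mode:
  type: Cluster
  repository:
    type: ZooKeeper
    props:
      namespace: governance_ds      # placeholder; use any namespace your team agrees on
      server-lists: localhost:2181  # your ZooKeeper address(es)
```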
Hi @czywj, thanks for your feedback.
Migration can be used only with Cluster mode for now.
We could make some improvements to show a better error message.
On the latest version, when the mode configuration is not Cluster, it will show a detailed error message.
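With the Cluster mode block in place, the migration DistSQL from this report should be accepted; a quick sanity check (a sketch reusing the statement from this thread, with `SHOW MIGRATION LIST` as described in the migration docs):

```sql
-- Register the source storage unit (values taken from this report; adjust to your environment).
REGISTER MIGRATION SOURCE STORAGE UNIT ds_old (
    URL="jdbc:opengauss://10.10.10.10:30095/migration_ds_0?batchmode=on",
    USER="root",
    PASSWORD="root",
    PROPERTIES("minPoolSize"="1","maxPoolSize"="20","idleTimeout"="60000")
);

-- If the proxy is not in Cluster mode, the statement above is where the error appears.
SHOW MIGRATION LIST;
```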
Fixed by #30339
Bug Report
For English only; other languages will not be accepted.
Before reporting a bug, make sure you have:
Please pay attention to the issues you submit, because we may need more details. If there is no response and we cannot reproduce the issue with the current information, we will close it.
Please answer these questions before submitting your issue. Thanks!
Which version of ShardingSphere did you use?
V5.2.1 commit:2d12f9f5045ba75fae024caa3b3db895ef691afe
Which project did you use? ShardingSphere-JDBC or ShardingSphere-Proxy?
ShardingSphere-Proxy
Expected behavior
Actual behavior
Reason analyze (If you can)
Steps to reproduce the behavior, such as: SQL to execute, sharding rule configuration, when the exception occurred, etc.
V5.2.1 commit:2d12f9f5045ba75fae024caa3b3db895ef691afe
REGISTER MIGRATION SOURCE STORAGE UNIT ds_old (
    url="jdbc:opengauss://10.10.10.10:30095/migration_ds_0?batchmode=on",
    user="root",
    password="root",
    properties("minPoolSize"="1","maxPoolSize"="20","idleTimeout"="60000")
);
ERROR: java.lang.NullPointerException
An error is reported when JobType is MIGRATION
Commit ID: dirty-20bf595dfeced4dd8ffee2f6d95de52fdf3e569d
ADD MIGRATION SOURCE RESOURCE ds_old ( url="jdbc:opengauss://10.29.180.204:15000/test_db?batchmode=on", USER="tpccuser", PASSWORD="ggg@123", properties("minPoolSize"="1","maxPoolSize"="20","idleTimeout"="60000") );
Example code to reproduce this issue (such as a GitHub link).