apache / shardingsphere

Empowering Data Intelligence with Distributed SQL for Sharding, Scalability, and Security Across All Databases.
Apache License 2.0

When openGauss uses ShardingSphere for data migration, configuring the source-end data source throws a NullPointerException #22242

Closed czywj closed 3 months ago

czywj commented 2 years ago

Bug Report

For English only; other languages will not be accepted.

Before reporting a bug, make sure you have:

Please pay attention to the issues you submit, because we may need more details. If there is no further response and we cannot reproduce the issue with the current information, we will close it.

Please answer these questions before submitting your issue. Thanks!

Which version of ShardingSphere did you use?

V5.2.1 commit:2d12f9f5045ba75fae024caa3b3db895ef691afe

Which project did you use? ShardingSphere-JDBC or ShardingSphere-Proxy?

ShardingSphere-Proxy

Expected behavior

Actual behavior

Reason analysis (if you can)

Steps to reproduce the behavior, such as: SQL to execute, sharding rule configuration, when the exception occurred, etc.

V5.2.1 commit:2d12f9f5045ba75fae024caa3b3db895ef691afe

REGISTER MIGRATION SOURCE STORAGE UNIT ds_old (
    url="jdbc:opengauss://10.10.10.10:30095/migration_ds_0?batchmode=on",
    user="root",
    password="root",
    properties("minPoolSize"="1","maxPoolSize"="20","idleTimeout"="60000")
);

ERROR: java.lang.NullPointerException

An error is reported when JobType is MIGRATION

Commit ID: dirty-20bf595dfeced4dd8ffee2f6d95de52fdf3e569d

ADD MIGRATION SOURCE RESOURCE ds_old (
    url="jdbc:opengauss://10.29.180.204:15000/test_db?batchmode=on",
    USER="tpccuser",
    PASSWORD="ggg@123",
    properties("minPoolSize"="1","maxPoolSize"="20","idleTimeout"="60000")
);

(screenshot attached)

Example codes for reproducing this issue (such as a GitHub link).

java.lang.NullPointerException: null
    at org.apache.shardingsphere.data.pipeline.core.api.PipelineAPIFactory$1.initialize(PipelineAPIFactory.java:56)
    at org.apache.shardingsphere.data.pipeline.core.api.PipelineAPIFactory$1.initialize(PipelineAPIFactory.java:52)
    at org.apache.commons.lang3.concurrent.LazyInitializer.get(LazyInitializer.java:106)
    at org.apache.shardingsphere.data.pipeline.core.api.PipelineAPIFactory.getGovernanceRepositoryAPI(PipelineAPIFactory.java:67)
    at org.apache.shardingsphere.data.pipeline.core.api.impl.PipelineDataSourcePersistService.load(PipelineDataSourcePersistService.java:43)
    at org.apache.shardingsphere.data.pipeline.scenario.migration.MigrationJobAPIImpl.addMigrationSourceResources(MigrationJobAPIImpl.java:340)
    at org.apache.shardingsphere.migration.distsql.handler.update.RegisterMigrationSourceStorageUnitUpdater.executeUpdate(RegisterMigrationSourceStorageUnitUpdater.java:56)
    at org.apache.shardingsphere.migration.distsql.handler.update.RegisterMigrationSourceStorageUnitUpdater.executeUpdate(RegisterMigrationSourceStorageUnitUpdater.java:42)
    at org.apache.shardingsphere.proxy.backend.handler.distsql.ral.migration.update.UpdatableScalingRALBackendHandler.execute(UpdatableScalingRALBackendHandler.java:46)
    at org.apache.shardingsphere.proxy.frontend.opengauss.command.query.simple.OpenGaussComQueryExecutor.execute(OpenGaussComQueryExecutor.java:76)
    at org.apache.shardingsphere.proxy.frontend.command.CommandExecutorTask.executeCommand(CommandExecutorTask.java:111)
    at org.apache.shardingsphere.proxy.frontend.command.CommandExecutorTask.run(CommandExecutorTask.java:78)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    at java.lang.Thread.run(Thread.java:748)
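
For context, the trace suggests the pipeline governance API is created lazily and dereferences a cluster persist repository that only exists in Cluster mode; in Standalone mode the first access throws the NullPointerException above. A minimal sketch of that failure pattern (class and field names are hypothetical, not the actual ShardingSphere source), assuming commons-lang3 on the classpath:

import org.apache.commons.lang3.concurrent.ConcurrentException;
import org.apache.commons.lang3.concurrent.LazyInitializer;

// Hypothetical reduction of the failure pattern, not the actual ShardingSphere code.
public final class GovernanceApiHolder {

    // Stand-in for the mode-dependent cluster repository; only set in Cluster mode.
    static Object clusterRepository; // remains null in Standalone mode

    private static final LazyInitializer<String> GOVERNANCE_API = new LazyInitializer<String>() {

        @Override
        protected String initialize() {
            // First dereference of the repository: throws NullPointerException in Standalone mode.
            return clusterRepository.toString();
        }
    };

    public static String get() throws ConcurrentException {
        // LazyInitializer.get() invokes initialize() on first use,
        // matching the LazyInitializer.get frame in the trace above.
        return GOVERNANCE_API.get();
    }
}

Configuring Cluster mode (see the comments below) makes the repository available before the lazy initialization runs.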
tuichenchuxin commented 2 years ago

@azexcy Could you help take a look?

azexcy commented 2 years ago

OK, I will check it.

azexcy commented 2 years ago

Migration must use the cluster mode of ShardingSphere-Proxy. Can you check whether the mode configuration exists in your server.yaml file?

It should look like this:

mode:
  type: Cluster
  repository:
    type: ZooKeeper
    props:
      namespace: xxx
      server-lists: localhost:2181
      retryIntervalMilliseconds: 500
      timeToLiveSeconds: 60
      maxRetries: 3
      operationTimeoutMilliseconds: 500

@czywj

czywj commented 2 years ago

Migration must use the cluster mode of ShardingSphere-Proxy. Can you check whether the mode configuration exists in your server.yaml file?

The configuration is as follows:

rules:
  - !AUTHORITY
    users:
      - root@%:root
      - sharding@:sharding
    provider:
      type: ALL_PRIVILEGES_PERMITTED
  - !TRANSACTION
    defaultType: XA
    providerType: Atomikos
  - !SQL_PARSER
    sqlCommentParseEnabled: true
databaseName: sharding_db

dataSources:
  ds_0:
    url: jdbc:opengauss://10.10.10.10:30095/ds_m?serverTimezone=UTC&useSSL=false&connectTimeout=10
    username: test
    password: test
    connectionTimeoutMilliseconds: 30000
    idleTimeoutMilliseconds: 60000
    maxLifetimeMilliseconds: 1800000
    maxPoolSize: 50
    minPoolSize: 1

ShardingSphere-Proxy is used.

@azexcy

azexcy commented 2 years ago

I see. Please read the documentation: https://shardingsphere.apache.org/document/current/en/user-manual/shardingsphere-proxy/migration/build/

A normal configuration file looks like this:

mode:
  type: Cluster
  repository:
    type: ZooKeeper
    props:
      namespace: xxx
      server-lists: localhost:2181
      retryIntervalMilliseconds: 500
      timeToLiveSeconds: 60
      maxRetries: 3
      operationTimeoutMilliseconds: 500

authority:
  users:
    - user: root@%
      password: root
  privilege:
    type: ALL_PERMITTED

props:
  max-connections-size-per-query: 1
  kernel-executor-size: 16  # Infinite by default.
  proxy-frontend-flush-threshold: 128  # The default value is 128.
  proxy-hint-enabled: false
  sql-show: false
  check-table-metadata-enabled: false
    # Proxy backend query fetch size. A larger value may increase the memory usage of ShardingSphere Proxy.
    # The default value is -1, which means set the minimum value for different JDBC drivers.
  proxy-backend-query-fetch-size: -1
  proxy-frontend-executor-size: 0 # Proxy frontend executor size. The default value is 0, which means let Netty decide.
    # Available options of proxy backend executor suitable: OLAP(default), OLTP. The OLTP option may reduce time cost of writing packets to client, but it may increase the latency of SQL execution
    # and block other clients if client connections are more than `proxy-frontend-executor-size`, especially executing slow SQL.
  proxy-backend-executor-suitable: OLAP
  proxy-frontend-max-connections: 0 # Less than or equal to 0 means no limitation.
    # Available sql federation type: NONE (default), ORIGINAL, ADVANCED
  sql-federation-type: NONE
    # Available proxy backend driver type: JDBC (default), ExperimentalVertx
  proxy-backend-driver-type: JDBC
  proxy-mysql-default-version: 8.0.11 # In the absence of schema name, the default version will be used.
  proxy-default-port: 3307 # Proxy default port.
  proxy-netty-backlog: 1024 # Proxy netty backlog.

can you show me the server.yaml config file?

czywj commented 2 years ago

can you show me the server.yaml config file?

The proxy was compiled and deployed locally from source code.

server.yaml configuration (screenshot attached)

azexcy commented 2 years ago

The mode config is missing. Add it and retry.

sandynz commented 1 year ago

Hi @czywj , thanks for your feedback.

Migration can only be used with Cluster mode for now.

We could make an improvement to show a better error message.
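
For example, a guard at the start of migration DistSQL handling could fail fast with an actionable message. A minimal sketch (the class name, exception type, and message are assumptions, not the actual fix):

// Hypothetical precondition guard; the actual fix in ShardingSphere may differ.
public final class MigrationModeChecker {

    private MigrationModeChecker() {
    }

    // Verify the proxy runs in Cluster mode before executing migration DistSQL,
    // so users see an actionable message instead of a bare NullPointerException.
    public static void checkClusterMode(final String modeType) {
        if (!"Cluster".equalsIgnoreCase(modeType)) {
            throw new IllegalStateException(
                    "Data migration requires ShardingSphere-Proxy in Cluster mode; current mode is '"
                    + modeType + "'. Please configure `mode: type: Cluster` in server.yaml.");
        }
    }
}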

sandynz commented 3 months ago

On the latest version, when the mode configuration is not Cluster, a detailed error message is shown.

sandynz commented 3 months ago

Fixed by #30339