aws / aws-advanced-jdbc-wrapper

The Amazon Web Services JDBC Driver has been redesigned as an advanced JDBC wrapper. This wrapper is complementary to and extends the functionality of an existing JDBC driver to help an application take advantage of the features of clustered databases such as Amazon Aurora.

Failed to set a custom configuration for HikariCP (Aurora mysql) #1121

Open dpiva97 opened 1 week ago

dpiva97 commented 1 week ago

Describe the bug

Hi, I have an RDS Aurora MySQL cluster with two instances (writer and reader), and I’m trying to follow the SpringHibernateBalancedReaderTwoDataSourceExample to reduce the load on the writer instance.

Everything seems to work fine, but when I attempt to set a custom Hikari configuration, as described in the commented section of the example, it doesn't get applied. This appears to be because the HikariPooledConnectionProvider's acceptsUrl method does not accept cluster-type URLs, so the default provider is used instead.

Expected Behavior

The custom HikariConfig is applied, and the log statement inside the HikariPoolConfigurator's configurePool method is printed when a connection pool is created.

What plugins are used? What other connection properties were set?

Using a custom profile starting from the SF_F0 preset (details in the Additional Information section below).

Current Behavior

The HikariPooledConnectionProvider is never used.

Reproduction Steps

Follow the SpringHibernateBalancedReaderTwoDataSourceExample with an Aurora MySQL cluster.

Possible Solution

Modify the acceptsUrl method in HikariPooledConnectionProvider so that it also accepts cluster endpoints.

Additional Information/Context

My custom profile:

static {
    ConfigurationProfileBuilder.get().from(ConfigurationProfilePresetCodes.SF_F0)
            .withName("datasource-with-internal-connection-pool")
            .withConnectionProvider(new ClusterHikariPooledConnectionProvider(
                    (hostSpec, originalProps) -> {
                        LOGGER.info("Start a new HikariCP pool for {}", hostSpec.getHost());

                        final HikariConfig config = new HikariConfig();
                        config.setMaximumPoolSize(25);
                        // keep a few extra idle connections in case of a sudden traffic peak
                        config.setMinimumIdle(2);
                        // close idle connections after 15 min; helps the pool return to its normal size after a load peak
                        config.setIdleTimeout(TimeUnit.MINUTES.toMillis(15));
                        // verify the pool configuration but create no connections during the initialization phase
                        config.setInitializationFailTimeout(-1);
                        config.setConnectionTimeout(TimeUnit.SECONDS.toMillis(10));
                        // validate idle connections at least every 3 min
                        config.setKeepaliveTime(TimeUnit.MINUTES.toMillis(3));
                        // validate a pooled connection quickly and move on to another connection if needed
                        config.setValidationTimeout(TimeUnit.SECONDS.toMillis(1));
                        config.setMaxLifetime(TimeUnit.DAYS.toMillis(1));

                        config.setAutoCommit(false);

                        return config;
                    },
                    null
            ))
            .buildAndSet();
}

application.yml:

spring:
  datasource:
    writer-datasource:
      url: jdbc:aws-wrapper:mysql://####.cluster-####.eu-south-1.rds.amazonaws.com:3306/####?wrapperProfileName=datasource-with-internal-connection-pool&wrapperDialect=aurora-mysql
      username: ####
      password: ####
      driver-class-name: software.amazon.jdbc.Driver
      type: org.springframework.jdbc.datasource.SimpleDriverDataSource
    load-balanced-reader-datasource:
      url: jdbc:aws-wrapper:mysql://####.cluster-ro-####.eu-south-1.rds.amazonaws.com:3306/####?wrapperProfileName=datasource-with-internal-connection-pool&wrapperDialect=aurora-mysql&readerInitialConnectionHostSelectorStrategy=roundRobin
      username: ####
      password: ####
      driver-class-name: software.amazon.jdbc.Driver
      type: org.springframework.jdbc.datasource.SimpleDriverDataSource

The AWS Advanced JDBC Driver version used

2.3.9

JDK version used

17

Operating System and version

Windows 11

sergiyvamz commented 1 week ago

Hi, @dpiva97

Thank you for reaching out with this issue. The behaviour you're experiencing is actually correct. By design, a connection pool provider accepts URLs that are instance endpoints and rejects other URLs such as cluster endpoints. The reason is simple: an instance endpoint always points to a particular instance and never changes. This guarantees that every connection pool contains connections to a single node. In contrast, cluster endpoints (the cluster writer endpoint or the cluster reader endpoint) can resolve to different instances depending on the cluster topology; the cluster reader endpoint resolves to a random reader by design. If cluster endpoints were allowed for the connection pool provider, connections to different instances would end up in a single connection pool, which would make it practically impossible to manage connections when the cluster topology and instance roles change during failover.
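For illustration, the three endpoint types can be told apart by hostname alone (the names below are hypothetical):

    mydb.cluster-abc123xyz.eu-south-1.rds.amazonaws.com        # cluster writer endpoint (resolves to the current writer)
    mydb.cluster-ro-abc123xyz.eu-south-1.rds.amazonaws.com     # cluster reader endpoint (resolves to a random reader)
    mydb-instance-1.abc123xyz.eu-south-1.rds.amazonaws.com     # instance endpoint (always the same node)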

When the driver opens the first connection, the cluster topology is not yet known, which gives the driver no option other than to use the provided cluster endpoint (as in your configuration) and create a brand-new connection. This first connection goes through the default connection provider, not the HikariCP connection provider. Once the cluster topology has been fetched, the AuroraInitialConnectionStrategyPlugin (which is part of the original SF_F0 configuration profile) gets a chance to replace the cluster endpoint with an instance endpoint and open a new connection with the help of the HikariCP connection provider.

While reviewing your configuration I noticed that the load-balanced-reader-datasource may need an additional parameter. I'd recommend adding failoverMode=strict-reader (or reader-or-writer if preferred). This parameter configures the failover plugin and instructs it to fail over to a reader node rather than to the writer. I hope this makes sense for your configuration.
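For example, keeping the placeholders from the application.yml above, the reader URL would become:

    load-balanced-reader-datasource:
      url: jdbc:aws-wrapper:mysql://####.cluster-ro-####.eu-south-1.rds.amazonaws.com:3306/####?wrapperProfileName=datasource-with-internal-connection-pool&wrapperDialect=aurora-mysql&readerInitialConnectionHostSelectorStrategy=roundRobin&failoverMode=strict-reader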

Thank you!

dpiva97 commented 1 week ago

Hi @sergiyvamz,

Thank you for your response.

Could you please explain when and how the driver determines the instance URL and subsequently uses the custom provider?

If I need to configure a custom datasource setting, such as disabling autocommit, how can I set this up from the first connection?

Thank you!