swaldman / c3p0

a mature, highly concurrent JDBC Connection pooling library, with support for caching and reuse of PreparedStatements.
http://www.mchange.com/projects/c3p0

Creating more connections in the pool than needed? #65

Open x8699876 opened 8 years ago

x8699876 commented 8 years ago

I have a test that spawns N threads; each thread runs the same query about 20 times in sequential order. I started 10 threads. After all 10 threads are started, the pool instantly has about 20 connections created.

In comparison, the Oracle UCP connection pool has only 10 connections allocated in the pool for the same test.

Tried on both 0.9.5.1 and 0.9.5.2; same result.

Is this an issue or something I did wrong?

thanks, Tony

See the pool configuration below:

c3p0 Pooled Datasource: com.mchange.v2.c3p0.PoolBackedDataSource@b901e16a [ connectionPoolDataSource -> com.mchange.v2.c3p0.WrapperConnectionPoolDataSource@495c4114 [ acquireIncrement -> 3, acquireRetryAttempts -> 5, acquireRetryDelay -> 1000, autoCommitOnClose -> false, automaticTestTable -> null, breakAfterAcquireFailure -> false, checkoutTimeout -> 10000, connectionCustomizerClassName -> null, connectionTesterClassName -> com.mchange.v2.c3p0.impl.DefaultConnectionTester, contextClassLoaderSource -> library, debugUnreturnedConnectionStackTraces -> false, factoryClassLocation -> null, forceIgnoreUnresolvedTransactions -> false, forceSynchronousCheckins -> false, identityToken -> 2sfz9m9f1iul95v1ir9n4c|1782a4af, idleConnectionTestPeriod -> 30, initialPoolSize -> 0, maxAdministrativeTaskTime -> 0, maxConnectionAge -> 3600, maxIdleTime -> 300, maxIdleTimeExcessConnections -> 0, maxPoolSize -> 100, maxStatements -> 0, maxStatementsPerConnection -> 0, minPoolSize -> 0, nestedDataSource -> oracle.jdbc.pool.OracleDataSource@6914a3ab, preferredTestQuery -> null, privilegeSpawnedThreads -> true, propertyCycle -> 0, statementCacheNumDeferredCloseThreads -> 0, testConnectionOnCheckin -> true, testConnectionOnCheckout -> false, unreturnedConnectionTimeout -> 0, usesTraditionalReflectiveProxies -> false; userOverrides: {} ], dataSourceName -> TMS-PlateauDS, extensions -> {}, factoryClassLocation -> null, identityToken -> 2sfz9m9f1iul95v1ir9n4c|153d9aaf, numHelperThreads -> 10 ]

tester code below:

public class DataSourceTester {

static final String sql = "select * from pa_app_admin ";

String status ="None";

List<ConnectionTestRunnable> threadList = new ArrayList<>();

private static final TenantLocal<DataSourceTester> dataSourceTesterTenantLocal = new TenantLocal<DataSourceTester>() {
    @Override
    protected DataSourceTester initialValue() {
        return new DataSourceTester();
    }
};

public static DataSourceTester getInstance() {
    return dataSourceTesterTenantLocal.get();
}

private DataSourceTester() {
}

public void start(final String outputFileName, final long numberOfThreads) throws IOException {
    PrintWriter out = new PrintWriter(new FileWriter(outputFileName, true), true);
    threadList.clear();

    try {
        for (int i = 0; i < numberOfThreads; i++) {
            ConnectionTestRunnable t = new ConnectionTestRunnable("ds-tester-thread-" + i, out);
            threadList.add(t);
            t.start();
        }

    } finally {
        out.close(); // closes the shared writer while worker threads still hold it; harmless here because report() is disabled
    }
    status = "All Threads Started";

}

public String getStatus() {
    return status;
}

public List<ConnectionTestRunnable> getThreadList() {
    return threadList;
}

public void stop() {
    for (int i = 0; i < threadList.size(); i++) {
        ConnectionTestRunnable t = threadList.get(i);
        t.setQuit(true);
    }

    try {
        Thread.sleep(2000);
    } catch (InterruptedException e) {
        //e.printStackTrace();
    }

    for (int i = 0; i < threadList.size(); i++) {
        ConnectionTestRunnable t = threadList.get(i);
        while (!t.isEnded()) {
            t.interrupt();
        }
    }

    threadList.clear();
    status = "All Threads Stopped";

}

static class ConnectionTestRunnable extends Thread {
    String name;
    volatile boolean quit;          // written by the controlling thread; volatile for cross-thread visibility
    volatile boolean ended = false; // read by the controlling thread
    PrintWriter out;

    public ConnectionTestRunnable(String name, PrintWriter out) {
        this.name = name;
        this.out = out;
    }

    public boolean isQuit() {
        return quit;
    }

    public void setQuit(boolean quit) {
        this.quit = quit;
    }

    Random rand = new Random();

    // returns a random number in 1..max
    private long getRandomNumber(int max) {
        return rand.nextInt(max) + 1;
    }

    // returns the elapsed time in nanoseconds
    public long cycle() {
        long t1 = System.nanoTime();
        try {

            // runs 20 separate connection checkouts and queries in each cycle
            for (int i = 0; i < 20; i++) {
                runQuery();
                if (Thread.interrupted()) {
                    // We've been interrupted: no more crunching.
                    return -1;
                }
            }

        } catch (Exception e) {
            e.printStackTrace();
        }
        long t2 = System.nanoTime();
        return t2 - t1;
    }

    // checks out a Connection, runs the query, and returns everything in finally
    private void runQuery() {
        PreparedStatement p = null;
        ResultSet rs = null;
        Connection c = null;
        try {
            c = DBUtil.getInstance().getConnection();
            p = c.prepareStatement(sql);
            rs = p.executeQuery();
            rs.next();
            rs.getString("data_fld_1");
        } catch (Throwable t) {
            t.printStackTrace();
        } finally {
            DBUtil.getInstance().closeFinally(rs, p, c);
        }
    }

    public boolean isEnded() {
        return ended;
    }

    void report(long timeTook) {
        out.print(timeTook);
        out.print("," + name);
    }

    @Override
    public void run() {
        ended = false;
        while (!quit && !Thread.currentThread().isInterrupted()) {

            long took = cycle();

        //  report(took);

            try {
                Thread.sleep(getRandomNumber(5) * 100);  // sleep 100 to 500 milliseconds
            } catch (InterruptedException e) {
                quit = true;
                ended = true;
            }

        }
        ended = true;
        System.out.println("--->thread " + name + " ended");
    }
}

}

swaldman commented 8 years ago

hi,

it’s neither an issue, nor did you do anything wrong. it just happened.

c3p0 will expand the pool’s size whenever it is below maxPoolSize and experiences a condition where a client wants a Connection but the pool has none available. When it expands, it tries to ensure it will acquire acquireIncrement Connections if it finds itself scheduled to acquire fewer. It won’t try to expand past maxPoolSize.

your test starts with an empty pool, and immediately puts the pool under concurrent load. that’s going to create a lot of unsatisfiable checkout attempts that trigger an attempt to expand the pool. each such attempt may cause the pool to add one, two, or three Connections to its estimate of its desired size, depending on how much it is currently scheduled to expand (three is the maximum, because that is your acquireIncrement).
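The batched-expansion arithmetic above can be illustrated with a toy model. To be clear, this is only a sketch of the policy as described, not c3p0's actual code; real growth also depends on timing, thread interleaving, and the repeated checkouts your test performs, which is why the observed pool ends up even larger:

```java
// Toy model of batched pool expansion: NOT c3p0's real algorithm.
// Each waiter that finds the pool empty tops pending acquisitions
// up to acquireIncrement; acquisitions complete one at a time and
// are immediately checked out by a waiting client.
public class PoolGrowthToy {
    static int simulate(int waiters, int acquireIncrement, int maxPoolSize) {
        int size = 0, pending = 0;
        while (waiters > 0) {
            if (pending < acquireIncrement && size + pending < maxPoolSize) {
                pending = Math.min(acquireIncrement, maxPoolSize - size); // top up the batch
            }
            pending--; size++; waiters--; // one acquisition completes and is grabbed
        }
        return size + pending; // Connections acquired plus still on order
    }

    public static void main(String[] args) {
        // 10 clients against an empty pool, acquireIncrement = 3:
        System.out.println("final pool size: " + PoolGrowthToy.simulate(10, 3, 100));
        // overshoots the 10 clients, because expansion happens in batches
    }
}
```

With acquireIncrement set to 1, the same toy settles at exactly the number of clients: batch size is the overshoot knob.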

your test triggers a bunch of expansion requests, and you end up at 20. that’s fine. c3p0 makes no particular guarantees about where between minPoolSize and maxPoolSize the pool will be. it grows in response to crises (Connection requests it can’t fulfill), and stops growing when it stops experiencing those crises.

in your case, you set up a lot of crises by starting the pool so small and putting it immediately under concurrent load. so the pool grows big fast, bigger than we know is necessary under your test conditions. that’s fine: too big will handle the load very well.

if you are resource constrained and you want your pool to spike as necessary in response to load (so you use a big maxPoolSize), but to keep a steady state close to actual steady-state load, then set maxIdleTimeExcessConnections. the pool will still grow too large under the conditions you have set up, but it will quickly pare away the excess Connections (with quickly defined by the value you set) back down towards the minimum value necessary to serve the load, 10 Connections in your case.
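For instance, in a c3p0.properties file on the classpath (the values here are illustrative, not recommendations):

```properties
# let the pool spike under load...
c3p0.maxPoolSize=100
# ...but pare Connections beyond the steady-state need once they sit idle for 60s
c3p0.maxIdleTimeExcessConnections=60
```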

i’m sure Oracle’s pool uses different policies and heuristics to determine its resizing. neither is right or wrong. c3p0’s policy is to always expand the size of the pool when it experiences a “stock out”, even if some expansion is already scheduled, until it hits maxPoolSize. that does mean c3p0 will err on the side of a “too large” pool when it experiences lots more load than it can handle. that choice is good for client performance, but bad for application footprint. Oracle’s more conservative expansion policy means more clients (under real-life load uncertainty) might have to wait for a new Connection, but the application footprint remains smaller. different tradeoffs have been made.

with c3p0, you can tune the tradeoff with settings maxIdleTimeExcessConnections and acquireIncrement. if you set acquireIncrement to 1 (i don’t recommend this), you’ll probably see a smaller pool expansion. (it still might go higher than 10 though.) if you set maxIdleTimeExcessConnections, when you overshoot the steady state, you’ll see the pool size quickly shrink back.

smiles,
   Steve


x8699876 commented 8 years ago

Thanks Steve for explaining in such detail! It greatly helped me understand the pool behavior.

Connections are very valuable resources in a cloud application, so I want the pool to grow more conservatively if possible. Just in case I need to customize this behavior: "it grows in response to crises (Connection requests it can’t fulfill), and stops growing when it stops experiencing those crises." Instead of rapidly growing the pool toward the max, I want it to create only one Connection at a time when there are no idle/available connections in the pool. Can you point me to the source code where I should start looking?

Thanks, Tony

swaldman commented 8 years ago

no need to go to the source. if you want just one new Connection in response to nonavailability, set the config param acquireIncrement to 1.
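For example, assuming configuration via a c3p0.properties file:

```properties
# grow the pool one Connection at a time on a stock-out, instead of in batches
c3p0.acquireIncrement=1
```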

if you are curious, pool growth behavior is defined in com.mchange.v2.resourcepool.BasicResourcePool.

 smiles,
       Steve


x8699876 commented 8 years ago

Steve, I changed acquireIncrement to 1. I see the pool still sometimes grows very rapidly, and then of course shrinks, because I had also set maxIdleTimeExcessConnections to 20 seconds. But using too tight a maxIdleTimeExcessConnections to shrink the pool has side effects too, because I don't want the pool to shrink down that fast either. And if I set maxIdleTimeExcessConnections to 1 minute, the pool allocates too many connections on average: nearly double the number of threads I run, e.g. 40-50 (50 being the max) connections in the pool for 20 threads.

What I would like is the minimum footprint, without relying purely on maxIdleTimeExcessConnections to cut down the excess connections. The pool still grows too fast in the first place, at least for my requirement. If the pool did not grow that fast, it would probably mean letting a request starve a little when it asks for a free connection that is not there; it might have to wait a little longer.

With maxIdleTimeExcessConnections set to 60 seconds, I observed some connections stay idle all the way up to a minute and then get removed from the pool. But in the meantime, new connections are created. It looks like those idle connections are not being actively used, yet the pool still decides to create new physical connections. I think this is an issue.

So why is it growing so fast? Can you shed some light in terms of design? It seems resource acquisition is done asynchronously through the background threads. What is the difference between ScatteredAcquireTask and the regular AcquireTask?

[screenshot: pool statistics, 3-31-2016 12:53 PM]

rahulr692 commented 5 years ago

@mhisoft Hi, did you ever find out why the pool was increasing so rapidly?