george-pc opened 7 months ago
This is really interesting, thank you for raising this.
I have a question: are the SQLs part of setup/teardown logic, or are they part of the actual load/performance test?
If they are part of setup/teardown logic, you might use existing test framework methods (like `@BeforeEach` and `@AfterEach` in JUnit 5) and implement the necessary logic in Java.
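For the setup/teardown route, the SQL files would typically be read and executed from the lifecycle methods with plain JDBC. As a hedged illustration (the JUnit annotations and JDBC execution are omitted so the helper stays dependency-free), here is a minimal sketch of the script-splitting part you might call from `@BeforeEach`/`@AfterEach`, assuming statements are separated by semicolons:

```java
import java.util.ArrayList;
import java.util.List;

// Splits a SQL script into individual statements so each one can be
// passed to Statement.execute() from a @BeforeEach/@AfterEach method.
// Assumes statements are separated by semicolons and contain no
// semicolons inside string literals (a simplification).
public class SqlScriptUtil {

  public static List<String> statements(String script) {
    List<String> result = new ArrayList<>();
    for (String part : script.split(";")) {
      String trimmed = part.trim();
      if (!trimmed.isEmpty()) {
        result.add(trimmed);
      }
    }
    return result;
  }

  public static void main(String[] args) {
    String script = "CREATE TABLE t (id INT);\nINSERT INTO t VALUES (1);";
    System.out.println(statements(script));
  }
}
```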
In the other case, we might add `directoryFilesDataSet` to JMeter DSL, and you could use something like:
```java
String sqlFileVar = "SQL_FILE";
String queryVar = "QUERY";
String poolName = "pool";
testPlan(
    // using directory listing plugin (https://github.com/Blazemeter/jmeter-bzm-plugins/blob/master/directory-listing/DirectoryListing.md)
    directoryFilesDataSet(new TestResource("sqls"), sqlFileVar),
    jdbcConnectionPool(poolName, Driver.class, "jdbc:postgresql://localhost/my_db")
        .user("user")
        .password("pass"),
    threadGroup(1, 1,
        jdbcSampler(poolName, "${" + queryVar + "}")
            .children(jsr223PreProcessor(s ->
                s.vars.put(queryVar, new TestResource(s.vars.get(sqlFileVar)).rawContents())))
    )
).run();
```
If you have different query types (not just selects), you could use `s.sampler.setProperty("queryType", X)` inside the pre-processor, or we could extend the `jdbcSampler.queryType` method to allow any arbitrary string, which would let you use `queryType("${QUERY_TYPE}")` and set the query type in the pre-processor the same way we set the query.
PS: as with `csvDataSet`, we could add a `stopThreadOnEOF` method to stop the test plan when the directory listing is exhausted, or to just restart from the beginning of the list.
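The two exhaustion policies mentioned (stop vs. restart) could be pictured, outside of JMeter, as a simple cursor over the file list. This is a plain-Java sketch to illustrate the semantics, not the eventual DSL API:

```java
import java.util.List;

// Illustrates the two exhaustion policies discussed for a
// directory-backed data set: stop at end-of-list, or wrap around.
// This is a plain-Java sketch, not the jmeter-java-dsl API.
public class FileCursor {
  private final List<String> files;
  private final boolean wrapAround;
  private int next = 0;

  public FileCursor(List<String> files, boolean wrapAround) {
    this.files = files;
    this.wrapAround = wrapAround;
  }

  // Returns the next file name, or null when the list is exhausted
  // and wrapAround is false (the "stop thread on EOF" case).
  public String nextFile() {
    if (next >= files.size()) {
      if (!wrapAround) {
        return null;
      }
      next = 0; // restart from the beginning of the list
    }
    return files.get(next++);
  }

  public static void main(String[] args) {
    FileCursor stop = new FileCursor(List.of("a.sql", "b.sql"), false);
    System.out.println(stop.nextFile()); // a.sql
    System.out.println(stop.nextFile()); // b.sql
    System.out.println(stop.nextFile()); // null
  }
}
```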
Thank you for your quick responses. The SQLs are not part of setup/teardown. Right now I have a JMX test plan where I store the queries in a CSV file and read them using the CSV reader. This works for small files, but when the number of entries (queries) in the CSV file grows, or the queries themselves become large, maintaining and reading from the CSV becomes a challenge. I therefore wanted to see if I can place all the SQL files in a directory and have a directory reader read them one by one. This could be further enhanced with the logic below when reading the SQL from the directory:
Attached Test Plan for reference: Test_Plan_Scale_UP_down_UTG.jmx.zip
I run this as below, using properties files:
jmeter -t Test_Plan_Scale_UP_down_UTG.jmx -q connection.properties -q test.properties
connection.properties:

```
HOSTNAME=localhost
PORT=80
USER=george
PASSWORD=
CATALOG=tpcds
DATABASE=sf100
CONNECTION_STRING=jdbc:presto://localhost:8080/tpcds/sf10
DRIVER_CLASS=com.facebook.presto.jdbc.PrestoDriver
```

test.properties:

```
JMETER_HOME=/Users/macbookpro/jmeter/apache-jmeter-5.6.2
QUERY_PATH=/Users/macbookpro/Documents/Jmeter_Test_Plans/Data_Files/CSV_Files/Jmeter-TPCDS-50.csv
REPORT_PATH=/Users/macbookpro/Documents/Jmeter-Results
CONCURRENT_QUERY_COUNT=10
HOLD_PERIOD=10
QUERY_TIMEOUT=10
LIMIT_RESULTSET=1000
MAX_CONCURRANCY=10
```
Note: I don't think the test plan fully converts to a valid DSL using the JMX converter.
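The directory-reading behavior described above can be sketched with plain `java.nio`. This only demonstrates the intended listing order and per-file reading, not the JMeter integration; the `.sql` suffix and alphabetical ordering are assumptions:

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.Comparator;
import java.util.List;
import java.util.stream.Collectors;
import java.util.stream.Stream;

// Reads every .sql file from a directory, one by one, in a stable
// (alphabetical) order, mirroring what a directory-backed data set
// could do instead of a single large CSV of queries.
public class SqlDirectoryReader {

  public static List<Path> listSqlFiles(Path dir) throws IOException {
    try (Stream<Path> entries = Files.list(dir)) {
      return entries
          .filter(p -> p.toString().endsWith(".sql"))
          .sorted(Comparator.comparing(p -> p.getFileName().toString()))
          .collect(Collectors.toList());
    }
  }

  public static void main(String[] args) throws IOException {
    Path dir = Files.createTempDirectory("sqls");
    Files.writeString(dir.resolve("01-count.sql"), "SELECT COUNT(*) FROM t");
    Files.writeString(dir.resolve("02-all.sql"), "SELECT * FROM t");
    for (Path sqlFile : listSqlFiles(dir)) {
      String query = Files.readString(sqlFile); // one query per file
      System.out.println(sqlFile.getFileName() + ": " + query);
    }
  }
}
```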
Great, thank you for the detailed example and explanation!
If you would like to try to implement and contribute `directoryFilesDataSet`, I think it would be a really good contribution. Otherwise, you might wait until we can dedicate time to it (in a few weeks).
Another potential implementation might be something like:
```java
String queryVar = "QUERY";
String poolName = "pool";
testPlan(
    // using directory listing plugin (https://github.com/Blazemeter/jmeter-bzm-plugins/blob/master/directory-listing/DirectoryListing.md)
    directoryFilesDataSet(new TestResource("sqls"), queryVar)
        .fileContents(),
    jdbcConnectionPool(poolName, Driver.class, "jdbc:postgresql://localhost/my_db")
        .user("user")
        .password("pass"),
    threadGroup(1, 1,
        jdbcSampler(poolName, "${" + queryVar + "}")
    )
).run();
```
This would make the resulting test plan simpler, but it is more effort to implement, since it requires implementing a new JMeter test element.
Regards
We need support to read files from a directory. The use case is to read SQL files from a directory and execute them iteratively. It would be great if the support provided similar properties to either iterate only once or continue on EOF, based on a flag.
This feature helps create data-driven testing, where the data files and possibly the expected results are in a directory. We can then create a test plan that reads and compares the files based on what is present in the directory.
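The data-driven case in the last paragraph could work by pairing each query file with an expected-results file by base name. A hedged sketch of that pairing step follows; the `<name>.sql` / `<name>.expected` naming convention is an assumption for illustration, not anything the DSL defines:

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.LinkedHashMap;
import java.util.Map;
import java.util.stream.Stream;

// Pairs each <name>.sql in a directory with a <name>.expected file
// holding the expected result, so a test plan could execute the query
// and compare its output. The naming convention is an assumption.
public class DataDrivenPairs {

  public static Map<Path, Path> pairQueriesWithExpected(Path dir) throws IOException {
    Map<Path, Path> pairs = new LinkedHashMap<>();
    try (Stream<Path> entries = Files.list(dir)) {
      entries
          .filter(p -> p.toString().endsWith(".sql"))
          .sorted()
          .forEach(sql -> {
            String base = sql.getFileName().toString().replaceAll("\\.sql$", "");
            Path expected = dir.resolve(base + ".expected");
            if (Files.exists(expected)) { // skip queries with no expected file
              pairs.put(sql, expected);
            }
          });
    }
    return pairs;
  }

  public static void main(String[] args) throws IOException {
    Path dir = Files.createTempDirectory("cases");
    Files.writeString(dir.resolve("q1.sql"), "SELECT 1");
    Files.writeString(dir.resolve("q1.expected"), "1");
    System.out.println(pairQueriesWithExpected(dir));
  }
}
```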