andrewazores opened this issue 3 years ago:

Currently we have lots of unit test coverage, including for things like API request handlers, but we should also have a battery of integration tests to truly exercise the API surface and ensure that handlers are properly registered with the webserver and behave as expected in "real" conditions (not with mocks injected).
@hareetd as discussed, we can work on this together by submitting related PRs for each handler
How thorough should the integration tests be? For example, the RulesPostHandler unit test contains invalid MIME type tests that aren't covered in AutoRulesIT, and there are other scenarios that would result in a 4xx or 5xx error that aren't covered in either test. Although it's probably not feasible to cover all the 5xx errors, should we aim to cover all the 4xx situations with known causes?
I think that sounds reasonable. Any 100-3xx status codes defined by the handler (including any implicit 200s that come from just returning intermediate responses in V2 handlers) should be exercised for sure, and any explicitly defined 4xx codes as well. The generic 4xxs like 401 and 427 could probably be added to some testing base class akin to the StandardSelfTest we already have, so that that behaviour is checked consistently in all tests where the handler is expected to require authorization (which is almost all of them).

If you can find any case where you can force a 500 to occur in an integration test then I would say that should actually be filed as a bug against Cryostat itself, and either fixed or handled to turn it into a 4xx or whatever else is appropriate.
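For instance, the shared auth check could be something as simple as the following (rough sketch only, not existing itest code - the endpoint, the webClient helper, and the assumption that the deployment actually requires auth are all placeholders):

import java.util.concurrent.CompletableFuture;
import java.util.concurrent.TimeUnit;

import io.vertx.ext.web.client.WebClient;

import org.hamcrest.MatcherAssert;
import org.hamcrest.Matchers;
import org.junit.jupiter.api.Test;

abstract class AuthSelfTest {

    // Assumed to be provided by the shared base class, already configured
    // with the Cryostat host, port, and TLS trust settings.
    static WebClient webClient;

    @Test
    void shouldReturn401WithoutAuthorizationHeader() throws Exception {
        CompletableFuture<Integer> status = new CompletableFuture<>();
        // No Authorization header is set, so an auth-requiring handler should reject this.
        webClient.get("/api/v1/recordings").send(ar -> {
            if (ar.succeeded()) {
                status.complete(ar.result().statusCode());
            } else {
                status.completeExceptionally(ar.cause());
            }
        });
        MatcherAssert.assertThat(status.get(30, TimeUnit.SECONDS), Matchers.equalTo(401));
    }
}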
As the number of itests grows, how would you recommend organizing the tests? AutoRulesIT.java already contains 6 tests for adding or deleting rules, adding or deleting credentials, and various form validations. I thought of grouping the tests for one handler in a single file, except that some handlers are related: for example, you have to create a rule before you can delete it. Some of my tests can also reuse the same setup code with @BeforeAll, but only if all of the tests in one file use the same rule definition:
static MultiMap testRule = MultiMap.caseInsensitiveMultiMap();

@BeforeAll
static void setup() throws Exception {
    testRule.add("name", "Auto Rule");
    ...
}
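For example, a test in that file could then just submit the shared definition (rough sketch, assuming the webClient helper from StandardSelfTest and that rule creation responds with a 201):

@Test
void shouldCreateRuleFromSharedDefinition() throws Exception {
    CompletableFuture<Integer> status = new CompletableFuture<>();
    // Reuse the MultiMap populated once in @BeforeAll as the POST form body.
    webClient.post("/api/v2/rules").sendForm(testRule, ar -> {
        if (ar.succeeded()) {
            status.complete(ar.result().statusCode());
        } else {
            status.completeExceptionally(ar.cause());
        }
    });
    MatcherAssert.assertThat(status.get(), Matchers.equalTo(201));
}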
Since each integration test file can potentially leave behind some state (e.g. created recordings, new containers in the pod, rule definitions), and tests are intentionally run in a randomized order, we need to be careful that each test also cleans up after itself after its run. Of course, it's possible for the cleanup to also fail, so in that case that failure should be clearly printed out and the test file marked as failed.

To keep this as easy as possible to understand and maintain, each file should therefore be runnable individually and in any order relative to other test files, even if two (or more) files exercise the same or similar functionalities/API handlers. We could create some subdirectories to group related files together just so that it's a little easier to navigate, but this would just be symbolic and help (human) readers.

It's also possible to create base test files like the existing StandardSelfTest, which can be used to extract and reuse common setup/teardown between test files that do similar things or test the same APIs, but this should be done in such a way that the tests are still independent from each other.
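To sketch what I mean (hypothetical names, not existing classes): a base file could own the shared rule lifecycle while each concrete test file stays independently runnable:

import io.vertx.core.MultiMap;

import org.junit.jupiter.api.AfterAll;
import org.junit.jupiter.api.BeforeAll;

// Hypothetical sketch: common setup/teardown extracted into a base test file.
// Each concrete *IT file extending this remains runnable on its own and in any
// order, because the rule it depends on is created and deleted within the file.
abstract class AutoRulesTestBase extends StandardSelfTest {

    static final MultiMap TEST_RULE = MultiMap.caseInsensitiveMultiMap();

    @BeforeAll
    static void createTestRule() throws Exception {
        TEST_RULE.add("name", "Auto Rule");
        // ...remaining rule form fields...
        // POST the rule via webClient here and assert on the response code.
    }

    @AfterAll
    static void deleteTestRule() throws Exception {
        // DELETE the rule via webClient here; if that fails, fail this test
        // file loudly rather than silently leaving state behind for others.
    }
}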
Is this a bug in TemplatesPostHandler?
~/Downloads> vi invalidTemplate.xml
this is not valid XML.
:wq
~/Downloads> curl -v --insecure -F templateName=invalidTemplate https://0.0.0.0:8181/api/v1/templates/
> POST /api/v1/templates/ HTTP/1.1
...
< HTTP/1.1 200 OK
It would seem so. If you do that and then GET /api/v1/targets/localhost:0/templates afterward, does the invalid document show up? The LocalStorageTemplateService in -core is supposed to validate the document before writing it to storage, but maybe that's bugged.
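(By "validate" I just mean refusing to store anything that doesn't even parse; as a rough illustration - this is not the actual -core implementation - the check amounts to something like:)

import java.io.InputStream;

import javax.xml.XMLConstants;
import javax.xml.parsers.DocumentBuilderFactory;

class TemplateValidationSketch {
    // Illustrative only: a well-formedness check that would reject a file
    // containing "this is not valid XML." before it reaches template storage.
    static boolean isWellFormedXml(InputStream uploaded) {
        try {
            DocumentBuilderFactory factory = DocumentBuilderFactory.newInstance();
            factory.setFeature(XMLConstants.FEATURE_SECURE_PROCESSING, true);
            factory.newDocumentBuilder().parse(uploaded);
            return true;
        } catch (Exception e) {
            return false;
        }
    }
}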
GET /api/v1/targets/localhost:0/templates

The invalid template doesn't get written to storage, even though the upload returns a 200 OK, so I'll file a bug for it. The GET only returns the built-in templates:
[{"name":"Profiling","description":"Low overhead configuration for profiling, typically around 2 % overhead.","provider":"Oracle","type":"TARGET"},{"name":"Cryostat","description":"Continuous self-profiling for Cryostat","provider":"Cryostat","type":"TARGET"},{"name":"Continuous","description":"Low overhead configuration safe for continuous use in production environments, typically less than 1 % overhead.","provider":"Oracle","type":"TARGET"},{"name":"ALL","description":"Enable all available events in the target JVM, with default option values. This will be very expensive and is intended primarily for testing Cryostat's own capabilities.","provider":"Cryostat","type":"TARGET"}]
Does anyone know of a better way to step through integration tests than inserting print statements into handlers? I'm trying to find out where an exception gets thrown in CertificatePostHandler.
You could probably attach a debugger to the Cryostat instance: https://stackoverflow.com/a/56823745

Add that line to the JVM flags passed in entrypoint.sh, and expose the 8000 port in the podman run invocation in pom.xml. You should then be able to attach your jdb, Eclipse Remote Debugger, etc. to it.
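If it's not obvious from the answer, the flag it refers to should be the standard JDWP agent option, something along the lines of -agentlib:jdwp=transport=dt_socket,server=y,suspend=n,address=*:8000 (the exact address syntax varies a little between JDK versions), which makes the JVM listen for a debugger connection on port 8000 inside the container.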
If I accidentally send a request without a :targetId in the request URL, the server returns a 500. Should this be left as a 500, because the server doesn't know how to handle this URL, or changed to a 400, because the user entered a URL with a missing parameter?

e.g. GET /api/v2/targets/:targetId/recordingOptionsList without the target ID:
curl -v --insecure https://0.0.0.0:8181/api/v2/targets/recordingOptionsList
...
< HTTP/1.1 500 Internal Server Error
< content-type: text/plain
< content-length: 74
I see:
$ curl -v --insecure https://0.0.0.0:8181/api/v2/targets/recordingOptionsList
* Trying 0.0.0.0:8181...
* Connected to 0.0.0.0 (127.0.0.1) port 8181 (#0)
* ALPN, offering h2
* ALPN, offering http/1.1
* successfully set certificate verify locations:
* CAfile: /etc/pki/tls/certs/ca-bundle.crt
CApath: none
* TLSv1.3 (OUT), TLS handshake, Client hello (1):
* TLSv1.3 (IN), TLS handshake, Server hello (2):
* TLSv1.2 (IN), TLS handshake, Certificate (11):
* TLSv1.2 (IN), TLS handshake, Server key exchange (12):
* TLSv1.2 (IN), TLS handshake, Server finished (14):
* TLSv1.2 (OUT), TLS handshake, Client key exchange (16):
* TLSv1.2 (OUT), TLS change cipher, Change cipher spec (1):
* TLSv1.2 (OUT), TLS handshake, Finished (20):
* TLSv1.2 (IN), TLS handshake, Finished (20):
* SSL connection using TLSv1.2 / ECDHE-RSA-AES256-GCM-SHA384
* ALPN, server did not agree to a protocol
* Server certificate:
* subject: C=CA; O=Cryostat; CN=cryostat
* start date: Aug 5 20:10:38 2021 GMT
* expire date: Feb 1 20:10:38 2022 GMT
* issuer: C=CA; O=Cryostat; CN=cryostat
* SSL certificate verify result: self signed certificate (18), continuing anyway.
> GET /api/v2/targets/recordingOptionsList HTTP/1.1
> Host: 0.0.0.0:8181
> User-Agent: curl/7.71.1
> Accept: */*
>
* Mark bundle as not supporting multiuse
< HTTP/1.1 200 OK
< content-type: text/html
< content-length: 2968
<
<!doctype html>...
which is what I would have expected. It may seem odd, but the reason is that you're doing a GET on a URL that the server doesn't recognize, so it defaults to assuming you're trying to access a routed path within the web-client - therefore it just sends you the web-client assets so that your browser can load the web-client and let the React router figure out what it wants to do. Since this path also doesn't route to anything in the web-client, you just end up on the web-client's fallback view.
I'm not sure how/why you got a 500 on that request, though.

Oh, wait - were you running a minimal image? You'll get a 500 on paths like that after the request times out, if so.
Yeah I was running a minimal image. Good to know, thanks.
Is it worthwhile to verify whether cleanup has succeeded or not in each test (where relevant)? If so, should I throw a RuntimeException with a detailed message when cleanup does fail?

Edit: By "verify" I mean using another request (e.g. a GET) to check that the object in question has actually been deleted, not just relying on a successful DELETE response code.
I think a successful response code on the cleanup is fine. If that's broken then it's also fairly likely that doing a GET to verify the state would also be broken, or at least we would have an inconsistent model of what actually happened.

Throwing an ITestCleanupFailedException or something when cleanup fails could be a good idea, though. This could be used both in try/finally blocks within @Test methods as well as in @AfterEach/@AfterAll methods.
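Roughly what I have in mind (sketch only, nothing here exists yet):

// Sketch: a dedicated exception type so that cleanup failures show up loudly
// and unambiguously in test reports instead of looking like ordinary
// assertion failures.
class ITestCleanupFailedException extends RuntimeException {
    ITestCleanupFailedException(String message) {
        super(message);
    }
}

and then, inside a test or an @AfterAll method (createRule and deleteRule are hypothetical helpers, with deleteRule returning the DELETE response code):

@Test
void shouldExerciseRuleHandler() throws Exception {
    createRule(testRule);
    try {
        // ...exercise the handler under test...
    } finally {
        int status = deleteRule("Auto Rule");
        if (status != 200) {
            throw new ITestCleanupFailedException(
                    String.format("Failed to delete rule \"Auto Rule\": HTTP %d", status));
        }
    }
}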