Closed: al-niessner closed this 2 months ago
@al-niessner similar to https://github.com/NASA-PDS/harvest/pull/161#issuecomment-2274140476, any type of integration test we can create for this?
@jordanpadams
I do not have any ideas. You would need to ingest two units, then update one of them, to do a complete test. My test was half of that: one harvest followed by one update. Automating such a test would require a bunch of infrastructure: running OpenSearch, secrets, expectations that can change with the test data, etc.
@al-niessner Could we run the Registry docker-compose to load some data, and then verify we can change the status? I can foresee that as we add new functionality to registry-mgr, we will want integration tests showing those capabilities complete successfully. The Registry repo has the ability to spin up a registry and load some data, so it seems natural that we could use that and then manipulate that data to test?
@tloubrieu-jpl @alexdunnjpl thoughts?
Basically something like this, but then execute an additional test script from this repo to do some additional regression tests.
@jordanpadams , @al-niessner ,
For the harvest/registry-mgr integration tests, I usually add a script which initializes the data in the docker compose setup, and then validate the status of the data in Postman.
However, this approach is limited, since adding a test means updating a specific script which initializes the data in the docker compose setup. See, for registry-loader: https://github.com/NASA-PDS/registry-loader/blob/33a842e860af98266414a0251a4c28ae6723cdbb/docker/entrypoint.sh#L64
We can handle traceability to the requirements in the related Postman test.
That is very much improvable...
While we keep using docker compose (until we move to terraform and localstack, for example), we could move the harvest/registry-mgr test calls to a dedicated location in the registry repository, instead of this entrypoint.sh file where they don't belong; see the sketch below.
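As an illustration only (the script name, endpoint, and environment variable here are hypothetical, and I am assuming -es also accepts a plain URL, as the "Elasticsearch URL:" log line suggests), such a dedicated script could look like:

#!/bin/bash
# test-registry-mgr.sh (hypothetical name): exercise registry-mgr against the
# registry started by docker compose, separately from the data loading that
# lives in entrypoint.sh
set -e
# assumes the compose registry listens on localhost:9200 and a test bundle
# was already harvested under $TEST_PACKAGE_ID
./bin/registry-manager set-archive-status \
  -es http://localhost:9200 \
  -packageId "$TEST_PACKAGE_ID" \
  -status archived
# Postman/newman collections can then assert that the status actually changed

That would keep data loading in entrypoint.sh and the registry-mgr regression calls in one place that this repo's tests can own.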
@jordanpadams @tloubrieu-jpl
Short answer: maybe, but it would take weeks of effort and will not meet your goal.
You will not get the same results out of this testing as you do from validate, if that is your goal. You will be hypersensitive to changes in the test data, which this repo has no control over. validate works so well because we are given test data for each item. Here you would be using some other, arbitrary data, and when it changes your tests will fail without any change to the code. Not because something is wrong with the code, but because the changes in the data change your expectations, while the expectations coded here have not changed. The result will be ignoring the tests because the continuous false positives are too costly to fix.
If you want validate-style regression-test value, then you will have to generate specific test data and expectations first. Then you need to mock the OpenSearch layer. It is isolated now, which makes that possible, but expensive in effort. You could use the registry and a live DB, but that has two problems. First, when it changes (the data it loads or the versions it runs), tests will fail through no fault of the code, and somebody has to figure out why and then fix the expectations. Second, you have to write code that can walk the database and inspect values to see if they meet expectations. Again, expensive in effort.
To be as easy as validate, it would need to be simple to create a minimal data item, like a document with just lidvid, lid, packageid, and status, to make it clear what should/could change (see the sketch below). Then do your tests with that clean data; you will miss fewer side effects that way too, like touching things it should not. All the other stuff in our current documents just makes working with it more expensive (time consuming): adding data for small tests like this one, and then breaking expectations for reasons outside the scope of the code being tested.
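For instance (a sketch only; the archive_status field name and the values are placeholders, not necessarily the registry schema), such a minimal document might be:

{
  "lid": "urn:nasa:pds:test:bundle",
  "vid": "1.0",
  "lidvid": "urn:nasa:pds:test:bundle::1.0",
  "_package_id": "00000000-0000-0000-0000-000000000000",
  "archive_status": "staged"
}

A test would then assert that only the status field changed after set-archive-status runs, and that nothing else in the document was touched.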
Does any of that make sense? Did you even read this far?
My thoughts after @al-niessner comment:
We are in between integration and unit tests.
Unit tests are simple and easy to reproduce; that is what we would get if we mocked OpenSearch. It might be useful to start writing unit tests for harvest and registry-mgr that way.
The current integration tests (docker compose) are more complicated to maintain, and we don't do as many tests, or as detailed checks on the results, as we would with unit tests. We do control the test data (it comes from a GitHub repository), but the tests are not independent: we have a single dataset on which all the tests interact.
We could start intermediate integration tests with only harvest/registry-mgr/opensearch and more granular test datasets (one for each test case), to have a more independent/controlled environment.
But overall, I don't know if we have time now to start something new and be able to maintain it.
@al-niessner @tloubrieu-jpl copy that. I will create a future task for us to create a regression test suite for harvest and registry-mgr. Right now our regression tests cover basic registry loading functionality, but this could become an issue as we add more features to harvest and registry-mgr that we actually want to test.
In the meantime, the only problem is that these PRs are blocked on merge until someone can manually test them. I will try to look at it this weekend.
@al-niessner @jordanpadams I am looking at the PR and something does not sound right.
Some documents are deleted with this command:
% ./bin/registry-manager delete-data \
-auth /Users/loubrieu/Documents/pds/registry/es-auth.txt \
-es file:/Users/loubrieu/Documents/pds/registry/mcp_dev.xml \
-packageId 8d12a9ba-2ba0-4d80-8ce9-65da271ecf89
Elasticsearch URL: file:/Users/loubrieu/Documents/pds/registry/mcp_dev.xml
null
SLF4J: Failed to load class "org.slf4j.impl.StaticLoggerBinder".
SLF4J: Defaulting to no-operation (NOP) logger implementation
SLF4J: See http://www.slf4j.org/codes.html#StaticLoggerBinder for further details.
Deleted 10 document(s) from jaxa-registry index
Deleted 9 document(s) from jaxa-registry-refs index
But some remain in the database, as shown in the discovery interface (screenshot not reproduced here).
I am wondering if the field _package_id works as we would like it to. The warning in that screenshot says fields starting with _ are not supported.
I will investigate that.
I might be wrong somewhere else, but the query:
GET /jaxa-registry/_search
{
"query": {
"match": {
"_package_id": "8d12a9ba-2ba0-4d80-8ce9-65da271ecf8"
}
}
}
Returns:
{
"took": 15,
"timed_out": false,
"_shards": {
"total": 0,
"successful": 0,
"skipped": 0,
"failed": 0
},
"hits": {
"total": {
"value": 0,
"relation": "eq"
},
"max_score": null,
"hits": []
}
}
This is consistent with the fact that I've just run delete-data on this package id. But it is not consistent with the fact that I am still seeing documents with this package id in the discovery interface (screenshot in previous comment).
Also this query:
GET /jaxa-registry/_search?size=1
Returns:
{
"took": 18,
"timed_out": false,
"_shards": {
"total": 0,
"successful": 0,
"skipped": 0,
"failed": 0
},
"hits": {
"total": {
"value": 45,
"relation": "eq"
},
"max_score": 1,
"hits": [
{
"_index": "jaxa-registry",
"_id": "urn:nasa:pds:mars2020.spice:spice_kernels:sclk_m2020_168_sclkscet_refit_v02.tsc::1.0",
"_score": 1,
"_source": {
"lid": "urn:nasa:pds:mars2020.spice:spice_kernels:sclk_m2020_168_sclkscet_refit_v02.tsc",
"vid": "1.0",
"lidvid": "urn:nasa:pds:mars2020.spice:spice_kernels:sclk_m2020_168_sclkscet_refit_v02.tsc::1.0",
"title": "m2020_168_sclkscet_refit_v02.tsc",
"product_class": "Product_SPICE_Kernel",
"_package_id": "8d12a9ba-2ba0-4d80-8ce9-65da271ecf89",
"ref_lid_document": [
"urn:nasa:pds:mars2020.spice:document:spiceds"
],
.....
}
}
]
}
}
Since this feature is not critical for the next release, I suggest merging and closing this PR, and keeping the ticket open, since it does not completely work.
There is this discussion on Elasticsearch from 2020 (before the OpenSearch fork?): https://discuss.elastic.co/t/field-names-starting-with-underscore-are-not-matched-with-wildcard/221254
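If that thread applies here (a sketch only; I have not confirmed that the delete/update path builds its query with a wildcard field pattern), a query like this would silently match nothing, because leading-underscore fields are excluded from wildcard field expansion:

GET /jaxa-registry/_search
{
  "query": {
    "query_string": {
      "fields": ["_package_*"],
      "query": "\"8d12a9ba-2ba0-4d80-8ce9-65da271ecf89\""
    }
  }
}

An explicit field name, as in the match query above, does not hit that limitation, so wildcard expansion would only explain the leftover documents if the code queries that way.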
@tloubrieu-jpl
Um, I am confused. You ran the delete-data command, but this is the status-via-package-id ticket.
@al-niessner sorry, it is me who is confused; I will redo my test properly...
@tloubrieu-jpl
Pull and try again to change the archive status with a package id. I must have missed committing an edit or something.
Hi @al-niessner ,
I pulled and rebuilt, and I am getting a different error:
% ./bin/registry-manager set-archive-status \
-auth /Users/loubrieu/Documents/pds/registry/es-auth.txt \
-es file:/Users/loubrieu/Documents/pds/registry/mcp_dev.xml \
-packageId 8d12a9ba-2ba0-4d80-8ce9-65da271ecf89 \
-status archived
SLF4J: Failed to load class "org.slf4j.impl.StaticLoggerBinder".
SLF4J: Defaulting to no-operation (NOP) logger implementation
SLF4J: See http://www.slf4j.org/codes.html#StaticLoggerBinder for further details.
[ERROR] Request failed: [parse_exception] request body is required
Regarding the delete issue, I will open a different ticket.
@tloubrieu-jpl
Now I see. I got (un)lucky in my test. The SDK v2 default page size is 10, so when I made a change I just happened to look at a record that did change. All 47 did not, because only 10 lidvids were found by the search and then updated. Oh, that is going to be a super mess, because we need to transition to scroll and then limit the bulk pages to fit its limits. New ticket, and move forward with this.
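As a sketch of the scroll-based approach (request bodies are illustrative only, not the registry-mgr implementation): the initial search opens a scroll context and returns the first page plus a scroll id, and each subsequent scroll call drains the next page until no hits come back. Since _id is the lidvid (see the search result above), _source can be suppressed:

GET /jaxa-registry/_search?scroll=1m
{
  "size": 1000,
  "_source": false,
  "query": { "match": { "_package_id": "8d12a9ba-2ba0-4d80-8ce9-65da271ecf89" } }
}

GET /_search/scroll
{
  "scroll": "1m",
  "scroll_id": "<scroll id from the previous response>"
}

The lidvids collected from each page would then be sent to the _bulk API in chunks that stay under its request-size limits.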
Thanks @al-niessner. I also created a ticket for delete-data, see https://github.com/NASA-PDS/registry-mgr/issues/93 . We could re-use the same ticket for set-archive-status if the pagination issue applies to both.
🗒️ Summary
Added code to translate a package id to lidvids, then update their archive status.
⚙️ Test Data and/or Report
Checked using registry-ref-data in a local DB; the results look correct.
♻️ Related Issues
Closes #69
Requires NASA-PDS/registry-common#56
Related to NASA-PDS/harvest#161 (code moved from this repository to registry-common)