Well done moving forward, I will review and help out tmrw.
I will spend more time reviewing, but a few comments for now:
idc_v17 in the query - should that point to idc_current instead? I do not think the GHA should be updating the version in the query. The comment in the earlier discussion was to replace idc_current with the specific version and save the result as an artifact, so that the versioned queries could be included in the release for provenance of the packaged indices.

In light of the reorganization of the repositories by @jcfr, I think we should revisit the original plan outlined in https://github.com/ImagingDataCommons/idc-index-data/issues/2#issuecomment-1989574256. In particular, as I understand it, there is no need anymore to attach index files as release artifacts. They can be uploaded directly to PyPI as part of the package. This would also make it unnecessary to create an issue.

Maybe we should have one workflow with a manual trigger to generate the results of running the query, as you did it (this will be useful to verify everything works as expected before preparing the package), and then another one that would be triggered on the creation of the release (as done in idc-index) that will upload the package to PyPI?
Suggestions:

- Create a scripts directory:
  - move src/sql/idc_index.sql to scripts/sql/idc_index.sql
  - move .github/get_latest_index.py to scripts/idc_index_data_manager.py
- Move the initialize_idc_manager_class step of cd.yml as a function to idc_index_builder.py in the scripts directory
- Update CMakeLists with code similar to the following (a sketch of the Python entry point it invokes follows these suggestions):
find_package(
  Python
  COMPONENTS Interpreter
  REQUIRED)

set(IDC_INDEX_DATA_VERSION "17")

if(NOT DEFINED ENV{GCP_PROJECT_ID})
  message(FATAL_ERROR "GCP_PROJECT_ID env. variable is not set")
endif()

option(IDC_INDEX_DATA_GENERATE_PARQUET "Generate idc_index.parquet file" OFF)

set(download_dir "${PROJECT_BINARY_DIR}")

add_custom_command(
  OUTPUT
    ${download_dir}/idc_index.csv.zip
    $<$<BOOL:${IDC_INDEX_DATA_GENERATE_PARQUET}>:${download_dir}/idc_index.parquet>
  COMMAND Python::Interpreter -m idc_index_data_manager
    --generate-csv-archive
    $<$<BOOL:${IDC_INDEX_DATA_GENERATE_PARQUET}>:--generate-parquet>
    --version ${IDC_INDEX_DATA_VERSION}
  )
...
- Add, in addition to cd.yml, a bump-version.yml workflow that will get the latest version, get the current version from CMakeLists.txt (by parsing it using a regex), and if it applies, update CMakeLists.txt and create a pull request.

This first version should package v17 without parquet; once this is working, we will then release version 17.0.0 on PyPI.
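For concreteness, here is a minimal sketch of what the scripts/idc_index_data_manager.py entry point invoked by the CMake snippet above could look like. The CLI flags mirror the add_custom_command; the query logic, file paths, and use of pandas are assumptions, not the actual implementation:

```python
# Hypothetical sketch of scripts/idc_index_data_manager.py; the CLI flags
# mirror the CMake add_custom_command above, everything else is assumed.
import argparse
import os
from pathlib import Path

import pandas as pd
from google.cloud import bigquery


def run_query(version: str) -> pd.DataFrame:
    """Run the index query against the versioned IDC dataset (e.g. idc_v17)."""
    sql = Path("scripts/sql/idc_index.sql").read_text()
    sql = sql.replace("idc_current", f"idc_v{version}")
    client = bigquery.Client(project=os.environ["GCP_PROJECT_ID"])
    return client.query(sql).to_dataframe()


def main() -> None:
    parser = argparse.ArgumentParser(prog="idc_index_data_manager")
    parser.add_argument("--generate-csv-archive", action="store_true")
    parser.add_argument("--generate-parquet", action="store_true")
    parser.add_argument("--version", required=True)
    args = parser.parse_args()

    df = run_query(args.version)
    if args.generate_csv_archive:
        df.to_csv("idc_index.csv.zip", index=False)  # zip inferred from extension
    if args.generate_parquet:
        df.to_parquet("idc_index.parquet")


if __name__ == "__main__":
    main()
```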
After that, we will add the bump-version.yml workflow to effectively detect that a new version is available and release it as well.
> After that, we will add the bump-version.yml workflow to effectively detect that a new version is available and release it as well.

Can you clarify what you mean by this? Effectively detect a new version of what, and in which package?

> Can you clarify what you mean by this? Effectively detect a new version of what, and in which package?
The bump-idc-data-index-version.yml workflow would be scheduled to run at a given frequency (daily, weekly or monthly); it would then do the following:

- call retrieve_latest_idc_release_version^1 (see the sketch below)
- compare the result with the version hard-coded in CMakeLists.txt and, if a newer release is available, update it and create a pull request
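A possible shape for that first step, assuming the "list all datasets in bigquery-public-data and pick the latest idc_v*" approach mentioned in the PR description:

```python
# One possible retrieve_latest_idc_release_version, assuming the approach of
# listing datasets in bigquery-public-data and picking the latest idc_v*.
import re

from google.cloud import bigquery


def retrieve_latest_idc_release_version() -> int:
    client = bigquery.Client()
    versions = [
        int(match.group(1))
        for dataset in client.list_datasets(project="bigquery-public-data")
        if (match := re.fullmatch(r"idc_v(\d+)", dataset.dataset_id))
    ]
    return max(versions)
```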
> The bump-idc-data-index-version.yml workflow would be scheduled to run at a given frequency (daily, weekly or monthly); it would then do the following:

I see. We considered this approach before, but at this point I do not believe it is a good one. First, IDC releases do not happen frequently (once every 1-3 months), so there is no significant burden in responding to IDC version updates manually. Second, new IDC releases will have new data or schema updates that may break assumptions, produce an index that is too large, or cause other problems we cannot anticipate. Because of this, it is safer to manually test the queries and consider the result of each query before publishing.
I would prefer to have:
@jcfr I was finally able to call the data manager directly from CMake, and cleaned up the data manager and cd heavily. I realized I was very far from your vision about using CMake to handle index generation, but I am also slowly understanding how pyproject.toml, CMake, and the hynek/build-and-inspect-python-package GHA combination work together. Thanks for your patience. The latest iteration addresses most if not all of the things Andrey would like. I hardcoded generating both CSV and Parquet, as I realized we will be hardcoding even if we use CMakeLists. But with some help, we can fix it if we should ideally parameterize CMake instead of the Python file.
latest successful run: https://github.com/vkt1414/idc-index-data/actions/runs/8366868268
One more update: after including the GCP authentication step, CI is also working on all combinations of Python 3.8-3.12 on the latest macOS, Ubuntu, and Windows. However, PyPy 3.10 on the latest Ubuntu did not work and the logs did not help with troubleshooting, so I disabled pypy-3.10 for now. Latest CI run: https://github.com/vkt1414/idc-index-data/actions/runs/8376050860
For reference, this PR was needed to fix the CI failure caused by the inability to access GHA secrets from a PR submitted from a fork of the repo: https://github.com/ImagingDataCommons/idc-index-data/pull/7.
The issue with the current approach is that the wheels are not reproducible: the content of the index files being generated depends on the time at which they are generated.
@jcfr this is a very good point. I did not consider that it is important for the wheel packaging process to be reproducible.
> revisit the versioning from being "number" based to be date based (e.g. YYYY.MM.DD.N)
The current approach of using the IDC data release version as the major version of the package serves exactly the purpose of enabling reproducibility. I would argue that mapping a date to the IDC release version is not trivial; but if the user knows the IDC data version, that is all that is needed to make queries reproducible.
Thinking about this a bit more, we could modify the queries to have, for example, {idc_versioned_dataset} in place of idc_current, and replace that placeholder with the IDC version based on the idc-index-data major release number. So, for example, if the user generates release tag 17.2.10, the GHA workflow would pick 17, replace the dataset name in each of the queries with idc_v17, run the queries, and package+publish the result as idc-index-data v17.2.10.
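A minimal sketch of that substitution; obtaining the tag via the GITHUB_REF_NAME environment variable is an assumption about how the workflow would see it:

```python
# Minimal sketch of the {idc_versioned_dataset} substitution described above;
# reading the tag from GITHUB_REF_NAME is an assumption about the workflow.
import os
from pathlib import Path


def materialize_query(sql_path: Path) -> str:
    tag = os.environ["GITHUB_REF_NAME"]   # e.g. "17.2.10" on a release tag
    major = tag.split(".")[0]             # -> "17"
    sql = sql_path.read_text()
    return sql.replace("{idc_versioned_dataset}", f"idc_v{major}")
```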
> we could modify the queries to have, for example, {idc_versioned_dataset} in place of idc_current
Indeed, that would be similar to the use of @IDC_TABLENAME@ discussed in https://github.com/ImagingDataCommons/idc-index-data/pull/5#discussion_r1527111484.

For convenience, supporting --output-sql-query would also be nice to have.
Extracting the major version and mapping it to the name of the table is an option. For this to work, we would also need to customize the versioning with the following settings (so that untagged development builds derive their version from the last tag instead of guessing the next release):

[tool.setuptools_scm]
version_scheme = "no-guess-dev"
The "challenge" with relying only on the tag is that we would have no way of testing the generation of a wheel associated with a new version of the index by simply creating a pull request, indeed a new tag and release would have to be created first without if it actually work.
To address this I would instead suggest to hard-code the major version in the CMakeLists.txt
as we originally discussed.
The "challenge" with relying only on the tag is that we would have no way of testing the generation of a wheel associated with a new version of the index by simply creating a pull request, indeed a new tag and release would have to be created first without if it actually work.
To address this I would instead suggest to hard-code the major version in the
CMakeLists.txt
as we originally discussed.
If all of the solutions we discussed involve manually updating the version, can't we simply hardcode the IDC version in the SQL query itself?
> If all of the solutions we discussed involve manually updating the version, can't we simply hardcode the IDC version in the SQL query itself?
I have a draft of a bump function addressing this, which allows updating the files using nox -s bump (to query the latest version) or nox -s bump -- <version>.
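Something along these lines in noxfile.py, perhaps; the actual draft may differ, and the no-argument path that queries the latest release is only indicated:

```python
# Sketch of a possible "bump" nox session; the actual draft may differ.
import re
from pathlib import Path

import nox


@nox.session(venv_backend="none")
def bump(session: nox.Session) -> None:
    """Usage: nox -s bump -- <version> (or nox -s bump to query the latest)."""
    if session.posargs:
        version = session.posargs[0]
    else:
        # The draft would query the latest IDC release here instead.
        session.error("please pass a version: nox -s bump -- <version>")
    cmakelists = Path("CMakeLists.txt")
    cmakelists.write_text(
        re.sub(
            r'set\(IDC_INDEX_DATA_VERSION "\d+"\)',
            f'set(IDC_INDEX_DATA_VERSION "{version}")',
            cmakelists.read_text(),
        )
    )
```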
Before moving forward, we really need @fedorov to integrate https://github.com/ImagingDataCommons/idc-index-data/pull/8, as none of the code in this pull request is being tested.
@jcfr that PR is integrated now, please go ahead or let us know what we should do to fix this.
@jcfr I recall when we met last time you mentioned that one could handle packaging with just Python, and that CMake is not critical. Maybe you can comment on that a bit. I do not want to derail the current development, but I also do not know if learning CMake is a worthwhile investment of effort for Vamsi. Just wanted to raise the idea so he could understand the alternatives better.
re: CMake vs Python

Ditto. A pure Python plugin could be written to implement similar behavior; there are some hints in the "Discussions" section of one of our recent scikit-build meetings^1.

Given my limited experience with such a hatch plugin, I suggest we move forward with CMakeLists.txt and consider revisiting afterward. Revisiting later will be much easier once we have a working system.

Also worth noting that I expect the CMakeLists.txt to remain small and be straightforward to maintain.
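For illustration only, the pure-Python route could look roughly like a hatchling custom build hook (registered via a [tool.hatch.build.hooks.custom] entry in pyproject.toml); this is a sketch under those assumptions, not a worked-out plugin, and the index-generation step is only indicated:

```python
# hatch_build.py -- rough sketch of the pure-Python alternative using a
# hatchling custom build hook; the index-generation step is only indicated.
from pathlib import Path

from hatchling.builders.hooks.plugin.interface import BuildHookInterface


class IdcIndexDataBuildHook(BuildHookInterface):
    def initialize(self, version, build_data):
        index = Path(self.root) / "idc_index.csv.zip"
        # ... run the BigQuery query and write the index file here ...
        # Ship the generated file inside the wheel.
        build_data["force_include"][str(index)] = "idc_index_data/idc_index.csv.zip"
```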
re: permission error

For now, I suggest we create the branch feat-cd-2 directly in the repo rather than from a fork.
Closing. This pull request is superseded by https://github.com/ImagingDataCommons/idc-index-data/pull/9
This PR aims to address part of #2.
[x] Manual trigger only for now
[x] Take all of the queries in the queries folder and run them
[x] create artifacts containing the result of each query saved as CSV and Parquet; files should be named consistently with the query file name and include the IDC version in the file name (by figuring out what idc_current maps to at the time the query is executed)
[x] to get the number of the latest version of IDC, list all of the datasets in the bigquery-public-data project and get the latest idc_v* (or query bigquery-public-data.idc_current.version_metadata to get the latest IDC release)
[x] create an issue that will include links to the artifacts generated by the GHA, with a title like "[github-action] Index updates IDC v..." (something like that)
[x] replace idc_current with the actual version in the query and save each of the queries as a GHA artifact
@jcfr I'm still figuring out how CMake works and am not completely sure this is the best way to package the indices. As we are not uploading the indices as GH release attachments, I removed the download-by-URL logic and instead tried to package the generated indices as part of the cd. I'm working on ci.yml too, but I am unable to figure out how to package the indices. Any guidance is much appreciated. https://github.com/vkt1414/idc-index-data/actions/runs/8288398907
@fedorov we may need to decide whether to keep both CSV and Parquet or choose one. In my opinion we should keep both and slowly phase out CSV. Also, please let me know if this PR meets the goals mentioned in #2.
we will need to create a couple of secrets for this gha. A sample successful run is available here: https://github.com/vkt1414/idc-index-data/actions/runs/8289604309/job/22686311347