No problem, this one is not actually used for production code... Have a look at #1515, see if that fixes it for you.
I could outline potential benefits if you are interested ;)
No need, but you've caught my interest now... What are we talking about?
One more thing: I notice that many of these builds are failing due to failed dependencies. I assume this is related to Qt? If so, you may still be able to offer a command-line only build: just run ./configure -nogui, and it'll skip the Qt tests and skip building mrview and shview.
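In other words, something like this should be enough for a GUI-less build (assuming the usual ./configure / ./build scripts at the top of the source tree):
$> ./configure -nogui    # skips the Qt checks, so mrview and shview are not built
$> ./build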
And one more while I'm at it: it looks like the install runs from a download of the source code, rather than a git clone. The issue there is that the version information that gets baked into the executables is derived from git describe. So these executables will show up as 'version unknown' in their help page and --version output, which is not ideal. Is there any way to use a git clone here...?
Is there any way to use a git clone here...?
not directly. But since I will be preparing the source tarball out of the git repo, we could generate a file with versioning information which could be picked up during build. Many projects have such a practice:
$> grep export-subst */.gitattributes
cctools/.gitattributes:configure export-subst
dipy/.gitattributes:dipy/COMMIT_INFO.txt export-subst
nipy/.gitattributes:nipy/COMMIT_INFO.txt export-subst
nipype/.gitattributes:nipype/COMMIT_INFO.txt export-subst
nipype-master/.gitattributes:nipype/COMMIT_INFO.txt export-subst
pandas/.gitattributes:pandas/_version.py export-subst
pymvpa/.gitattributes:mvpa2/COMMIT_HASH export-subst
pynifti/.gitattributes:nibabel/COMMIT_INFO.txt export-subst
pynifti-localcopy/.gitattributes:nibabel/COMMIT_INFO.txt export-subst
so, as inspired by good old CVS, we get e.g. for pandas:
$> grep '$Format:' pandas/pandas/_version.py
git_refnames = "$Format:%d$"
git_full = "$Format:%H$"
where it places the refnames and hexsha inside Python variables right in the code. It would, though, cause a bit of difficulty in packaging (the source tarball might have different expanded lines than I would have in the source tree, which would be the git tree), but IIRC that is avoidable. A less ambitious option is to have a separate file with that information to be picked up (like in our good old PyMVPA):
$> cat pymvpa/mvpa2/COMMIT_HASH
$Format:%H$
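For clarity: export-subst is a standard git attribute, and the placeholders get expanded whenever the tree is exported with git archive (GitHub's autogenerated tarballs/zipballs go through the same path). A rough sketch, with VERSION_INFO as a made-up file name:
# VERSION_INFO (tracked in git) contains the single line:  $Format:%H$
$> echo 'VERSION_INFO export-subst' >> .gitattributes
$> git add .gitattributes VERSION_INFO && git commit -m 'embed commit info on export'
$> git archive --prefix=mrtrix3/ -o mrtrix3-src.tar HEAD
# in the tarball, VERSION_INFO now holds the full commit hash instead of the placeholder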
I could outline potential benefits if you are interested ;)
No need, but you've caught my interest now... What are we talking about?
oy... ok - two main ones. First, basic QA. E.g. all the big endians, where bytes order is swapped. We once caught a bug on such a platform where a 0, 1 sequence became 128, 0 or smth like that, since one of the bytes was not aligned correctly. So, although assuring correct operation on those platforms sounds like a pain -- it does come handy at times even for the basic QA. Second, there is still that coolness factor (and that was way way back) of demonstrating that your software runs virtually everywhere, if someone is concerned about portability -- and these days ARM chips are being used for clusters, so it might even matter in practice.
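(Not the very bug described, but for a quick demonstration of host byte order changing how the same two bytes are read -- this prints 256 on a little-endian machine and 1 on a big-endian one:)
$> printf '\000\001' | od -An -td2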
No problem, this one is not actually used for production code... Have a look at #1515, see if that fixes it for you.
THANKS! Re "production" -- well, we already uploaded a "non production" version to debian unstable ;)
$> git describe master
3.0_RC3-86-g4b523b413
and current master seems to be as non-production as the previous one:
$> git describe origin/master
3.0_RC3-135-g2b8e7d0c2
so I think we will be fine ;) I will update packaging, and upload, so we will see!
THANKS for quick replies!
generate a file with versioning information which could be picked up during build
I've had a similar problem with the homebrew package, where I had to manually clone and checkout our code into the homebrew repo (instead of simply setting a link to the release code tarball) just to get the version string into the commands. How about we check for a file build.version or similar and use the content of that file as the version string in the absence of a working git repository?
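Something along these lines (just a sketch of the fallback I have in mind, with build.version as the file name):
# sketch of the proposed fallback logic
if [ -d .git ]; then
  version=$(git describe)           # normal case: building from a git clone
elif [ -f build.version ]; then
  version=$(cat build.version)      # packaged source: use the pre-generated string
else
  version="unknown"
fi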
How about we check for a file build.version or similar and use the content of that file as the version string in the absence of a working git repository?
We do something like that in BIDS Apps.
Better would be if a text file in the repo contained up-to-date version information, but that file was actually script-generated, with CI verifying the correspondence. However, I'm not sure that's possible, given that both commit hashes and tags are generated after the commit itself.
E.g. all the big endians, where bytes order is swapped.
We should be covered in that respect. Don't know to what extent it's been tested in recent times (I'm a bit concerned about envvar name mismatch; @jdtournier?), but we've been diligent to cover it off where relevant.
Re "production" -- well, we already uploaded a "non production" version to debian unstable ;)
Yes, that's about as 'production' as we've managed so far. Although we do have actual tag releases (still as release candidates though). Trust me, we get a lot less 'production' than that...
I could outline potential benefits if you are interested ;)
No need, but you've caught my interest now... What are we talking about?
oy... ok - two main ones
Yes, I've heard about ARM chips being used for clusters - that's an interesting one to follow. The other reasons are a bit less appealing to me, but interesting nonetheless.
E.g. all the big endians, where bytes order is swapped.
We should be covered in that respect. Don't know to what extent it's been tested in recent times (I'm a bit concerned about envvar name mismatch; @jdtournier?), but we've been diligent to cover it off where relevant.
Well, we're covered in that we've thought about it, but we're not covered in that none of the testing operates on big-endian systems... While early versions of MRtrix were used on big-endian systems, it's been a (long) while. So there's every chance that bugs might be there and have gone undetected for quite some time.
On that note, I think you might be onto something with the envvar name mismatch... Guess that should be MRTRIX_BYTE_ORDER_IS_BIG_ENDIAN, hey? Guess we'll find out pretty quickly if we get one of the big-endian builds to work...
Onto the version string issue:
How about we check for a file build.version or similar and use the content of that file as the version string in the absence of a working git repository?
The problem is that if you want that file to store the commit SHA1, you can't do that without knowing the commit SHA1 in the first place, since the SHA1 depends on the exact contents of the commit. So yes, we can store a version, but then it can't refer to the actual commit SHA1.
Which also means:
Better would be if a text file in the repo contained up-to-date version information, but that file was actually script-generated and CI verified correspondence. However I'm not sure that's possible given the generation of both commit hashes and tags occur after the commit.
Can't be done - same reason, as you guessed here. On top of that, the CI checks out a merge commit as would be pushed to the repo when the pull request is eventually merged, so it's not even the SHA1 of the commit you've pushed, but a different one again... This is why I ended up with the current strategy.
Which leaves @yarikoptic's suggestion:
since I will be preparing the source tarball out of the git repo, we could generate a file with versioning information which could be picked up during build.
That actually sounds like the best plan. This is the bit that updates the version file (core/version.cpp). What we could do is make sure that part can run even without a valid config (currently, it would abort before that stage, but it's an easy fix - see #1516), so all you'd need to do is clone the repo, run ./build (which would abort pretty quickly, but at least update the version file), delete the .git/ folder, and package it up. I've just checked: if there is no git repo to grab the version string from, the ./build script will leave the current one in place, so that part should already work out of the box.
Does that sound do-able...?
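i.e. something along these lines (untested, and assuming #1516 lets the version-update step run without a valid config):
$> git clone https://github.com/MRtrix3/mrtrix3.git && cd mrtrix3
$> ./build           # expected to abort early without a config, but should still refresh core/version.cpp
$> rm -rf .git       # strip the repository metadata before packaging
$> cd .. && tar czf mrtrix3-src.tar.gz mrtrix3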
Well, my recommendation was somewhat of a hybrid / an addition to the above. What if you do have "$Format:%H$" etc. placeholders in a file in the repo: if ./build finds no literal $Format...$ left (which would mean that the sources were exported by git archive -- locally, like I would do, or by GitHub if someone downloads a zipball), then it just uses those expanded values instead of trying to run git etc. This way, as long as sources are distributed either via git archive or as a clone with .git/ present, you would be all set with the precious git info.
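i.e. roughly something like this (GIT_VERSION being a made-up name for such a file, marked export-subst in .gitattributes and containing the single line $Format:%H$):
hash=$(cat GIT_VERSION)
case "$hash" in
  '$Format'*)                  # placeholder untouched: we are in a real git clone, so ask git directly
    hash=$(git rev-parse HEAD) ;;
  *)                           # already expanded by 'git archive' (or a GitHub zipball): use as-is
    ;;
esac
echo "building commit $hash"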
Based on discussion, suggestion to handle this:
- write MRTRIX_BASE_VERSION to the file src/exec_version.h
- fall back to "unknown" if git is not present or there is no git history
- otherwise derive the version from git describe --tags --abbrev=0 and write it to src/exec_version.h
Thoughts?
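As a rough illustration only (assuming src/exec_version.h just needs to carry the base version as a string; the actual header layout may well differ):
base=$(git describe --tags --abbrev=0 2>/dev/null || echo unknown)
cat > src/exec_version.h <<EOF
#define MRTRIX_BASE_VERSION "$base"
EOF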
closed with #1593
Debian supports (way too ;)) many non-x32 architectures, but builds there fail: https://buildd.debian.org/status/package.php?p=mrtrix3&suite=sid -- quite often simply due to the demand for the following header file for SSE support (available only on i386 and amd64).
Before we go ahead and just blindly restrict the architectures, I wanted to ask: do you see it as feasible to support non-SSE ones? I could outline potential benefits if you are interested ;)
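FWIW, whether SSE is available at all can be seen from the compiler's predefined macros, so in principle the SSE code path could be made conditional rather than assumed; e.g. on the build machines:
$> cc -dM -E -x c /dev/null | grep -E '__SSE2?__' || echo 'no SSE on this architecture'
# amd64 lists __SSE__/__SSE2__; armhf, ppc64el etc. print the fallback message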