Closed (riedelcastro closed this 6 years ago)
I'm not sure I'm a huge fan of this... for instance, [1, 2] do not enforce version constraints; there are also other sources of randomness, e.g. the nltk and spacy models being used.
I'm also afraid that updating version numbers in requirements.txt (or setup.py) can be a time sink.
[1] https://github.com/tensorflow/tensor2tensor/blob/master/setup.py
[2] https://github.com/fchollet/keras/blob/master/setup.py
I don't think it's a time sink. It has to be done once, and then we just need to be explicit about when dependencies change. And that's a good thing. For example, I think the move from TF 1.13 to 1.14 should be made explicitly, not discovered only because something suddenly failed a test.
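Making the upgrade explicit, as argued above, could look like this in setup.py. This is only a sketch: the package names and version numbers below are illustrative, not jack's actual dependencies.

```python
# setup.py sketch: exact pins turn every dependency upgrade into an explicit
# commit that can be reviewed against the test results.
# Versions below are illustrative, not jack's real requirements.
from setuptools import setup, find_packages

setup(
    name="jack",
    packages=find_packages(),
    install_requires=[
        "tensorflow==1.13.1",  # exact pin: moving to 1.14 is a deliberate change
        "numpy~=1.16.0",       # compatible-release pin: 1.16.x patches still allowed
    ],
)
```

The `==` pin gives full determinism at the cost of manual bumps; `~=` (PEP 440 compatible release) is a middle ground that still admits patch updates.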
@riedelcastro what if the user wants the latest TF for their project(s), but jack is a bit late and still requires an older version?
I can ask Wercker to use TF 1.x for the overfit tests (so they don't fail for random reasons) and the latest TF for all other tests - that should fix the problem!
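At the CI level, the split proposed above amounts to installing a different TF requirement per test group before running it. A minimal sketch of that decision (this is not actual Wercker config; the suite names and the pinned version are made up for illustration):

```python
def tf_requirement(test_suite):
    """Pick the pip requirement string to install before running a test suite.

    The stochastic overfit tests get an exact pin so they stay deterministic;
    every other suite tracks whatever pip resolves as the latest release.
    Suite names and the pinned version are illustrative.
    """
    if test_suite == "overfit":
        return "tensorflow==1.13.1"  # pinned: overfit tests stay reproducible
    return "tensorflow"  # unpinned: pip installs the latest release


print(tf_requirement("overfit"))
print(tf_requirement("unit"))
```

CI would then run something like `pip install <requirement>` followed by the suite, once per group.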
Seems very weird to use different TF versions for different tests.
@pminervini a good reason for us to keep up and maintain jack regularly :)
I think TF is stable enough that we can pin a version and update it once in a while, guided by the test results.
Indeed, at some point we had this feature in Jack (because I grew bored of tests failing) - but then I wanted to use the latest TF while jack was still pinned to an older one, and I quickly reverted that change.
The current setup means that fresh pip install calls might produce different installations whenever any dependency version advances. This affects reproducibility and testing.
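One way to at least detect that drift is to record the dependency versions an installation actually resolved to, so that a newly failing test can be traced to a version bump. A standard-library sketch (the helper name is made up):

```python
# Sketch: log the versions an install actually resolved to, so test failures
# can be correlated with dependency bumps. Requires Python 3.8+.
from importlib.metadata import PackageNotFoundError, version


def installed_versions(distributions):
    """Map each distribution name to its installed version, or None if absent."""
    resolved = {}
    for name in distributions:
        try:
            resolved[name] = version(name)
        except PackageNotFoundError:
            resolved[name] = None
    return resolved


# Printing this at the start of a CI run makes "which dependency advanced
# since the last green build?" answerable from the logs alone.
print(installed_versions(["pip", "setuptools"]))
```

Comparing this snapshot between a green and a red build narrows the search before touching any pins.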