murphyke closed this pull request 7 years ago
@bruth @aaron0browne P.S. I haven't fixed the tests yet.
The tests are no longer broken, although there are no tests for the new features.
@aaron0browne 👍 I agree with that.
I noticed that the original code was writing temp files to Flask's `instance_path` directory, so I changed the lock/cache code to use that as well. The CI test had been failing when trying to write the lock file, so maybe this will fix that problem.
I also added a forgotten dependency.
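Roughly, the change amounts to resolving the lock/cache paths against the app's `instance_path` (a sketch; file names like `refresh.lock` are illustrative, not the actual dmsa code, and the `ImportError` fallback exists only so the snippet runs without Flask installed):

```python
import os

# Keep lock/cache files under Flask's instance_path. Note that Flask
# does not create the instance directory automatically.
try:
    from flask import Flask
    app = Flask(__name__)
    instance_dir = app.instance_path
except ImportError:
    import tempfile
    instance_dir = tempfile.mkdtemp()  # fallback for illustration only

os.makedirs(instance_dir, exist_ok=True)

# Hypothetical file names for the lock and cache.
lock_path = os.path.join(instance_dir, 'refresh.lock')
cache_path = os.path.join(instance_dir, 'models_cache.json')

with open(lock_path, 'w') as f:
    f.write(str(os.getpid()))

print(os.path.exists(lock_path))
```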
@aaron0browne The CircleCI build is failing with `No such file or directory: 'requirements.txt'`. Not sure what's going on.
Failing command: tox
Exit code: 1
Output:
```
GLOB sdist-make: /home/ubuntu/data-models-sqlalchemy/setup.py
python create: /home/ubuntu/data-models-sqlalchemy/.tox/python
python inst: /home/ubuntu/data-models-sqlalchemy/.tox/dist/dmsa-0.5.10.zip
ERROR: invocation failed (exit code 1), logfile: /home/ubuntu/data-models-sqlalchemy/.tox/python/log/python-1.log
ERROR: actionid: python
msg: installpkg
cmdargs: ['/home/ubuntu/data-models-sqlalchemy/.tox/python/bin/pip', 'install', '/home/ubuntu/data-models-sqlalchemy/.tox/dist/dmsa-0.5.10.zip']
Processing ./.tox/dist/dmsa-0.5.10.zip
Complete output from command python setup.py egg_info:
Traceback (most recent call last):
  File "<string>", line 1, in <module>
  File "/tmp/pip-gIFaHn-build/setup.py", line 11, in <module>
    with open('requirements.txt') as f:
IOError: [Errno 2] No such file or directory: 'requirements.txt'
----------------------------------------
Command "python setup.py egg_info" failed with error code 1 in /tmp/pip-gIFaHn-build/
python installed:
___________________________________ summary ____________________________________
ERROR: python: InvocationError: /home/ubuntu/data-models-sqlalchemy/.tox/python/bin/pip install /home/ubuntu/data-models-sqlalchemy/.tox/dist/dmsa-0.5.10.zip (see /home/ubuntu/data-models-sqlalchemy/.tox/python/log/python-1.log)
```
tox returned exit code 1
OK, I see now that this has nothing to do with CircleCI per se; running `tox` at the command line behaves the same way.
@aaron0browne Adding `requirements.txt` to `MANIFEST.in` fixes the tox problem. Man, I totally do not understand Python packaging. I'd better read up a bit.
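For anyone hitting the same error: per the traceback above, `setup.py` does `open('requirements.txt')` at `egg_info` time, but a file only ends up inside the sdist if `MANIFEST.in` lists it. The fix is a one-line entry (a sketch; the real `MANIFEST.in` may contain other directives):

```
# MANIFEST.in -- ship requirements.txt inside the source distribution,
# since setup.py opens it at install time
include requirements.txt
```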
CircleCI build works now, FWIW.
FWIW, I think Python packaging is a mess and I don't understand it either... Are you still working on more changes or is this ready to go?
@aaron0browne This is ready to go.
OK! Version update? I believe it's in `dmsa/__init__.py`. Maybe increase the minor version and set the release level to beta. Then, after we deploy and make sure it's working, we can increase to final directly on the master branch. Sound good?
Right, OK
@aaron0browne So how do you want to proceed with this? Should I merge this, and you will do a test deployment?
@burrowse What stage of the PEDSnet data cycle are you in right now? Deploying and testing would be easiest if we could cause a slight outage in the service without bothering anyone. Is that the case right now?
@aaron0browne This week is an active ETL week for sites as the submission is due next Monday. I would imagine if there was a way for the testing to be done after hours it would be okay or we can make an announcement for the service to be unavailable during a certain time this week.
Given that, maybe it's worth the extra effort to test on a separate instance of the service. @murphyke, can you think of anything that would stop us from running an integration test on a dummy data models repo using test instances of the data models service and data-models-sqlalchemy? I believe all the relevant configuration is easily modified. The test instances won't have SSL, but I don't think that will be a problem, do you?
No, I don't think that would be a problem.
@aaron0browne I think you suggested merging this PR and then building and pushing the Docker image based on that for integration testing. I will do that this morning, per your devops cookbook docs for DMS and DMSA. Then, assuming testing works, I'll change the DMSA version from beta to final on master, build and push the final image, and deploy into production ....
This is an implementation of option 3 in issue #51 (webhook from the data-models repo), with the addition of a periodic refresh.

The "Caching data from the data models service" section in `README.md` discusses shortcomings of this approach. It's an improvement over the status quo, but I'm not really happy with it, because of the extra complexity of messing with the threading-based timer and the fact that prompt termination of threads during shutdown is not addressed. I tested code that calls `cancel()` on the timers, but that doesn't help; greater thread-fu would be required. I just hate to go down that rabbit hole when the option of not caching anything at all would be simple and adequate for any imaginable usage load.

Another thing that occurred to me is that the whole reason for the partial caching in the first place is that the data-models-service `/models?format=json` endpoint returns far more data than `/models?format=html` does. I guess that's water under the bridge now, but it does seem that there should be a JSON endpoint that returns what `/models?format=html` does, i.e. just a list of models and versions.
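To make the shutdown concern concrete, here is a minimal sketch of a Timer-based periodic refresher (class and variable names are illustrative, not the actual dmsa code). `cancel()` only stops a timer that has not fired yet, which matches the behavior described above; one partial mitigation, short of real thread coordination, is marking each timer thread as a daemon so a pending timer cannot block interpreter exit:

```python
import threading
import time

class PeriodicRefresher:
    """Re-run refresh_func every `interval` seconds via threading.Timer."""

    def __init__(self, refresh_func, interval):
        self.refresh_func = refresh_func
        self.interval = interval
        self._timer = None
        self._stopped = False

    def _run(self):
        if self._stopped:
            return
        self.refresh_func()
        self.start()  # reschedule the next refresh

    def start(self):
        self._timer = threading.Timer(self.interval, self._run)
        self._timer.daemon = True  # do not block process exit
        self._timer.start()

    def stop(self):
        # cancel() only helps if the timer has not fired yet; the
        # _stopped flag breaks the reschedule loop either way.
        self._stopped = True
        if self._timer is not None:
            self._timer.cancel()


refreshes = []
refresher = PeriodicRefresher(lambda: refreshes.append(time.time()),
                              interval=0.05)  # real code would use minutes
refresher.start()
time.sleep(0.3)
refresher.stop()
print(len(refreshes))
```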