Closed: rlskoeser closed this 1 month ago
Sorry for potential confusion, I should probably have put this https://github.com/Princeton-CDH/htr2hpc/issues/11#issuecomment-2402903627 on the PR instead of the issue
... this version of the code is now running on the test server - you can see the VM status info in the footer, and you can see from the hostname when the load balancer switches you between VMs.
@cmroughan great idea on the structured hidden data, I love it. I refactored slightly so we can use the json_script template tag to generate the json for us.
New version is on the test site, contents look like this:
<script id="vm-stats" type="application/json">
{"hostname": "cdh-test-htr2",
"cpu_count": 2,
"load_average": {"1": 0.20703125, "5": 0.576171875, "15": 0.419921875},
"total_memory": "15.6\u00a0GB",
"available_memory": "12.7\u00a0GB",
"used_memory": 18.9}
</script>
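For reference, a minimal sketch of how the context data behind that block could be gathered with the standard library (the function name and structure here are assumptions, not the PR's actual code; the memory figures in the real output presumably come from something like psutil, which is omitted):

```python
import json
import os
import socket


def vm_stats():
    """Collect basic VM status info (hypothetical helper, stdlib only)."""
    one, five, fifteen = os.getloadavg()  # POSIX-only
    return {
        "hostname": socket.gethostname(),
        "cpu_count": os.cpu_count(),
        "load_average": {"1": one, "5": five, "15": fifteen},
    }


print(json.dumps(vm_stats(), indent=1))
```

With a dict like this in the template context, `{{ vm_stats|json_script:"vm-stats" }}` emits a `<script id="vm-stats" type="application/json">` element like the one above, with the values safely escaped.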
If this works for you, then let's merge this PR and close #11 (we might want to rename that issue, I re-scoped it a bit since you've taken on so much of the assessment work).
I forgot to answer your other questions: a change in the os.uname values would be fairly major, so I agree we should track it separately.
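For the record, a quick sketch of grabbing the os.uname values once so they're on file (field selection here is just a suggestion):

```python
import json
import os

# One-time snapshot of the kernel/OS identity; if these values change,
# that signals a fairly major change to the VM.
uname = os.uname()  # POSIX-only
snapshot = {
    "sysname": uname.sysname,
    "release": uname.release,
    "version": uname.version,
    "machine": uname.machine,
}
print(json.dumps(snapshot, indent=1))
```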
Do you want to create another issue for the other things we're thinking of tracking? I'm not sure how easy it would be to get them programmatically; we might be able to pull some (or all) of them from the Django settings or from ansible when I change the configuration there, but it could turn out to be simpler to just start a spreadsheet and keep track.
@rlskoeser Oh beautiful, I hadn't encountered json_script before. That works perfectly, and grabbing it automatically works as well.
I created an issue for determining how we want to handle the other stat tracking. Otherwise yes, let's grab the os.uname stats once to have them recorded and then come back to that one only when necessary.
@cmroughan thanks for reviewing and for creating the additional issue. I'm going to merge this and mark #11 as complete.
These updates are intended to help with reporting and tracking the status / configuration of the VM (related to #11 ).
Changes include:
This is what it looks like running on my machine:
I included the hostname because I thought it could help us tell if the load balancer "sticky" configuration is working or if it's possible/likely to switch between the two VMs during a session.
Questions/notes: