I don't remember how we most recently set this up, but it seems like we're using virtualenv:
export VIRTUAL_ENV=/apps/dryad/python_venv/python3.7.9
export PATH=$VIRTUAL_ENV/bin:$PATH
export PYTHONPATH=$VIRTUAL_ENV
# pip install -r requirements.txt installs the Python libraries we need for the Counter app (I believe)
# it may also require some sqlite3 development libraries installed at the OS level; not 100% sure
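Putting the pieces above together, a hedged sketch of the Python setup (the sqlite package name is a guess and varies by OS; this is not a verified procedure):

```shell
# Sketch only: paths match the exports above; the package name is a guess.
# sqlite development headers may be needed first for the Counter app:
#   sudo yum install -y sqlite-devel
export VIRTUAL_ENV=/apps/dryad/python_venv/python3.7.9
export PATH=$VIRTUAL_ENV/bin:$PATH
export PYTHONPATH=$VIRTUAL_ENV
pip install -r requirements.txt
```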
Installing Node/NPM and Yarn for Webpack(er) and React
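I don't know exactly how we install these; one plausible sketch (the Node version and the use of nvm are assumptions, not our actual procedure):

```shell
# Sketch only: Node version and install method here are assumptions.
nvm install 12          # some LTS version compatible with our Webpacker setup
npm install -g yarn     # Yarn for JS dependency management
yarn --version          # sanity check
```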
The Delayed Job daemon accepts jobs to run on the dev, stage-2c, and prod-2c servers. It runs with the Rails environment but as a background daemon. It is stopped and restarted as part of our application deployment.
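The stop/restart during a deploy presumably looks something like this, assuming the standard `bin/delayed_job` script from the delayed_job gem (the app path is hypothetical):

```shell
# Sketch: typical delayed_job daemon control during a deploy.
# App path is hypothetical; bin/delayed_job comes from the delayed_job gem.
cd /apps/dryad/current
RAILS_ENV=production bin/delayed_job stop
RAILS_ENV=production bin/delayed_job start
```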
The Notifier is a stand-alone Ruby script with its own config; it checks the OAI-PMH feed and notifies our application of completed items. It runs from a cron job every minute.
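The crontab entry would be along these lines (the path and script name are hypothetical, only the every-minute schedule is from the notes above):

```shell
# runs the notifier every minute; path and script name are hypothetical
* * * * * cd /apps/dryad/notifier && ruby notifier.rb >> notifier.log 2>&1
```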
https://github.com/CDLUC3/counter-processor has too much info, but the main things to know are: counter-processor/config contains a config with (I think) only one secret, and counter-processor/state is where it maintains its state. It is deployed by checking out the repo and running pip install.
When it runs weekly, it may run for a few days, pushing one core to 100% and using 2-4GB of memory.
We hope to get rid of it (maybe by the end of the year). 🤞
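The checkout-and-pip-install deploy described above is roughly (the requirements filename is an assumption):

```shell
# Sketch of the counter-processor deploy; requirements filename is assumed
git clone https://github.com/CDLUC3/counter-processor.git
cd counter-processor
pip install -r requirements.txt
# config/ holds the config (with its one secret); state/ holds running state
```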
As for installing SOLR itself, I believe it's mostly just downloaded and extracted, then run with Java. https://confluence.ucop.edu/pages/viewpage.action?pageId=174850144 is a very old document. We are currently running solr-8.1.1, but even that could probably be updated, and we don't really have a deploy process for it.
It stores our search data somewhere within its directories (not sure exactly where), but we can regenerate the search data from a Rails rake task if needed.
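The download-and-extract install would look roughly like this (the mirror URL is illustrative; the `bin/solr` start script is standard in the Solr tarball):

```shell
# Sketch of the download-and-extract SOLR install
wget https://archive.apache.org/dist/lucene/solr/8.1.1/solr-8.1.1.tgz
tar xzf solr-8.1.1.tgz
solr-8.1.1/bin/solr start    # needs Java on the PATH
```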
S3 storage for temporary files before uploading them to Merritt or Zenodo (managed by Dryad)
It doesn't require much automated setup, but if we change the web application's user-visible domain names, we need to update the CORS information in the bucket policies so that S3 will accept presigned uploads from our users.
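A sketch of that CORS update via the AWS CLI; the bucket name and allowed origin here are placeholders, and the exact rules we actually use would need to be checked:

```shell
# Sketch: bucket name and origin are placeholders, not our real config
cat > cors.json <<'EOF'
{
  "CORSRules": [
    {
      "AllowedOrigins": ["https://datadryad.org"],
      "AllowedMethods": ["GET", "PUT", "POST"],
      "AllowedHeaders": ["*"]
    }
  ]
}
EOF
aws s3api put-bucket-cors --bucket our-upload-bucket \
  --cors-configuration file://cors.json
```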
OBSOLETE: EFS-mounted storage is where we stored files that users were uploading before submitting to Merritt. Before we decommission it, we need to go through whatever is still stored in it and verify there is nothing we need, or copy it elsewhere.
Current Configuration
We are currently using the Rails 5.2 encrypted secrets/credentials workflow and have the key stored in SSM for decrypting the secrets.
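Fetching that key from SSM at deploy time presumably looks something like this (the parameter name is hypothetical):

```shell
# Sketch: pull the Rails master key from SSM; parameter name is hypothetical
export RAILS_MASTER_KEY=$(aws ssm get-parameter \
  --name /dryad/rails/master_key --with-decryption \
  --query Parameter.Value --output text)
```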
The notifier has configuration, but no secrets.
The counter processor has configuration and one secret that can be passed in as an environment variable or be written to a file on the server.
SOLR has configuration but no secrets.
Delayed Job shares the config and secrets with the Rails application since it runs under that environment.
Most of the configuration doesn't need to be secret; the app pulls the secrets out of the credentials.yml.enc file into the YAML at runtime.
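That runtime pull presumably uses Rails' ERB-evaluated config files; a sketch, with hypothetical key names (not our actual config):

```yaml
# Sketch of an app config YAML: non-secret values inline, secrets pulled
# from credentials.yml.enc via ERB at load time. Key names are hypothetical.
production:
  merritt_host: merritt.cdlib.org
  merritt_password: <%= Rails.application.credentials.merritt_password %>
```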
There are approximately 80 different secrets in that file since we're treating account names as well as credentials as secret. We can probably trim that back to around 50 items now that we have simplified the UC campus configuration with the Merritt team.