Progress Report
- `synopsysctl create blackduck native mybd`, `synopsysctl create alert native myalert`, `synopsysctl deploy native`, `synopsysctl update opssight native <flags>`
- `synopsysctl create blackduck native mybd` outputs Black Duck resources, `synopsysctl create blackduck native mybd --database` outputs database resources (NOTE: the flag names might change) - fixed by #556 and #541
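For reference, here's a rough sketch of how that native output might be captured today; this assumes the native commands print the generated manifests to stdout, and the output file names are just placeholders:

```sh
# Capture the Black Duck application resources (assumes manifests are printed to stdout)
synopsysctl create blackduck native mybd > blackduck.yml

# Capture only the database resources (flag name may still change, per the note above)
synopsysctl create blackduck native mybd --database > internal-db.yml
```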
Currently, our instructions for how to install Black Duck manually are basically a big lie: https://synopsys.atlassian.net/wiki/spaces/BDLM/pages/65863959/Installing+Black+Duck+in+clusters+that+are+not+running+the+Synopsys+Operator
There are at least two problems here:

1) There is no PVC information in the current YML output, so the deployment fails if the customer has PVCs enabled. We omitted the PVC information from the YML because we don't want the customer to accidentally delete their persistent volumes when uninstalling Black Duck, but we need to find a solution here.

2) If the customer is using an internal database, they need to start and initialize the database first, and only then start the remaining pods. You can't just run `create -f output.yml` on the whole deployment - it will fail.
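To make problem 1 concrete: with persistence enabled, the customer currently has to create the missing PVCs by hand before applying the output. A minimal sketch of what that might look like; the claim name, namespace, and size below are illustrative placeholders, not the real Black Duck claim names:

```sh
# Create one of the missing persistent volume claims by hand
# (name, namespace, and size are placeholders for illustration only)
cat <<'EOF' | kubectl create -f -
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: blackduck-postgres
  namespace: mybd
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 150Gi
EOF
```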
A solution for this might be to change `--mock-kube` to take the following parameters: `--mock-kube internal-db`, `--mock-kube blackduck`, `--mock-kube blackduck-no-pvc`.
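Spelled out, the proposed outputs might be generated along these lines; the exact command shape is an assumption, the parameter names are the placeholders from above, and stdout redirection is assumed just as in the earlier sketch:

```sh
# Database-only resources, so the db can be started and initialized first
synopsysctl create blackduck native mybd --mock-kube internal-db > internal-db.yml

# Everything else, including the PVCs
synopsysctl create blackduck native mybd --mock-kube blackduck > blackduck.yml

# Same as above but without PVC definitions, for PVC-preserving uninstalls
synopsysctl create blackduck native mybd --mock-kube blackduck-no-pvc > blackduck-no-pvc.yml
```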
We could maybe find better names, but here's the idea. To install the Hub manually, the customer would:

1. Run `create -f internal-db.yml` to create the internal database.
2. Run the db init script.
3. Run `create -f blackduck.yml` to create Black Duck and the PVCs.

If the customer wanted to delete Black Duck and preserve the PVCs, they could use the blackduck-no-pvc output. A rough end-to-end sketch follows.
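Putting the proposal together, the customer-facing flow might look roughly like this. kubectl is assumed (it could equally be oc), the file names match the placeholder outputs above, and the db init step is a stand-in for whatever init script the docs end up pointing at:

```sh
# 1. Create the internal database resources first
kubectl create -f internal-db.yml

# 2. Initialize the database
./init-db.sh   # hypothetical placeholder for the documented db init script

# 3. Create Black Duck itself, including the PVCs
kubectl create -f blackduck.yml

# Later: one way to remove Black Duck while preserving the PVCs would be
# to delete using the output that contains no PVC definitions
kubectl delete -f blackduck-no-pvc.yml
```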