This is a BOSH release for YugabyteDB.
TLS for server-to-server ("node-to-node", as in, traffic between `tserver` and/or `master` nodes) is on and required by default, i.e. `allow_insecure_connections: false` by default. You can modify these properties using operator files.
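For example, if you wanted to relax that requirement, an ops file along these lines should do it. The property path and job/instance-group names here are assumptions on my part; check the job specs for the real ones:

```
# hypothetical ops file: allow insecure node-to-node connections
- type: replace
  path: /instance_groups/name=yb-master/jobs/name=yb-master/properties/allow_insecure_connections?
  value: true
- type: replace
  path: /instance_groups/name=yb-tserver/jobs/name=yb-tserver/properties/allow_insecure_connections?
  value: true
```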
We use BOSH's credhub integration to generate individual certificates for both the `master` and `tserver` instance groups, leveraging wildcard BOSH DNS values for the certificate SANs, meaning the actual hostname DNS values are handled automatically. Since they're both signed by the same CA (by default located in credhub under `/services/tls_ca`, the CA for service instances which nearly all other service offerings in Cloud Foundry leverage for TLS), and each has the same `common_name`, they should be compatible with one another.
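Conceptually, the generated certificates look something like the following `variables` entries in a BOSH manifest. This is only a sketch: the variable names and the wildcard BOSH DNS patterns are illustrative, not the exact values used by this release.

```
variables:
- name: yb_master_tls
  type: certificate
  options:
    ca: /services/tls_ca                      # the shared service-instance CA in credhub
    common_name: yugabyte
    alternative_names:
    - "*.yb-master.default.yugabyte.bosh"     # wildcard BOSH DNS for the master instance group
- name: yb_tserver_tls
  type: certificate
  options:
    ca: /services/tls_ca
    common_name: yugabyte                     # same common_name, so the certs stay mutually compatible
    alternative_names:
    - "*.yb-tserver.default.yugabyte.bosh"    # wildcard BOSH DNS for the tserver instance group
```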
It's a bit unclear to me how `common_name` and `alternative_names` should be configured. Is it completely arbitrary? Does the file name actually matter? Does it have to be related to the DNS hostname of each node instance? We'll all figure it out together 💖
For the moment we'll assume it's looking for the name to be the configured hostname of the individual host. We can assume this because of the following log line from `/var/vcap/sys/log/yb-master/yb-master.INFO`:
```
tail yb-master.INFO
...
I0305 00:19:30.295537 6 secure.cc:102] Certs directory: /var/vcap/jobs/yb-master/config/certs, name: q-m90323n3s0.q-g88658.bosh
```
TLS for client-to-server (as in, from a client application using the universe) is on, but not required by default, i.e. `allow_insecure_connections: true` by default for optional use of TLS from clients. You can modify these properties using operator files.

Note: YEDIS does not support client-to-server TLS.
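If you want to require TLS from clients as well, an ops file along these lines should work (again, the property path and names are assumptions; consult the job spec):

```
# hypothetical ops file: require TLS for client-to-server connections
- type: replace
  path: /instance_groups/name=yb-tserver/jobs/name=yb-tserver/properties/allow_insecure_connections?
  value: false
```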
You might see lines like this in current configurations:

```
--rpc_bind_addresses=<%= spec.address %>:<%= p("rpc_bind_addresses_port") %>
--server_broadcast_addresses=<%= spec.address %>:<%= p("rpc_bind_addresses_port") %>
```
Notice how `--server_broadcast_addresses` is using an address with `rpc_bind_addresses_port` as the port. This is because the differences between `rpc_bind_addresses_port` and something like `server_broadcast_addresses_port` are too small at the moment to really make a difference, so for the time being they're collapsed into one, and only `rpc_bind_addresses_port` is referenced. Is that correct? Honestly, not 100% sure. Actually, I'm 100% sure it isn't correct or ideal. But for the time being it works, and you know what, we'll get there.
Certain flags (but not all) are defined as their own property with their own defaults, descriptions, opsfiles, etc. These properties are (somewhat arbitrarily) important enough to stand out. It's my opinion that flags important enough to make a difference to a consumer of this release should receive their own property, with reasonable defaults and a description, whereas `gflags` acts as a backup and a catch-all.
There are many flags which should have reasonable defaults, and which either are specific to this BOSH release (and thus aren't defined in upstream Yugabyte) or we feel should differ from the defaults selected by upstream Yugabyte. But if we don't define those configuration flags as their own properties in the BOSH job spec, and instead rely on `gflags: {x: y}` to pass in everything, then there's no way (that I'm aware of) for the maintainers of this BOSH release to set default `gflags` in such a way that consumers could selectively override individual flags. For example: if someone wanted to override one flag, like `placement_cloud`, then all the defaults we set in `gflags` in the job spec file would back off and deactivate. A consumer would have to redefine all the defaults we set (if they so chose) in their `gflags` override, in addition to the one flag they wanted to change.
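Here's a minimal sketch of the problem. Assume the job spec shipped a `gflags` default like the first snippet (these particular defaults are made up for illustration); the consumer override in the second snippet replaces the entire hash, so `placement_region` would silently lose its default:

```
# hypothetical default in the job spec (e.g. jobs/yb-master/spec)
properties:
  gflags:
    description: Catch-all hash of gflags passed straight to the process
    default:
      placement_cloud: bosh
      placement_region: default-region

# consumer's override in the deployment manifest -- BOSH replaces the whole
# hash rather than merging it, so only placement_cloud survives
properties:
  gflags:
    placement_cloud: gcp
```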
There is a default YCQL superadmin with the credentials `cassandra`/`cassandra`. The password for the `cassandra` user can be rotated in a two-step process. You'll need to configure the `cassandra_password_old` property, which will be used while attempting to set the new password to `cassandra_password`. Once the new password of `cassandra_password` is set and in use, you can remove the opsfile for `cassandra_password_old` at your discretion.
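A hedged sketch of what the two-step rotation could look like as ops files (property paths and job names are assumptions; check the job spec for the real ones):

```
# step 1: deploy with both the old and the new password set
- type: replace
  path: /instance_groups/name=yb-tserver/jobs/name=yb-tserver/properties/cassandra_password_old?
  value: cassandra
- type: replace
  path: /instance_groups/name=yb-tserver/jobs/name=yb-tserver/properties/cassandra_password?
  value: ((ycql_cassandra_password))

# step 2: once the new password is in use, drop the cassandra_password_old
# ops file (or the property itself) on a later deploy
```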
Now, with that said, keep in mind: the default manifest in `manifests/yugabyte.yml` will automatically change the `cassandra` user password to an autogenerated password of `((ycql_cassandra_password))`. The `cassandra` user is then used for other internal administrative tasks, like provisioning other users, etc. It also provides a default "superuser" of `admin` with a password of `((ycql_superuser_admin_password))`. The intent is that this user be used by consuming applications instead of `cassandra`/`cassandra`. That's the current ideal, at least.
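For reference, autogenerating those credentials only takes declaring them as password variables in the manifest; roughly like this (the default manifest may structure things differently):

```
variables:
- name: ycql_cassandra_password
  type: password
- name: ycql_superuser_admin_password
  type: password
```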
In order to change the password of a user through `ycql.databases.superusers[*].password: some_password`, just change the value of `some_password` in place. The root `cassandra` user is used internally to `ALTER` those superusers, so you don't need to worry about doing fancy swapouts of those passwords. Just change it in the deployment manifest, and it'll rotate on the next deploy.
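For example, a rotation could look like this in the manifest. The surrounding structure is my guess based on the property path above; only the `password` value needs to change between deploys:

```
properties:
  ycql:
    databases:
      superusers:
      - name: admin                                    # hypothetical superuser entry
        password: ((ycql_superuser_admin_password))    # change this value in place to rotate
```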
Having a fully automated release process is a goal. But we want to make sure it's done well, and we'd like to have it done using GitHub Actions if possible. Until then, here's the general workflow. We're assuming any `bosh add-blobs` and `bosh upload-blobs` commands have been `git commit`'ed if blobs are changing, and now we're on the release process.
NOTE: before cutting a new release, make sure that the contents of `src/yugabyte-additional/post_install.sh` have proper values of `ORIG_BREW_HOME` and `ORIG_LEN` and such, depending on the upstream version of `yugabyte` being cut.
```
cd yugabyte-boshrelease

# first of all, your workspace needs to be up to date and clean of dirty commits,
# or else you'll inadvertently commit something to this release
git pull origin main

# to pull all blobs from s3 to the local directory, if necessary
bosh sync-blobs

git checkout -b release-x.y.z

# place the release tgz in your /tmp dir in order to calculate a shasum on it,
# and to upload it to a github release
bosh create-release --final --version=x.y.z --tarball=/tmp/yugabyte-x.y.z.tgz

# this will be used to update manifests/versions.yml
shasum -a 1 /tmp/yugabyte-x.y.z.tgz
```

Use that shasum value to update `manifests/versions.yml`:

```
yugabyte_boshrelease_sha1: 582c112d4621361a031e530885f5653868f1bbd0
yugabyte_boshrelease_version: x.y.z
```

Then git commit all of this to the branch, push it, and squash 'n merge it into main:

```
git add -A
git commit -m "release-x.y.z"
git push origin release-x.y.z
```
Now for making the release available as an actual GitHub release:
```
# after squashing and merging into main...
git checkout main
git pull origin main

# notice the lack of a 'v' prefix. not a fan of it.
git tag x.y.z
git push origin --tags
```
Then go to the GitHub releases page, click on the release for the newly created tag, and configure the release with a title, release notes, and an asset copy of the tarball from `/tmp/yugabyte-x.y.z.tgz`. Voilà, you're set.
Ideas, feedback, bug reports, etc. are all welcome, but by no means guaranteed to be implemented, responded to, or merged.