raquelalegre opened this issue 7 years ago
Does that restriction mean underscores aren't allowed?
I have only checked with `-` and it didn't work on Terraform, although it is actually accepted by Azure. I'll check with `_` now. I can't remember the error message right now.
I was wrong :) Those restrictions were not for all the resources, just the storage account and the storage container, which are not named by the user, so there's no need to check for symbols in resource names. There's probably a minimum and maximum length for some of the parameters though. I need to check in the Azure portal.
Here are the naming rules and restrictions for Azure:
https://docs.microsoft.com/en-us/azure/architecture/best-practices/naming-conventions
It doesn't have everything though - for example, VM usernames, hostnames or passwords. I'll compile these when we get there.
See also comments on https://github.com/UCL-CloudLabs/user-portal/pull/8#pullrequestreview-51757162
It might be worth seeing if check constraints can represent this in the DB. Though we might want to check with the form validator anyway to give nicer error messages.
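For illustration, a minimal sketch of what the two layers could look like, assuming SQLAlchemy for the model and WTForms for the form (the model and field names here are hypothetical, and the Postgres-style regex in the check constraint is an assumption):

```python
from sqlalchemy import CheckConstraint, Column, Integer, String
from sqlalchemy.orm import declarative_base
from wtforms import Form, StringField
from wtforms.validators import Regexp

Base = declarative_base()

# Letters and digits only, matching what Terraform accepts for resource names.
NAME_RE = r"^[A-Za-z0-9]+$"


class Host(Base):
    __tablename__ = "host"
    id = Column(Integer, primary_key=True)
    base_name = Column(String(32), nullable=False)
    # DB-level safety net (Postgres regex syntax); the exact rules are still TBD.
    __table_args__ = (
        CheckConstraint("base_name ~ '^[A-Za-z0-9]+$'", name="ck_host_base_name"),
    )


class HostForm(Form):
    # Form-level check, so the user sees a friendly message rather than a DB error.
    base_name = StringField(
        "Base name",
        validators=[Regexp(NAME_RE, message="Use letters and numbers only.")],
    )
```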
It's annoying that Terraform is more restrictive than Azure here :(
It's also annoying that Azure has different rules for different types of resource. The storage account is particularly tricky, since that needs to be a globally unique alphanumeric identifier. Their suggestion is to use a UUID generator for it. (So we could use `uuid.uuid5(uuid.NAMESPACE_DNS, host.basic_url)`.)
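A sketch of what that could look like (the `host.basic_url` input is from the suggestion above; the `sa` prefix and the 24-character truncation are my assumptions, based on storage account names being limited to lowercase letters and digits):

```python
import uuid


def storage_account_name(basic_url: str) -> str:
    """Deterministic name for the storage account, derived from the host URL.

    Storage account names must be lowercase letters and digits only and at
    most 24 characters, so drop the hyphens from the UUID and truncate.
    """
    raw = uuid.uuid5(uuid.NAMESPACE_DNS, basic_url).hex  # 32 lowercase hex digits
    return ("sa" + raw)[:24]
```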
We may want to consider being more flexible in what the user enters as (e.g.) DNS name, since the only real restriction there is what UCL's DNS allows, and then just strip disallowed characters when passing to Terraform/Azure in different places.
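Something along these lines, as a rough sketch of the stripping step (the exact character set per resource type still needs checking):

```python
import re


def strip_for_dns(name: str) -> str:
    """Drop anything we can't safely pass on as a DNS label.

    Keeps lowercase letters, digits and hyphens, and trims hyphens from the
    ends; the user can still type something friendlier in the form.
    """
    cleaned = re.sub(r"[^a-z0-9-]", "", name.lower())
    return cleaned.strip("-")
```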
Related to this, we can think about whether we should restrict the maximum length allowed for the base name, now that the Azure hostname also includes the "ucl" prefix and a randomised component (after #50). This affects two places:
Semi-related: since the maximum name length for Windows VMs is 16 characters but the name only needs to be unique within the resource group, we can give all VMs a fixed name, e.g. "vm", and only pass the name specified in the form to the resource group. This is easy to do by changing the Terraform template.
The bit referred to in the previous comment was done as part of #72. I didn't make the change directly in the TF template, but in `names.py`. There are a few calls to `vm_name()` from elsewhere that use `names.py` which are not tested (start/stop/restart machine, I think) and I didn't want to risk it. `names.py` needs refactoring, so we will get to this again in the future.
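For the record, the idea is roughly this (not the real `names.py` code, just an illustrative sketch of the fixed-VM-name approach):

```python
# The user-supplied base name only feeds the resource group; the VM itself
# gets a short constant name, which keeps it well under the Windows limit
# and is fine because VM names only need to be unique within the group.
FIXED_VM_NAME = "vm"


def resource_group_name(base_name):
    return base_name


def vm_name(base_name):
    return FIXED_VM_NAME
```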
Another thing to maybe check in the repository field: if the user specifies the SSH-based connection string, the deployment might hang because of git (or ssh?) asking for permission to add the GitHub server's key when cloning the repository. We could check whether the string is `git@...` and, if so, flag it up or convert it to the HTTPS one. Alternatively, we could make sure that we have the right keys accepted, but that's not very robust. At the very least, some info about what form the string should have is probably a good idea.
azurerm_virtual_machine.vm (remote-exec): Cloning into 'repo'...
azurerm_virtual_machine.vm (remote-exec): The authenticity of host 'github.com (192.30.253.113)' can't be established.
azurerm_virtual_machine.vm (remote-exec): RSA key fingerprint is SHA256:nThbg6kXUpJWGl7E1IGOCspRomTxdCARLviKw6E5SY8.
azurerm_virtual_machine.vm (remote-exec): Are you sure you want to continue connecting (yes/no)?
Converting `git@` to `https` would only work if the repo is not private.
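If we do go down the conversion route for public repos, a minimal sketch (GitHub-style SSH URLs assumed; other hosts would need checking):

```python
import re


def to_https_clone_url(repo: str) -> str:
    """Rewrite an SSH-style clone URL as HTTPS, or return it unchanged.

    Only helps for public repos: a private repo cloned over HTTPS would
    prompt for credentials and hang the deployment just the same.
    """
    match = re.match(r"^git@([^:]+):(.+?)(\.git)?$", repo)
    if match:
        host, path = match.group(1), match.group(2)
        return f"https://{host}/{path}.git"
    return repo


# to_https_clone_url("git@github.com:UCL-CloudLabs/user-portal.git")
# -> "https://github.com/UCL-CloudLabs/user-portal.git"
```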
If the user doesn't configure the appropriate keys, I guess we should catch the error, report it in the log and show the deployment as failed. We can include some guidance in the error message, saying they can SSH into the machine and do the deployment manually, or add the correct SSH keys in their user config and redeploy.
We don't support private repos at the moment, do we? Even if they do give the right keys (for GitHub to allow the connection), it will still fail because the CloudLabs server will be waiting for authorisation to connect to GitHub's server.
I meant to add the output to the previous comment, added now.
Oops. I forgot about the problem with the private repos. Ok, never mind what I said, private repos are not supported until we configure GH's OAuth settings for CloudLabs.
We can hardcode the `https://` part of the URL in the input field, so users know they have to write the repo URL in that form. I'm not sure if other hosting services like SF or BitBucket have the same kind of URLs though, so we need to check that first so we don't make it too restrictive.
Just checking - if the repo URL is wrong it will just make the deployment fail and show an error status, right?
Maybe it's worth considering, in a separate issue, that the deployment might hang for other reasons (like something related to the user's deployment script, which is out of our control) and adding some sort of timeout check when we run `terraform apply`. It'll be a bit hard to know when to time out because it largely depends on whatever the user is deploying.
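As a very rough sketch of the timeout idea (the flag set and the 30-minute value are placeholders, not how CloudLabs currently invokes Terraform):

```python
import subprocess

TERRAFORM_TIMEOUT = 30 * 60  # seconds; the right value is an open question


def run_terraform_apply(working_dir):
    """Run `terraform apply` with a hard timeout so a hung git clone or a
    stuck user deployment script can't block the worker forever."""
    try:
        result = subprocess.run(
            ["terraform", "apply", "-auto-approve", "-no-color"],
            cwd=working_dir,
            capture_output=True,
            text=True,
            timeout=TERRAFORM_TIMEOUT,
        )
    except subprocess.TimeoutExpired:
        # Mark the deployment as failed and surface whatever output we got.
        return False
    return result.returncode == 0
```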
I'm not sure how much we have done on URL prefix validation. We should check several things:
- No `.` in the prefix, otherwise it might mess up the DNS mapping on UCL's side if they set up a wildcard certificate for `*.cloudlabs.rc.ucl.ac.uk`, which would cover `x.cloudlabs.rc.ucl.ac.uk` but wouldn't cover `x.y.cloudlabs.rc.ucl.ac.uk`.
- No `-` or `_` at the beginning of the DNS prefix. We could do this from the form so users don't have to wait for Terraform's error message (see the sketch after this list). More details on DNS mapping here: https://github.com/UCL-RITS/CloudLabs/issues/34.
- Terraform doesn't allow resources with names that contain anything other than letters or numbers.
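Pulling those together, a sketch of a form-side check (this consolidated regex is my reading of the rules above and would need confirming against Terraform and Azure):

```python
import re

# No dots (the wildcard certificate only covers one level), no leading
# '-' or '_', and otherwise just lowercase letters, digits and hyphens.
DNS_PREFIX_RE = re.compile(r"^[a-z0-9][a-z0-9-]*$")


def valid_dns_prefix(prefix: str) -> bool:
    return bool(DNS_PREFIX_RE.match(prefix.lower()))
```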
Password restrictions: the allowed special characters are `@ # $ % ^ & * - _ ! + = [ ] { } | \ : ' , . ? / ~ " ( ) ;`
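For the form side, a sketch that only checks the character set (Azure also has length and complexity rules, which this deliberately ignores):

```python
import re

# The special characters quoted above, escaped for use in a character class.
SPECIALS = r"@#$%^&*\-_!+=\[\]{}|\\:',.?/~\"();"

PASSWORD_RE = re.compile(rf"^[A-Za-z0-9{SPECIALS}]+$")


def password_chars_ok(password: str) -> bool:
    return bool(PASSWORD_RE.match(password))
```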