Hi, this looks great, I am about to test it out (thank you for creating it), but I have some questions about the appsettings values:
(1) Driver:Type - Node or Controller: does it matter which is used? I am guessing it should be Controller but am not sure.
(2) Driver:Type:Controller - is this the path to the CSV root(s)? i.e. should I be putting c:\clusterstorage\MyCSV or something here?
(3) If the Administrator account is disabled on my HV hosts, can I specify a local admin account name as the value for Username?
Thanks
1) Both need to be running: one instance of the "node" on each node and a singleton of the "controller" somewhere. See 'deploy/kubernetes-1.15/csi-hyperv' for a demo k8s config. The best approach is to use ENV variables to set the required config values (a rough sketch of the node/controller split follows below this answer block).
2) I spent a lot of time on the gRPC-UDS-h2c stuff and saved some time by skipping the full configuration implementation. Currently a hyper-converged failover cluster is supported. Windows requires the CSV to be mounted at 'c:\clusterstorage\MyCSV', but a customized setup would be possible, even a share-nothing server array where the vhdx files could be transferred between servers. Please describe your setup and I will make the necessary changes to the config structures for you.
3) Yes, you can use any username; sshd will impersonate it with its SYSTEM rights.
sshd has a different auth schema: it works over public keys. The public keys are registered either in a central admin file '$env:ProgramData\ssh\administrators_authorized_keys' or under the local Windows profile folder (.ssh/...). A shell opened with this authentication will run as the sshd process user, i.e. SYSTEM, and sshd can only impersonate the local Windows user without unlocking user secrets (passwords, sessions).
There are a few ways to enable CredSSP and/or Kerberos, but they are a pain to set up from a Linux container.
If you need something or run into problems, feel free to write.
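For orientation on point 1), the node/controller split could look roughly like this. This is a hedged sketch, not the repo's actual manifests; the names, labels, and image reference are illustrative, and the real demo config lives in deploy/kubernetes-1.15/csi-hyperv:

apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: csi-hyperv-node            # illustrative name
spec:
  selector:
    matchLabels: { app: csi-hyperv-node }
  template:
    metadata:
      labels: { app: csi-hyperv-node }
    spec:
      containers:
        - name: driver
          image: myrepo/hyperv-csi-driver   # illustrative image reference
          env:
            - name: DRIVER__TYPE
              value: "Node"                 # one plugin instance per node
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: csi-hyperv-controller      # illustrative name
spec:
  replicas: 1                      # the controller runs as a singleton
  selector:
    matchLabels: { app: csi-hyperv-controller }
  template:
    metadata:
      labels: { app: csi-hyperv-controller }
    spec:
      containers:
        - name: driver
          image: myrepo/hyperv-csi-driver   # illustrative image reference
          env:
            - name: DRIVER__TYPE
              value: "Controller"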
Hi,
I'd planned to update the appsettings.json, build images from your code with that updated appsettings.json, then edit the image location in the yamls to use my private repo, but happy to use environment variables in the deploy yamls instead.
My setup is a 3 node 2019 HV cluster, with a CSV mounted at c:\clusterstorage\csi\ on them for storage. I have the ed25519 public key in administrators_authorized_keys on the HV hosts and tested ssh works using localadminaccount@hvhost.test.local using the ed25519 private key that exists on all the Centos VMs. Could you possibly provide an example of how I would edit the deploy yamls (or another method to pass the env vars, if you recommend that) to pass in the storage path and local admin account name?
Thanks again :)
Only the controller needs to have the SSH key. The SSH key is stored in a k8s secret and mounted to the default ssh-client location, but because the ssh client requires write access to the known_hosts file, a workaround via an init container is required.
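For illustration, one way such an init-container workaround can look. This is a minimal pod-spec fragment under assumed names; the secret name 'csi-hyperv-key' and the mount paths are hypothetical, not taken from the repo:

initContainers:
  - name: prepare-ssh
    image: busybox
    # Copy the key out of the read-only secret mount into a writable
    # emptyDir, so the ssh client can later create known_hosts next to it.
    command: ["sh", "-c", "cp /ssh-secret/id_ed25519 /root/.ssh/ && chmod 600 /root/.ssh/id_ed25519"]
    volumeMounts:
      - name: ssh-secret
        mountPath: /ssh-secret
        readOnly: true
      - name: ssh-dir
        mountPath: /root/.ssh
volumes:
  - name: ssh-secret
    secret:
      secretName: csi-hyperv-key   # hypothetical secret name
  - name: ssh-dir
    emptyDir: {}                   # writable home for key + known_hosts

The controller container would then mount the same ssh-dir volume at /root/.ssh.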
You can just override the username with the ENV variable DRIVER__USERNAME. The double underscore is the default hierarchy separator in the .NET Core configuration extensions, and ENV values take precedence over config values.
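For illustration, the mapping follows the standard .NET Core convention; the value shown is just the account name from this thread's setup:

# DRIVER__USERNAME maps to the config key Driver:UserName,
# i.e. the same setting as "Driver": { "UserName": ... } in appsettings.json.
env:
  - name: DRIVER__USERNAME
    value: "localadminaccount"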
The current hardcoded magic path is:
C:\ClusterStorage\<Storage>\Volumes\<VolumeId>.vhdx
and the temporarily hardcoded storage name is hv05.
https://github.com/Zetanova/hyperv-csi-driver/blob/488cbc6c709ff1a486b47fe294d3ac453db4f62f/src/hyperv-csi-driver/Infrastructure/HypervHost.cs#L565-L588
If you create a normal folder Volumes at:
c:\clusterstorage\csi\Volumes
and specify the csi-parameter "Storage": "csi", it will create or find the vhdx files there.
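For example, a StorageClass passing that parameter might look like this. This is a hedged sketch: the class name matches the one used later in this thread, and the provisioner is the driver name from the PV example below:

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: csi-hyperv-sc
provisioner: eu.zetanova.csi.hyperv
parameters:
  Storage: "csi"   # CSV name, i.e. c:\clusterstorage\csi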
I will make a PR now to change the hv05 magic string into a config value.
To import existing disks from a single Docker host, use:
apiVersion: v1
kind: PersistentVolume
metadata:
  name: mssql-01
spec:
  storageClassName: csi-hyperv-sc
  accessModes: [ "ReadWriteOnce" ]
  persistentVolumeReclaimPolicy: Delete
  volumeMode: Filesystem
  csi:
    driver: eu.zetanova.csi.hyperv
    volumeHandle: mssql-01
    fsType: ext4
    readOnly: false
    volumeAttributes:
      Id: 7F243007-D5E6-4F2A-9316-3B55D5F6513B
      Path: C:\ClusterStorage\hv05\Volumes\mssql-01.vhdx
      Storage: hv05
  capacity:
    storage: 10Gi
  claimRef:
    name: data-mssql-0
    namespace: myNamespace
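For completeness, the claim that the claimRef above reserves this PV for would look roughly like this; a sketch inferred from the PV fields, not taken from the repo:

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: data-mssql-0        # must match spec.claimRef.name of the PV
  namespace: myNamespace    # must match spec.claimRef.namespace
spec:
  storageClassName: csi-hyperv-sc
  accessModes: [ "ReadWriteOnce" ]
  resources:
    requests:
      storage: 10Gi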
Fantastic, thanks; I will rename the csi dir to hv05 on the CSV, put the env values into my controller.yaml as you specified, and let you know how the testing goes.
I have now made a small improvement: for the controller, use the new config option Driver.DefaultStorage, and don't forget to create a Volumes folder in the root of your CSV:
env:
  - name: DRIVER__TYPE
    value: "Controller"
  - name: DRIVER__USERNAME
    value: "YourUserName"
  - name: DRIVER__DEFAULTSTORAGE
    value: "csi"
  - name: CSI_ENDPOINT
    value: /csi/hyperv.sock
It is possible to change the default storage later; old volumes will not get affected.
Sorry for the delayed response (and the 900 edits I have made to this post):
To test this, I built a 1.17.5 Kubernetes cluster on CentOS 7 VMs, on a Windows 2019 Hyper-V cluster.
I think there is a small typo on line 47 of Startup.cs, in the Startup.ConfigureServices() method. If I change line 47 to this:
if (string.IsNullOrEmpty(opt.UserName))
then my PVC gets created with the username from the env var. Without that change, sshd on the Hyper-V hosts shows that the username administrator is being used to authenticate.
The DefaultStorage path change is working beautifully, thank you again for making it.
Thx, it is corrected. Is it working for you now?
Yeah, I have a PVC bound using the csi-hyperv-sc storageclass on the cluster, with the vhdx created on the CSV.
At the moment I can only provision storage in the csi-hyperv namespace, but (1) that is a separate issue and (2) there is every chance that is a problem of my own making, rather than with the driver.
So I'll close this issue, thanks once again for your work - let me know if there is any way I can donate to the project, I owe you at least a few beers for sure.
Very good.
PVs and StorageClasses are k8s-global (namespaceless); only their claims (PVCs) have a namespace.