Closed: jsfeutz closed this issue 8 years ago
Figured it out: the user vsadmin didn't have enough access/permissions.
Cool. Was there enough information for you to hunt this down from the netappdvp.log?
(Or did you feel like you were flying blind and stumbled on it some other way?)
Never found a log; `find / -name "netapp*"` came up empty.
Our storage guy gave full admin access as a last-ditch effort to see if it would work. It did. Next week we'll be working through what permissions are actually needed. If you could provide that list, it would be helpful.
Some other questions: is there a performance benefit to using the driver vs. mounting the file system via NFS on the host? Our concern with the driver is that it's a daemon running on each host that needs to be configured and monitored. We're looking for the best solution for a large-scale deployment of things like ELK, where we may have 200 hosts.
Thanks again for responding. I work at a large bank, so I'm starting to go through storage management and performance options for containers. J.
Huh, should be here: /var/log/netappdvp.log
The log filename might be different if you specified a volume driver name when you started netappdvp.
The DVP isn't in the data path, so one of the things it's doing is performing that mount for you under the covers. There should be no difference in performance, but a significant difference in ease of use/integration into the ecosystem. Perfectly appropriate for larger deployments.
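To make that concrete, here's a rough sketch of the two approaches, assuming the default driver name `netapp`. The volume name, export path, and `-o size` option are illustrative only (options are driver-specific), and the data LIF address is taken from the config posted later in this thread:

```
# With the plugin: Docker asks nDVP to provision and mount the NFS volume for you
docker volume create -d netapp --name elkdata -o size=10g
docker run --rm -v elkdata:/usr/share/elasticsearch/data elasticsearch

# Without the plugin: the same mount done by hand on every host
sudo mkdir -p /mnt/elkdata
sudo mount -t nfs 10.134.23.25:/elkdata /mnt/elkdata
docker run --rm -v /mnt/elkdata:/usr/share/elasticsearch/data elasticsearch
```

Either way the I/O goes straight from the host to the data LIF; the plugin just automates the provisioning and mounting.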
Love to talk more about what you're doing. Hit me up over email and we can chat more: mueller@netapp.com.
Cheers!
Hi. Any chance you have some kind of instructions on which permissions are needed on the NetApp side for this to work? If giving admin access is a problem, that is. Thanks
@paveljeloudovski This set of permissions was produced around the 1.0 release. It may need updating (and we need to properly document it), but hopefully it helps some:
```
security login role create -vserver [VSERVER] -role ndvp_role -cmddirname DEFAULT -access none

# grant common nDVP permissions
security login role create -vserver [VSERVER] -role ndvp_role -cmddirname "event generate-autosupport-log" -access all
security login role create -vserver [VSERVER] -role ndvp_role -cmddirname "network interface" -access readonly
security login role create -vserver [VSERVER] -role ndvp_role -cmddirname "version" -access readonly
security login role create -vserver [VSERVER] -role ndvp_role -cmddirname "vserver" -access readonly
security login role create -vserver [VSERVER] -role ndvp_role -cmddirname "vserver nfs show" -access readonly
security login role create -vserver [VSERVER] -role ndvp_role -cmddirname "volume" -access all

# grant iscsi nDVP permissions
security login role create -vserver [VSERVER] -role ndvp_role -cmddirname "vserver iscsi show" -access readonly
security login role create -vserver [VSERVER] -role ndvp_role -cmddirname "lun" -access all

# create a new nDVP user with nDVP role
security login create -vserver [VSERVER] -username ndvp_user -role ndvp_role -application ontapi -authmethod password
```
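Not part of the list above, but if you want to sanity-check the result, ONTAP's standard `show` commands should confirm the role and user exist (a hedged sketch; parameter names mirror the `create` commands above):

```
security login role show -vserver [VSERVER] -role ndvp_role
security login show -vserver [VSERVER]
```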
Thanks @adkerr. We got it working.
One note, though, that might come in handy for anyone hitting a similar problem: the permissions must be set at the cluster level for the plugin to work, i.e. `security login role create -vserver [CLUSTER]` instead of `-vserver [VSERVER]`.
@paveljeloudovski Glad to hear it's working for you. Yes, if your user is cluster-scoped then you must use the cluster name in the vserver argument. If you were creating a vserver-scoped user then you would use the vserver where the user was created. Sorry for not making that clear, will make sure it's spelled out clearly when we update the docs.
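For anyone skimming, the cluster-scoped variant described above is just the same commands with the cluster (admin SVM) name substituted, e.g.:

```
# cluster-scoped: use the cluster name, not the data SVM, in every command
security login role create -vserver [CLUSTER] -role ndvp_role -cmddirname DEFAULT -access none
# ...repeat the remaining role grants with [CLUSTER]...
security login create -vserver [CLUSTER] -username ndvp_user -role ndvp_role -application ontapi -authmethod password
```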
So I'm trying to determine whether it's a client issue or a server issue.
Client: CentOS 7. Tried the server with CentOS 7, Docker version 1.11.1, build 5604cbe, and also with CentOS 7, Docker version 1.10.3, build 20f81dd.
Config:

```json
{
    "version": 1,
    "storageDriverName": "ontap-nas",
    "managementLIF": "10.134.123.12",
    "dataLIF": "10.134.23.25",
    "svm": "o2-vm-01",
    "username": "vsadmin",
    "password": "**",
    "aggregate": "o2_data1"
}
```
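In case it helps with narrowing down client vs. server: once the config is in place, a quick smoke test from the Docker host exercises the whole path. This assumes the plugin is running under its default driver name `netapp`; the volume name is arbitrary:

```
docker volume create -d netapp --name ndvp_smoke_test
docker volume ls | grep ndvp_smoke_test
docker run --rm -v ndvp_smoke_test:/mnt alpine sh -c 'echo ok > /mnt/ok && cat /mnt/ok'
docker volume rm ndvp_smoke_test
```

If the `volume create` step fails, the error (and the /var/log/netappdvp.log mentioned above) should point at the ONTAP side; if the mount inside `docker run` fails, it's more likely a host/NFS-client issue.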