@leonwanghui Is osdslet the controller? I'm getting the above error - what am I missing?
Yes. I think you just enabled keystone authentication but didn't provide a valid username and password. As a first step, I suggest you configure OPENSDS_AUTH_STRATEGY=noauth.
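For example, on the CLI side (the matching server-side setting is auth_strategy in opensds.conf, as shown below):
export OPENSDS_AUTH_STRATEGY=noauth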
Like this?
[osdslet]
api_endpoint = 192.168.1.11:50040
graceful = True
socket_order = inc
log_file = /var/log/opensds/osdslet.log
##auth_strategy = keystone
auth_strategy = noauth
Yes, you can replace that using some commands like sed
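For example, something like this should work (a sketch; adjust the path if your opensds.conf lives elsewhere):
sudo sed -i 's/^auth_strategy = keystone/auth_strategy = noauth/' /etc/opensds/opensds.conf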
Thanks. I ran it again with noauth set and the service started. Have you seen this error before?
vagrant@ubuntu1804:~$ export OPENSDS_ENDPOINT=http://192.168.1.12:50040
vagrant@ubuntu1804:~$ export OPENSDS_AUTH_STRATEGY=noauth
vagrant@ubuntu1804:~$ source /opt/opensds-linux-amd64-devstack/openrc admin admin
WARNING: setting legacy OS_TENANT_NAME to support cli tools.
vagrant@ubuntu1804:~$ osdsctl pool list
ERROR: List pools failed: context deadline exceeded
vagrant@ubuntu1804:~$ osdsctl profile create '{"name": "default", "description": "default policy"}'
ERROR: Create profile failed: rpc error: code = Unavailable desc = transport is closing
vagrant@ubuntu1804:~$ osdsctl volume create 1 --name=test-001
ERROR: Create volume failed: context deadline exceeded
vagrant@ubuntu1804:~$ osdsctl volume list
ERROR: List volumes failed: context deadline exceeded
Can you share your updated config file? Have you restarted the osdslet service after modifying that field? I think it still has some auth issues.
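For example, if the installer registered it with systemd (an assumption; otherwise kill the running osdslet process and launch the binary again):
sudo systemctl restart osdslet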
$ cat /etc/opensds/opensds.conf
[keystone_authtoken]
auth_type = password
project_name = service
user_domain_name = Default
auth_url = http://192.168.1.12/identity
memcached_servers = 192.168.1.12:11211
project_domain_name = Default
[osdslet]
api_endpoint = 192.168.1.12:50040
graceful = True
socket_order = inc
log_file = /var/log/opensds/osdslet.log
auth_strategy = noauth
[osdsdock]
api_endpoint = 192.168.1.12:50050
dock_type = provisioner
log_file = /var/log/opensds/osdsdock.log
enabled_backends = lvm
enabled_backend = lvm
$ cat lvm.yaml
"tgtBindIp": "192.168.1.12"
"tgtConfDir": "/etc/tgt/conf.d"
"pool":
"opensds-volumes":
"storageType": "block"
"availabilityZone": "default"
"extras":
"dataStorage":
"provisioningPolicy": "Thin"
"isSpaceEfficient": "False"
"advanced":
"a": "b"
"latency": "5ms"
"diskType": "SSD"
"ioConnectivity":
"accessProtocol": "iscsi"
"maxBWS": "600"
"maxIOPS": "7000000"
I changed some things: I added username, password and default_domain_name to opensds.conf. Now I see this.
$ osdsctl volume list
ERROR:
<!DOCTYPE html>
<html lang="en">
<head>
<meta http-equiv="Content-Type" content="text/html; charset=UTF-8">
<title>Not Found</title>
... etc ...
</head>
<body>
<div id="wrapper">
<div id="container">
<div class="navtop">
<h1>Not Found</h1>
</div>
<div id="content">
<br>The page you have requested has flown the coop.<br>Perhaps you are here because:<br><br><ul><br>The page has moved<br>The page no longer exists<br>You were looking for your puppy and got lost<br>You like 404 pages</ul>
<a href="/" title="Home" class="button">Go Home</a><br />
<br>Powered by beego 1.9.2
</div>
</div>
</div>
</body>
</html>
Note: I updated the opensds-installer/salt PR with fixes ... Here is opensds.conf. Just to recap, the osdslet daemon is now running. The current issue is getting the CLI working.
[keystone_authtoken]
auth_type = password
username = opensdsv0.3.3
project_name = service
user_domain_name = Default
project_default_name = Default
auth_url = http://192.168.1.13/identity
memcached_servers = 192.168.1.13:11211
password = opensds@123
project_domain_name = Default
[osdslet]
api_endpoint = 192.168.1.13:50040
graceful = True
socket_order = inc
log_file = /var/log/opensds/osdslet.log
auth_strategy = noauth
[osdsdock]
api_endpoint = 192.168.1.13:50050
dock_type = provisioner
log_file = /var/log/opensds/osdsdock.log
enabled_backends = lvm
enabled_backend = lvm
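For reference, with auth_strategy = noauth the osdsctl environment should only need the endpoint and strategy exported, as in the earlier session, e.g.:
export OPENSDS_ENDPOINT=http://192.168.1.13:50040
export OPENSDS_AUTH_STRATEGY=noauth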
CLI issue
vagrant@ubuntu1804:~$ osdsctl pool list --debug
DEBUG: 2018/12/18 14:54:07 receiver.go:75: GET http://192.168.1.13:50040/v1beta/e93b4c0934da416eb9c8d120c5d04d96/pools?offset=0&sortDir=desc&limit=50&sortKey=id
DEBUG: 2018/12/18 14:54:10 receiver.go:101:
StatusCode: 400 Bad Request
Response Body:
{"code":400,"message":"List pools failed: context deadline exceeded"}
ERROR: List pools failed: context deadline exceeded
vagrant@ubuntu1804:~$ ps -ef | grep osds
root 10685 10621 94 14:14 ? 00:37:12 /usr/bin/osdsdock
root 20394 1 99 12:31 pts/0 03:20:51 /opt/opensds-linux-amd64/bin/osdslet
vagrant@ubuntu1804:~$ netstat -tuplan | grep 50040
(Not all processes could be identified, non-owned process info
will not be shown, you would have to be root to see it all.)
tcp 0 0 192.168.1.13:50040 0.0.0.0:* LISTEN -
tcp 0 0 192.168.1.13:38012 192.168.1.13:50040 TIME_WAIT -
tcp 0 0 192.168.1.13:55708 192.168.1.13:50040 TIME_WAIT -
vagrant@ubuntu1804:~$ netstat -tuplan | grep 11211
(Not all processes could be identified, non-owned process info
will not be shown, you would have to be root to see it all.)
tcp 0 0 0.0.0.0:11211 0.0.0.0:* LISTEN -
tcp6 0 0 :::11211 :::* LISTEN -
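Since osdslet is clearly listening on 50040, the next thing I'll check is the daemon logs (the paths configured in opensds.conf), e.g.:
sudo tail -n 50 /var/log/opensds/osdslet.log /var/log/opensds/osdsdock.log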
That's weird. I can share the auth config info from ansible:
[keystone_authtoken]
memcached_servers = $HOST_IP:11211
signing_dir = /var/cache/opensds
cafile = /opt/stack/data/ca-bundle.pem
auth_uri = http://$HOST_IP/identity
project_domain_name = Default
project_name = service
user_domain_name = Default
password = $STACK_PASSWORD
username = $OPENSDS_SERVER_NAME
auth_url = http://$HOST_IP/identity
auth_type = password
And one thing I need to confirm: did you restart the osdslet daemon service after updating opensds.conf?
Thanks. I'll do some more troubleshooting. Maybe the username is wrong in my config. I have confirmed that the firewall is not the issue, anyway.
My code is missing the nginx reverse proxy - I have updated the code to fix this and will test tomorrow.
I see this line in ansible/scripts/keystone.sh but cannot locate the JSON file on the filesystem:
cp "$OPENSDS_DIR/examples/policy.json" "$OPENSDS_CONFIG_DIR"
Where or what is this file?
It's located here; you need to copy it into your project.
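For example (assuming the opensds source tree is checked out at $OPENSDS_DIR and your config dir is /etc/opensds, as above):
cp "$OPENSDS_DIR/examples/policy.json" /etc/opensds/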
Thanks - found it.
Running osdslet as a daemon fails with an error logged