Closed glimchb closed 1 year ago
Well, #133 had proposed the following parameters for the device itself; host_nqn was one of them:
```proto
message NvmeIniDevSpec {
  // (mandatory) initiator device object key.
  types.ObjectKey nvme_ini_dev_key = 1;
  // (mandatory) unique NQN identifier for the host, used in fabric connect to
  // the NVMe-oF target.
  bytes host_nqn = 2;
  // (mandatory) unique host ID, used in fabric connect to the NVMe-oF target.
  bytes host_id = 3;
}
```
But again, this is a DPU-device-specific thing, not a backend property. I am not sure #216 is a fix for this issue. What we may need is a Device Initialization API, rather than tucking host_nqn into the backend proto.
Device Initialization is not a good place if we want to support multiple NQNs per DPU and then use different host NQNs when we connect to different targets, so we need this in the BackEnd APIs, not in init.
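To make the per-connection idea concrete, here is a rough sketch (message and field names are illustrative only, not a proposed final API) of carrying an optional host NQN/ID on the backend remote-controller object instead of a device-wide init call:

```proto
// Hypothetical sketch: names here are placeholders, not the real OPI proto.
message NvmeRemoteControllerSpec {
  // backend remote controller object key.
  types.ObjectKey ctrl_key = 1;
  // (optional) per-connection host NQN; if empty, the DPU-wide default
  // (analogous to /etc/nvme/hostnqn) would be used.
  bytes host_nqn = 2;
  // (optional) per-connection host ID; same default/override semantics.
  bytes host_id = 3;
}
```

This mirrors the nvme-cli model below: a DPU-wide default, overridable per connect.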
If we follow nvme-cli on Linux, we see the same pattern: there are defaults in /etc/nvme/hostnqn and /etc/nvme/hostid, but you can also specify hostid and hostnqn during the connect command to override the defaults:
```
nvme connect --help
Usage: nvme connect <device> [OPTIONS]

Connect to NVMeoF subsystem

Options:
  [  --transport=<LIST>, -t <LIST> ]        --- transport type
  [  --nqn=<LIST>, -n <LIST> ]              --- nqn name
  [  --traddr=<LIST>, -a <LIST> ]           --- transport address
  [  --trsvcid=<LIST>, -s <LIST> ]          --- transport service id (e.g. IP port)
  [  --host-traddr=<LIST>, -w <LIST> ]      --- host traddr (e.g. FC WWN's)
  [  --hostnqn=<LIST>, -q <LIST> ]          --- user-defined hostnqn
  [  --hostid=<LIST>, -I <LIST> ]           --- user-defined hostid (if default not used)
  [  --nr-io-queues=<LIST>, -i <LIST> ]     --- number of io queues to use (default is core count)
  [  --nr-write-queues=<LIST>, -W <LIST> ]  --- number of write queues to use (default 0)
  [  --nr-poll-queues=<LIST>, -P <LIST> ]   --- number of poll queues to use (default 0)
  [  --queue-size=<LIST>, -Q <LIST> ]       --- number of io queue elements to use (default 128)
  [  --keep-alive-tmo=<LIST>, -k <LIST> ]   --- keep alive timeout period in seconds
  [  --reconnect-delay=<LIST>, -c <LIST> ]  --- reconnect timeout period in seconds
  [  --ctrl-loss-tmo=<LIST>, -l <LIST> ]    --- controller loss timeout period in seconds
  [  --duplicate_connect, -D ]              --- allow duplicate connections between same transport host and subsystem port
  [  --disable_sqflow, -d ]                 --- disable controller sq flow control (default false)
  [  --hdr_digest, -g ]                     --- enable transport protocol header digest (TCP transport)
  [  --data_digest, -G ]                    --- enable transport protocol data digest (TCP transport)
```
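The default-vs-override resolution described above can be sketched in a few lines of shell. This is illustrative only, not nvme-cli internals; `NQN_FILE` and `effective_hostnqn` are names invented here for the example:

```shell
# Sketch of nvme-cli's hostnqn resolution: prefer an explicit --hostnqn
# argument, otherwise fall back to the default file (/etc/nvme/hostnqn).
NQN_FILE="${NQN_FILE:-/etc/nvme/hostnqn}"

effective_hostnqn() {
  # $1: optional override (the --hostnqn value), may be empty
  if [ -n "$1" ]; then
    echo "$1"                       # explicit override wins
  elif [ -r "$NQN_FILE" ]; then
    cat "$NQN_FILE"                 # DPU/host-wide default
  else
    echo "error: no hostnqn available" >&2
    return 1
  fi
}
```

The same two-level scheme (persistent default plus per-connect override) is what the BackEnd API would need to expose.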
In CSI, for example, we need the DPU host NQN before the DPU can establish remote backend connectivity to the storage targets. This is needed so we can allow those hosts to connect to the target, assuming allow_any is not an option. See the nvmf_subsystem_add_host example here: https://spdk.io/doc/jsonrpc.html#rpc_nvmf_subsystem_add_host. So we need a new API that can either return the hostnqn generated by the DPU, or a new API to set the hostnqn on the DPU.
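Such an API could look roughly like this (a hypothetical sketch only; the service, RPC, and message names are placeholders, not a proposed OPI definition):

```proto
// Hypothetical sketch of a get/set API for the DPU-wide host identity.
message NvmeHost {
  // DPU default hostnqn, used for fabric connect unless overridden
  bytes host_nqn = 1;
  // DPU default hostid
  bytes host_id = 2;
}

service NvmeHostService {
  // return the DPU-generated default hostnqn/hostid, e.g. so CSI can
  // pre-provision the target's allowed-hosts list
  rpc GetNvmeHost(google.protobuf.Empty) returns (NvmeHost) {}
  // set/override the DPU default hostnqn/hostid
  rpc SetNvmeHost(NvmeHost) returns (NvmeHost) {}
}
```

Either RPC alone would unblock the CSI flow: get lets the orchestrator read the generated identity, set lets it push a known one.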