namjaejeon / ksmbd

ksmbd kernel server (SMB/CIFS server)
https://github.com/cifsd-team/ksmbd

Is `ksmbd`'s SMB Direct compatible with Windows Server's SMB Direct? #466

Open LittleNewton opened 11 months ago

LittleNewton commented 11 months ago

Windows 10 Pro for Workstations and Windows Server 2022 provide a feature named SMB Direct, which can leverage RDMA to speed up network file I/O.
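For reference, whether the Windows side even advertises RDMA-capable interfaces can be sanity-checked with two stock PowerShell cmdlets; this is a generic check, not ksmbd-specific:

# On the Windows client, in an elevated PowerShell:
Get-NetAdapterRdma                 # per-adapter RDMA enablement
Get-SmbClientNetworkInterface      # per-interface view as the SMB client sees it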

ksmbd has implemented SMB Direct. I would like to know its compatibility with the Windows client.

namjaejeon commented 11 months ago

ksmbd supports smb-direct. Did you find any problems after running it?
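As a rough way to see from the Windows client whether SMB Direct was actually negotiated for a connection (a sketch; exact column names vary between Windows builds):

Get-SmbConnection                  # check the negotiated dialect (SMB Direct needs SMB 3.x)
Get-SmbMultichannelConnection      # look at the client/server "RDMA Capable" columns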

besterino commented 3 weeks ago

> ksmbd supports smb-direct. Did you find any problems after running it?

If I may: I am also trying to get SMB Direct working between a Linux server and a Windows 11 Pro for Workstation client.

So far, not successfully.

I tried first with Proxmox 8.2.2, then Debian 12.7, and am currently on Ubuntu 24.10 on the server side.

More details of the current Linux install: Ubuntu 24.10, kernel 6.11.0-9-generic, ksmbd-tools 3.5.2-3, ConnectX-5 (HPE flavor).

Windows box: Windows 11 Pro for Workstation 23H2 (Build 22631.4317), ConnectX-4 (VPI).

What else?

My ksmbd.conf (quick & dirty share with guest write access):

; see ksmbd.conf(5) for details

[global]
; global parameters
bind interfaces only = no
deadtime = 0
guest account = nobody
interfaces = 
ipc timeout = 0
kerberos keytab file = 
kerberos service name = 
map to guest = bad user
max active sessions = 1024
max connections = 128
max open files = 10000
netbios name = KSMBD SERVER
restrict anonymous = 0
root directory = 
server max protocol = SMB3_11
server min protocol = SMB3_11
server multi channel support = yes
server signing = disabled
server string = SMB SERVER
share:fake_fscaps = 64
smb2 leases = no
smb2 max credits = 8192
smb2 max read = 4MB
smb2 max trans = 1MB
smb2 max write = 4MB
smb3 encryption = auto
smbd max io size = 8MB
tcp port = 445
workgroup = WORKGROUP
durable handles = no

; default share parameters
browseable = yes
comment = 
create mask = 0775
crossmnt = yes
directory mask = 0755
;force create mode = 0000
;force directory mode = 0000
;force group = 
;force user = 
;guest ok = no
hide dot files = yes
inherit owner = no
invalid users = 
oplocks = yes
path = 
read list = 
read only = ; yes
store dos attributes = yes
valid users = 
veto files = 
vfs objects = 
write list = 

[example]
; share parameters
comment = read only /tmp access
path = /smbtmp
writeable = yes
public = yes
guest ok = yes
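For a quick functional test from the Windows box, the share can be mapped and the resulting connection inspected; 10.10.10.2 below is only a placeholder for the server's RDMA-capable interface address:

net use X: \\10.10.10.2\example
Get-SmbConnection -ServerName 10.10.10.2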

root@ubuntu:/# grep RDMA /boot/config-6.11.0-9-generic
CONFIG_CGROUP_RDMA=y
CONFIG_RDS_RDMA=m
CONFIG_NET_9P_RDMA=m
CONFIG_NVME_RDMA=m
CONFIG_NVME_TARGET_RDMA=m
CONFIG_QED_RDMA=y
CONFIG_INFINIBAND_ERDMA=m
CONFIG_INFINIBAND_IRDMA=m
CONFIG_INFINIBAND_OCRDMA=m
CONFIG_INFINIBAND_VMWARE_PVRDMA=m
CONFIG_INFINIBAND_RDMAVT=m
CONFIG_RDMA_RXE=m
CONFIG_RDMA_SIW=m
CONFIG_SUNRPC_XPRT_RDMA=m
root@ubuntu:/#
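One more grep that seems relevant: whether the kernel was built with ksmbd's own SMB Direct support. Assuming the option names from fs/smb/server/Kconfig (CONFIG_SMB_SERVER and CONFIG_SMB_SERVER_SMBDIRECT), it can be checked like this:

grep -E 'CONFIG_SMB_SERVER(_SMBDIRECT)?=' /boot/config-$(uname -r)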

root@ubuntu:/# ibv_devices
    device                 node GUID
    ------              ----------------
    rocep33s0f0         9440c9ffff816074
    rocep33s0f1         9440c9ffff816075

root@ubuntu:/# ibv_devinfo
hca_id: rocep33s0f0
        transport:                      InfiniBand (0)
        fw_ver:                         16.35.4030
        node_guid:                      9440:c9ff:ff81:6074
        sys_image_guid:                 9440:c9ff:ff81:6074
        vendor_id:                      0x02c9
        vendor_part_id:                 4119
        hw_ver:                         0x0
        board_id:                       HPE0000000009
        phys_port_cnt:                  1
                port:   1
                        state:                  PORT_ACTIVE (4)
                        max_mtu:                4096 (5)
                        active_mtu:             1024 (3)
                        sm_lid:                 0
                        port_lid:               0
                        port_lmc:               0x00
                        link_layer:             Ethernet

hca_id: rocep33s0f1
        transport:                      InfiniBand (0)
        fw_ver:                         16.35.4030
        node_guid:                      9440:c9ff:ff81:6075
        sys_image_guid:                 9440:c9ff:ff81:6074
        vendor_id:                      0x02c9
        vendor_part_id:                 4119
        hw_ver:                         0x0
        board_id:                       HPE0000000009
        phys_port_cnt:                  1
                port:   1
                        state:                  PORT_DOWN (1)
                        max_mtu:                4096 (5)
                        active_mtu:             1024 (3)
                        sm_lid:                 0
                        port_lid:               0
                        port_lmc:               0x00
                        link_layer:             Ethernet

root@ubuntu:/#

How did I check whether RDMA works: when SMB Direct is working properly between a Windows client and Windows Server, Task Manager on the client (and on the server) shows no utilisation of the relevant Ethernet device, because the RDMA traffic bypasses the regular network stack. I did achieve that, so the hardware works.
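A more direct check than Task Manager, assuming the "RDMA Activity" performance counter set is available on that Windows build, is to watch the RDMA byte counters while copying a file:

Get-Counter '\RDMA Activity(*)\RDMA Inbound Bytes/sec','\RDMA Activity(*)\RDMA Outbound Bytes/sec'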

rping on Linux against ndrping on Windows works in both directions:

ping data: rdma-ping-4965: defghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWX
ping data: rdma-ping-4966: efghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXY
ping data: rdma-ping-4967: fghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ
ping data: rdma-ping-4968: ghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ[
ping data: rdma-ping-4969: hijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ[\
ping data: rdma-ping-4970: ijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ[]
ping data: rdma-ping-4971: jklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ[]^
ping data: rdma-ping-4972: klmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ[]^
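For anyone reproducing this, a typical rping pairing on the Linux side looks like the sketch below (flags from librdmacm's rping; the address is a placeholder), with ndrping as the counterpart on the Windows side:

rping -s -a 0.0.0.0 -v             # server: listen on all RDMA-capable addresses, print ping data
rping -c -a 10.10.10.1 -v -C 10    # client: 10 iterations against the server's IP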

dmesg-logging: dmesg.txt

With the previous installs (Debian / Proxmox) I also tinkered around with the Mellanox drivers and with ib_send_bw (which works on both the Linux and the Windows machine when a single host acts as both client and server, but not BETWEEN Windows and Linux).
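In case it helps with reproducing the ib_send_bw part: on RoCE the perftest tools are often run with -R so the connection is set up via rdma_cm; a minimal sketch with placeholder device name and address:

ib_send_bw -d rocep33s0f0 -R             # on the server
ib_send_bw -d rocep33s0f0 -R 10.10.10.2  # on the client, pointing at the server's IP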

I am not a Linux expert, but I am happy to follow instructions for further testing, outputs, etc. (I am not really familiar with debugging, logging, and so on). Whatever you need.

namjaejeon commented 2 weeks ago

@besterino I don't have your setup and the same HW. If you could provide me with the same setup and HW, I could fix the problem faster, but not everyone can do that. I have checked ksmbd's smb-direct with a ConnectX-3 NIC.