ceph / ceph-nvmeof

Service to provide Ceph storage over NVMe-oF/TCP protocol
GNU Lesser General Public License v3.0

ceph-nvmeof vs RBD share over 200Gbit/s Network - Open Debate #342

Open sarwanjassi opened 9 months ago

sarwanjassi commented 9 months ago

Hi all,

This is an open debate for all the scientists and experts in storage solutions:

Hardware: 4 nodes, each with 48 CPUs, 256GB memory, and 200Gbit/s fiber connectivity. Each node has 6 NVMe 900GB drives.

Ceph Reef is installed on them, with a VLAN network for internal connectivity.

  1. PG count of 1024 for the pool
  2. Direct RBD 4K random read IOPS is around 750K
  3. The same RBD share exported through the NVMe-oF gateway over fiber is barely 100K IOPS

Any advice on configuration changes we can make to get maximum performance here is much appreciated.
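For reference, a comparison like the two numbers above is typically taken with fio against both paths: the rbd ioengine for the native RBD figure, and libaio against the block device that appears after an `nvme connect` to the gateway. A minimal sketch follows; the pool, image, gateway address, subsystem NQN, and device name are placeholders, not values from this setup.

```bash
# Native RBD path: fio's rbd ioengine talks to the cluster directly.
# 'rbdpool' and 'rbdimage' are placeholder names for this sketch.
fio --name=rbd-4k-randread --ioengine=rbd --clientname=admin \
    --pool=rbdpool --rbdname=rbdimage \
    --rw=randread --bs=4k --iodepth=32 --numjobs=8 \
    --direct=1 --time_based --runtime=60 --group_reporting

# NVMe-oF/TCP path: connect to the gateway, then run the same workload
# against the NVMe block device it exposes (device name will vary).
nvme connect -t tcp -a 192.168.1.10 -s 4420 -n nqn.2016-06.io.spdk:cnode1
fio --name=nvmeof-4k-randread --ioengine=libaio --filename=/dev/nvme1n1 \
    --rw=randread --bs=4k --iodepth=32 --numjobs=8 \
    --direct=1 --time_based --runtime=60 --group_reporting
```

Keeping block size, queue depth, and job count identical on both paths is what makes the two IOPS numbers directly comparable.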

pcuzner commented 9 months ago

@sarwanjassi what's your tgt_cmd_extra_args setting in the NVMe-oF gateway conf file? By default the SPDK reactors will only consume 1 core, which could be your bottleneck. If so, you could try adding -m 0x3 (2 cores), -m 0x7 (3 cores) or -m 0xF (4 cores).

i.e. `tgt_cmd_extra_args = -m 0xF`
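For context, a sketch of where that setting might live, assuming the `[spdk]` section layout of the gateway's ceph-nvmeof.conf (section name and file name are assumptions here, not confirmed from this setup); the 0xF mask is just the 4-core example from above, not a sizing recommendation:

```ini
# ceph-nvmeof.conf (gateway config) -- sketch, only the relevant part shown.
[spdk]
# Extra arguments passed to the SPDK nvmf_tgt process at startup.
# -m is the reactor core mask: 0x3 = cores 0-1, 0x7 = cores 0-2, 0xF = cores 0-3.
tgt_cmd_extra_args = -m 0xF
```

Since the core mask is read at process startup, the gateway would need a restart for a new mask to take effect.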