rook / rook

Storage Orchestration for Kubernetes
https://rook.io
Apache License 2.0

pvc is pending status #10504

Closed LittleCadet closed 2 years ago

LittleCadet commented 2 years ago

Is this a bug report or feature request?

Deviation from expected behavior: pvc status is pending

sh-4.4$ ceph status
  cluster:
    id:     42c5b2f2-0efb-46b5-ad9d-3a3b14b43538
    health: HEALTH_WARN
            Degraded data redundancy: 44/66 objects degraded (66.667%), 11 pgs degraded, 48 pgs undersized
            OSD count 1 < osd_pool_default_size 3

  services:
    mon: 1 daemons, quorum a (age 41m)
    mgr: a(active, since 40m)
    mds: 1/1 daemons up
    osd: 1 osds: 1 up (since 40m), 1 in (since 40m)

  data:
    volumes: 1/1 healthy
    pools:   2 pools, 48 pgs
    objects: 22 objects, 2.8 KiB
    usage:   6.0 MiB used, 20 GiB / 20 GiB avail
    pgs:     44/66 objects degraded (66.667%)
             37 active+undersized
             11 active+undersized+degraded

  progress:
    Global Recovery Event (0s)
      [............................] 

Expected behavior: pvc status is bound

How to reproduce it (minimal and precise):

1. kubectl create -f crds.yaml -f common.yaml -f operator.yaml
2. kubectl create -f cluster.yaml
       Differences from the original example cluster.yaml:
         mon: 3 => 1
         mgr: 2 => 1
       Reason: I just want to run a demo, but I don't have enough machines; only 2 worker nodes.
3. kubectl create -f filesystem.yaml
4. kubectl create -f storageclass.yaml
5. kubectl create -f kube-registry.yaml
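For reference, the mon/mgr reduction in step 2 corresponds to a cluster.yaml fragment roughly like the one below. This is a sketch of only the relevant fields, not the full CephCluster spec from the Rook examples; everything else stays as in the upstream file.

```yaml
# Sketch: the only fields changed from the upstream example cluster.yaml.
apiVersion: ceph.rook.io/v1
kind: CephCluster
metadata:
  name: rook-ceph
  namespace: rook-ceph
spec:
  mon:
    count: 1            # example default is 3; reduced for a 2-node demo
    allowMultiplePerNode: false
  mgr:
    count: 1            # example default is 2
```

Note that reducing mons to 1 removes monitor quorum redundancy, which is fine for a demo but not for anything else.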

Yesterday everything was fine, but today I re-created the CephCluster, and now the PVC status is Pending.

File(s) to submit:

  1. rook-ceph-mon-a-54c944bfc9-p6hm9 :

    debug 2022-06-24T07:33:15.672+0000 7f4bb753b700  4 rocksdb: (Original Log Time 2022/06/24-07:33:15.674467) [db/db_impl/db_impl_compaction_flush.cc:2402] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
    debug 2022-06-24T07:33:15.672+0000 7f4bb753b700  4 rocksdb: [db/flush_job.cc:338] [default] [JOB 29] Flushing memtable with next log file: 52
    debug 2022-06-24T07:33:15.672+0000 7f4bb753b700  4 rocksdb: EVENT_LOG_v1 {"time_micros": 1656055995674563, "job": 29, "event": "flush_started", "num_memtables": 1, "num_entries": 642, "num_deletes": 251, "total_data_size": 394313, "memory_usage": 407288, "flush_reason": "Manual Compaction"}
    debug 2022-06-24T07:33:15.672+0000 7f4bb753b700  4 rocksdb: [db/flush_job.cc:367] [default] [JOB 29] Level-0 flush table #53: started
    debug 2022-06-24T07:33:15.677+0000 7f4bb753b700  4 rocksdb: EVENT_LOG_v1 {"time_micros": 1656055995679820, "cf_name": "default", "job": 29, "event": "table_file_creation", "file_number": 53, "file_size": 299681, "file_checksum": "", "file_checksum_func_name": "Unknown", "table_properties": {"data_size": 296760, "index_size": 846, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1093, "raw_key_size": 7489, "raw_average_key_size": 19, "raw_value_size": 290652, "raw_average_value_size": 741, "num_data_blocks": 39, "num_entries": 392, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "rocksdb.BuiltinBloomFilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; ", "creation_time": 1656055955, "oldest_key_time": 1656055955, "file_creation_time": 1656055995, "db_id": "2d00c104-1a06-4082-818f-2fde37d2a90b", "db_session_id": "9XFB1FWWMEJCO11YXDJ8"}}
    debug 2022-06-24T07:33:15.678+0000 7f4bb753b700  4 rocksdb: [db/flush_job.cc:431] [default] [JOB 29] Level-0 flush table #53: 299681 bytes OK
    debug 2022-06-24T07:33:15.678+0000 7f4bb753b700  4 rocksdb: [db/version_set.cc:3459] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
    debug 2022-06-24T07:33:15.681+0000 7f4bb753b700  4 rocksdb: (Original Log Time 2022/06/24-07:33:15.679953) [db/memtable_list.cc:451] [default] Level-0 commit table #53 started
    debug 2022-06-24T07:33:15.681+0000 7f4bb753b700  4 rocksdb: (Original Log Time 2022/06/24-07:33:15.683262) [db/memtable_list.cc:631] [default] Level-0 commit table #53: memtable #1 done
    debug 2022-06-24T07:33:15.681+0000 7f4bb753b700  4 rocksdb: (Original Log Time 2022/06/24-07:33:15.683294) EVENT_LOG_v1 {"time_micros": 1656055995683288, "job": 29, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
    debug 2022-06-24T07:33:15.681+0000 7f4bb753b700  4 rocksdb: (Original Log Time 2022/06/24-07:33:15.683315) [db/db_impl/db_impl_compaction_flush.cc:235] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
    debug 2022-06-24T07:33:15.681+0000 7f4bb753b700  4 rocksdb: [db/db_impl/db_impl_files.cc:420] [JOB 29] Try to delete WAL files size 390804, prev total WAL file size 390804, number of live WAL files 2.
    debug 2022-06-24T07:33:15.682+0000 7f4bb753b700  4 rocksdb: [file/delete_scheduler.cc:73] Deleted file /var/lib/ceph/mon/ceph-a/store.db/000049.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
    debug 2022-06-24T07:33:15.685+0000 7f4bae529700  4 rocksdb: [db/db_impl/db_impl_compaction_flush.cc:1616] [default] Manual compaction starting
    debug 2022-06-24T07:33:15.685+0000 7f4bb7d3c700  4 rocksdb: (Original Log Time 2022/06/24-07:33:15.687011) [db/db_impl/db_impl_compaction_flush.cc:2721] [default] Manual compaction from level-0 to level-6 from '7061786F730032303038' seq:72057594037927935, type:20 .. '7061786F730032323630' seq:0, type:0; will stop at (end)
    debug 2022-06-24T07:33:15.685+0000 7f4bb7d3c700  4 rocksdb: [db/compaction/compaction_job.cc:1884] [default] [JOB 30] Compacting 1@0 + 1@6 files to L6, score -1.00
    debug 2022-06-24T07:33:15.685+0000 7f4bb7d3c700  4 rocksdb: [db/compaction/compaction_job.cc:1888] [default] Compaction start summary: Base version 30 Base level 0, inputs: [53(292KB)], [51(5098KB)]
    debug 2022-06-24T07:33:15.685+0000 7f4bb7d3c700  4 rocksdb: EVENT_LOG_v1 {"time_micros": 1656055995687064, "job": 30, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [53], "files_L6": [51], "score": -1, "input_data_size": 5520431}
    cluster 2022-06-24T07:33:14.462727+0000 mgr.a (mgr.4231) 981 : cluster [DBG] pgmap v984: 48 pgs: 37 active+undersized, 11 active+undersized+degraded; 2.8 KiB data, 6.0 MiB used, 20 GiB / 20 GiB avail; 44/66 objects degraded (66.667%)
    audit 2022-06-24T07:33:14.952844+0000 mon.a (mon.0) 1083 : audit [DBG] from='client.? 172.17.0.2:0/2186385964' entity='client.admin' cmd=[{"prefix": "versions", "format": "json"}]: dispatch
    debug 2022-06-24T07:33:15.713+0000 7f4bb7d3c700  4 rocksdb: [db/compaction/compaction_job.cc:1521] [default] [JOB 30] Generated table #54: 3571 keys, 4578051 bytes
    debug 2022-06-24T07:33:15.713+0000 7f4bb7d3c700  4 rocksdb: EVENT_LOG_v1 {"time_micros": 1656055995715556, "cf_name": "default", "job": 30, "event": "table_file_creation", "file_number": 54, "file_size": 4578051, "file_checksum": "", "file_checksum_func_name": "Unknown", "table_properties": {"data_size": 4554975, "index_size": 13064, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 9029, "raw_key_size": 86319, "raw_average_key_size": 24, "raw_value_size": 4493028, "raw_average_value_size": 1258, "num_data_blocks": 549, "num_entries": 3571, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "rocksdb.BuiltinBloomFilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; ", "creation_time": 1656053968, "oldest_key_time": 0, "file_creation_time": 1656055995, "db_id": "2d00c104-1a06-4082-818f-2fde37d2a90b", "db_session_id": "9XFB1FWWMEJCO11YXDJ8"}}
    debug 2022-06-24T07:33:15.714+0000 7f4bb7d3c700  4 rocksdb: [db/compaction/compaction_job.cc:1598] [default] [JOB 30] Compacted 1@0 + 1@6 files to L6 => 4578051 bytes
    debug 2022-06-24T07:33:15.714+0000 7f4bb7d3c700  4 rocksdb: [db/version_set.cc:3459] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
    debug 2022-06-24T07:33:15.716+0000 7f4bb7d3c700  4 rocksdb: (Original Log Time 2022/06/24-07:33:15.718641) [db/compaction/compaction_job.cc:830] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 192.9 rd, 159.9 wr, level 6, files in(1, 1) out(1) MB in(0.3, 5.0) out(4.4), read-write-amplify(33.7) write-amplify(15.3) OK, records in: 4086, records dropped: 515 output_compression: NoCompression
    debug 2022-06-24T07:33:15.716+0000 7f4bb7d3c700  4 rocksdb: (Original Log Time 2022/06/24-07:33:15.718682) EVENT_LOG_v1 {"time_micros": 1656055995718670, "job": 30, "event": "compaction_finished", "compaction_time_micros": 28625, "compaction_time_cpu_micros": 15443, "output_level": 6, "num_output_files": 1, "total_output_size": 4578051, "num_input_records": 4086, "num_output_records": 3571, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
    debug 2022-06-24T07:33:15.721+0000 7f4bb7d3c700  4 rocksdb: [file/delete_scheduler.cc:73] Deleted file /var/lib/ceph/mon/ceph-a/store.db/000053.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
    debug 2022-06-24T07:33:15.721+0000 7f4bb7d3c700  4 rocksdb: EVENT_LOG_v1 {"time_micros": 1656055995723765, "job": 30, "event": "table_file_deletion", "file_number": 53}
    debug 2022-06-24T07:33:15.723+0000 7f4bb7d3c700  4 rocksdb: [file/delete_scheduler.cc:73] Deleted file /var/lib/ceph/mon/ceph-a/store.db/000051.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
    debug 2022-06-24T07:33:15.723+0000 7f4bb7d3c700  4 rocksdb: EVENT_LOG_v1 {"time_micros": 1656055995725186, "job": 30, "event": "table_file_deletion", "file_number": 51}
    debug 2022-06-24T07:33:15.723+0000 7f4bae529700  4 rocksdb: [db/db_impl/db_impl_compaction_flush.cc:1616] [default] Manual compaction starting
    debug 2022-06-24T07:33:15.723+0000 7f4bae529700  4 rocksdb: [db/db_impl/db_impl_compaction_flush.cc:1616] [default] Manual compaction starting
    debug 2022-06-24T07:33:15.723+0000 7f4bae529700  4 rocksdb: [db/db_impl/db_impl_compaction_flush.cc:1616] [default] Manual compaction starting
    debug 2022-06-24T07:33:15.723+0000 7f4bae529700  4 rocksdb: [db/db_impl/db_impl_compaction_flush.cc:1616] [default] Manual compaction starting
    debug 2022-06-24T07:33:15.723+0000 7f4bae529700  4 rocksdb: [db/db_impl/db_impl_compaction_flush.cc:1616] [default] Manual compaction starting
    debug 2022-06-24T07:33:15.940+0000 7f4bb939d700  0 log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
    debug 2022-06-24T07:33:15.940+0000 7f4bb939d700  0 log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
    audit 2022-06-24T07:33:15.942074+0000 mon.a (mon.0) 1084 : audit [DBG] from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
    audit 2022-06-24T07:33:15.942200+0000 mon.a (mon.0) 1085 : audit [DBG] from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
    cluster 2022-06-24T07:33:16.463335+0000 mgr.a (mgr.4231) 982 : cluster [DBG] pgmap v985: 48 pgs: 37 active+undersized, 11 active+undersized+degraded; 2.8 KiB data, 6.0 MiB used, 20 GiB / 20 GiB avail; 44/66 objects degraded (66.667%)
    cluster 2022-06-24T07:33:18.464002+0000 mgr.a (mgr.4231) 983 : cluster [DBG] pgmap v986: 48 pgs: 37 active+undersized, 11 active+undersized+degraded; 2.8 KiB data, 6.0 MiB used, 20 GiB / 20 GiB avail; 44/66 objects degraded (66.667%)
    debug 2022-06-24T07:33:20.660+0000 7f4bb5537700  1 mon.a@0(leader).osd e29 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
    cluster 2022-06-24T07:33:20.464686+0000 mgr.a (mgr.4231) 984 : cluster [DBG] pgmap v987: 48 pgs: 37 active+undersized, 11 active+undersized+degraded; 2.8 KiB data, 6.0 MiB used, 20 GiB / 20 GiB avail; 44/66 objects degraded (66.667%)
    cluster 2022-06-24T07:33:22.465179+0000 mgr.a (mgr.4231) 985 : cluster [DBG] pgmap v988: 48 pgs: 37 active+undersized, 11 active+undersized+degraded; 2.8 KiB data, 6.0 MiB used, 20 GiB / 20 GiB avail; 44/66 objects degraded (66.667%)
    debug 2022-06-24T07:33:25.662+0000 7f4bb5537700  1 mon.a@0(leader).osd e29 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
    cluster 2022-06-24T07:33:24.465833+0000 mgr.a (mgr.4231) 986 : cluster [DBG] pgmap v989: 48 pgs: 37 active+undersized, 11 active+undersized+degraded; 2.8 KiB data, 6.0 MiB used, 20 GiB / 20 GiB avail; 44/66 objects degraded (66.667%)
    debug 2022-06-24T07:33:25.961+0000 7f4bb939d700  0 log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
    debug 2022-06-24T07:33:25.961+0000 7f4bb939d700  0 log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
    audit 2022-06-24T07:33:25.962059+0000 mon.a (mon.0) 1086 : audit [DBG] from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
    audit 2022-06-24T07:33:25.962236+0000 mon.a (mon.0) 1087 : audit [DBG] from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
    cluster 2022-06-24T07:33:26.466546+0000 mgr.a (mgr.4231) 987 : cluster [DBG] pgmap v990: 48 pgs: 37 active+undersized, 11 active+undersized+degraded; 2.8 KiB data, 6.0 MiB used, 20 GiB / 20 GiB avail; 44/66 objects degraded (66.667%)
    cluster 2022-06-24T07:33:28.467264+0000 mgr.a (mgr.4231) 988 : cluster [DBG] pgmap v991: 48 pgs: 37 active+undersized, 11 active+undersized+degraded; 2.8 KiB data, 6.0 MiB used, 20 GiB / 20 GiB avail; 44/66 objects degraded (66.667%)
    debug 2022-06-24T07:33:30.664+0000 7f4bb5537700  1 mon.a@0(leader).osd e29 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
    cluster 2022-06-24T07:33:30.467979+0000 mgr.a (mgr.4231) 989 : cluster [DBG] pgmap v992: 48 pgs: 37 active+undersized, 11 active+undersized+degraded; 2.8 KiB data, 6.0 MiB used, 20 GiB / 20 GiB avail; 44/66 objects degraded (66.667%)
    cluster 2022-06-24T07:33:32.468579+0000 mgr.a (mgr.4231) 990 : cluster [DBG] pgmap v993: 48 pgs: 37 active+undersized, 11 active+undersized+degraded; 2.8 KiB data, 6.0 MiB used, 20 GiB / 20 GiB avail; 44/66 objects degraded (66.667%)
    debug 2022-06-24T07:33:35.666+0000 7f4bb5537700  1 mon.a@0(leader).osd e29 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
    cluster 2022-06-24T07:33:34.469078+0000 mgr.a (mgr.4231) 991 : cluster [DBG] pgmap v994: 48 pgs: 37 active+undersized, 11 active+undersized+degraded; 2.8 KiB data, 6.0 MiB used, 20 GiB / 20 GiB avail; 44/66 objects degraded (66.667%)
    debug 2022-06-24T07:33:35.921+0000 7f4bb939d700  0 log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
    debug 2022-06-24T07:33:35.921+0000 7f4bb939d700  0 log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
  2. rook-ceph-mgr-a-ddc449956-r9lfx :

    debug 2022-06-24T07:34:10.480+0000 7f74d6571700  0 log_channel(cluster) log [DBG] : pgmap v1012: 48 pgs: 37 active+undersized, 11 active+undersized+degraded; 2.8 KiB data, 6.0 MiB used, 20 GiB / 20 GiB avail; 44/66 objects degraded (66.667%)
    debug 2022-06-24T07:34:12.480+0000 7f74d6571700  0 log_channel(cluster) log [DBG] : pgmap v1013: 48 pgs: 37 active+undersized, 11 active+undersized+degraded; 2.8 KiB data, 6.0 MiB used, 20 GiB / 20 GiB avail; 44/66 objects degraded (66.667%)
    debug 2022-06-24T07:34:14.481+0000 7f74d6571700  0 log_channel(cluster) log [DBG] : pgmap v1014: 48 pgs: 37 active+undersized, 11 active+undersized+degraded; 2.8 KiB data, 6.0 MiB used, 20 GiB / 20 GiB avail; 44/66 objects degraded (66.667%)
    debug 2022-06-24T07:34:16.483+0000 7f74d6571700  0 log_channel(cluster) log [DBG] : pgmap v1015: 48 pgs: 37 active+undersized, 11 active+undersized+degraded; 2.8 KiB data, 6.0 MiB used, 20 GiB / 20 GiB avail; 44/66 objects degraded (66.667%)
    debug 2022-06-24T07:34:18.483+0000 7f74d6571700  0 log_channel(cluster) log [DBG] : pgmap v1016: 48 pgs: 37 active+undersized, 11 active+undersized+degraded; 2.8 KiB data, 6.0 MiB used, 20 GiB / 20 GiB avail; 44/66 objects degraded (66.667%)
    debug 2022-06-24T07:34:20.484+0000 7f74d6571700  0 log_channel(cluster) log [DBG] : pgmap v1017: 48 pgs: 37 active+undersized, 11 active+undersized+degraded; 2.8 KiB data, 6.0 MiB used, 20 GiB / 20 GiB avail; 44/66 objects degraded (66.667%)
    debug 2022-06-24T07:34:22.485+0000 7f74d6571700  0 log_channel(cluster) log [DBG] : pgmap v1018: 48 pgs: 37 active+undersized, 11 active+undersized+degraded; 2.8 KiB data, 6.0 MiB used, 20 GiB / 20 GiB avail; 44/66 objects degraded (66.667%)
    debug 2022-06-24T07:34:24.485+0000 7f74d6571700  0 log_channel(cluster) log [DBG] : pgmap v1019: 48 pgs: 37 active+undersized, 11 active+undersized+degraded; 2.8 KiB data, 6.0 MiB used, 20 GiB / 20 GiB avail; 44/66 objects degraded (66.667%)
    debug 2022-06-24T07:34:26.486+0000 7f74d6571700  0 log_channel(cluster) log [DBG] : pgmap v1020: 48 pgs: 37 active+undersized, 11 active+undersized+degraded; 2.8 KiB data, 6.0 MiB used, 20 GiB / 20 GiB avail; 44/66 objects degraded (66.667%)
    debug 2022-06-24T07:34:28.486+0000 7f74d6571700  0 log_channel(cluster) log [DBG] : pgmap v1021: 48 pgs: 37 active+undersized, 11 active+undersized+degraded; 2.8 KiB data, 6.0 MiB used, 20 GiB / 20 GiB avail; 44/66 objects degraded (66.667%)
    debug 2022-06-24T07:34:30.488+0000 7f74d6571700  0 log_channel(cluster) log [DBG] : pgmap v1022: 48 pgs: 37 active+undersized, 11 active+undersized+degraded; 2.8 KiB data, 6.0 MiB used, 20 GiB / 20 GiB avail; 44/66 objects degraded (66.667%)
    debug 2022-06-24T07:34:32.489+0000 7f74d6571700  0 log_channel(cluster) log [DBG] : pgmap v1023: 48 pgs: 37 active+undersized, 11 active+undersized+degraded; 2.8 KiB data, 6.0 MiB used, 20 GiB / 20 GiB avail; 44/66 objects degraded (66.667%)
    debug 2022-06-24T07:34:34.489+0000 7f74d6571700  0 log_channel(cluster) log [DBG] : pgmap v1024: 48 pgs: 37 active+undersized, 11 active+undersized+degraded; 2.8 KiB data, 6.0 MiB used, 20 GiB / 20 GiB avail; 44/66 objects degraded (66.667%)
    debug 2022-06-24T07:34:36.167+0000 7f74c0285700  0 [volumes INFO mgr_util] scanning for idle connections..
    debug 2022-06-24T07:34:36.167+0000 7f74c0285700  0 [volumes INFO mgr_util] cleaning up connections: []
    debug 2022-06-24T07:34:36.241+0000 7f74bb0fb700  0 [volumes INFO mgr_util] scanning for idle connections..
    debug 2022-06-24T07:34:36.241+0000 7f74bb0fb700  0 [volumes INFO mgr_util] cleaning up connections: []
    debug 2022-06-24T07:34:36.264+0000 7f74b506f700  0 [volumes INFO mgr_util] scanning for idle connections..
    debug 2022-06-24T07:34:36.264+0000 7f74b506f700  0 [volumes INFO mgr_util] cleaning up connections: []
    debug 2022-06-24T07:34:36.490+0000 7f74d6571700  0 log_channel(cluster) log [DBG] : pgmap v1025: 48 pgs: 37 active+undersized, 11 active+undersized+degraded; 2.8 KiB data, 6.0 MiB used, 20 GiB / 20 GiB avail; 44/66 objects degraded (66.667%)
    debug 2022-06-24T07:34:38.490+0000 7f74d6571700  0 log_channel(cluster) log [DBG] : pgmap v1026: 48 pgs: 37 active+undersized, 11 active+undersized+degraded; 2.8 KiB data, 6.0 MiB used, 20 GiB / 20 GiB avail; 44/66 objects degraded (66.667%)
    debug 2022-06-24T07:34:40.491+0000 7f74d6571700  0 log_channel(cluster) log [DBG] : pgmap v1027: 48 pgs: 37 active+undersized, 11 active+undersized+degraded; 2.8 KiB data, 6.0 MiB used, 20 GiB / 20 GiB avail; 44/66 objects degraded (66.667%)
  3. rook-ceph-operator-785cc8f794-pdpg7:

    2022-06-24 07:11:15.485681 I | op-osd: waiting... 1 of 2 OSD prepare jobs have finished processing and 1 of 1 OSDs have been updated
    2022-06-24 07:12:15.485954 I | op-osd: waiting... 1 of 2 OSD prepare jobs have finished processing and 1 of 1 OSDs have been updated
    2022-06-24 07:13:15.485690 I | op-osd: waiting... 1 of 2 OSD prepare jobs have finished processing and 1 of 1 OSDs have been updated
    2022-06-24 07:14:15.486028 I | op-osd: waiting... 1 of 2 OSD prepare jobs have finished processing and 1 of 1 OSDs have been updated
    2022-06-24 07:15:15.486271 I | op-osd: waiting... 1 of 2 OSD prepare jobs have finished processing and 1 of 1 OSDs have been updated
    2022-06-24 07:16:15.485641 I | op-osd: waiting... 1 of 2 OSD prepare jobs have finished processing and 1 of 1 OSDs have been updated
    2022-06-24 07:17:15.486453 I | op-osd: waiting... 1 of 2 OSD prepare jobs have finished processing and 1 of 1 OSDs have been updated
    2022-06-24 07:18:15.485910 I | op-osd: waiting... 1 of 2 OSD prepare jobs have finished processing and 1 of 1 OSDs have been updated
    2022-06-24 07:19:15.485415 I | op-osd: waiting... 1 of 2 OSD prepare jobs have finished processing and 1 of 1 OSDs have been updated
    2022-06-24 07:20:15.486174 I | op-osd: waiting... 1 of 2 OSD prepare jobs have finished processing and 1 of 1 OSDs have been updated
    2022-06-24 07:21:15.485941 I | op-osd: waiting... 1 of 2 OSD prepare jobs have finished processing and 1 of 1 OSDs have been updated
    2022-06-24 07:22:15.486157 I | op-osd: waiting... 1 of 2 OSD prepare jobs have finished processing and 1 of 1 OSDs have been updated
    2022-06-24 07:23:15.486446 I | op-osd: waiting... 1 of 2 OSD prepare jobs have finished processing and 1 of 1 OSDs have been updated
    2022-06-24 07:24:15.485770 I | op-osd: waiting... 1 of 2 OSD prepare jobs have finished processing and 1 of 1 OSDs have been updated
    2022-06-24 07:25:15.485513 I | op-osd: waiting... 1 of 2 OSD prepare jobs have finished processing and 1 of 1 OSDs have been updated
    2022-06-24 07:26:15.485729 I | op-osd: waiting... 1 of 2 OSD prepare jobs have finished processing and 1 of 1 OSDs have been updated
    2022-06-24 07:27:15.485766 I | op-osd: waiting... 1 of 2 OSD prepare jobs have finished processing and 1 of 1 OSDs have been updated
    2022-06-24 07:28:15.486254 I | op-osd: waiting... 1 of 2 OSD prepare jobs have finished processing and 1 of 1 OSDs have been updated
    2022-06-24 07:29:15.486498 I | op-osd: waiting... 1 of 2 OSD prepare jobs have finished processing and 1 of 1 OSDs have been updated
    2022-06-24 07:30:15.485845 I | op-osd: waiting... 1 of 2 OSD prepare jobs have finished processing and 1 of 1 OSDs have been updated
    2022-06-24 07:31:15.486012 I | op-osd: waiting... 1 of 2 OSD prepare jobs have finished processing and 1 of 1 OSDs have been updated
    2022-06-24 07:32:15.486224 I | op-osd: waiting... 1 of 2 OSD prepare jobs have finished processing and 1 of 1 OSDs have been updated
    2022-06-24 07:33:15.485501 I | op-osd: waiting... 1 of 2 OSD prepare jobs have finished processing and 1 of 1 OSDs have been updated
    2022-06-24 07:34:15.486278 I | op-osd: waiting... 1 of 2 OSD prepare jobs have finished processing and 1 of 1 OSDs have been updated
    2022-06-24 07:35:15.486337 I | op-osd: waiting... 1 of 2 OSD prepare jobs have finished processing and 1 of 1 OSDs have been updated
  4. csi-cephfsplugin-provisioner-86d7c46746-7vrkt :

    I0624 06:58:54.431179       1 csi-provisioner.go:138] Version: v3.0.0
    I0624 06:58:54.431282       1 csi-provisioner.go:161] Building kube configs for running in cluster...
    I0624 06:58:55.437373       1 common.go:111] Probing CSI driver for readiness
    I0624 06:58:55.441162       1 csi-provisioner.go:277] CSI driver does not support PUBLISH_UNPUBLISH_VOLUME, not watching VolumeAttachments
    I0624 06:58:55.442809       1 leaderelection.go:248] attempting to acquire leader lease rook-ceph/rook-ceph-cephfs-csi-ceph-com..
  5. csi-cephfsplugin-kgtfj : no logs

Environment:

* OS (e.g. from `/etc/os-release`):
`CENTOS_MANTISBT_PROJECT="CentOS-8" CENTOS_MANTISBT_PROJECT_VERSION="8" REDHAT_SUPPORT_PRODUCT="centos" REDHAT_SUPPORT_PRODUCT_VERSION="8"`



* Kernel (e.g. `uname -a`):
`Linux master01 4.18.0-193.28.1.el8_2.x86_64 #1 SMP Thu Oct 22 00:20:22 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux`

* Rook version (use `rook version` inside of a Rook Pod):
`rook: v1.8.0-alpha.0.146.gb69719ae7
go: go1.16.12`
* Storage backend version (e.g. for ceph do `ceph -v`):
`ceph version 16.2.7 (dd0603118f56ab514f133c8d2e3adfc983942503) pacific (stable)`

* Kubernetes version (use `kubectl version`):
`v1.23.5`
LittleCadet commented 2 years ago

kubectl describe pvc -n kube-system:

Name:          cephfs-pvc
Namespace:     kube-system
StorageClass:  rook-cephfs
Status:        Pending
Volume:        
Labels:        <none>
Annotations:   volume.beta.kubernetes.io/storage-provisioner: rook-ceph.cephfs.csi.ceph.com
               volume.kubernetes.io/storage-provisioner: rook-ceph.cephfs.csi.ceph.com
Finalizers:    [kubernetes.io/pvc-protection]
Capacity:      
Access Modes:  
VolumeMode:    Filesystem
Used By:       kube-registry-5b677b6c87-86kmp
               kube-registry-5b677b6c87-btpsq
Events:
  Type     Reason                Age                  From                                                                                                              Message
  ----     ------                ----                 ----                                                                                                              -------
  Warning  ProvisioningFailed    33m                  rook-ceph.cephfs.csi.ceph.com_csi-cephfsplugin-provisioner-86d7c46746-jxkgn_e1687ee9-24d3-40c8-a254-d42cde49edfa  failed to provision volume with StorageClass "rook-cephfs": rpc error: code = DeadlineExceeded desc = context deadline exceeded
  Warning  ProvisioningFailed    14m (x13 over 33m)   rook-ceph.cephfs.csi.ceph.com_csi-cephfsplugin-provisioner-86d7c46746-jxkgn_e1687ee9-24d3-40c8-a254-d42cde49edfa  failed to provision volume with StorageClass "rook-cephfs": rpc error: code = Aborted desc = an operation with the given Volume ID pvc-ecbc3246-e9cf-4e52-a36e-580891aef6e1 already exists
  Normal   Provisioning          4m1s (x17 over 35m)  rook-ceph.cephfs.csi.ceph.com_csi-cephfsplugin-provisioner-86d7c46746-jxkgn_e1687ee9-24d3-40c8-a254-d42cde49edfa  External provisioner is provisioning volume for claim "kube-system/cephfs-pvc"
  Normal   ExternalProvisioning  40s (x142 over 35m)  persistentvolume-controller                                                                                       waiting for a volume to be created, either by external provisioner "rook-ceph.cephfs.csi.ceph.com" or manually created by system administrator

What should I do now? Can someone help?

Madhu-1 commented 2 years ago

@LittleCadet It looks like you have only 1 OSD. Please paste the filesystem YAML. If you are just testing Rook, please set the replica size to 1 or use filesystem-test.yaml.
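The shape of such a single-OSD-friendly filesystem is sketched below. This mirrors what filesystem-test.yaml does (replica size 1 everywhere); the metadata name and pool names here are illustrative, not copied from the actual file.

```yaml
# Sketch of a CephFilesystem that fits on a single OSD.
# size: 1 means one data copy, so no replication is required.
apiVersion: ceph.rook.io/v1
kind: CephFilesystem
metadata:
  name: myfs
  namespace: rook-ceph
spec:
  metadataPool:
    replicated:
      size: 1
  dataPools:
    - name: replicated
      replicated:
        size: 1
  metadataServer:
    activeCount: 1
    activeStandby: false
```

With size 1 the PGs can go active+clean on one OSD, which is what the provisioner waits for before it can bind the PVC.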

LittleCadet commented 2 years ago

@Madhu-1 Sorry, I am a newcomer. I used filesystem-test.yaml, but the problem is not fixed:

some description about pvc in kube-system namespace:

[root@master01 cephfs]# kubectl describe pvc -n kube-system
Name:          cephfs-pvc
Namespace:     kube-system
StorageClass:  rook-cephfs
Status:        Pending
Volume:        
Labels:        <none>
Annotations:   volume.beta.kubernetes.io/storage-provisioner: rook-ceph.cephfs.csi.ceph.com
               volume.kubernetes.io/storage-provisioner: rook-ceph.cephfs.csi.ceph.com
Finalizers:    [kubernetes.io/pvc-protection]
Capacity:      
Access Modes:  
VolumeMode:    Filesystem
Used By:       kube-registry-5b677b6c87-87sgc
               kube-registry-5b677b6c87-klqdh
Events:
  Type     Reason                Age                    From                                                                                                              Message
  ----     ------                ----                   ----                                                                                                              -------
  Warning  ProvisioningFailed    50m                    rook-ceph.cephfs.csi.ceph.com_csi-cephfsplugin-provisioner-86d7c46746-jxkgn_e1687ee9-24d3-40c8-a254-d42cde49edfa  failed to provision volume with StorageClass "rook-cephfs": rpc error: code = DeadlineExceeded desc = context deadline exceeded
  Warning  ProvisioningFailed    27m (x14 over 50m)     rook-ceph.cephfs.csi.ceph.com_csi-cephfsplugin-provisioner-86d7c46746-jxkgn_e1687ee9-24d3-40c8-a254-d42cde49edfa  failed to provision volume with StorageClass "rook-cephfs": rpc error: code = Aborted desc = an operation with the given Volume ID pvc-06316dd3-5387-40a7-984b-a3b9178e4802 already exists
  Normal   ExternalProvisioning  3m21s (x203 over 53m)  persistentvolume-controller                                                                                       waiting for a volume to be created, either by external provisioner "rook-ceph.cephfs.csi.ceph.com" or manually created by system administrator
  Normal   Provisioning          2m27s (x22 over 53m)   rook-ceph.cephfs.csi.ceph.com_csi-cephfsplugin-provisioner-86d7c46746-jxkgn_e1687ee9-24d3-40c8-a254-d42cde49edfa  External provisioner is provisioning volume for claim "kube-system/cephfs-pvc"

then ceph status :

  sh-4.4$ ceph status
  cluster:
    id:     42c5b2f2-0efb-46b5-ad9d-3a3b14b43538
    health: HEALTH_WARN
            Degraded data redundancy: 32 pgs undersized
            1 pool(s) have no replicas configured
            OSD count 1 < osd_pool_default_size 3

  services:
    mon: 1 daemons, quorum a (age 112m)
    mgr: a(active, since 111m)
    mds: 1/1 daemons up
    osd: 1 osds: 1 up (since 111m), 1 in (since 112m)

  data:
    volumes: 1/1 healthy
    pools:   2 pools, 48 pgs
    objects: 22 objects, 2.8 KiB
    usage:   6.9 MiB used, 20 GiB / 20 GiB avail
    pgs:     32 active+undersized
             16 active+clean

  progress:
    Global Recovery Event (107m)
      [=========...................] (remaining: 3h)

What should I do next? I would like to run some tests.

LittleCadet commented 2 years ago

I re-created Rook; now the ceph status is:

[rook@rook-ceph-tools-d6d7c985c-mqt2r /]$ ceph status
  cluster:
    id:     f9e7a676-eaeb-4f24-957d-9399f09a5720
    health: HEALTH_OK

  services:
    mon: 1 daemons, quorum a (age 3h)
    mgr: a(active, since 3h)
    mds: 1/1 daemons up
    osd: 1 osds: 1 up (since 3h), 1 in (since 3h)

  data:
    volumes: 1/1 healthy
    pools:   3 pools, 96 pgs
    objects: 25 objects, 465 KiB
    usage:   29 MiB used, 20 GiB / 20 GiB avail
    pgs:     96 active+clean

but the problem is still not fixed: the PVC is Pending.

I need some help, please.

travisn commented 2 years ago

What does `ceph osd pool ls detail` show in the toolbox? It probably shows replica 3, which means you need at least 3 OSDs on different hosts by default. For a test where there is only one OSD, please create filesystem-test.yaml as @Madhu-1 mentioned, which requires only a single OSD.
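For completeness, the check (and the test-only workaround of lowering an existing pool's replica count) looks roughly like this from inside the toolbox pod. The pool name is illustrative; recent Ceph releases also guard size 1 behind `mon_allow_pool_size_one`, so the extra flag and config option are shown. Only do this on a throwaway test cluster.

```shell
# Inside the rook-ceph-tools pod:
ceph osd pool ls detail                     # look for "replicated size 3"

# Test clusters only: allow and set single-copy pools.
ceph config set global mon_allow_pool_size_one true
ceph osd pool set myfs-replicated size 1 --yes-i-really-mean-it
```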

LittleCadet commented 2 years ago

@travisn Now I create Rook with this method:

kubectl create -f crds.yaml -f common.yaml -f operator.yaml
kubectl create -f cluster-test.yaml
kubectl create -f filesystem-test.yaml
kubectl create -f storageclass.yaml
kubectl create -f kube-registry.yaml

ceph osd pool ls detail :

sh-4.4$ ceph osd pool ls detail
pool 1 '.mgr' replicated size 1 min_size 1 crush_rule 1 object_hash rjenkins pg_num 32 pgp_num 32 autoscale_mode on last_change 33 lfor 0/0/22 flags hashpspool,selfmanaged_snaps stripe_width 0 application rbd
pool 2 'myfs-metadata' replicated size 1 min_size 1 crush_rule 2 object_hash rjenkins pg_num 16 pgp_num 16 autoscale_mode on last_change 52 lfor 0/0/36 flags hashpspool stripe_width 0 pg_autoscale_bias 4 pg_num_min 16 recovery_priority 5 application cephfs
pool 3 'myfs-replicated' replicated size 1 min_size 1 crush_rule 3 object_hash rjenkins pg_num 32 pgp_num 32 autoscale_mode on last_change 38 lfor 0/0/36 flags hashpspool stripe_width 0 application cephfs

And I found the following in csi-cephfsplugin-provisioner-86d7c46746-87cmf:

E0626 00:45:21.807327       1 reflector.go:138] github.com/kubernetes-csi/external-snapshotter/client/v4/informers/externalversions/factory.go:117: Failed to watch *v1.VolumeSnapshotClass: failed to list *v1.VolumeSnapshotClass: the server could not find the requested resource (get volumesnapshotclasses.snapshot.storage.k8s.io)
E0626 00:45:39.748254       1 reflector.go:138] github.com/kubernetes-csi/external-snapshotter/client/v4/informers/externalversions/factory.go:117: Failed to watch *v1.VolumeSnapshotContent: failed to list *v1.VolumeSnapshotContent: the server could not find the requested resource (get volumesnapshotcontents.snapshot.storage.k8s.io)
E0626 00:46:12.243067       1 reflector.go:138] github.com/kubernetes-csi/external-snapshotter/client/v4/informers/externalversions/factory.go:117: Failed to watch *v1.VolumeSnapshotClass: failed to list *v1.VolumeSnapshotClass: the server could not find the requested resource (get volumesnapshotclasses.snapshot.storage.k8s.io)
E0626 00:46:16.996278       1 reflector.go:138] github.com/kubernetes-csi/external-snapshotter/client/v4/informers/externalversions/factory.go:117: Failed to watch *v1.VolumeSnapshotContent: failed to list *v1.VolumeSnapshotContent: the server could not find the requested resource (get volumesnapshotcontents.snapshot.storage.k8s.io)
E0626 00:46:51.594086       1 reflector.go:138] github.com/kubernetes-csi/external-snapshotter/client/v4/informers/externalversions/factory.go:117: Failed to watch *v1.VolumeSnapshotClass: failed to list *v1.VolumeSnapshotClass: the server could not find the requested resource (get volumesnapshotclasses.snapshot.storage.k8s.io)
E0626 00:47:14.984594       1 reflector.go:138] github.com/kubernetes-csi/external-snapshotter/client/v4/informers/externalversions/factory.go:117: Failed to watch *v1.VolumeSnapshotContent: failed to list *v1.VolumeSnapshotContent: the server could not find the requested resource (get volumesnapshotcontents.snapshot.storage.k8s.io)

And the logs of rook-ceph-operator-785cc8f794-bhbmt:

2022-06-26 00:37:00.706796 I | op-osd: waiting... 1 of 2 OSD prepare jobs have finished processing and 1 of 1 OSDs have been updated
2022-06-26 00:38:00.706645 I | op-osd: waiting... 1 of 2 OSD prepare jobs have finished processing and 1 of 1 OSDs have been updated
2022-06-26 00:39:00.707938 I | op-osd: waiting... 1 of 2 OSD prepare jobs have finished processing and 1 of 1 OSDs have been updated
2022-06-26 00:40:00.707030 I | op-osd: waiting... 1 of 2 OSD prepare jobs have finished processing and 1 of 1 OSDs have been updated
2022-06-26 00:41:00.706926 I | op-osd: waiting... 1 of 2 OSD prepare jobs have finished processing and 1 of 1 OSDs have been updated
2022-06-26 00:42:00.709996 I | op-osd: waiting... 1 of 2 OSD prepare jobs have finished processing and 1 of 1 OSDs have been updated
2022-06-26 00:43:00.707759 I | op-osd: waiting... 1 of 2 OSD prepare jobs have finished processing and 1 of 1 OSDs have been updated
2022-06-26 00:44:00.709079 I | op-osd: waiting... 1 of 2 OSD prepare jobs have finished processing and 1 of 1 OSDs have been updated
2022-06-26 00:45:00.706893 I | op-osd: waiting... 1 of 2 OSD prepare jobs have finished processing and 1 of 1 OSDs have been updated
2022-06-26 00:46:00.707395 I | op-osd: waiting... 1 of 2 OSD prepare jobs have finished processing and 1 of 1 OSDs have been updated
2022-06-26 00:47:00.707031 I | op-osd: waiting... 1 of 2 OSD prepare jobs have finished processing and 1 of 1 OSDs have been updated
2022-06-26 00:48:00.707187 I | op-osd: waiting... 1 of 2 OSD prepare jobs have finished processing and 1 of 1 OSDs have been updated

and describe pvc/cephfs-pvc:

[root@master01 cephfs]# kubectl describe pvc/cephfs-pvc
Name:          cephfs-pvc
Namespace:     default
StorageClass:  rook-cephfs
Status:        Pending
Volume:        
Labels:        <none>
Annotations:   volume.beta.kubernetes.io/storage-provisioner: rook-ceph.cephfs.csi.ceph.com
               volume.kubernetes.io/storage-provisioner: rook-ceph.cephfs.csi.ceph.com
Finalizers:    [kubernetes.io/pvc-protection]
Capacity:      
Access Modes:  
VolumeMode:    Filesystem
Used By:       <none>
Events:
  Type     Reason                Age                     From                                                                                                              Message
  ----     ------                ----                    ----                                                                                                              -------
  Normal   Provisioning          50m (x546 over 35h)     rook-ceph.cephfs.csi.ceph.com_csi-cephfsplugin-provisioner-86d7c46746-stq62_a7824160-1233-4311-b870-51a729c70e19  External provisioner is provisioning volume for claim "default/cephfs-pvc"
  Warning  ProvisioningFailed    24m (x99 over 48m)      persistentvolume-controller                                                                                       storageclass.storage.k8s.io "rook-cephfs" not found
  Warning  ProvisioningFailed    15m                     rook-ceph.cephfs.csi.ceph.com_csi-cephfsplugin-provisioner-86d7c46746-gdjqw_9970df94-db32-4f87-a6e7-31842f641586  failed to provision volume with StorageClass "rook-cephfs": rpc error: code = DeadlineExceeded desc = context deadline exceeded
  Normal   ExternalProvisioning  4m24s (x8426 over 35h)  persistentvolume-controller                                                                                       waiting for a volume to be created, either by external provisioner "rook-ceph.cephfs.csi.ceph.com" or manually created by system administrator
  Normal   Provisioning          8s (x5 over 17m)        rook-ceph.cephfs.csi.ceph.com_csi-cephfsplugin-provisioner-86d7c46746-gdjqw_9970df94-db32-4f87-a6e7-31842f641586  External provisioner is provisioning volume for claim "default/cephfs-pvc"
  Warning  ProvisioningFailed    8s (x4 over 11m)        rook-ceph.cephfs.csi.ceph.com_csi-cephfsplugin-provisioner-86d7c46746-gdjqw_9970df94-db32-4f87-a6e7-31842f641586  failed to provision volume with StorageClass "rook-cephfs": rpc error: code = Aborted desc = an operation with the given Volume ID pvc-e820eb04-7e91-4c31-a779-4acdc69af2d5 already exists

Some questions here:

  1. Using cluster-test.yaml, the ceph status is HEALTH_OK, but the operator logs show only 1 of 2 OSD prepare jobs finished. Is that expected?
  2. What should I do about this problem?
    github.com/kubernetes-csi/external-snapshotter/client/v4/informers/externalversions/factory.go:117: Failed to watch *v1.VolumeSnapshotContent: failed to list *v1.VolumeSnapshotContent: the server could not find the requested resource (get volumesnapshotcontents.snapshot.storage.k8s.io)
  3. And this problem:
    an operation with the given Volume ID pvc-e820eb04-7e91-4c31-a779-4acdc69af2d5 already exists

LittleCadet commented 2 years ago

I checked the CRD; it is OK:

[root@master01 ~]# kubectl get crd -n rook-ceph | grep volumesnapshotclasses.snapshot.storage.k8s.io
volumesnapshotclasses.snapshot.storage.k8s.io    2022-06-10T09:31:32Z

I have no idea about this problem:

github.com/kubernetes-csi/external-snapshotter/client/v4/informers/externalversions/factory.go:117: Failed to watch *v1.VolumeSnapshotContent: failed to list *v1.VolumeSnapshotContent: the server could not find the requested resource (get volumesnapshotcontents.snapshot.storage.k8s.io)
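Note that the grep above checks only volumesnapshotclasses, while the error also mentions volumesnapshotcontents. A hedged sketch for checking all three snapshot CRDs and installing any missing ones, with manifest paths assumed from the upstream kubernetes-csi/external-snapshotter repository layout:

```shell
# Check all three snapshot CRDs, not just volumesnapshotclasses
# (CRDs are cluster-scoped, so no -n flag is needed):
kubectl get crd | grep snapshot.storage.k8s.io

# If any are missing, install them from the external-snapshotter repo
# (paths assumed from the upstream repository layout):
BASE=https://raw.githubusercontent.com/kubernetes-csi/external-snapshotter/master/client/config/crd
kubectl create -f $BASE/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
kubectl create -f $BASE/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
kubectl create -f $BASE/snapshot.storage.k8s.io_volumesnapshots.yaml
```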
LittleCadet commented 2 years ago

Now the problem is solved! Actually, everything was fine in Rook and in Ceph, but my case was special: the real cause was that my k8s networking was not set up correctly.

My k8s cluster did not use any network plugin, so the kubelet logs looked like this:

Jun 26 14:24:30 a-slave01 kubelet[1586]: I0626 14:24:30.793784    1586 docker_sandbox.go:402] "Failed to read pod IP from plugin/docker" err="Couldn't find network status for rook-ceph/csi-cephfsplugin-provisioner-659bf8dfcb-rxmrw through plugin: in>
Jun 26 14:24:31 a-slave01 kubelet[1586]: I0626 14:24:31.028303    1586 docker_sandbox.go:402] "Failed to read pod IP from plugin/docker" err="Couldn't find network status for rook-ceph/csi-cephfsplugin-provisioner-659bf8dfcb-rxmrw through plugin: in>

Finally, the fix was to re-create the k8s cluster and install the flannel network plugin. Then the PVC is in Bound status:

[root@master01 cephfs]# kubectl get pvc -A
NAMESPACE     NAME         STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS   AGE
kube-system   cephfs-pvc   Bound    pvc-ed13a5b9-9ecc-4cb3-aebc-767a3aa23529   1Gi        RWX            rook-cephfs    6s
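For reference, a minimal sketch of the flannel install (manifest URL assumed from the flannel-io/flannel repository; the cluster's pod CIDR must match flannel's default):

```shell
# Install flannel as the CNI plugin (URL assumed from upstream repo).
# A kubeadm cluster must have been initialized with a matching
# --pod-network-cidr (flannel defaults to 10.244.0.0/16).
kubectl apply -f https://raw.githubusercontent.com/flannel-io/flannel/master/Documentation/kube-flannel.yml
```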

haha

ira-gordin-sap commented 3 months ago

@travisn @Madhu-1 @LittleCadet Can you please tell me whether db_session_id is a secret or just an identifier? Thanks in advance!

Madhu-1 commented 3 months ago

@ira-gordin-sap Can you be more specific about which db_session_id you are talking about and where you found it? :)

ira-gordin-sap commented 3 months ago

> @ira-gordin-sap can you be more specific what db_session_id is you are talking about and where did you find it :)

@Madhu-1 In the logs.

Madhu-1 commented 3 months ago

> @ira-gordin-sap can you be more specific what db_session_id is you are talking about and where did you find it :)
>
> Madhu-1 in the logs.

@ira-gordin-sap In which logs? Is it in the CSI logs? If yes, which pod and which container? Without that I am not sure which session_id you are referring to.

ira-gordin-sap commented 3 months ago

@Madhu-1 in the logs attached to this issue for example

ira-gordin-sap commented 3 months ago

> @Madhu-1 in the logs attached to this issue for example

@Madhu-1 In addition, we had the following message in this pod (screenshots attached).

Madhu-1 commented 3 months ago

@ira-gordin-sap It's in a Ceph pod, not in a CSI pod, thank you. @travisn @BlaineEXE might know about it.

travisn commented 2 months ago

@ira-gordin-sap The db_session_id is just an identifier, not a secret.