Closed rbo closed 2 years ago
Initial Setup cannot proceed because the two fabric interconnects run different kernel versions:
FI-A:
switch(boot)# config t
Enter configuration commands, one per line. End with CNTL/Z.
switch(boot)(config)# interface mgmt0
switch(boot)(config-if)# ip address 10.32.104.102 255.255.240.0
switch(boot)(config-if)# no shutdown
switch(boot)(config-if)#
switch(boot)(config-if)#
switch(boot)(config-if)# exit
switch(boot)(config)# exit
switch(boot)# dir bootflash:installables/switch
37000192 May 02 2018 13:11:28 ucs-6100-k9-kickstart.5.0.3.N2.3.23a.bin
257187211 May 02 2018 13:11:40 ucs-6100-k9-system.5.0.3.N2.3.23a.bin
3850988 May 02 2018 14:13:42 ucs-catalog.3.2.3b.T.bin
424633068 May 02 2018 13:11:59 ucs-manager-k9.3.2.3a.bin
6245087 Mar 31 2022 11:21:20 ucsfi-connector.bin
switch(boot)# copy bootflash:installables/switch/ucs-6100-k9-system.5.0.3.N2.3.23a.bin scp://root@10.34/
root@10.32.96.47's password:
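The matching images can then be pulled onto FI-B over SCP from its own boot prompt. A rough sketch, assuming the files from FI-A are reachable on 10.32.96.47 (host, user, and remote path are my assumptions, not from the log above):

```
switch(boot)# copy scp://root@10.32.96.47/ucs-6100-k9-kickstart.5.0.3.N2.3.23a.bin bootflash:installables/switch/
switch(boot)# copy scp://root@10.32.96.47/ucs-6100-k9-system.5.0.3.N2.3.23a.bin bootflash:installables/switch/
```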
FI-B:
connect local-mgmt
reboot
Unfortunately I deleted everything on FI-B (the peer fabric) and lost the new version. I will apply an update later.
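Once UCS Manager is up again, the firmware bundle can be re-downloaded to the FI through the UCSM CLI. A sketch, assuming an SCP server at 10.32.96.47 holding the bundle (server, path, and bundle filename are example assumptions):

```
UCS-A# scope firmware
! filename below is only an example bundle name
UCS-A /firmware # download image scp://root@10.32.96.47/ucs-k9-bundle-infra.3.2.3b.A.bin
UCS-A /firmware # show download-task
```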
Initial Setup done. FI-A/B are "empty"
Server 3, the second big one, has a faulty RAM DIMM. Changed 3 DIMMs: 2 new ones and, for the one flagged orange, an old one.
Reset the Cisco UCS switches to standalone mode; we do not need HA on the switch side and don't want to handle a proper switch or OS configuration.
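For reference, the reset that puts an FI back through the initial-setup wizard can be done from the local management shell; answering "standalone" in the wizard then skips the cluster/HA setup. Roughly (from memory, not from the log):

```
UCS-A# connect local-mgmt
UCS-A(local-mgmt)# erase configuration
All UCS configurations will be erased and system will reboot. Are you sure? (yes/no): yes
```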
The complete memory subchannel N1-3 on CPU2 is not working. I swapped the DIMMs between N1-3 and J1-3 and the error stays at N1-3.
Changing the RAM configuration to 4 DIMMs per CPU to avoid using the N* slots (source: spec sheet, page 14, Cisco UCS B460 M4 Blade Server).
I can install RHEL 8.5 on Server 3
[root@dhcp180 ~]# lscpu
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Byte Order: Little Endian
CPU(s): 120
..
Model name: Intel(R) Xeon(R) CPU E7-4890 v2 @ 2.80GHz
..
[root@dhcp180 ~]# free -h
total used free shared buff/cache available
Mem: 125Gi 898Mi 123Gi 13Mi 354Mi 123Gi
Swap: 4.0Gi 0B 4.0Gi
[root@dhcp180 ~]#
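A quick plausibility check on the lscpu/free output above, assuming 4 sockets of E7-4890 v2 (15 cores each, Hyper-Threading on) and 4 DIMMs per CPU after the reconfiguration (DIMM size of 8 GiB is my assumption):

```shell
# 4 sockets * 15 cores * 2 threads = 120 logical CPUs, matching lscpu
echo $((4 * 15 * 2))   # 120
# 4 CPUs * 4 DIMMs * 8 GiB = 128 GiB, roughly the 125Gi free reports
echo $((4 * 4 * 8))    # 128
```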
UCS is running, not perfectly, but running. We still have a network problem: #80