clincha-org / clincha

Configuration and monitoring of clinch-home infrastructure
https://clinch-home.com

Hawkfield site visit (September 2022) #21

Closed: clincha closed this issue 1 year ago

clincha commented 2 years ago

I'm visiting one of the sites in September and I need to do some planning so that I can get the most out of the trip.

What are the goals for the trip?

What do I need to take there?

What do I need to prepare before the trip?

clincha commented 2 years ago

What do I need to take?

(photo: PXL_20220825_222850765)

Not pictured

clincha commented 2 years ago

Downloaded ISO from Proxmox site

Downloaded Rufus to write to the USBs
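Before writing the image, it's worth confirming the download isn't corrupted. A minimal check from a Linux shell (or WSL); the ISO filename below is just an example and depends on the version downloaded:

# Compare the ISO's SHA-256 hash with the checksum published on the Proxmox download page
sha256sum proxmox-ve_7.2-1.iso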


Now I need the Gantt chart. I was also thinking that a diagram of the desired final state would be good. It forces me to think about the architecture I'm going for and raises some pretty simple questions.

clincha commented 2 years ago

I've installed Proxmox on one of the servers now and put in the HBA for the two SSDs. I've booted up the node successfully, but when I check for the drives using lsblk I can't see them.

root@bri-s-01:~# lsblk
NAME               MAJ:MIN RM   SIZE RO TYPE MOUNTPOINT
sda                  8:0    0   3.6T  0 disk
sdb                  8:16   0   3.6T  0 disk
sdc                  8:32   0   3.6T  0 disk
nvme0n1            259:0    0 119.2G  0 disk
├─nvme0n1p1        259:1    0  1007K  0 part
├─nvme0n1p2        259:2    0   512M  0 part
└─nvme0n1p3        259:3    0 118.7G  0 part
  ├─pve-swap       253:0    0     8G  0 lvm  [SWAP]
  ├─pve-root       253:1    0  29.5G  0 lvm  /
  ├─pve-data_tmeta 253:2    0     1G  0 lvm
  │ └─pve-data     253:4    0  64.5G  0 lvm
  └─pve-data_tdata 253:3    0  64.5G  0 lvm
    └─pve-data     253:4    0  64.5G  0 lvm

Looks like others have had this issue... I tried one of the suggestions in the comments, which was to disable VT-d in the BIOS. However, I couldn't find that setting in the BIOS. The most similar setting seemed to be SVM, so I disabled that instead. I still have the same issue.
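As an aside, a quick way to check whether the HBA itself is being detected at all (independent of the BIOS settings) is to look for it on the PCI bus and in the kernel log. A rough sketch; the grep patterns are only guesses at how this particular controller identifies itself:

# Look for the storage controller on the PCI bus
lspci | grep -i -E 'sas|sata|raid'
# Check the kernel log for the controller driver and any disks it attached
dmesg | grep -i -E 'ahci|mpt|scsi'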

The next suggestion is to change the GRUB settings, so I gave that a try:

root@bri-s-01:~# lsblk
NAME               MAJ:MIN RM   SIZE RO TYPE MOUNTPOINT
sda                  8:0    0   3.6T  0 disk
sdb                  8:16   0   3.6T  0 disk
sdc                  8:32   0   3.6T  0 disk
sdd                  8:48   0 111.8G  0 disk
sde                  8:64   0 111.8G  0 disk
nvme0n1            259:0    0 119.2G  0 disk
├─nvme0n1p1        259:1    0  1007K  0 part
├─nvme0n1p2        259:2    0   512M  0 part
└─nvme0n1p3        259:3    0 118.7G  0 part
  ├─pve-swap       253:0    0     8G  0 lvm  [SWAP]
  ├─pve-root       253:1    0  29.5G  0 lvm  /
  ├─pve-data_tmeta 253:2    0     1G  0 lvm
  │ └─pve-data     253:4    0  64.5G  0 lvm
  └─pve-data_tdata 253:3    0  64.5G  0 lvm
    └─pve-data     253:4    0  64.5G  0 lvm

Wow, that actually worked. The new drives are showing up now. I needed to make sure the following lines existed in /etc/default/grub:

GRUB_CMDLINE_LINUX_DEFAULT="quiet intel_iommu=on"
GRUB_CMDLINE_LINUX="iommu=soft"

Then I needed to run this command:

update-grub

Then everything came up as expected. Many thanks to Dwain, who found the answer and posted it in this thread.
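As a sanity check after the reboot, the running kernel's command line should now include the IOMMU flags set in GRUB, and the two SSDs behind the HBA should be visible:

# The kernel command line should contain intel_iommu=on and iommu=soft
cat /proc/cmdline
# The SSDs behind the HBA (sdd and sde in the output above) should now be listed
lsblk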

clincha commented 1 year ago

I got pretty much everything I wanted to do finished. I couldn't fit my old machine into one of the cases, so I decided to bring it back to Edinburgh with me. That might have been stupid, because I now have a PC sitting here with a bunch of data on it that I want to be in Bristol. The internet connection I have at home in Edinburgh isn't fast enough to do the transfer, so I'll need a better plan there. On the other hand, I now have a gaming PC up here, which is a big bonus.