billimek / homelab-infrastructure

Infrastructure info and setup for homelab

proxmox hardware? #3

Closed jp83 closed 5 years ago

jp83 commented 5 years ago

I didn't see it on your wiki diagram, just curious: what's the hardware for your proxmox hosts? I keep rethinking my infrastructure, trying to balance redundancy, storage, and power/heat (cost).

Also, are you running many other VMs on proxmox, or mostly just the k8s nodes? I've been using a single free ESXi host, and I tried proxmox with the hope of terraforming it, but ran into lots of issues and a lack of docs and abandoned it. If I can settle on storage and a general understanding/stability, I keep thinking I might go all-in on bare-metal k8s nodes (but that might still run into the same ceph problem, I suppose).

billimek commented 5 years ago

Hi @jp83, this is what I currently run:

- host `proxmox`
- host `proxmox-b`
- host `proxmox-c`
- host `k8s-4`

On the proxmox hosts, in addition to running the 3 k8s master VMs and 3 k8s 'worker' VMs, I also run:

billimek commented 5 years ago

I've been thinking a lot lately about compute nodes and solutions, so your question is quite timely!

  1. Is it better to have fewer bigger compute nodes?
  2. Is it better to have more smaller/less power consuming compute nodes?

My concerns about one or two huge compute nodes are that they do a lot of work and probably draw a lot of power, have a bigger 'blast radius' when something breaks, and may run into challenges around too much IO in one kernel space. They also require a much more significant upfront cost.

Right now I'm thinking really hard about the second option. You can procure smaller compute nodes as needed (maybe something like a DeskMini, iBox-R1000, or even a RPi 4). They should all be low-power and avoid some of the challenges I see with a larger compute node. If most of the work is done via kubernetes, it handles the scheduling for you and you don't need to concern yourself as much with distributing workload across the nodes.
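For example, here's a rough sketch of what I mean by letting kubernetes handle the spread, assuming a generic Deployment (the app name and image are just placeholders):

```yaml
# Hypothetical example: spread replicas across nodes so losing one
# small node only takes out one copy of the workload.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: example-app          # placeholder name
spec:
  replicas: 3
  selector:
    matchLabels:
      app: example-app
  template:
    metadata:
      labels:
        app: example-app
    spec:
      affinity:
        podAntiAffinity:
          # "preferred" rather than "required" so pods still schedule
          # if fewer nodes than replicas are available
          preferredDuringSchedulingIgnoredDuringExecution:
            - weight: 100
              podAffinityTerm:
                labelSelector:
                  matchLabels:
                    app: example-app
                topologyKey: kubernetes.io/hostname
      containers:
        - name: app
          image: nginx:stable  # placeholder image
          resources:
            requests:
              cpu: 100m
              memory: 128Mi
```

With that in place, the scheduler prefers to place each replica on a different small node, which is exactly the work you'd otherwise do by hand when distributing VMs across hosts.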

At least this is what I've been thinking about a lot. #43 was created this morning as part of that thought experiment.

jp83 commented 5 years ago

That makes me feel better; for some reason I thought you were on NUCs, but wasn't sure how you were getting larger file storage. The main box that I run all the time is...

Dell 12-bay R510 running free ESXi:

- 2x L5640 hex-core CPUs
- 128GB RAM
- 6x 4TB and 4x 8TB WD Red HDDs on an H200 HBA passed through to FreeNAS (ZFS mirrored pairs)
- 480GB PM953 NVMe for the ESXi datastore
- 960GB PM953 NVMe (tried passing it through to FreeNAS, now just an extra datastore for VM disks)
- 10G (old CX4) to the switch

It's got plenty of processing and storage and draws about 250W; it's just not redundant. It sits in my office (extra bedroom) closet. I leave the doors cracked and have a fan that vents to the media room upstairs, but it still gets my office toasty in the summer; I don't mind as much in winter. I keep the rest of these machines off right now unless I'm testing something else.

I've got another almost identical R510 as a backup, but it's not outfitted with drives. I've flip-flopped between them occasionally when I upgrade or try something new.

I also have a Dell R210ii (4 cores, 16GB RAM, and under 50W, I believe) that I was going to use for pfsense, but I ended up sticking with the Edgerouter Lite and will probably move to Mikrotik.

I also picked up an HP dl360e Gen8 with dual E5-2450L CPUs and lots of RAM. It ran about 120W, and last year it could almost pay for itself CPU-mining Monero; not so much anymore, and most of that's probably gone now anyway. It's only got 4 LFF drive bays, though, and not as many PCIe slots.

Oh yeah, and I have an HP Microserver G7 directly running FreeNAS for occasional offline backups (everything but my media), and another synced offsite at my parents'.

I'm considering getting the 2U dl380e 12LFF instead, and I think the power savings could actually pay for it in 2 years. There's a local ebay seller (garland computer) that has them for about $150 with local pickup; swapping processors and adding drive trays brings the total to about $300. I like that it still uses DDR3, which I already have and which is much cheaper than DDR4. I've looked at the NUCs and the various thin clients the servethehome folks are using, but they'd cost the same or way more and don't have the bulk storage. RPis just don't seem to have the power for Plex transcoding. I saw and like the Odroid H2 for distributed storage, but at about $54 per disk it still gets pretty expensive.

I also found that my HP Procurve 2910al switches draw 70-80W each, if I remember right. I had them set up redundantly with server NICs to each, but ultimately decided a switch is less likely to fail and the redundancy isn't worth it.

I could just as easily fire everything up and get the redundancy I want, but then I'd be stuck, and it would be hard to scale back down (especially my storage) if I move on to other things. I'm an electrical engineer, so this doesn't directly apply to my day job, which makes it harder to justify, but I've been toying with the concepts at work.

My ideal case is to keep using the R510 with the identical one as a semi-live backup: power it on occasionally to sync up, then power it off. But I don't really think that setup exists; most HA-type modes would go into a crippled mode without quorum, so I know 3 is really the magic number. I've thought about biting the bullet with the 2x 12-bay servers as bare-metal k8s master+workers and going ceph for the disks across them, maybe putting an extra k8s master on the R210ii, virtualized alongside anything else random I want to run. Right now I do Time Machine and Veeam backups to the big FreeNAS and store my photos there, so I don't quite trust myself with those on top of k8s yet, since I keep tearing it down and rebuilding.

Sorry that got long-winded; I appreciate all that you're doing putting this out there. I've managed to migrate about half of my stuff over to k8s following your lead. Right now I'm virtualizing 1 master + 2 workers (across the 2 different NVMe datastores) on the all-in-one host and playing with OpenEBS, but I really need the 3rd node, and the double/triple write penalties bother me, at least conceptually, without gaining any real redundancy. Also, each PVC replica requests something like 1-2G of RAM (even for a .5G PVC), so that's starting to eat into memory, since about half was already devoted to FreeNAS. Ugh, I just wanted to get it set up to run itself, but I'm learning so much more along the way and keep rethinking the fundamental architecture.
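For reference, this is roughly the kind of OpenEBS knob I've been poking at: a Jiva StorageClass with a lower replica count (a sketch only; the exact annotation keys depend on the OpenEBS version, and the class name is a placeholder). Each Jiva replica runs as its own pod with its own resource requests, which is where that per-PVC RAM goes, so fewer replicas means less overhead at the cost of redundancy:

```yaml
# Hypothetical StorageClass: drop the Jiva replica count from the
# default 3 to 2 to reduce per-PVC pod and RAM overhead.
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: openebs-jiva-2-replica   # placeholder name
  annotations:
    openebs.io/cas-type: jiva
    cas.openebs.io/config: |
      - name: ReplicaCount
        value: "2"
provisioner: openebs.io/provisioner-iscsi
```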

billimek commented 5 years ago

closing via #5

carpenike commented 4 years ago

Been thinking about this the last few days and wanted to see where you were at on it.

The Odroid-H2s have two SATA ports on them. I've been stocking up on 1TB Samsung enterprise SSDs over the last few years and have 9 now. I'm thinking I could get 5x H2s whenever they become available and replace all my Proxmox nodes, running rook-ceph for storage. Does that sound reasonable to you, or do you have concerns with the performance you've seen with the Odroids? Any concerns with only 1Gb networking and Ceph?
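For concreteness, something like this rook-ceph cluster spec is what I have in mind (a rough sketch assuming Rook v1.x CRDs; the device filter is a placeholder and would need to match how the H2s actually expose their two SATA disks):

```yaml
# Hypothetical CephCluster for ~5 small nodes with 2 SATA SSDs each.
apiVersion: ceph.rook.io/v1
kind: CephCluster
metadata:
  name: rook-ceph
  namespace: rook-ceph
spec:
  cephVersion:
    image: ceph/ceph:v14.2.4     # pick the image your Rook release supports
  dataDirHostPath: /var/lib/rook
  mon:
    count: 3                     # quorum needs a majority, so 3 mons
    allowMultiplePerNode: false  # keep mons on separate nodes
  storage:
    useAllNodes: true
    useAllDevices: false
    deviceFilter: "^sd[ab]$"     # placeholder: the two SATA SSDs per node
```

Part of my 1Gb worry is that a gigabit link tops out around ~115MB/s, so replication and rebuild traffic would be network-bound even with fast SSDs.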