utkuozdemir / pv-migrate

CLI tool to easily migrate Kubernetes persistent volumes
Apache License 2.0

NodePort type service #243

Open · BonzTM opened this issue 1 year ago

BonzTM commented 1 year ago

I'm surprised not to have seen this request yet: my request is to implement a NodePort type service.

**Is your feature request related to a problem? Please describe.**
The LoadBalancer service type is great if you have IPs to play around with or an in-cluster L2 implementation like MetalLB. The local strategy is great, but unstable and slow with large PVs. A NodePort service would be easy to implement and could fill the gap between these two solutions.

**Describe the solution you'd like**
A NodePort option that creates a Service of type NodePort, opening a high port on each node for incoming traffic (40000, for instance) and forwarding SSH from that port to port 22 on the pod.
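For illustration, a minimal sketch of the kind of Service such an option might create, assuming it targets the sshd pod pv-migrate deploys; the name, label, and port values here are assumptions, not pv-migrate's actual manifest:

```sh
# Hypothetical Service a NodePort option might create (names/labels assumed).
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Service
metadata:
  name: pv-migrate-sshd
spec:
  type: NodePort
  selector:
    app.kubernetes.io/name: pv-migrate-sshd  # assumed label on the sshd pod
  ports:
    - name: ssh
      port: 22         # service port
      targetPort: 22   # sshd container port
      nodePort: 30022  # opened on every node; note the default allowed range
                       # is 30000-32767, so a port like 40000 would require
                       # extending the apiserver's --service-node-port-range
EOF
```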

utkuozdemir commented 11 months ago

What would you prefer the rsync target to be? `<ip-of-a-random-node>:<nodeport>` or something else?

BonzTM commented 11 months ago

> What would you prefer the rsync target to be? `<ip-of-a-random-node>:<nodeport>` or something else?

Yes. I think pulling one from `kubectl` would be good, but requiring the user to input a node IP would also be fine.
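For reference, one way a node address could be pulled with `kubectl` (a sketch only; which node to pick and which address type to prefer would be implementation decisions):

```sh
# Grab the InternalIP of the first node; an ExternalIP, where present,
# may be more appropriate when migrating between clusters.
kubectl get nodes \
  -o jsonpath='{.items[0].status.addresses[?(@.type=="InternalIP")].address}'
```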

pavlovnicola commented 11 months ago

Hi,

I was also looking into how to set up NodePort.

I suppose this flag should do it:

```
--helm-set sshd.service.type=NodePort
```

utkuozdemir commented 11 months ago

@pavlovnicola Unfortunately it wouldn't work, it needs explicit support. Setting some Helm values can prevent pv-migrate from working correctly, and this is one of those cases: the whole logic there depends on the service type being LoadBalancer.
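For context, my understanding of why (a sketch of the Kubernetes behavior, not a quote of pv-migrate's code): with type LoadBalancer, the controller publishes an external address in the Service status, which is what the tool can connect to; a NodePort Service never populates that field, so the rsync target would instead have to be assembled from a node IP plus the allocated port. `<sshd-service>` below is a placeholder:

```sh
# LoadBalancer: MetalLB or a cloud controller fills in an ingress address.
kubectl get svc <sshd-service> -o jsonpath='{.status.loadBalancer.ingress[0].ip}'

# NodePort: the field above stays empty; the target would have to be built
# from a node address plus the allocated nodePort, e.g.:
kubectl get svc <sshd-service> -o jsonpath='{.spec.ports[0].nodePort}'
```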

pavlovnicola commented 11 months ago

@utkuozdemir I tried it anyway and you are right. It does not work. Thanks for clarifying.

santimar commented 10 months ago

Any update on this? I am trying to move data between two self-hosted clusters (neither has a LoadBalancer).

I would use the local strategy, but I am facing the same error reported in #236.