karampok closed this 5 months ago
comments:
- Works only for bash, not fish shell.
- Local oc binary version/install (do we require any version)?
- It seems this was not needed:
[kka@f-t14s hypershift-lab]$ oc --kubeconfig ~/hypershift-lab/mgmt-kubeconfig patch provisioning/provisioning-configuration -p '{"spec":{"watchAllNamespaces":true}}' --type merge
provisioning.metal3.io/provisioning-configuration patched (no change)
- The LVM operator should be visible or stated as a requirement.
- As a user experience issue, I cannot see what I am copying (commands are split across multiple lines).
- Should the pull secret come from the user (some links on creating a read-only pull secret)? Is copying the pull secret recommended? Is the pull secret recycled per lab?
- When using BMH it went to error/deprovision, but it worked.
- Data plane vs compute nodes.
- Screenshots are too hard to read when deploying the hosted cluster.
- While waiting, maybe a console view option?
- IMO hosted worker is a confusing name (they are not hosted, they are workers far from the control plane).
- At this point one of the workers will be cordoned and workloads will be evicted. A note here about pod disruption budgets that can block eviction in real cases would be useful.
- Upgrading the Hosted Cluster Data Plane from the Web Console: yes, I watch the UI to see when to proceed, but it would be nice to have a way to see that the upgrade of the hosted control plane pods has finished.
- The version 4.13.1 is important and is not the latest on purpose. I mistakenly thought the doc was not updated on that and picked the latest.
- An example with a PerformanceProfile?
- If I have many clusters, can I tell which TuneD profiles a node in cluster X has? (-o yaml on the NodePool, see the ConfigMaps?)
- Networking: it is unclear how it works for the apps behind a route in the hosted cluster.
comments:
- Works only for bash, not fish shell.
Added a note on this.
- Local oc binary version/install (do we require any version)?
Yup, it's documented here: https://labs.sysdeseng.com/hypershift-baremetal-lab/4.13/hcp-deployment.html
"Before continuing, make sure you have the following tooling installed in your workstation:"
- It seems this was not needed:
[kka@f-t14s hypershift-lab]$ oc --kubeconfig ~/hypershift-lab/mgmt-kubeconfig patch provisioning/provisioning-configuration -p '{"spec":{"watchAllNamespaces":true}}' --type merge
provisioning.metal3.io/provisioning-configuration patched (no change)
Not needed in our lab, but it's still worth mentioning just in case.
- The LVM operator should be visible or stated as a requirement.
Added a comment on the lab using LVMO.
- As a user experience issue, I cannot see what I am copying (commands are split across multiple lines).
Will address this.
- Should the pull secret come from the user (some links on creating a read-only pull secret)? Is copying the pull secret recommended? Is the pull secret recycled per lab?
This would be time-consuming for the user, as well as out of scope. The pull secret should change between runs.
- When using BMH it went to error/deprovision, but it worked.
Known issue; it will work right away most of the time, but sometimes it may error. Added a note on this.
- Data plane vs compute nodes
Commented on the review.
- Screenshots are too hard to read when deploying the hosted cluster.
We cannot do anything about that; the only option is for the user to open the image in a new tab and zoom in.
- While waiting, maybe a console view option?
Please elaborate. I believe we show both views, console and CLI.
- IMO hosted worker is a confusing name (they are not hosted, they are workers far from the control plane).
I cannot see where we say hosted worker. Could you put the exact phrase? Thanks.
- At this point one of the workers will be cordoned and workloads will be evicted. A note here about pod disruption budgets that can block eviction in real cases would be useful.
I'm not sure we want to comment on that. I mean, this is a regular OCP cluster, so eviction works just as it does on any other cluster.
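For anyone who does hit this in a real environment, a quick way to check whether any PodDisruptionBudgets could block the drain is to list them in the hosted cluster. A sketch, assuming the hosted cluster kubeconfig was saved at ~/hypershift-lab/hosted-kubeconfig (that path is a placeholder):
oc --kubeconfig ~/hypershift-lab/hosted-kubeconfig get pdb -A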
- Upgrading the Hosted Cluster Data Plane from the Web Console: yes, I watch the UI to see when to proceed, but it would be nice to have a way to see that the upgrade of the hosted control plane pods has finished.
What do you mean? Like comparing pod images? I believe the ClusterOperators are shown, which shows how the cluster goes from one version to another.
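From the CLI, one way to follow the upgrade is to watch the HostedCluster on the management cluster and the ClusterOperators inside the hosted cluster. A sketch, assuming the HostedCluster lives in the clusters namespace and the hosted kubeconfig is at ~/hypershift-lab/hosted-kubeconfig (both are assumptions based on this lab's defaults):
oc --kubeconfig ~/hypershift-lab/mgmt-kubeconfig get hostedcluster -n clusters
oc --kubeconfig ~/hypershift-lab/hosted-kubeconfig get clusteroperators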
- The version 4.13.1 is important and is not the latest on purpose. I mistakenly thought the doc was not updated on that and picked the latest.
Correct. We expect users to follow the docs.
- An example with a PerformanceProfile?
We can add one if you have it, but I'm not sure how relevant this is for the lab at this point (since it's not telco related).
- If I have many clusters, can I tell which TuneD profiles a node in cluster X has? (-o yaml on the NodePool, see the ConfigMaps?)
You can, by looking at the NodePool.
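For example, something along these lines should list the TuneD ConfigMaps a NodePool references (a sketch; <nodepool-name> and the clusters namespace are placeholders, and spec.tuningConfig is where HyperShift references them, if I'm not mistaken):
oc --kubeconfig ~/hypershift-lab/mgmt-kubeconfig get nodepool <nodepool-name> -n clusters -o jsonpath='{.spec.tuningConfig}'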
- Networking: it is unclear how it works for the apps behind a route in the hosted cluster.
It works the same as in a regular OCP cluster. The OCP routers run on the worker nodes. There is a LoadBalancer service that uses MetalLB to publish the router service on a public IP. After that, your DNS resolution should point to this IP.
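To see where the apps end up, you can check the IP that MetalLB assigned to the router service in the hosted cluster. A sketch, assuming the default ingress service router-default in the openshift-ingress namespace and the placeholder kubeconfig path:
oc --kubeconfig ~/hypershift-lab/hosted-kubeconfig -n openshift-ingress get svc router-default
The hosted cluster's *.apps wildcard DNS record should then resolve to the EXTERNAL-IP shown there.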
- "While waiting, maybe a console view option?" "Please elaborate. I believe we show both views, console and CLI."
I mean a tip that, if something is not working, you have to access the BMC console and make sure the ISO is mounted/started and the output is correct.
- "IMO hosted worker is a confusing name (they are not hosted, they are workers far from the control plane)." "I cannot see where we say hosted worker. Could you put the exact phrase? Thanks."
When I do
oc --kubeconfig=hosted get nodes
the node name is hosted-worker, but for the demo it is just fine.
- "We can add one if you have it. But not sure how relevant this is for the lab at this point (since it's not telco related)."
I thought this was about telco, but sure, no need.
- "If I have many clusters, can I tell which TuneD profiles a node in cluster X has? (-o yaml on the NodePool, see the ConfigMaps?)" "You can, by looking at the NodePool."
Asking to apply something and it actually being applied are not the same thing. For example, if one node is failing, how is this error visible to the user?
Thoughts while reading the doc, not necessarily requiring changes.
Great job!