Open F-Fx opened 6 months ago
You can expose the API of cluster 2 to cluster 1 and configure it as an external cluster:
Docs: https://kyverno.github.io/policy-reporter/guide/helm-chart-core#external-clusters
Auth will be supported in v2 of the UI; an alpha release is available here:
@fjogeleit yeah, I know that I can expose the API of cluster 2 to cluster 1, but I can't figure out how to install only the UI on cluster 1.
My values:

```yaml
rest:
  enabled: true
ui:
  enabled: true
  plugins:
    kyverno: true
  clusterName: cluster1
  clusters:
    - name: cluster2
      api: http://kyverno.local/
      kyvernoApi: http://kyverno.local/
kyvernoPlugin:
  enabled: true
```
The Helm chart is not intended to install only the UI without the core app right now.
Also, the Kyverno API needs to be the URL of the Kyverno plugin app, but this is optional.
@fjogeleit I tried to install the alpha version from https://github.com/kyverno/policy-reporter/tree/3.x but get an error:

```
Error: INSTALLATION FAILED: unable to build kubernetes objects from release manifest: error validating "": error validating data: unknown object type "nil" in Secret.data.config.yaml
```
My values are set as:

```yaml
plugin:
  kyverno:
    enabled: true
rest:
  enabled: true
api:
  logging: true
```
I'll take a look.
@fjogeleit I think the problem is somewhere around `.Files.Get "kyverno-plugin.tmpl"`.
I already pushed a fix and will release it now.
It should be available soon; can you please update and try it again?
@fjogeleit now there are no problems with the deployment, thanks, but the kyverno-plugin API does not work (I think).
Values:

```yaml
plugin:
  kyverno:
    enabled: true
rest:
  enabled: true
api:
  logging: true
```
Log from kyverno-plugin:

```
1.712140075207587e+09 info cmd/run.go:103 server starts {"port": 8080}
```

There are no errors, but when I port-forward to the kyverno-plugin pod and try to curl http://localhost:8080/ready or http://localhost:8080/healthz, I get a 404 error.
The service has only three API endpoints:

- http://localhost:8080/api/v1/policies
- http://localhost:8080/api/v1/policies/{name}
- http://localhost:8080/api/v1/policies/exception

The http://localhost:8080/api/v1/policies endpoint is used for health checks.
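Since there is no dedicated /healthz or /ready route, a liveness or readiness probe has to target one of the endpoints above. A hypothetical probe snippet, assuming the plugin container listens on port 8080 as the startup log shows:

```yaml
# Sketch only: probe the policies endpoint, since no dedicated health route exists.
livenessProbe:
  httpGet:
    path: /api/v1/policies
    port: 8080
```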
@fjogeleit ok, thanks. On another cluster I also installed Policy Reporter from the 3.0.0 branch with these values:

```yaml
ui:
  enabled: true
  clusters:
    - name: cluster2
      host: http://kyverno.local/policy-reporter/ # As I understand it, this is the path to the Policy Reporter API
      plugins:
        - name: kyverno
          host: http://kyverno.local/ # As I understand it, this is the path to the kyverno-plugin API
```
I open the UI and can't find how to switch clusters (as it was possible in the previous version).
This config overrides the default, so you have to add the default cluster above it:

```yaml
ui:
  enabled: true
  clusters:
    - name: Default
      secretRef: policy-report-ui-default-cluster
    - name: cluster2
      host: http://kyverno.local/policy-reporter/
      plugins:
        - name: kyverno
          host: http://kyverno.local/
```
@fjogeleit ok, now I can switch, but when I switch to cluster2 I don't see any results (I checked all kinds). On the Default (local) cluster everything is fine.
I'll take a look
@fonru can you add the following config to your values:

```yaml
ui:
  server:
    overwriteHost: true
```

It will default to true in the next release; I forgot to set the default value to true.
You can also update to the latest release which sets this value to true by default.
@fjogeleit sorry for the late answer. I fetched the repo and installed it; there are no problems now. I will continue testing)))
Thanks a lot for helping.
@fjogeleit one more problem: after I configured an ingress to access the web UI at kyverno.local/ui:
```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /$1
  name: policy-reporter-ui
  namespace: policy-reporter
spec:
  ingressClassName: nginx
  rules:
    - host: kyverno.local
      http:
        paths:
          - backend:
              service:
                name: policy-reporter-ui
                port:
                  number: 8080
            path: /ui/(.*)
            pathType: Prefix
```
the page does not load because some JS files return a 404 error.
What am I doing wrong?
Let me take a look
@fjogeleit logs in the UI pod:

```
1.7126716913341775e+09 info auth/middleware.go:91 abort request {"path": "/", "err": "missing session key"}
```
Values:

```yaml
ui:
  enabled: true
  ingress:
    className: nginx
    enabled: true
    hosts:
      - host: kyverno.mydomain.mydomain.tech
        paths:
          - path: /
            pathType: ImplementationSpecific
  openIDConnect:
    callbackUrl: https://kyverno.mydomain.mydomain.tech
    clientId: kyverno.mydomain.mydomain.tech
    clientSecret: XXXXXXXXXXXXXXXX
```
When I authenticate with Keycloak, it redirect-loops me to https://kyverno.mydomain.mydomain.tech/login, but as I understand nothing is listening on this path, and I see a 307 status.
You need to redirect to the /callback route of the ui
The latest version should support a subpath configuration
@fjogeleit thanks. I configured /call, and auth is OK now. Is it normal that logs like the following are always generated in the UI pod?

```
1.71272961207395e+09 info auth/middleware.go:91 abort request {"path": "/", "err": "missing session key"}
1.712729612074096e+09 error auth/middleware.go:41 profile not found
```
Yeah, but I can check again whether they are still needed. Most of them were necessary during development.
@fjogeleit and one more question: do external secrets, e.g. from Vault, work normally in the secretRef key?
Currently it supports secrets with a predefined set of keys. What would a Vault secret look like? Is it JSON or something similar?
@fjogeleit I will look.
And about OpenID Connect: can I restrict access to the Policy Reporter UI? I haven't found anything in the docs and values...
Right now it supports only authentication in general, no authorization via roles or similar. That's on my todo list, but I have to check how I can implement it in a generic way for the different providers.
@fjogeleit thanks a lot.
I have one more problem... I deployed Policy Reporter on two clusters.
cluster1 values:

```yaml
ui:
  enabled: true
  server:
    overwriteHost: true
  openIDConnect:
    enabled: true
    discoveryUrl: 'xxxxxxxxx'
    callbackUrl: https://cluster1.mydomain.tech/callback
    clientId: "cluster1.mydomain.tech"
    clientSecret: "xxxxxxxxx"
  clusters:
    - name: Default
      secretRef: policy-report-ui-default-cluster
    - name: cluster2
      host: https://cluster2.mydomain.tech/policy-reporter/
      plugins:
        - name: kyverno
          host: https://cluster2.mydomain.tech/plugin/
  ingress:
    enabled: true
    className: "nginx"
    hosts:
      - host: cluster1.mydomain.tech
        paths:
          - path: /
            pathType: ImplementationSpecific
plugin:
  kyverno:
    enabled: true
rest:
  enabled: true
api:
  logging: true
```
cluster2 values:

```yaml
rest:
  enabled: true
ingress:
  enabled: true
  className: "nginx"
  hosts:
    - host: cluster2.mydomain.tech
      paths:
        - path: /
          pathType: ImplementationSpecific
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /$1
plugin:
  kyverno:
    enabled: true
    ingress:
      enabled: true
      className: "nginx"
      hosts:
        - host: cluster2.mydomain.tech
          paths:
            - path: /plugin
              pathType: ImplementationSpecific
      annotations:
        nginx.ingress.kubernetes.io/rewrite-target: /$1
ui:
  enabled: true
  ingress:
    enabled: true
    className: "nginx"
    hosts:
      - host: cluster2.mydomain.tech
        paths:
          - path: /
            pathType: ImplementationSpecific
```
I open the web UI at https://cluster1.mydomain.tech/ (auth with OIDC works fine), and then when I switch to cluster2 in the web UI, the page opens without any info.
The logs of the UI pod on cluster1 show these errors:

```
1.712840284955799e+09 error api/handler.go:176 failed to call core API {"error": "json: cannot unmarshal number into Go value of type []core.SourceCategoryTree"}
1.7128402850471404e+09 error api/handler.go:234 failed to call core api {"error": "json: cannot unmarshal number into Go value of type []string"}
```
Thanks for reporting, I will take a look at it.
Does the cluster switch work without authentication enabled?
You configured:

```yaml
ingress:
  enabled: true
  className: "nginx"
  hosts:
    - host: cluster2.mydomain.tech
      paths:
        - path: /
          pathType: ImplementationSpecific
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /$1
```

but you use https://cluster2.mydomain.tech/policy-reporter/. I don't see the subpath in your ingress config.
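A sketch of what the values-level ingress config could look like with the subpath included; the capture group is assumed to feed the rewrite-target annotation so that the /policy-reporter/ prefix is stripped before the request reaches the core API:

```yaml
# Sketch: match the subpath and strip it via the capture group.
ingress:
  enabled: true
  className: "nginx"
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /$1
  hosts:
    - host: cluster2.mydomain.tech
      paths:
        - path: /policy-reporter/(.*)
          pathType: ImplementationSpecific
```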
@fjogeleit I fixed it (forgot the .*) and redeployed the ingress annotations. It works fine with auth.

```
1.7129048219484746e+09 error api/handler.go:271 failed to load policies from plugin {"cluster": "cluster2", "plugin": "kyverno", "error": "EOF"}
1.7129048249718714e+09 error service/service.go:67 failed to load policy details from plugin {"cluster": "cluster2", "source": "kyverno", "policy": "podsecurity-subrule-restricted"
```

When I switch to cluster2 and open some policies in the Kyverno plugin, the error above appears in the UI pod of cluster1. Is that normal? As far as I can see everything works fine, but I can't understand why these errors show up in the UI pod(.
I continue testing)))
Oh yeah, you need to add /api to your cluster plugin config. The UI also works without the plugin, but it will not show the policy details.
So it should be like this:

```yaml
clusters:
  - name: Default
    secretRef: policy-report-ui-default-cluster
  - name: cluster2
    host: https://cluster2.mydomain.tech/policy-reporter/
    plugins:
      - name: kyverno
        host: https://cluster2.mydomain.tech/plugin/api
```
@fjogeleit nice, that fixed it.
I continue testing)))
Hi @fonru, have you tested the multi-cluster deployment? Could you please provide the configuration you used to get it to run?
What issue or question do you have?
A basic setup would be to add a new item to the cluster list:
```yaml
clusters:
  - name: Default
    secretRef: policy-report-ui-default-cluster
  - name: Cluster 2
    host: https://policy-reporter-api.com # URL to the REST API of the Policy Reporter instance in your second cluster
```

The first item is the default Policy Reporter API in the same cluster. It is configured as a secret which has a host key with the Core API URL and optional additional config like plugin URLs, HTTP Basic Auth credentials, etc.
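For illustration, such a default-cluster secret might look like the sketch below. Only the host key is confirmed above; the basic-auth keys (username, password) are an assumption, and the in-cluster service URL is a placeholder:

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: policy-report-ui-default-cluster
type: Opaque
stringData:
  host: http://policy-reporter:8080  # Core API URL (placeholder service name)
  username: admin                    # hypothetical HTTP Basic Auth credentials
  password: changeme
```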
Hi @devang704, these are my values for the two clusters.

Policy Reporter with GUI:

```yaml
ui:
  enabled: true
  server:
    overwriteHost: true
  openIDConnect:
    enabled: true
    discoveryUrl: 'https://kk.mydomain.local/realms/Common'
    callbackUrl: https://kyverno-ui.mydomain.local/callback
    clientId: "kyverno-ui.mydomain.local"
    clientSecret: "xxxxxxxxxxxxxxxxxx"
  clusters:
    - name: Default
      secretRef: policy-report-ui-default-cluster
    - name: second-cluster
      host: https://mydomain.local/policy-reporter/
      plugins:
        - name: kyverno
          host: https://mydomain.local/plugin/api
  ingress:
    enabled: true
    className: "nginx"
    hosts:
      - host: kyverno-ui.mydomain.local
        paths:
          - path: /
            pathType: ImplementationSpecific
plugin:
  kyverno:
    enabled: true
rest:
  enabled: true
api:
  logging: true
```
Policy Reporter without GUI:

```yaml
rest:
  enabled: true
ingress:
  enabled: true
  className: "nginx"
  hosts:
    - host: kyverno.mydomain.local
      paths:
        - path: /policy-reporter/(.*)
          pathType: ImplementationSpecific
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /$1
plugin:
  kyverno:
    enabled: true
    ingress:
      enabled: true
      className: "nginx"
      hosts:
        - host: kyverno.mydomain.local
          paths:
            - path: /plugin/(.*)
              pathType: ImplementationSpecific
      annotations:
        nginx.ingress.kubernetes.io/rewrite-target: /$1
```

It works fine; I continue testing.
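To make the path handling in the config above explicit: with rewrite-target: /$1, nginx replaces the matched path with the first capture group, so the public subpaths map onto the service-local paths. A sketch of the mapping, using the plugin endpoint mentioned earlier in this thread:

```yaml
# path: /policy-reporter/(.*)  +  rewrite-target: /$1
#   https://kyverno.mydomain.local/policy-reporter/<path>  ->  /<path> on the core API service
# path: /plugin/(.*)  +  rewrite-target: /$1
#   https://kyverno.mydomain.local/plugin/api/v1/policies  ->  /api/v1/policies on the kyverno-plugin service
```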
Hi @fjogeleit,
I am using the policy-reporter/policy-reporter Helm chart in version 2.22.5. As part of this Helm chart I can see two secrets getting created: 1) policy-reporter-kyverno-plugin-config and 2) policy-reporter-config.
Which secret needs to be configured in secretRef for the Default cluster?
This issue relates to the new UI v2, which is currently in an alpha state.
If you refer to the stable chart, you can check: https://kyverno.github.io/policy-reporter/guide/helm-chart-core/#external-clusters
In that version the default cluster is not defined as a secret; you only need to add the additional external clusters without the default one.
ok - I will test it. Thanks!
Hi, I am getting the error below in a multi-cluster configuration:

```
error app/main.go:110 failed to configure api proxies {"name": "Default", "error": "secrets \"policy-report-ui-default-cluster\" not found"}
error app/main.go:110 failed to configure api proxies {"name": "cluster2", "error": "missing core api configuration"}
```

My values:

```yaml
ui:
  enabled: true
  server:
    overwriteHost: true
  clusters:
```
As I wrote in the last comment, you are using the current stable v1 UI, which has a totally different configuration structure. Please check the documentation link I posted, which shows the setup for Policy Reporter UI v1.
Hi @fjogeleit and @fonru I was able to test multi-cluster deployment. Thanks!
@fjogeleit hi, please help...
I use a secret to define the Telegram target (chatID and token):

```yaml
target:
  telegram:
    chatID:
      - secretRef: "policy-reporter-tg"
    token:
      - secretRef: "policy-reporter-tg"
    minimumPriority: "warning"
    skipExistingOnStartup: true
```

The "policy-reporter-tg" secret contains the two keys chatID and token, but the target does not work(
What am I doing wrong? Without the secret everything works fine:
```yaml
target:
  telegram:
    chatID: {my_chat_id}
    token: {my_token}
    minimumPriority: "warning"
    skipExistingOnStartup: true
```

And the logs in the Policy Reporter pod:

```
1.7156132601547685e+09 error Telegram: PUSH FAILED {"statusCode": 404, "body": "{\"ok\":false,\"error_code\":404,\"description\":\"Not Found\"}"}
```
```yaml
target:
  telegram:
    chatID: {my_chat_id}
    secretRef: "policy-reporter-tg"
    minimumPriority: "warning"
    skipExistingOnStartup: true
```

should be the correct structure. The secret needs the token key; chatID is currently not supported as a secret value.
@fjogeleit I tried these values:

```yaml
target:
  telegram:
    chatID: "-XXXXXXXX" # my chat id starts with a minus)
    secretRef: "policy-reporter-tg"
    minimumPriority: "warning"
    skipExistingOnStartup: true
```

and it does not work...
The rendered Telegram target config in the Policy Reporter secret looks like this:

```yaml
telegram:
  config:
    chatID: "-XXXXXXX"
    token: ""
    webhook:
    certificate: ""
    skipTLS: false
  name:
  path:
  secretRef: "policy-reporter-tg"
  mountedSecret: ""
  minimumPriority: "warning"
  skipExistingOnStartup: true
```

Is it normal that the token is empty? And the errors in the Policy Reporter pod have now changed:

```
1.7156680474722931e+09 error Telegram: PUSH FAILED {"error": "Post \"https://api.telegram.org/bot6317557123:AAG2_JcepYUCqjlQ_lVM522a311Kfl0gAdw/sendMessage\": dial tcp 149.154.167.220:443: i/o timeout"}
```

If I delete the secretRef and put the token directly, everything works...((
```yaml
apiVersion: v1
kind: Secret
metadata:
  name: policy-reporter-tg
data:
  token: dG9rZW4=
```

This is how your secret needs to look: only a token key with the related value.
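When creating such a secret by hand, the token value under data must be base64-encoded. A small self-contained Python sketch that renders the manifest; the layout mirrors the example above, and policy-reporter-tg is the secret name used in this thread:

```python
import base64

def telegram_secret(token: str, namespace: str = "kyverno-pr") -> str:
    """Render a Secret manifest whose data.token is base64-encoded,
    matching the structure expected for secretRef."""
    encoded = base64.b64encode(token.encode()).decode()
    return (
        "apiVersion: v1\n"
        "kind: Secret\n"
        "metadata:\n"
        "  name: policy-reporter-tg\n"
        f"  namespace: {namespace}\n"
        "data:\n"
        f"  token: {encoded}\n"
    )

# The literal string "token" encodes to dG9rZW4=, as in the example above.
print(telegram_secret("token"))
```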
First I thought the problem was with external-secrets, but the external secret creates a secret equivalent to the one you mention above.
I tried to create the secret manually:

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: policy-reporter-tg
  namespace: kyverno-pr
data:
  token: {base64 encoded value of my token}
```
The describe output of my created secret:

```yaml
apiVersion: v1
data:
  token: {my token in base64}
kind: Secret
metadata:
  annotations:
    kubectl.kubernetes.io/last-applied-configuration: |
      {"apiVersion":"v1","data":{"token":"{my token in base64}"},"kind":"Secret","metadata":{"annotations":{},"name":"policy-reporter-tg","namespace":"kyverno-pr"}}
  creationTimestamp: "2024-05-14T10:12:54Z"
  name: policy-reporter-tg
  namespace: kyverno-pr
  resourceVersion: "434105395"
  uid: 19c9b36b-edc0-4ee4-9d11-1ace1cb7086e
type: Opaque
```
My values:

```yaml
target:
  telegram:
    chatID: "-4196593432"
    secretRef: "policy-reporter-tg"
    minimumPriority: "warning"
    skipExistingOnStartup: true
```

but it's not working (((((
Hello, I have a test environment with two clusters, "cluster1" and "cluster2".
Can I install only the UI on cluster1 so that it connects to cluster2?
I only need the UI on cluster1, which will show reports from the other clusters.
And one more question: are there any plans for authentication in the UI?