Open oburd opened 7 months ago
Version Infinispan quay.io/infinispan/server:14.0
There is an error
Secret:
identities-batch: user create user_administrator -p -g admin
user create user_keycloak -p -g application
user create user_monitor -p --users-file metrics-users.properties --groups-file metrics-groups.properties
password: password
username: user_monitor
There's an additional _ in your user create commands which isn't valid. Are you sure this content is being used? It should be in the form user create user administrator -p -g admin.
Also, this doesn't seem to match the screenshot you provided which says that you're logged in as the user "admin" but such a user doesn't exist in your batch file.
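For reference, one way this batch content typically gets wired up is via a Secret created from a file. This is a sketch only: the user names, passwords, Secret name, and namespace below are placeholders, not values from this thread, and the kubectl step is shown as a comment.

```shell
# Sketch: building an identities batch file in the documented form
# (placeholder users and passwords).
cat > identities.batch <<'EOF'
user create administrator -p changeme -g admin
user create keycloak -p changeme -g application
EOF

# The Secret referenced by deploy.security.secretName would then be created with:
#   kubectl create secret generic identities \
#     --from-file=identities-batch=identities.batch -n keycloak
cat identities.batch
```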
1) I took away the password; with the password it looks like: user create user administrator -p blablablala -g admin
2) The screenshot is an example of the error, for visualisation of the problem.
I've attached my screenshot with the error.
Reminder: the administrator user has the role admin.
@oburd I just tested and I was able to create a cache as expected.
I used the values.yaml included with the chart, but added deploy.secretName: 'identities'. The 'identities' Secret contained the following identities-batch value: user create administrator -p test -g admin.
Can you do a network inspect when you click the "create" button so that we can see the exact request/response received from the server. Also, can you check the server logs to see if any exceptions/logs are reported.
@ryanemerson Here is my values.yaml for Infinispan:
images:
  # [USER] The container images for server pods.
  server: quay.io/infinispan/server:14.0
  initContainer: registry.access.redhat.com/ubi8-micro
deploy:
  # [USER] Specify the number of nodes in the cluster.
  replicas: 2
  clusterDomain: cluster.local
  container:
    extraJvmOpts: ""
    libraries: ""
    # [USER] Define custom environment variables using standard K8s format
    # env:
    #  - name: STANDARD_KEY
    #    value: standard value
    #  - name: CONFIG_MAP_KEY
    #    valueFrom:
    #      configMapKeyRef:
    #        name: special-config
    #        key: special.how
    #  - name: SECRET_KEY
    #    valueFrom:
    #      secretKeyRef:
    #        name: special-secret
    #        key: special.how
    env:
    storage:
      size: 1Gi
      storageClassName: ""
      # [USER] Set `ephemeral: true` to delete all persisted data when clusters shut down or restart.
      ephemeral: true
    resources:
      # [USER] Specify the CPU limit and the memory limit for each pod.
      limits:
        cpu: 1000m
        memory: 1024Mi
      # [USER] Specify the maximum CPU requests and the maximum memory requests for each pod.
      requests:
        cpu: 1000m
        memory: 1024Mi
  security:
    secretName: ispn-connect-secret
    batch: ""
  expose:
    # [USER] Specify `type: ""` to disable network access to clusters.
    type: Route
    nodePort: 0
    host: dummy
    annotations:
      - key: kubernetes.io/ingress.class
        value: alb
      - key: alb.ingress.kubernetes.io/group.name
        value: dummy
      - key: alb.ingress.kubernetes.io/group.order
        value: 'dummy'
      - key: alb.ingress.kubernetes.io/scheme
        value: internal
      - key: alb.ingress.kubernetes.io/target-type
        value: ip
      - key: alb.ingress.kubernetes.io/listen-ports
        value: '[{"HTTP": 80}, {"HTTPS":443}]'
      - key: alb.ingress.kubernetes.io/certificate-arn
        value: dummy
      - key: alb.ingress.kubernetes.io/ssl-redirect
        value: '443'
      - key: alb.ingress.kubernetes.io/healthcheck-path
        value: /rest/v2/cache-managers/default/health/status
  monitoring:
    enabled: false
  logging:
    categories:
      # [USER] Specify the FQN of a package from which you want to collect logs.
      - category: com.arjuna
        # [USER] Specify the level of log messages.
        level: warn
      # No need to warn about not being able to TLS/SSL handshake
      - category: io.netty.handler.ssl.ApplicationProtocolNegotiationHandler
        level: error
  makeDataDirWritable: false
  nameOverride: ""
  resourceLabels: []
  podLabels:
    - key: microservice
      value: infinispan
  svcLabels: []
  tolerations: []
  nodeAffinity: {}
  nodeSelector: {}
  infinispan:
    cacheContainer:
      # [USER] Add cache, template, and counter configuration.
      name: default
      statistics: "true"
      replicatedCacheConfiguration:
        name: "replicated-template"
        mode: "ASYNC"
        statistics: "true"
        encoding:
          mediaType: "application/x-protostream"
        memory:
          storage: HEAP
        security:
          authorization:
            enabled: true
            roles:
              - application
      caches:
        realms:
          replicatedCache:
            configuration: "replicated-template"
        users:
          replicatedCache:
            configuration: "replicated-template"
        sessions:
          replicatedCache:
            configuration: "replicated-template"
        authenticationSessions:
          replicatedCache:
            configuration: "replicated-template"
        offlineSessions:
          replicatedCache:
            configuration: "replicated-template"
        clientSessions:
          replicatedCache:
            configuration: "replicated-template"
        offlineClientSessions:
          replicatedCache:
            configuration: "replicated-template"
        loginFailures:
          replicatedCache:
            configuration: "replicated-template"
        authorization:
          replicatedCache:
            configuration: "replicated-template"
        work:
          replicatedCache:
            configuration: "replicated-template"
        keys:
          replicatedCache:
            configuration: "replicated-template"
        actionTokens:
          replicatedCache:
            configuration: "replicated-template"
      # [USER] Specify `security: null` to disable security authorization.
      security:
        authorization: {}
      transport:
        cluster: ${infinispan.cluster.name:cluster}
        node-name: ${infinispan.node.name:}
        stack: kubernetes
    server:
      endpoints:
        # [USER] Hot Rod and REST endpoints.
        - securityRealm: default
          socketBinding: default
          connectors:
            rest:
              restConnector:
                authentication:
                  mechanisms: BASIC
            hotrod:
              hotrodConnector:
            # [MEMCACHED] Uncomment to enable Memcached endpoint
            # memcached:
            #   memcachedConnector:
            #     socketBinding: memcached
        # [METRICS] Metrics endpoint for cluster monitoring capabilities.
        - connectors:
            rest:
              restConnector:
                authentication:
                  mechanisms: BASIC
          securityRealm: metrics
          socketBinding: metrics
      interfaces:
        - inetAddress:
            value: ${infinispan.bind.address:127.0.0.1}
          name: public
      security:
        credentialStores:
          - clearTextCredential:
              clearText: secret
            name: credentials
            path: credentials.pfx
        securityRealms:
          # [USER] Security realm for the Hot Rod and REST endpoints.
          - name: default
            # [USER] Comment or remove this properties realm to disable authentication.
            propertiesRealm:
              groupProperties:
                path: groups.properties
              groupsAttribute: Roles
              userProperties:
                path: users.properties
          # [METRICS] Security realm for the metrics endpoint.
          - name: metrics
            propertiesRealm:
              groupProperties:
                path: metrics-groups.properties
                relativeTo: infinispan.server.config.path
              groupsAttribute: Roles
              userProperties:
                path: metrics-users.properties
                relativeTo: infinispan.server.config.path
      socketBindings:
        defaultInterface: public
        portOffset: ${infinispan.socket.binding.port-offset:0}
        socketBinding:
          # [USER] Socket binding for the Hot Rod and REST endpoints.
          - name: default
            port: 11222
          # [METRICS] Socket binding for the metrics endpoint.
          - name: metrics
            port: 11223
          # [MEMCACHED] Uncomment to enable Memcached endpoint
          # - name: memcached
          #   port: 11221
Description of the Secret (all values base64-encoded):
apiVersion: v1
data:
  identities-batch: dX2342342354235dsasdasdasd
  password: sdasdasdasd
  username: adssadasdasdasd
kind: Secret
metadata:
  creationTimestamp: "2023-11-16T09:32:07Z"
  name: ispn-connect-secret
  namespace: keycloak
  resourceVersion: "19488680"
  uid: 813cc5d6-252c-40dc-8db2-354ef8be8ef4
type: Opaque
Here is my Secret after decoding:
identities-batch: user create administrator -p blalbalblbl -g admin
user create keycloak -p blalblbblala -g application
user create monitor -p blalalalala --users-file metrics-users.properties --groups-file metrics-groups.properties
password: blalalalalala
username: monitor
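As a sanity check, the relationship between the encoded and decoded listings above is plain base64. A quick round trip (with a placeholder password, not the real one) looks like:

```shell
# Round-trip sketch: a Secret's identities-batch value is just the
# base64 encoding of the batch commands (placeholder password below).
batch='user create administrator -p secret -g admin'
encoded=$(printf '%s' "$batch" | base64)
printf '%s' "$encoded" | base64 -d
```

Decoding prints the original batch line back, which is a quick way to confirm the Secret really contains the commands you intended.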
I have the same error
The interesting thing is that if I didn't have access, I wouldn't be able to log in to the console at all.
In your values.yaml you have defined the secret as deploy.security.secretName: ispn-secret-users, but in your example Secret you have the name "ispn-connect-secret". Are you referencing the correct Secret, or is this just an error in your example?
I have changed it to the correct one in my post. That's only an example; I wanted to show that I use the secret, nothing more.
There are no error logs in the Infinispan pods.
UPD: P.S. I have also created the secret manually, and nothing works; the error is the same.
The error is still present.
This error means that authentication succeeded for the administrator user, but authorization did not pass, and I don't know why.
Should I add something here: deploy.infinispan.security.authorization?
infinispan:
  cacheContainer:
    # [USER] Add cache, template, and counter configuration.
    name: default
    # [USER] Specify `security: null` to disable security authorization.
    security:
      authorization: {}
@oburd would you mind telling me please what the result is of the REST API call http://THE HOST:11222/rest/v2/security/user/acl? The call is made when we connect to the console; you can see it in the browser.
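If the browser network tab is awkward, the same endpoint can also be hit with curl. The snippet below is only a sketch: the host and credentials are placeholders, and the permission check is illustrated against a canned response rather than a live server.

```shell
# Fetch the ACL directly (host and credentials are placeholders):
#   curl -u administrator:changeme http://THE_HOST:11222/rest/v2/security/user/acl
# Then check whether the global permissions include CREATE, which cache
# creation from the console needs. Illustrated here on a canned response:
response='{"subject":[{"name":"administrator","type":"NamePrincipal"}],"global":["ADMIN","CREATE"]}'
if printf '%s' "$response" | grep -q '"CREATE"'; then
  echo "global permissions include CREATE"
fi
```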
@oburd should return a payload similar to this one:
{
  "subject": [
    { "name": "admin", "type": "NamePrincipal" },
    { "name": "admin", "type": "GroupPrincipal" }
  ],
  "global": [
    "LIFECYCLE", "READ", "WRITE", "EXEC", "LISTEN", "BULK_READ", "BULK_WRITE",
    "ADMIN", "CREATE", "MONITOR", "ALL", "ALL_READ", "ALL_WRITE"
  ],
  "caches": {
    "indexed-cache": [
      "LIFECYCLE", "READ", "WRITE", "EXEC", "LISTEN", "BULK_READ", "BULK_WRITE",
      "ADMIN", "CREATE", "MONITOR", "ALL", "ALL_READ", "ALL_WRITE"
    ],
    "___protobuf_metadata": [
      "LIFECYCLE", "READ", "WRITE", "EXEC", "LISTEN", "BULK_READ", "BULK_WRITE",
      "ADMIN", "CREATE", "MONITOR", "ALL", "ALL_READ", "ALL_WRITE"
    ],
    "default": [
      "LIFECYCLE", "READ", "WRITE", "EXEC", "LISTEN", "BULK_READ", "BULK_WRITE",
      "ADMIN", "CREATE", "MONITOR", "ALL", "ALL_READ", "ALL_WRITE"
    ],
    ...
  }
}
This will help with debugging, since the error is coming from the server.
@oburd I would also need to know the result of http://YOURHOST:11222/rest/v2/security/roles
@karesti okay, I will do that and come back with answers. FYI: we expose Infinispan through an AWS ALB, so we access it via an external link: https://DNS_NAME_OF_INFINITY
@karesti Here is the JSON for the administrator user (role admin) that I logged in as, from https://dns_name/rest/v2/security/user/acl:
{
  "subject": [
    { "name": "administrator", "type": "NamePrincipal" },
    { "name": "admin", "type": "RolePrincipal" }
  ],
  "global": [
    "LIFECYCLE", "READ", "WRITE", "EXEC", "LISTEN", "BULK_READ", "BULK_WRITE",
    "ADMIN", "CREATE", "MONITOR", "ALL", "ALL_READ", "ALL_WRITE"
  ],
  "caches": {
    "realms": [],
    "authenticationSessions": [],
    "sessions": [],
    "___protobuf_metadata": [
      "LIFECYCLE", "READ", "WRITE", "EXEC", "LISTEN", "BULK_READ", "BULK_WRITE",
      "ADMIN", "CREATE", "MONITOR", "ALL", "ALL_READ", "ALL_WRITE"
    ],
    "keys": [],
    "clientSessions": [],
    "work": [],
    "loginFailures": [],
    "users": [],
    "authorization": [],
    "offlineClientSessions": [],
    "___script_cache": [
      "LIFECYCLE", "READ", "WRITE", "EXEC", "LISTEN", "BULK_READ", "BULK_WRITE",
      "ADMIN", "CREATE", "MONITOR", "ALL", "ALL_READ", "ALL_WRITE"
    ],
    "offlineSessions": [],
    "actionTokens": []
  }
}
Where you see empty arrays for names like realms and sessions, these are the caches I configured in the config file I provided above.
https://dns_name/rest/v2/security/roles
[
"observer",
"application",
"admin",
"monitor",
"deployer"
]
@oburd would you mind creating a user called "admin" with admin role and try?
@karesti let's try
I would also like to see http://YOURHOST:11222/rest/v2/server/config please.
@karesti I have created an admin user and received the same error.
Here is the server config from https://dns_name/rest/v2/server/config:
<server>
  <interfaces>
    <interface name="public">
      <inet-address value="0.0.0.0"/>
    </interface>
  </interfaces>
  <socket-bindings port-offset="0" default-interface="public">
    <socket-binding name="default" port="11222" interface="public"/>
    <socket-binding name="metrics" port="11223" interface="public"/>
  </socket-bindings>
  <security>
    <credential-stores>
      <credential-store name="credentials" path="credentials.pfx">
        <clear-text-credential credential="***"/>
      </credential-store>
    </credential-stores>
    <security-realms>
      <security-realm name="default">
        <properties-realm groups-attribute="Roles">
          <user-properties digest-realm-name="default" path="users.properties"/>
          <group-properties path="groups.properties"/>
        </properties-realm>
      </security-realm>
      <security-realm name="metrics">
        <properties-realm groups-attribute="Roles">
          <user-properties digest-realm-name="metrics" path="metrics-users.properties"/>
          <group-properties path="metrics-groups.properties"/>
        </properties-realm>
      </security-realm>
    </security-realms>
  </security>
  <endpoints>
    <endpoint socket-binding="default" security-realm="default">
      <hotrod-connector name="hotrod-default" socket-binding="default">
        <authentication security-realm="default">
          <sasl mechanisms="SCRAM-SHA-512 SCRAM-SHA-384 SCRAM-SHA-256 SCRAM-SHA-1 DIGEST-SHA-512 DIGEST-SHA-384 DIGEST-SHA-256 DIGEST-SHA CRAM-MD5 DIGEST-MD5"/>
        </authentication>
      </hotrod-connector>
      <rest-connector name="rest-default" socket-binding="default">
        <authentication mechanisms="BASIC" security-realm="default"/>
      </rest-connector>
    </endpoint>
    <endpoint socket-binding="metrics" security-realm="metrics">
      <rest-connector name="rest-metrics" socket-binding="metrics">
        <authentication mechanisms="BASIC" security-realm="metrics"/>
      </rest-connector>
    </endpoint>
  </endpoints>
</server>
So it's expected that the admin user can't manipulate the caches. The config you set up in the replicated cache template states that only the "application" role can actually access the caches. If you want "admin" to access the caches too, you need to add it to the config:
replicatedCacheConfiguration:
  name: "replicated-template"
  mode: "ASYNC"
  statistics: "true"
  encoding:
    mediaType: "application/x-protostream"
  memory:
    storage: HEAP
  security:
    authorization:
      enabled: true
      roles:
        - application
        - admin
I'm going to try this config locally.
I know about this, but I want the admin user to be able to create caches without problems.
@oburd would you mind testing with a single replica please? instead of 2 nodes
Well, we can, but we need 2 replicas anyway. FYI: my colleagues will also write here instead of me, so if you see a different user, it's okay. Thank you.
@oburd I understand you need two replicas, I'm asking just for debugging reasons
Hello! @ryanemerson I have an error: "Unexpected error creating the cache with the provided configuration. Unauthorized action." What can it be? I have added a secret with user creation.
In the doc https://infinispan.org/docs/helm-chart/main/helm-chart.html#adding-multiple-credentials_configuring-authentication the following is written:
Secret:
identities-batch: user create user_administrator -p -g admin
user create user_keycloak -p -g application
user create user_monitor -p --users-file metrics-users.properties --groups-file metrics-groups.properties
password: password
username: user_monitor
Maybe I missed something?