Closed: hnicke closed this issue 5 years ago.
That 0.0.0.0 value is fine. We should probably also add the following to the values and statefulset YAML:
```yaml
- name: EVENTSTORE_INT_HTTP_PREFIXES
  value: "http://*:2114/"
- name: EVENTSTORE_EXT_HTTP_PREFIXES
  value: "http://*:2113/"
```
The eventstore-configmap already contains these configurations:
```yaml
## The prefixes that the internal HTTP server should respond to.
EVENTSTORE_INT_HTTP_PREFIXES: "http://*:{{ .Values.intHttpPort }}/"
## The prefixes that the external HTTP server should respond to.
EVENTSTORE_EXT_HTTP_PREFIXES: "http://*:{{ .Values.extHttpPort }}/"
```
Is there a reason why EVENTSTORE_EXT_IP and EVENTSTORE_INT_IP are configured in the statefulset and not in the configmap?
I believe we can use the eventstore-configmap for EVENTSTORE_INT_IP and EVENTSTORE_EXT_IP as well. That way it will be clear where all the env settings are. [EDIT] These two settings use status.podIP rather than a user-defined value, so they probably need to stay in the statefulset? To be tested.
I've tried putting EVENTSTORE_INT_IP and EVENTSTORE_EXT_IP into the configmap and using 0.0.0.0 as the internal IP as well. It didn't work; the post-install script in the pipeline failed.
I reverted the changes. Ready for review from my side.
@ameier38 what do you think of changing from EVENTSTORE_EXT_IP=status.podIP to EVENTSTORE_EXT_IP=0.0.0.0 ?
I've stumbled across the EVENTSTORE_ADVERTISE_AS option in the docs. AFAIK it might be possible to set EVENTSTORE_EXT_IP and EVENTSTORE_INT_IP to 0.0.0.0 and EVENTSTORE_ADVERTISE_AS to $podIp. Maybe I'll try this later.
@hnicke I don't think the use of ADVERTISE_AS is required in this scenario. I suggest keeping both EXT_IP and INT_IP as they are, using status.podIP. Could you instead map an external port to the internal pod IP, or use a proxy service that does the routing during development?
I think it's important that this chart works out of the box with tools like kubectl port-forward or kubefwd. Every chart I know supports this workflow; it enables convenient in-cluster development. While we could add a proxy service to this chart, that adds complexity. Binding EVENTSTORE_EXT_IP to 0.0.0.0 seems like a legitimate solution and an easy fix.
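For context, the workflow in question can be sketched as follows (the pod name and port numbers below are assumptions for illustration, not values taken from this chart):

```shell
# Hypothetical sketch: forward the external TCP and HTTP ports of the
# first Event Store pod to localhost. Pod name and ports are assumptions.
kubectl port-forward pod/eventstore-0 1113:1113 2113:2113

# A client then connects to localhost; inside the pod this traffic
# arrives via the loopback interface, so it only reaches the server
# if the server binds 0.0.0.0 rather than the pod IP alone.
```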
@ameier38 what do you think of changing from EVENTSTORE_EXT_IP=status.podIP to EVENTSTORE_EXT_IP=0.0.0.0 ?
deferring to @jhinds
@riccardone I think binding to 0.0.0.0 is fine. I looked at a couple of other charts (postgres, mongodb) and they bind all IPs by default. This is also the default for the Event Store docker container: https://github.com/EventStore/eventstore-docker/blob/master/eventstore.conf#L2
I did some testing based on this comment.
@hnicke we should set the EXT_IP to 0.0.0.0 like you said, and also advertise the node address. If we just update the EXT_IP, the port forward will not work when the cluster size is > 1. I also think we should give people the option to change it so that it is not reachable from outside the cluster. In a separate PR we should add a network policy similar to postgres.
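A network policy of the kind mentioned could look roughly like this sketch (the labels, selector names, and ports here are assumptions for illustration, not something defined by this chart or PR):

```yaml
# Hypothetical sketch of a restrictive ingress policy; all names and
# ports below are assumptions, not part of this PR.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: eventstore
spec:
  podSelector:
    matchLabels:
      app: eventstore
  policyTypes:
    - Ingress
  ingress:
    - from:
        - podSelector:
            matchLabels:
              eventstore-client: "true"
      ports:
        - protocol: TCP
          port: 1113
        - protocol: TCP
          port: 2113
```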
Could you add the following to the env: section of the eventstore-statefulset.yaml:
```yaml
env:
  - name: EVENTSTORE_INT_IP
    valueFrom:
      fieldRef:
        fieldPath: status.podIP
  - name: EVENTSTORE_EXT_IP
    {{- if .Values.extIp }}
    value: {{ .Values.extIp }}
    {{- else }}
    valueFrom:
      fieldRef:
        fieldPath: status.podIP
    {{- end }}
  - name: EVENTSTORE_EXT_IP_ADVERTISE_AS
    valueFrom:
      fieldRef:
        fieldPath: status.podIP
```
and then add the following to the values.yaml:

```yaml
##
## External IP Address. Remove or set to `null` to disable connections from outside the cluster.
extIp: 0.0.0.0
```
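With a default like this, the value could then be overridden at install time; a usage sketch (the release name and chart path below are hypothetical):

```shell
# Hypothetical usage: keep the default, which binds all interfaces.
helm install eventstore ./eventstore

# Or disable connections from outside the cluster: Helm treats a
# value of null in --set as removing the key from the values.
helm install eventstore ./eventstore --set extIp=null
```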
@ameier38 ok, sounds good to me. @hnicke as soon as these changes are applied to the PR we will be ready to merge. Thanks.
Applied all changes. Thank you!
@hnicke it seems there is a problem in values.yaml, "error trailing spaces (trailing-spaces)", that is failing the test.
Fixed.
PR Checklist

What this PR does / why we need it:
When using kubectl port-forward for local development, the ext-tcp-port is not accessible. When trying to connect with a client, the following error is shown:

After some investigation it turns out that currently the eventstore application binds to <podIP>:<ext-tcp-port>. After this patch is applied, eventstore binds to *:<ext-tcp-port>. The pod is reachable both via in-cluster DNS and via port-forward.

Special notes for your reviewer: