Hey @neetra, if both containers are running in the same pod (as in the example you specified), they share the same network namespace and IP address. So if appA needs to talk to appB, it can simply use localhost (127.0.0.1) and the relevant port (5000). Give that a try and let me know if it doesn't work. Good luck!
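In other words, wherever appA currently calls appB by container name, point it at loopback instead. For example (the /health path here is just a hypothetical endpoint, assuming appB listens on port 5000 inside the pod):

# before (Docker Compose networking): http://applicationb:5000/health
# after (same-pod networking):
curl http://127.0.0.1:5000/health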
@MoShitrit thanks for the response, but I am aware of this. What I want to know is whether there is any way to access appB directly.
Hey @neetra
So backing up, I just want to make sure I understand the issue correctly.
By access, do you mean that you wish to access appB directly through a service and you're not sure how to do that? If that's the case, then all you have to do is either add another port to your existing service (kubectl edit service appaappbservice) and, under spec.ports, add another entry for appB, like this for example:
- name: appb
  port: 5000
  protocol: TCP
  targetPort: 5000
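For context, here's a sketch of what the full spec.ports block might look like after the edit, assuming the port 80 entry created by your original expose command (the appa port name is illustrative; Kubernetes requires all ports to be named once a service has more than one):

spec:
  type: NodePort
  ports:
    - name: appa
      port: 80
      protocol: TCP
      targetPort: 80
    - name: appb
      port: 5000
      protocol: TCP
      targetPort: 5000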
Or, you can create an additional service by running the same expose command you specified earlier, only give it a different name and a different targetPort, like this:
kubectl expose pod appaappb-deployment-8656cfcdff-wd4hv --name=appbservice --type="NodePort" --target-port 5000 --port 5000
Another thing to bear in mind: once the service is updated, you'll need to find the actual node port that k8s allocated to it, since (unless you configured it differently) NodePorts are allocated from a high-numbered range, 30000-32767. So run kubectl get svc appaappbservice and check the mapping between the ports. You'll see something like this:
$ kubectl get svc appaappbservice
NAME              TYPE       CLUSTER-IP      EXTERNAL-IP   PORT(S)        AGE
appaappbservice   NodePort   100.71.239.74   <none>        80:32588/TCP   610d
In the above example, you'll need to access your minikube instance using port 32588.
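If you don't want to read the port mapping by hand, minikube can print the reachable URL for you. Something like this should work (the node port 32588 is just the illustrative value from the output above):

# ask minikube for the URL of the NodePort service
minikube service appaappbservice --url

# or build the URL yourself from the node IP and the allocated node port
curl http://$(minikube ip):32588/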
Hey @neetra! I was just curious if you had a chance to try that yet. Is everything good now?
Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.
If this issue is safe to close now please do so with /close.
Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale
/close
@MoShitrit: Closing this issue.
Problem: I have two applications, say applicationA and applicationB.
Using Docker Compose (compose.yml, version: '3.4'):
In the Dockerfile of applicationA, port 80 is exposed; in the Dockerfile of applicationB, port 5000 is exposed. The exposed port of applicationB is not mapped to an external port, so it is not accessible directly. applicationA accesses applicationB via http://containername:containerexposedport, i.e. http://applicationb:5000.
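A minimal sketch of that Compose setup (the build contexts and the published port mapping are illustrative assumptions, not taken from the original file):

version: '3.4'
services:
  applicationa:
    build: ./applicationa   # Dockerfile exposes port 80
    ports:
      - "80:80"             # appA is published to the host
  applicationb:
    build: ./applicationb   # Dockerfile exposes port 5000
    # no ports mapping: appB is reachable only from other containers
    # on the same Compose network, e.g. http://applicationb:5000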
Using Minikube: Deployment.yaml

apiVersion: apps/v1
kind: Deployment
metadata:
  name: appaappb-deployment
  labels:
    app: appaappb
spec:
  replicas: 1
  selector:
    matchLabels:
      app: appaappb
  template:
    metadata:
      labels:
        app: appaappb
    spec:
      containers:
        - name: applicationa
          imagePullPolicy: Never
        - name: applicationb
          imagePullPolicy: Never
Service (Map external port)
kubectl expose pod appaappb-deployment-8656cfcdff-wd4hv --name=appaappbservice --type="NodePort" --target-port 80 --port 80
Now, since appA and appB are no longer on the same Docker network, appA cannot access appB via the container name and the container's internal port.
So how can appA access appB?
Information:
Minikube version: v1.9.2
Docker version: 19.0.3