gjovanov / mongo-swarm

Mongo ReplicaSet Cluster with Sharding (optional) deployed in Docker Swarm automated with Vagrant and Ansible

i cant connect to mongo #1

Closed nguyentanloc518091 closed 5 years ago

nguyentanloc518091 commented 5 years ago

I ran `vagrant up` and it completed successfully, but I can't connect to Mongo with the Studio 3T tool. Please help me!

Connection URI: 10.100.195.201:27117


The services started successfully.


gjovanov commented 5 years ago

By default the cluster doesn't expose any ports outside itself. If your MongoDB client (Studio 3T) is outside the cluster network, it won't be able to reach the cluster. This is by design: since your app will probably run in the same cluster, it will have access to the Mongo instances there.

To connect from outside, you will need to open up the ports.

Use either UFW or iptables for that. Then you can test with telnet whether you can reach the cluster from your machine.
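As an aside, a quick reachability check from the client machine can be sketched like this (bash-only, using its `/dev/tcp` feature; the IP and ports below are the ones mentioned in this thread, so adjust them to your setup):

```shell
#!/usr/bin/env bash
# Sketch: check whether a Mongo port is reachable from this machine.
# Host and ports are assumptions taken from this thread.
check_port() {
  if timeout 2 bash -c "echo > /dev/tcp/$1/$2" 2>/dev/null; then
    echo "$1:$2 reachable"
  else
    echo "$1:$2 unreachable"
  fi
}
check_port 10.100.195.201 27117
check_port 10.100.195.201 27217
```

This does roughly what the telnet test does, without needing telnet installed.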

Let me know if you still have issues accessing the cluster.

nguyentanloc518091 commented 5 years ago

@gjovanov do you have a tutorial on this topic? I don't know how to configure and run Vagrant for this. Thank you!


gjovanov commented 5 years ago

@nguyentanloc518091 - This sounds like a firewall issue; you can use ufw to open the ports.

Log on to the mongo-01 and mongo-02 VMs using SSH:

```shell
vagrant ssh mongo-01
...
vagrant ssh mongo-02
```

Check the status of the Ubuntu firewall:

```shell
ufw status
```

If it's disabled, enable it:

```shell
ufw enable
```

To open the ports, run:

```shell
ufw allow 27117/tcp
ufw allow 27217/tcp
```

Also, you will need to enable port forwarding on those two VMs (mongo-01 and mongo-02): check this

Something along these lines:

```ruby
(1..2).each do |i|
  config.vm.define "mongo-#{i.to_s.rjust(2, "0")}" do |d|
    d.vm.box = "ubuntu/xenial64"
    d.vm.network "private_network", ip: "10.100.195.2#{i.to_s.rjust(2, "0")}", netmask: "255.255.0.0"
    # NEW LINE: note that #{...} interpolation is only valid inside strings,
    # so the port numbers must be plain Ruby expressions here
    d.vm.network "forwarded_port", guest: 27116 + i, host: 27116 + i
    d.vm.hostname = "mongo-#{i.to_s.rjust(2, "0")}"
    d.vm.provision :shell, inline: "sudo apt-get install -y python"
    d.vm.provider "virtualbox" do |v|
      v.customize ["modifyvm", :id, "--natdnshostresolver1", "on"]
      v.memory = 1024
    end
  end
end
```
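For reference, the `27116 + i` port arithmetic in that loop works out as below (a small shell sketch of the same expression, not part of the Vagrantfile itself):

```shell
#!/usr/bin/env bash
# Sketch: reproduce the 27116 + i forwarded-port arithmetic
# from the Vagrantfile loop above.
port_for() { echo $((27116 + $1)); }
for i in 1 2; do
  printf 'mongo-%02d -> forwarded port %s\n' "$i" "$(port_for "$i")"
done
# mongo-01 gets 27117, mongo-02 gets 27118
```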

And you should be good to go.

Keep in mind this is not the recommended approach, as it's not a good idea to have your Mongo cluster publicly exposed. A better approach is to keep your app within the cluster; you can always SSH into the Mongo VMs from inside to query and analyze the data.

nguyentanloc518091 commented 5 years ago

I added the line `config.vm.network "forwarded_port", guest: #{ 27116 + i}, host: #{ 27116 + i} # NEW LINE`, but Vagrant does not run.


gjovanov commented 5 years ago

Hi @nguyentanloc518091, I have made a new version that:

  1. Exposes the ports from mongo-01:27017 to host 27018 and from mongo-02:27017 to host 27019
  2. Deploys a demo app (mongo-log-api)

Please note that you will need to recreate the whole Mongo cluster (with a cleanup of the previous data), as there are some breaking changes: the Vagrant group names don't support - (dash) anymore.

The following ports are available on the Host where you deploy these VMs:

  1. host port 8091 mapped to mongo-01:8080 (mongo log API app)
  2. host port 8092 mapped to mongo-02:8080 (mongo log API app)
  3. host port 27018 mapped to mongo-01:27017
  4. host port 27019 mapped to mongo-02:27017

Test Write to and Read from the Mongo Cluster

To test writes to and reads from the Mongo cluster, you can use the deployed Mongo Log API NodeJS app from the Docker repository gjovanov/mongo-log-api, also available in the /app folder.

This API app is deployed in the cluster on VMs mongo-01 and mongo-02 via the same Docker stack file: /ansible/roles/mongo-swarm/docker_stack.yml

Write:

```shell
curl -d '{ }' -H "Content-Type: application/json" -X POST http://localhost:8091/api/log/write
```

Read:

```shell
curl http://localhost:8091/api/log/read?size=20
```

Let me know if you encounter any issues or have questions.

nguyentanloc518091 commented 5 years ago

I used both Postman and curl and got the same error.

duycuong87vn commented 5 years ago

@nguyentanloc518091, what's happening on your side? Can you try destroying all the Vagrant nodes and bringing them up again?

gjovanov commented 5 years ago

Guys, you need to destroy the whole cluster and recreate it due to the breaking changes I mentioned.

Until the middle of next week I'm on a business trip with a limited internet connection.

In case you encounter the same issue after recreating the whole cluster, please let me know and I can have a look next week.

nguyentanloc518091 commented 5 years ago

@duycuong87vn are you using Windows, macOS, or Linux? Do you have a Skype nick? My Skype: nguyentanloc.vn

I see you use the IP 192.168.1.200; in @gjovanov's source it is 10.100.198.200?

gjovanov commented 5 years ago

Hi @duycuong87vn, did it work after restarting the VMs and recreating the whole Mongo cluster?

Will close this for now; feel free to reopen if the issue still persists.

gjovanov commented 5 years ago

Hi @nguyentanloc518091, just for info: I've pushed some changes that enable connecting to the mongos instances from outside the Mongo cluster, e.g. from the host.

Please pull the latest changes and redeploy the mongo stack:

```shell
user@HOST: vagrant ssh manager-01
vagrant@manager-01: docker stack rm mongo
vagrant@manager-01: cd /vagrant/ansible/roles/mongo-swarm
vagrant@manager-01: docker stack deploy --compose-file docker_stack.yml mongo
```

Also, on the HOST check whether the subnet masks are the old ones (255.255.255.0) and, if so, fix them to 255.255.0.0:

```shell
user@HOST: ifconfig
```

If the mask of any vboxnet network is 255.255.255.0 (e.g. vboxnet0), fix it to 255.255.0.0 by running:

```shell
vboxmanage hostonlyif ipconfig vboxnet0 --ip 10.100.198.1 --netmask 255.255.0.0
```

Please keep an eye on the IP address: in my case vboxnet0 is 10.100.198.1; in your case it might be a different one.

After you have made sure the subnets have the proper masks, you can connect to mongos from your host like this (assuming you have the mongo client installed on your host):

```shell
user@HOST: mongo "mongodb://mongo-01:27017,mongo-02:27017/log"
```
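One caveat: that connection string uses the VM hostnames, which only resolve from the HOST if you map them yourself, e.g. via /etc/hosts. A sketch of such a fragment, assuming the private IPs from the Vagrantfile snippet earlier in this thread (check the actual IPs in your setup, as they may differ):

```
# /etc/hosts fragment on the HOST (IPs are assumptions, verify yours)
10.100.195.201 mongo-01
10.100.195.202 mongo-02
```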

jaschaio commented 4 years ago

I needed to append --bind_ip_all to the mongos command in order to connect to it from any other container, another mongos instance, or from outside the cluster.