Closed: oli-g closed this issue 9 years ago.
I'm not sure if I understand. Is the question how to set/change the Erlang cookie?
If so, similarly to replacing the config file (described in the README):

```shell
-v `pwd`/ejabberd.cfg:/opt/mongooseim/rel/mongooseim/etc/ejabberd.cfg
```

by analogy, something like this:

```shell
-v `pwd`/vm.args:/opt/mongooseim/rel/mongooseim/etc/vm.args
```
As you can see, MongooseIM reads the cookie from the `/opt/mongooseim/rel/mongooseim/etc/vm.args` file. The default cookie is `ejabberd` (reference).
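For reference, `vm.args` is a plain text file of Erlang VM flags; a minimal sketch of the relevant lines (the exact contents shipped in the image may differ):

```
## Node name
-sname mongooseim@localhost

## Cookie for distributed Erlang
-setcookie ejabberd
```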
Ah, I see the problem now. This line expects `mongooseim@localhost` in `vm.args`, so that it can replace it with the correct hostname.
All in all, if you need it quickly, you can prepare the `vm.args` file yourself, but remember to set `-sname mongooseim@localhost`; I guess it should work. Alternatively, you can wait a couple of hours.
I will make it configurable via an environment variable, so it will be something like this:

```shell
docker run -d -t -h mim2 --name mim2 --link mim1:mim1 -e CLUSTER_WITH=mim1 -e ERLANG_COOKIE=mongooseisthebest
```
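Under the hood, applying such an `ERLANG_COOKIE` variable would presumably just rewrite the `-setcookie` line in `vm.args` at container start. A sketch of that idea (the file contents and cookie values are taken from this thread; this is not the actual entrypoint script):

```shell
# Sample vm.args with the default cookie (values from this thread)
cat > vm.args <<'EOF'
-sname mongooseim@localhost
-setcookie ejabberd
EOF

# Rewrite the cookie the way an ERLANG_COOKIE env var could be applied
ERLANG_COOKIE=mongooseisthebest
sed -i "s/^-setcookie .*/-setcookie ${ERLANG_COOKIE}/" vm.args

grep '^-setcookie' vm.args   # prints: -setcookie mongooseisthebest
```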
Does this address your issue, or did I get it completely wrong? :smile:
Wow! Thanks, you've been fast! :smile:
I think I misunderstood the MongooseIM documentation: I thought the cookie was a file that every node would have to share. But it's not a file, it's just a string parameter in the `vm.args` file.
The thing is that I cannot make a cluster work, so I thought the problem was differing cookie files. But it seems the problem is somewhere else, because all the nodes already use the same cookie (the default `ejabberd`).
Thanks for your support! I will keep trying to set up a cluster (I'm running the Docker containers on different virtual machines, not on the same host, so `--link` cannot work in this case).
In the end, I think that having the `ERLANG_COOKIE` variable is pretty useless. I mean, why would you want to change its default value? But if it could be useful for some use cases (which I'm not aware of at the moment)... you could think about adding it.
Why make it configurable? Maybe for some kind of "security" reasons, but I hope that nobody is so "brave" as to expose distributed Erlang on a public interface :).
Regarding `--link`, Docker has something called the ambassador pattern to make it work across multiple hosts, but IMO it is ugly and a bit hacky, and I haven't tried it with Mongoose.
Going back to your problem, you can try to use `--add-host` instead of the link option. I haven't tried it, but I think it will add the given IP to the `/etc/hosts` file in the container filesystem, which is what we need and what the link option does. Then

```shell
docker run -d -t -h mim2 --name mim2 --add-host mim1:127.0.0.1 -e CLUSTER_WITH=mim1
```

might work. Let me know if it works, or give more details about your problem; maybe I will be able to help.
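The effect of `--add-host mim1:127.0.0.1` is just an extra line in the container's `/etc/hosts`; simulated here on a local copy of the file (a sketch, not run inside a container):

```shell
# Simulate what --add-host does: append a name/IP mapping to /etc/hosts.
# A local copy is used so this can be run outside a container.
cp /etc/hosts hosts.copy
echo "127.0.0.1 mim1" >> hosts.copy
grep 'mim1' hosts.copy   # prints: 127.0.0.1 mim1
```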
One more thing: have you forwarded the following ports: 4369 (epmd) and 9100 (distributed Erlang)?
They are required to establish the distributed Erlang connection (the cluster). It is not needed on one machine, because the containers can talk to each other: there are no port/firewall restrictions on the containers' network, unless you specify a different strategy.
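To verify reachability from the other host, a quick probe of those two ports might look like this (`192.168.200.2` is the master's address from the Vagrantfile later in this thread; this is a sketch, not part of the image):

```shell
# Probe the clustering ports on the master node: 4369 is epmd,
# 9100 is the distributed Erlang port (per the forwarded ports above).
for port in 4369 9100; do
  if timeout 2 bash -c "</dev/tcp/192.168.200.2/${port}" 2>/dev/null; then
    echo "port ${port} reachable"
  else
    echo "port ${port} unreachable"
  fi
done
```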
Yep, I was already using the `--add-host` parameter. The thing was that I had not exposed all the required ports on the master node. Now, after you pointed that out, it works! :smile:
As a reference, I attach the `Vagrantfile` I used. Maybe you could add it to the project or to the `README.md`; it could help other people set up a cluster using multiple hosts.
```ruby
# -*- mode: ruby -*-
# vi: set ft=ruby :

HOSTS = [
  { name: "mongooseim-1", ip: "192.168.200.2", master: true },
  { name: "mongooseim-2", ip: "192.168.200.3" }
]

Vagrant.configure(2) do |config|
  config.vm.box = "ubuntu/trusty64"

  HOSTS.each do |host|
    config.vm.define host[:name] do |node|
      node.vm.hostname = "#{host[:name]}-host"
      node.vm.network "private_network", ip: host[:ip]

      node.vm.provider "virtualbox" do |v|
        v.name = "#{host[:name]}-vm"
        v.cpus = 1
        v.memory = 512
      end

      if host[:master]
        node.vm.provision "docker", version: "1.7.1" do |d|
          d.run "mongooseim/mongooseim-docker",
            daemonize: true,
            auto_assign_name: false,
            args: "-t -p 5222:5222 -p 5280:5280 -p 5269:5269 -p 4369:4369 -p 9100:9100 -h #{host[:name]} --name #{host[:name]}"
        end
      end

      if !host[:master]
        master = HOSTS.find { |h| h[:master] }
        node.vm.provision "docker", version: "1.7.1" do |d|
          d.run "mongooseim/mongooseim-docker",
            daemonize: true,
            auto_assign_name: false,
            args: "-t -p 5222:5222 -p 5280:5280 -p 5269:5269 -p 4369:4369 -p 9100:9100 -h #{host[:name]} --name #{host[:name]} --add-host #{master[:name]}:#{master[:ip]} -e CLUSTER_WITH=#{master[:name]}"
        end
      end

      node.vm.provision "shell", inline: %q{usermod -a -G docker vagrant}
      node.vm.provision "shell", inline: %q{ps aux | grep 'sshd:' | awk '{print $2}' | xargs kill}
    end
  end
end
```
After running this, I got:

```shell
vagrant ssh mongooseim-2
vagrant@mongooseim-2-host:~$ docker exec -it mongooseim-2 /bin/bash
root@mongooseim-2:/# mongooseimctl mnesia info | grep "running db nodes"
running db nodes = ['mongooseim@mongooseim-1','mongooseim@mongooseim-2']
```

This means it worked, doesn't it?
Thanks for your precious help!!!
Yes, it seems to work.
Thanks a lot for the `Vagrantfile`! I will definitely include it.
Hi @ppikula,
Thank you for maintaining this Docker image.
I have a question: what would be your strategy for sharing the same cookie among all the cluster nodes?