@iwillflytothemoon The Mongo URL is wrong; fix it like this:
persistence: {
  name: 'mongodb',
  options: {
    url: 'mongodb://mongo/aedes' // <---- THIS
  }
},
MongoDB is not running on localhost; it's in a separate container. mongo is the container name, and within the Docker network it's used as the hostname to reach it.
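As an illustration (not from this thread), here is a tiny Node.js sketch using the official mongodb driver that shows the same point: inside the aedes container, localhost refers to the aedes container itself, while the hostname mongo is resolved by the Docker network to the mongo container. aedes opens this connection for you through its persistence; this sketch only demonstrates the hostname resolution.

const { MongoClient } = require('mongodb')

// 'mongodb://localhost/aedes' would point back at the aedes container itself;
// 'mongo' is resolved by the Docker network to the mongo container.
const client = new MongoClient('mongodb://mongo:27017/aedes')

client.connect()
  .then(() => console.log('reached the mongo container'))
  .catch((err) => console.error('connection failed:', err.message))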
Woah I'm dumb. LOL. Thank you for that. Totally forgot about that.
Question though regarding the setup.
Suppose I make it scalable in Kubernetes with a minimum of 3 brokers, and then, for example, CPU usage maxes out and it scales up to 7 brokers,
so messages also go to brokers 4-7 while traffic is high. After 3 hours CPU usage goes back to normal and the number of brokers drops back to the minimum, which is 3 brokers.
What will happen to the messages/queues that were stored on brokers 4-7?
What do you mean by messages/queues? All messages are stored in the persistence, and if you use the mongodb/redis persistence they will be there.
For instance, if I have 7 brokers and all the messages were going through those 7 brokers, what will happen to the published messages with retain = true that were stored on brokers 4-7 if I downscale to 3?
Since mq is enabled, will the retained messages also be present on the remaining 3 brokers?
I'm going to test it either way to find out what will happen. Do I need to close this one? BTW, thanks for your help.
@iwillflytothemoon If you plan to use aedes with clusters you must use the mongodb and/or redis persistences/mqemitters, and the reason is the answer to your question: with them you have an on-disk persistence that will not 'die' with the broker when it ends.
The default mqemitter and persistence store data in memory; they are fast, but all data is lost when the process ends.
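For reference, here is a minimal sketch of a broker wired that way, using the plain aedes Node.js API with the aedes-persistence-mongodb and mqemitter-mongodb packages rather than the config.js format shown above (the URL and port are just examples):

const aedes = require('aedes')
const net = require('net')
const mqemitterMongo = require('mqemitter-mongodb')
const persistenceMongo = require('aedes-persistence-mongodb')

const url = 'mongodb://mongo/aedes' // shared by every broker replica

const broker = aedes({
  // messages published on one replica are relayed to the others through mongodb
  mq: mqemitterMongo({ url }),
  // retained messages, subscriptions and offline queues live in mongodb,
  // so they survive a replica being scaled down or restarted
  persistence: persistenceMongo({ url })
})

const server = net.createServer(broker.handle)
server.listen(1883, () => console.log('broker listening on 1883'))

With this setup it doesn't matter which replica handled the original publish: when brokers 4-7 are scaled away, the retained messages and queues stay in MongoDB and are served by the remaining replicas.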
@robertsLando I don't know if what I did is right, but here is my test scenario:
My configuration for persistence and mqemitters is:
persistence: {
  name: 'mongodb',
  options: {
    url: 'mongodb://localhost/aedes'
  }
},
mq: {
  name: 'mongodb',
  options: {
    url: 'mongodb://localhost/aedes'
  }
},
I ran 2 replicas of aedes+mongo in one pod, then I published a message with retain=true on the topic testtopic to broker 0. I subscribed to testtopic on broker 0 and it shows the message I published, which is great. But when I tried to subscribe to testtopic on broker 1, it didn't show up. Will it work if I use mongodb for both? Is my scenario right, or have I wrongly understood how mqemitters works?
Nope, the message should also be received by the client that has subscribed to broker 1; they just need to share the same mongodb, as retained messages are stored there.
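A quick way to verify it from the client side (a sketch using the mqtt.js client; the ports 1883 and 1884 are assumptions for wherever the two replicas are exposed, e.g. via kubectl port-forward):

const mqtt = require('mqtt')

// publish a retained message to broker 0
const pub = mqtt.connect('mqtt://localhost:1883')
pub.on('connect', () => {
  pub.publish('testtopic', 'hello', { retain: true, qos: 1 }, () => {
    pub.end()

    // then subscribe on broker 1: the retained message should be delivered,
    // because both brokers read retained messages from the same mongodb
    const sub = mqtt.connect('mqtt://localhost:1884')
    sub.on('connect', () => sub.subscribe('testtopic'))
    sub.on('message', (topic, payload) => {
      console.log(topic, payload.toString())
      sub.end()
    })
  })
})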
I just tried this setup with 2 brokers connected to 1 mongodb and it works. Thanks a lot @robertsLando
System Information
Describe the bug
To Reproduce
Steps to reproduce the behavior:
Expected behavior
Aedes container should be running ok with the MongoDB connection.
Additional context
config.js
docker-compose.yaml
I created my own image since the image from Docker Hub is not working.