moscajs / aedes-cli

Run Aedes MQTT Broker from the CLI
MIT License

[bug] Persistence MongoDB is not working #34

Closed mushroomwithegg closed 4 years ago

mushroomwithegg commented 4 years ago

System Information

Describe the bug
(error shown in the attached screenshot)

To Reproduce
Steps to reproduce the behavior:

  1. Run the docker-compose.yaml file
  2. See error

Expected behavior
The Aedes container should run with a working MongoDB connection.

Additional context config.js

module.exports = {
  // SERVERS
  protos: ['tcp'],
  host: '0.0.0.0',
  port: 1883,
  wsPort: 3000,
  wssPort: 4000,
  tlsPort: 8883,
  key: null,
  cert: null,
  rejectUnauthorized: false,
  // AUTHORIZER
  credentials: '/data/credentials.json',
  // AEDES
  brokerId: 'aedes-cli',
  concurrency: 100,
  queueLimit: 42,
  maxClientsIdLength: 23,
  heartbeatInterval: 60000,
  connectTimeout: 30000,
  stats: true,
  statsInterval: 5000,
  // PERSISTENCES
  persistence: {
    name: 'mongodb',
    options: {
      url: 'mongodb://127.0.0.1/aedes'
    }
  },
  mq: null,
  // LOGGER
  verbose: true,
  veryVerbose: false,
  noPretty: false
}

docker-compose.yaml

version: '3.7'
services:
 aedes:
   container_name: aedes
   image: i/aedes:latest
   restart: always
   stop_signal: SIGINT
   networks:
     - mqtt
   command: --config /data/config-persistent.js --credentials /data/credentials.json # add here the options to pass to aedes
   volumes:
     - ./:/data # map the local folder to aedes
   ports:
     - '1884:1883'
     - '3001:3000'
     - '4001:4000'
     - '8884:8883'
 mongo:
   container_name: mongo
   networks:
     - mqtt
   logging:
     driver: none
   image: mvertes/alpine-mongo
   volumes:
     - db-data:/data/db
   ports:
     - "27018:27017"
volumes:
 db-data:
   name: db-data
networks:
 mqtt:

I built my own image since the image from Docker Hub is not working.

robertsLando commented 4 years ago

@iwillflytothemoon The mongo URL is wrong; fix it like this:

persistence: {
    name: 'mongodb',
    options: {
      url: 'mongodb://mongo/aedes' // <---- THIS
    }
  },

MongoDB is not running on localhost; it runs in a separate container. mongo is the container name, and Docker's network DNS resolves it to that container.
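To spell that out (a sketch based on the compose file above), the right URL depends on where the client runs: inside the compose network you use the service name, while from the host you would use the published port (27018 in this compose file):

```javascript
// Inside the compose network (e.g. the aedes container): Docker DNS
// resolves the service name "mongo" to the MongoDB container, which
// listens on its default port 27017.
persistence: {
  name: 'mongodb',
  options: {
    url: 'mongodb://mongo:27017/aedes'
  }
},

// From the host machine instead, the compose file publishes MongoDB as
// "27018:27017", so the URL would be 'mongodb://127.0.0.1:27018/aedes'.
```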

mushroomwithegg commented 4 years ago

Woah I'm dumb. LOL. Thank you for that. Totally forgot about that.

A question about the setup, though.

Say I make it scalable in Kubernetes with a minimum of 3 brokers. When CPU usage peaks, it scales up to 7 brokers, so messages also go through brokers 4-7 while traffic is high. After 3 hours, CPU usage goes back to normal and the deployment scales back down to the minimum of 3 brokers.

What will happen to the messages/queues that were stored on brokers 4-7?

robertsLando commented 4 years ago

What do you mean by messages/queues? All messages are stored in the persistence, and if you use the mongodb/redis persistence they will be there.

mushroomwithegg commented 4 years ago

Say I have 7 brokers and messages were going through all 7 of them. What happens to published messages with retain = true that are stored on brokers 4-7 if I scale down to 3?

Since mq is enabled, will the retained messages also be available on the remaining 3 brokers?

I'm going to test it either way to find out what happens. Should I close this issue? BTW, thanks for your help.

robertsLando commented 4 years ago

@iwillflytothemoon If you plan to use aedes in a cluster you must use the mongodb and/or redis persistences/mqemitters, and the reason is the answer to your question: with them you have an on-disk persistence that will not 'die' with the broker when it ends.

The default mqemitter and persistence store data in memory; they are fast, but all data is lost when the process ends.
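To make that concrete, a cluster-ready config would point both the persistence and the mqemitter at the shared MongoDB. A sketch (the mongo hostname assumes the compose service from the earlier example):

```javascript
// config.js sketch for clustered aedes-cli: both the persistence and the
// mqemitter use the same shared MongoDB, so retained messages,
// subscriptions, and routed packets survive individual broker restarts.
module.exports = {
  protos: ['tcp'],
  host: '0.0.0.0',
  port: 1883,
  // On-disk persistence: retained messages, subscriptions, offline queues.
  persistence: {
    name: 'mongodb',
    options: {
      url: 'mongodb://mongo:27017/aedes'
    }
  },
  // Shared mqemitter: routes published packets between broker instances.
  mq: {
    name: 'mongodb',
    options: {
      url: 'mongodb://mongo:27017/aedes'
    }
  }
}
```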

mushroomwithegg commented 4 years ago

@robertsLando I don't know if what I did is right, but here is my test scenario:

My configuration for persistence and mqemitters is:

persistence: {
    name: 'mongodb',
    options: {
      url: 'mongodb://localhost/aedes'
    }
  },
  mq: {
    name: 'mongodb',
    options: {
      url: 'mongodb://localhost/aedes'
    }
  },

I ran 2 replicas of aedes+mongo in one pod, then published a message with retain=true on topic testtopic to broker 0. Subscribing to testtopic on broker 0 shows the message I published, which is great. But when I subscribed to testtopic on broker 1, it didn't show up.

  1. Will it work if I use mongodb for both?
  2. Is my scenario right, or have I misunderstood how mqemitters work?
robertsLando commented 4 years ago

No, the message should also be received by the client subscribed to broker 1; the brokers just need to share the same MongoDB, since retained messages are stored there.
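One way to check this manually (a sketch, assuming the mosquitto client tools are installed and the two broker replicas are reachable on host ports 1884 and 1885; adjust hosts/ports to your setup):

```shell
# Publish a retained message through broker 0 (host port 1884 assumed).
mosquitto_pub -h 127.0.0.1 -p 1884 -t testtopic -m 'hello' -r

# Subscribe through broker 1 (host port 1885 assumed) and exit after one
# message. With a shared MongoDB persistence, the retained message
# published through broker 0 should be delivered here too.
mosquitto_sub -h 127.0.0.1 -p 1885 -t testtopic -C 1
```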



mushroomwithegg commented 4 years ago

I just tried this setup with 2 brokers connected to 1 mongodb and it works. Thanks a lot @robertsLando