Closed ido43210 closed 8 years ago
To add to the above, the questions I have regarding scalability are:
I suggest that you move this question to the Stack Overflow and Server Fault communities, as you may find better answers there than we can provide here.
As for background queues etc., I don't believe we'll add another dependency for that, as you can very easily manage it yourself inside your cloud code by importing the module you like.
There are so many background queue implementations that I believe choosing one would be a disservice to the community. Some users would want to use AWS SQS, others Redis, others RabbitMQ or ZeroMQ, etc.
As for clustering, you can use PM2 or another process manager to handle it, but there is no plan to add it to parse-server.
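For example, a PM2 setup in cluster mode might look like this (a sketch only; the file name, entry point, and port are illustrative assumptions, not part of parse-server):

```javascript
// ecosystem.config.js — hypothetical PM2 config that runs a parse-server
// entry point in cluster mode, one process per available CPU core.
module.exports = {
  apps: [
    {
      name: 'parse-server',
      script: './index.js',  // assumed entry point that starts parse-server
      instances: 'max',      // fork one worker per CPU core
      exec_mode: 'cluster',  // PM2 load-balances the listening port across workers
      env: { PORT: 1337 },
    },
  ],
};
```

You would then start it with `pm2 start ecosystem.config.js`; PM2 handles respawning dead workers for you.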
Again, those are valid questions, but the Stack Exchange communities are better suited for them.
Could you at least comment on what happens in Parse when you use the 'scale' slider?
I believe it toggles billing and a software limiter on API requests, but that doesn't seem relevant here, as there is no such slider in parse-server.
Hmmm... but how does actual scaling take place on parse.com? Do you know? If the slider creates threads to handle more requests on a single VM, then somewhere in parse-server there is a throng-like feature that picks up on the number of threads. If not, then somewhere it adds a VM behind nginx (or similar). Thoughts?
Again, you can discuss that on Stack Overflow, as there are multiple techniques and strategies for handling scaling. One that applies to parse.com may not apply to your use case.
@flovilmart As a GCP user on App Engine, with regard to scalability, I'm under the assumption that App Engine will automatically scale to handle any load I'm hit with. Am I right in this assumption? (I have no server-side knowledge.)
Hi, I have several concerns and questions (more of a discussion than an issue).
The first is a question about the proper way to scale up the parse server. Will using the vanilla Node.js cluster module suffice, or would you suggest using a cloud-based scalability feature like Heroku's? (In the LiveQuery wiki you suggest a diagram that uses Redis as the intermediary.) Link: https://devcenter.heroku.com/articles/node-concurrency
The second is more of a suggestion: add a background task scheduler like node-resque (https://github.com/taskrabbit/node-resque) natively to the parse server, and make it accessible from cloud code. The benefit of using a Redis-based background queue with the parse server would be substantial, especially with the new LiveQuery feature. The use case I had in mind is incorporating into the parse server a high-CPU task that the client needs, like image processing. The process I had in mind is:
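One way to sketch this kind of queue-backed flow (using a tiny in-memory queue as an illustrative stand-in for node-resque; a real setup would use node-resque's Redis-backed `Queue` and `Worker`, and all names here are hypothetical):

```javascript
// Minimal in-memory stand-in for a background job queue: cloud code
// enqueues a CPU-heavy task, a handler processes it off the request path.
class TinyQueue {
  constructor() { this.jobs = []; this.handlers = {}; }
  register(name, fn) { this.handlers[name] = fn; }
  enqueue(name, args) {
    return new Promise((resolve, reject) => {
      this.jobs.push({ name, args, resolve, reject });
      setImmediate(() => this.drain()); // process outside the current tick
    });
  }
  async drain() {
    while (this.jobs.length) {
      const job = this.jobs.shift();
      try { job.resolve(await this.handlers[job.name](...job.args)); }
      catch (err) { job.reject(err); }
    }
  }
}

const queue = new TinyQueue();
// Hypothetical "image processing" job: invert each pixel value.
queue.register('processImage', async (pixels) => pixels.map((p) => 255 - p));

// Cloud-code side: enqueue and react when the result is ready
// (with LiveQuery, this is where you would update an object the client watches).
queue.enqueue('processImage', [[0, 128, 255]])
  .then((out) => console.log('processed:', out)); // out is [255, 127, 0]
```

With node-resque the enqueue call and the worker would live in separate processes, connected through Redis, so the heavy work never blocks the API server.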
P.S. Really looking forward to using parse-server in production; the Android + iOS native SDKs really speed up our native app development.