Closed — jd7352 closed this issue 2 years ago
You can verify that both encore instances are connected to the same redis instance by checking that they both list the same jobs, i.e. compare http://192.168.0.1:8080/encoreJobs and http://192.168.0.2:8080/encoreJobs.
If both encore instances are connected to the same redis instance, they will share the job queue. However, if the `concurrency` config parameter is greater than 1 (meaning each encore instance can process more than one job in parallel), there is no guarantee that jobs are distributed evenly across the instances. This depends on the priority of the jobs, among other things. For instance, if you have `concurrency` set to 3 and you post two jobs with high priority, they may both be processed on the same encore instance. You could try posting more jobs than the `concurrency` setting and see if both nodes then start processing jobs.
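For reference, a minimal sketch of how the per-instance parallelism might be lowered in the run configuration. The exact property path (`encore-settings.concurrency` here) is an assumption based on Encore's Spring-style YAML configuration and may differ between versions:

```yaml
# application.yml (hypothetical fragment -- verify the property path
# against your Encore version's documentation)
encore-settings:
  concurrency: 1   # each instance picks up at most one job at a time
```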
@grusell
Thank you for your prompt reply and help. Following your guidance, I set the `concurrency` value to 1 and published multiple tasks at the same time. The task queue was then evenly distributed between the two instances, which is exactly the result I wanted.
One suggestion: could a server weight option be considered for the running configuration in the future? That would let administrators configure different weight values according to each server's performance, so that tasks are preferentially assigned to the more powerful machines.
Also, when different Encore instances connect to a shared redis, is account authentication supported, and if so, how do I configure it? Thanks again!
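For reference, Encore is a Spring Boot application, so Redis authentication would normally be configured through Spring's standard Redis properties. A sketch, assuming a Spring Boot 2.x property layout (Boot 3.x moved these keys under `spring.data.redis.*`), with a placeholder password:

```yaml
# application.yml (hypothetical fragment)
spring:
  redis:
    host: 192.168.0.1
    port: 6379
    password: change-me   # must match the `requirepass` value in redis.conf on the server
```

On the Redis side, the matching server setting is `requirepass change-me` in redis.conf (or `CONFIG SET requirepass change-me` at runtime).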
need help: I deployed two Encore instances, `test_server` and `test_node`. `test_server` (192.168.0.1) also hosts `redis`, and I pointed the `redis` item in the running configuration of `test_node` (192.168.0.2) to that server. The two servers share the input and output directories via nfs. When I submit test tasks to each server separately, both work fine. The problem is that when I submit multiple tasks only through the `test_server` node, those tasks are always queued on `test_server` and `test_node` never gets any tasks, so the load is unbalanced. Is my configuration incorrect, or is something missing? How can I achieve load balancing or round-robin distribution of tasks among the nodes? And can redis be configured with account authentication? The `test_node` run configuration file is as follows: