Closed VincentSC closed 2 years ago
Can you look in the directory you have mounted on /data (./volumes/data) and get the file size for flows.json, so we can see just how big the file is, please.
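Checking the size is a one-liner. A self-contained sketch (the temp file stands in for flows.json; for the real check, point the same commands at ./volumes/data/flows.json):

```shell
# Illustration only: create a stand-in file of a known size, then
# report its size the two usual ways.
tmpdir=$(mktemp -d)
head -c 160768 /dev/zero > "$tmpdir/flows.json"   # ~157 kB stand-in
ls -lh "$tmpdir/flows.json"                       # human-readable size
stat -c %s "$tmpdir/flows.json"                   # exact size in bytes
rm -rf "$tmpdir"
```

Note `stat -c %s` is GNU coreutils syntax; on BSD/macOS the equivalent is `stat -f %z`.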
While the other issue talks about number of nodes, that's not actually a fair representation of the true size of the data uploaded as it can be greatly inflated by function/template type nodes.
157 kB:

```
$ ls ./volumes/data/ -lh
total 532K
-rw-r--r--   1 vincent vincent 157K feb 15 15:29 flows.json
-rw-r--r--   1 vincent vincent  668 feb 15 14:39 flows_cred.json
drwxr-xr-x   3 vincent vincent 4,0K nov 23 19:53 lib
drwxr-xr-x 691 vincent vincent  20K feb 11 13:35 node_modules
-rw-r--r--   1 vincent vincent 321K feb 11 13:35 package-lock.json
-rw-r--r--   1 vincent vincent 1,5K feb 11 13:35 package.json
-rw-r--r--   1 vincent vincent  14K feb 15 14:58 settings.js
```
OK, so assuming a stock settings.js, a flow of that size is nowhere near the apiMaxLength limit (which defaults to 5 MB).
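For reference, this limit lives in settings.js. A minimal fragment, using the documented default value:

```javascript
// Fragment of a Node-RED settings.js (not a complete file).
// apiMaxLength caps the HTTP request body size accepted by the
// runtime admin API, which includes the flow deploy payload.
module.exports = {
    // ... other settings ...
    apiMaxLength: '5mb'   // default; raise e.g. to '10mb' if large deploys are rejected
};
```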
VincentSC, for your information: I did not use a reverse proxy in my setup. Since I moved away from Docker, I have not seen this issue again.
edit: added extra info
Yes, I doubled it to 10 MB. No influence; it was the first thing I tried. It also happens with both a full deployment and a partial deployment.
I cannot easily move away from Docker, as I have just set up everything with it. :( The fact that you do not use a reverse proxy is a hint that we should not look there.
No success yet. I have tried updating to the latest version, but the problem is still there. As I had some critical work to do, I had to postpone researching this problem.
I thought there were a few hundred nodes, but there were around 110, and now there are 104. When I hit the limit again and have finished my urgent tasks, I'm going to look deeper into it with full logging. I'm all ears for suggestions on what to keep an eye out for.
I am curious, since back then I did not find a solution fast enough. If you are able to set up a Linux VM or LXC container, you should be able to switch very easily if it becomes necessary: just export the Node-RED configuration from the current setup and import it into the new one. It took me less than an hour.
Our whole infrastructure is on Docker, so (partly) moving away from Docker would create new problems. For instance, there is no LXC hub/repo, which means I would need to create "cloud-inits" for every piece of software we have, while we now have Dockerfiles that just make a few changes to recipes provided on Docker Hub. One hour for a quick fix, many more for the full fix: it makes sense for me to solve this within Docker.
On my list to try: build my own Docker image and make small changes there, starting from https://github.com/node-red/node-red-docker/blob/master/.docker/Dockerfile.alpine and using, for instance, "current-buster" as the base instead of Alpine.
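A minimal sketch of that experiment. This is not the upstream Dockerfile; it is a hypothetical cut-down variant whose only intended difference is the Debian (buster) base instead of Alpine, and the node-red version installed is whatever npm resolves at build time:

```dockerfile
# Hypothetical minimal variant of the official Node-RED image,
# built on a Debian base rather than Alpine.
FROM node:16-buster-slim

RUN mkdir -p /usr/src/node-red /data \
 && chown -R node:node /data /usr/src/node-red
USER node
WORKDIR /usr/src/node-red

# Install Node-RED from npm (unpinned here for brevity).
RUN npm install --unsafe-perm --no-update-notifier --only=production node-red

EXPOSE 1880
VOLUME ["/data"]
CMD ["npx", "node-red", "--userDir", "/data"]
```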
I'm not aware of any limit on the number of nodes imposed by Docker. I would be more suspicious of a rogue node that may be trying to do something on the network (install extra files), causing a timeout because the container does not have the access that node requires... but again, pure speculation.
Thanks for thinking along with me. If it's not Docker, I'll move back to the original issue, so I'll focus on that first. I have replied to your suggestion in the other issue.
The goal of my mini-project is to get something into the logs that is helpful for anybody who gets into the same situation, like "Err: could not save flow. <explanation>".
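One way to get more into the logs is to raise the console log level in settings.js. This is the standard logging block from the default settings file, with the level bumped from "info" to "trace" so a failing deploy leaves a trail in `docker logs`:

```javascript
// Fragment of a Node-RED settings.js (not a complete file).
module.exports = {
    // ... other settings ...
    logging: {
        console: {
            level: "trace",   // default is "info"; "trace" is the most verbose
            metrics: false,
            audit: false
        }
    }
};
```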
Oops, accidentally fixed it, but I have no clue what it was. :(
The main things I did:
If it was one of the above, I think it's the last one, as I still had problems after doing the first two. I also restarted the server, which could also have played a role. Super vague!
Yes, I should have kept a copy of the previous version of my flow, but unfortunately I did not check it in. As I expect the number of nodes to double this year, I will pay better attention when it happens again.
This issue has been automatically closed because there has been no response to our request for more information from the original author. With only the information that is currently in the issue, we don't have enough information to take action. Please reach out if you have or find the answers we need so that we can investigate further.
I ported node-red-docker from Alpine to Debian, and that seems to have solved the problem.
There seems to be a maximum number of nodes that can be deployed, and it seems to only happen in Docker.
This issue was originally reported by @hspjanssen in https://github.com/node-red/node-red/issues/3050 - please read that issue to understand the problem; it also has a few videos that explain it quite well. He migrated away from Docker and the problem disappeared. This is the main reason I am filing this issue here now.
In my case there are not just 47 nodes, but a few hundred. The difference is that I run a more recent version. I also found that the problem occurs even when doing only a partial deployment - maybe a hint?
When I do `docker logs -f nodered`, I see literally nothing during a deployment. From Portainer:
Dockerfile:
docker-compose:
There is jwilder's nginx-proxy in front to handle HTTPS, with `client_max_body_size 0;` set - it is tested to work with other containers. I'm not sure whether @hspjanssen had a reverse proxy in place, but I don't think this is the cause of the problem.
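For context, with jwilder's nginx-proxy a per-host override is typically dropped into the vhost.d directory. A sketch (the hostname is hypothetical):

```nginx
# /etc/nginx/vhost.d/nodered.example.com  (hostname is hypothetical)
# client_max_body_size 0 disables nginx's request-body limit, so a
# large flow deploy is never rejected with 413 Request Entity Too Large.
client_max_body_size 0;
```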