Isaksson closed this issue 11 months ago.
> Tried with new local node-red docker container

I did that already.
I've since figured out what's wrong.
The name of my user is NodeRed
and the username is nodered
Somehow now I must log in using NodeRed and not the username. Very puzzling.
Ah okay, so the login is case sensitive then.
It's always been case sensitive, but somehow with this new update
I must log in using NodeRed.
But if I rename the username to nodered2,
then I log in as expected as nodered2.
Somehow, if the name and the username now differ only in casing, the name takes precedence over the username.
It doesn't make sense.
That's strange, I have to admit. But good that you found the solution.
It took a while to find out what was wrong. It would have been much easier if, rather than showing a red notice below the node, a clear explanation of what failed had been output to the debug sidebar, e.g. “login failed”.
I think I've figured out what's going on when the module is used too often or too quickly.
It happened again today, and when I tried to log in using the same credentials the nodered unifi node uses, I saw this.
It seems that UniFi OS 3.1.15 added a rate limiter on how often you can log in, and this plugin doesn't use a persistent connection. So if you have too many rules, access gets blocked pretty quickly.
I put a comment there: https://community.ui.com/releases/UniFi-OS-Dream-Machines-3-1-15/80dbf52b-6fa5-4679-8e4b-b41a8fcbf526#comment/982c3f10-0e4f-4cd2-8326-6b59be9a1d25
But the proper way to deal with this is to have the nodered plugin log in once and reuse that authentication over time.
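The session-reuse idea can be sketched roughly like this (a minimal sketch with hypothetical names; the actual plugin code and the UniFi auth format will differ). The point is to log in once, cache the session, and only log in again when a request is rejected:

```javascript
// Sketch of login-session reuse. "login" stands for whatever performs the
// real authentication (e.g. a POST to the controller's login endpoint) and
// returns a session token/cookie; both are placeholders, not the plugin's API.

let cachedSession = null; // reused across requests instead of logging in each time

async function getSession(login) {
  // Reuse the cached session if we already have one.
  if (cachedSession) return cachedSession;
  cachedSession = await login(); // one real login; later calls reuse the result
  return cachedSession;
}

async function request(login, doRequest) {
  let session = await getSession(login);
  try {
    return await doRequest(session);
  } catch (err) {
    // The session was rejected (e.g. expired cookie): drop the cache,
    // log in once more, and retry the request a single time.
    cachedSession = null;
    session = await getSession(login);
    return await doRequest(session);
  }
}
```

With this pattern a restart of nodered triggers one login instead of one per node or per request, which should stay well under the rate limiter.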
Ah, I see. Have you verified how many logins trigger this "soft block"?
Not many. It pretty much locks up as soon as I restart nodered.
It seems that on start it issues a raft of logins. Why the UDM would lock with valid passwords, I don't know.
But I can typically run three workflows before it locks.
If you tell me how to enable logging so I can see exactly what the nodered unifi plugin does, I could tell you.
Thanks for the information. I will modify the Node so that it tries to reuse and keep the login session between requests.
I have published a new version. Please try and see if that fixes your issue.
It started well. But after I called my HA switch that triggered the nodered action, it failed again. At least now the message on the unifi node is a bit clearer: it shows the actual error, which is great:
I can log in with those same credentials at http://unifi.local (192.168.10.1). In the node configuration I had to change the port from 8443 to 443, like so:
After waiting 2 minutes:
It works again.
And now, repeating all my tests, it works. Very confusing. I will report again if it fails.
Thanks for testing. When connecting to UniFi OS the port should always be 443; 8443 is for the older controller. One thing to keep in mind is that the login information is not shared between multiple instances of the UniFi node, so try to keep the number of running nodes as low as possible and instead dynamically change what command you send to the node.
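The "one node, dynamic command" advice above can be sketched as a Function node placed in front of a single UniFi node. The action names and the payload shape below are purely illustrative (the real command format depends on the node's configuration), but the structure shows the idea:

```javascript
// Function-node sketch: build the message for ONE shared UniFi node based on
// which action the flow wants, instead of wiring a separate UniFi node per
// action. Every property name here is a hypothetical example, not the
// plugin's exact message format.

function buildUnifiMsg(action) {
  switch (action) {
    case "block-internet":
      // Enable the firewall rule that cuts internet access.
      return { payload: { command: "toggle-rule", rule: "internet-off", enable: true } };
    case "allow-internet":
      // Disable the same rule again.
      return { payload: { command: "toggle-rule", rule: "internet-off", enable: false } };
    default:
      // Anything else: just query the named rule's status.
      return { payload: { command: "status", rule: action } };
  }
}

// In a Function node you would typically end with:
//   return buildUnifiMsg(msg.topic);
```

This way all traffic funnels through one UniFi node (and therefore one login session), and each flow only decides what `msg` to hand it.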
Oh okay. To keep things nice and neat I made as many unifi nodes as I have actions...
That would explain it.
Like I have a few of those:
I have one flow following the change of a switch (to change the internet rules), and the other checks the status of a rule every 30s and mirrors the sensor value in Home Assistant.
While I could easily do the first one by connecting to a single unifi node, the second case is a bit more annoying: I must send the result to the right HA sensor.
I see, but maybe you could use the payload from the UniFi node's result to know what that node has done, and based on that information know which HA sensor should receive the update? You could use a function node with multiple outputs.
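A multi-output Function node for that routing could look roughly like this. The shape of the UniFi node's result payload (a `rule` field) and the rule names are assumptions for illustration; in a real flow you would inspect the actual payload in the debug sidebar first and match on whatever field identifies the rule:

```javascript
// Function-node sketch with two outputs: look at the UniFi node's result and
// forward the message to the output wired to the matching HA sensor.
// Returning [msg, null] sends on output 1 only; [null, msg] on output 2 only.
// The payload shape and rule names are hypothetical.

function routeByRule(msg) {
  const rule = msg.payload && msg.payload.rule;
  if (rule === "internet-off") return [msg, null]; // output 1 -> internet sensor
  if (rule === "guest-wifi") return [null, msg];   // output 2 -> guest wifi sensor
  return [null, null];                             // unknown rule: drop the message
}

// In a Function node configured with 2 outputs you would end with:
//   return routeByRule(msg);
```

With this in place, one shared UniFi node can serve the 30s status polls for several rules, and each result still lands on the correct HA sensor.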
Tried with new local node-red docker container
Can't figure out exactly what is wrong with it, unfortunately; no idea how to debug it.
Originally posted by @jyavenard in https://github.com/Isaksson/node-red-contrib-unifi/issues/104#issuecomment-1650851545