twrecked / pyaarlo

Asynchronous Arlo Component for Python
GNU Lesser General Public License v3.0

authentication error #121

Open vzlgdu opened 1 year ago

vzlgdu commented 1 year ago

Hi, starting from this morning I'm receiving authentication errors in my Python script. It had been running without problems for many months...

What's happening?

Thanks

Guido

twrecked commented 1 year ago

Which version are you on?

vzlgdu commented 1 year ago

0.8.0a5

twrecked commented 1 year ago

Can you try the latest release? Arlo changed their back end and broke our login code.

vzlgdu commented 1 year ago

With the latest release I receive these errors:

body-error=JSONDecodeError body-error=JSONDecodeError body-error=JSONDecodeError authentication failed

twrecked commented 1 year ago

Can you try these 2 things:

If that doesn't work, can you turn on verbose debug? I need more information on where it's failing.

njaouen commented 1 year ago

Hi vzlgdu, any update on your side?

m0urs commented 1 year ago

Seems that I have the same issue here. I can no longer log in with the latest code. Enclosed is my encrypted debug.log. It would be great if you get a chance to look at it and maybe find out what the issue could be. Let me know if I should test something. Thanks a lot. P.S.: I already set "mqtt_hostname_check=False" but with no success.

Looks a bit like a Cloudflare issue? I think we had something similar in the past?

debug.log

twrecked commented 1 year ago

It's definitely a Cloudflare thing. If you leave it for a few hours it might recover. I think the old code made Cloudflare angry and they blocked it temporarily.

twrecked commented 1 year ago

Your connection is stopping straight away. A couple of things to try:

I did notice I'm sending an extra header - the transaction id one - but I don't think that is causing the issue; I can still connect from here with it.

vzlgdu commented 1 year ago

Hi, sorry for the delay... with your suggestion (mqtt_hostname_check=False) it was OK for some days. Today I received the same error.

Adding "user_agent='linux'" the result is the same.

Accessing from the web interface doesn't work and ends with "timeout request".

Accessing from the iOS app everything works.
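
(For reference, a minimal sketch of how those two options are passed to PyArlo; the credentials are placeholders and this is not a complete working configuration:)

    import pyaarlo

    # Sketch only: the two workarounds discussed above.
    arlo = pyaarlo.PyArlo(
        username='xxx@example.com', password='yyy',  # placeholders
        mqtt_hostname_check=False,   # relax the MQTT hostname check, as suggested above
        user_agent='linux',          # present a different user agent
    )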

m0urs commented 1 year ago

Even after waiting some days I am still unable to log in from pyaarlo. I can log in via browser from the same IP address. Maybe we need to set some more values together with the request. Here is a sample of what my browser is sending:

:authority: ocapi-app.arlo.com
:method: POST
:path: /api/startAuth
:scheme: https
accept: application/json, text/plain, */*
accept-encoding: gzip, deflate, br
accept-language: de
authorization: XXXXXXXXXXXXXXXXXXXXXXX
content-length: 184
content-type: application/json
cookie: _XXXXXXXXXXXXXXXXXXXXXXXXXXX
dnt: 1
origin: https://my.arlo.com
referer: https://my.arlo.com/
sec-ch-ua: "Microsoft Edge";v="113", "Chromium";v="113", "Not-A.Brand";v="24"
sec-ch-ua-mobile: ?0
sec-ch-ua-platform: "Windows"
sec-fetch-dest: empty
sec-fetch-mode: cors
sec-fetch-site: same-site
source: arloCamWeb
user-agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/113.0.0.0 Safari/537.36 Edg/113.0.1774.42
x-user-device-automation-name: QlJPV1NFUg==
x-user-device-id: 615cdfd1-0a03-4a91-91b3-12d21aeba713
x-user-device-type: BROWSER

What about these ones:

sec-ch-ua: "Microsoft Edge";v="113", "Chromium";v="113", "Not-A.Brand";v="24"
sec-ch-ua-mobile: ?0
sec-ch-ua-platform: "Windows"
sec-fetch-dest: empty
sec-fetch-mode: cors
sec-fetch-site: same-site
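
(A quick way to experiment with those headers before touching backend.py is to set them on a cloudscraper session directly; this is only a sketch - the request body below is a placeholder, not the real Arlo auth payload:)

    import cloudscraper

    # Sketch: send the browser's client-hint headers and see whether
    # Cloudflare still refuses the request.
    session = cloudscraper.create_scraper()
    session.headers.update({
        "sec-ch-ua": '"Microsoft Edge";v="113", "Chromium";v="113", "Not-A.Brand";v="24"',
        "sec-ch-ua-mobile": "?0",
        "sec-ch-ua-platform": '"Windows"',
        "sec-fetch-dest": "empty",
        "sec-fetch-mode": "cors",
        "sec-fetch-site": "same-site",
    })
    response = session.post("https://ocapi-app.arlo.com/api/startAuth", json={})
    print(response.status_code)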
vzlgdu commented 1 year ago

Hi, I'm trying some tests (now accessing using email as 2FA) and I'm receiving this error:

/usr/local/lib/python3.7/dist-packages/urllib3/connection.py:362: SubjectAltNameWarning: Certificate for myapi.arlo.com has no subjectAltName, falling back to check for a commonName for now. This feature is being removed by major browsers and deprecated by RFC 2818. (See https://github.com/shazow/urllib3/issues/497 for details.) SubjectAltNameWarning
request-error=ValueError
session start failed
body-error=JSONDecodeError body-error=JSONDecodeError body-error=JSONDecodeError
authentication failed

I'm using this in my script:

import ssl
ssl.SSLContext.verify_mode = ssl.VerifyMode.CERT_OPTIONAL

This is my logon string:

arlo = pyaarlo.PyArlo(
    username='xxx', password='yyy',
    tfa_type='email', tfa_source='imap',
    synchronous_mode=True, save_state=False,
    tfa_host='imap.gmail.com', tfa_username='xxx', tfa_password='zzz')

Thanks

m0urs commented 1 year ago

Do we have any new information? Is it working again for somebody? I still have the same issue with Cloudflare, although I have not tried it for several days now. So it does not seem to reset on its own :-(

twrecked commented 1 year ago

Sorry, it's working for me here - both standalone and on Home Assistant.

Did you remove the session files? Or are you still stuck at the Cloudflare piece?

m0urs commented 1 year ago

Yes, the error still seems to be the Cloudflare error, as far as I can see in the debug file. Where should I find the session / pickle files? I did not find anything that looks like that. And, @vzlgdu, is it working again for you?

twrecked commented 1 year ago

Look for a file named session.pickle. It's where we store the authentication credentials.

m0urs commented 1 year ago

There is no such file. Maybe because I am not currently authenticated?

 $ grep -ir "session.pickle" .
./pyaarlo/cfg.py:        return self.storage_dir + "/session.pickle"
Binary file ./pyaarlo/__pycache__/cfg.cpython-38.pyc matches
Binary file ./pyaarlo/backup/__pycache__/cfg.cpython-38.pyc matches
twrecked commented 1 year ago

So. A few more things to try/check:

  user_agent='!Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/113.0.0.0 Safari/537.36 Edg/113.0.1774.42'
ninjafella commented 1 year ago

Hello there,

I'm having the same issue of Cloudflare blocking me. I've tried using different 'user_agents' and making sure my cloudscraper was up to date, but no luck. I've attached my logs below.

What's peculiar is that Home Assistant using aarlo can connect fine and I'm having no issues with it there.

arlo_debug.log

m0urs commented 1 year ago
 user_agent='!Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/113.0.0.0 Safari/537.36 Edg/113.0.1774.42'

I updated cloudscraper and tried this user agent, but with no luck :-( I am using pyaarlo with my own script; I'm not sure if there is an issue with my script using pyaarlo (although it worked fine until some weeks ago): https://github.com/m0urs/arlo-fhem/blob/master/arlo-fhem.py

shissam commented 1 year ago
 user_agent='!Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/113.0.0.0 Safari/537.36 Edg/113.0.1774.42'

I updated cloudscraper and tried this user agent, but with no luck :-( I am using pyaarlo with my own script; I'm not sure if there is an issue with my script using pyaarlo (although it worked fine until some weeks ago): https://github.com/m0urs/arlo-fhem/blob/master/arlo-fhem.py

Overnight my script continued to fail for approx 7 hours; after manual intervention (removing session.pickle and aarlo.pickle) it failed once, and then was successful.

I was comparing my script to your script and noted that ours are pretty much the same (args to PyArlo, specifically using the sse backend, and my agent is still linux). There is one other thing that my system has... in /etc/hosts (based on recommendations from past issues dealing with Cloudflare) I have this entry:

104.18.30.98 ocapi-app.arlo.com
#104.18.31.98 ocapi-app.arlo.com

Also, cloudscraper is:

pip list|grep -i scrap
cloudscraper      1.2.58 
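
(A quick way to check which address the hostname resolves to - and so whether an /etc/hosts pin is actually being picked up - is something like this; a sketch only:)

    import socket

    # Print every address ocapi-app.arlo.com resolves to on this machine.
    for info in socket.getaddrinfo("ocapi-app.arlo.com", 443, proto=socket.IPPROTO_TCP):
        print(info[4][0])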
m0urs commented 1 year ago

"ocapi-app.arlo.com" is already resolved here to 104.18.30.98 so an additional hosts entry should not do anything else. My cloudscraper is now updated to 1.2.71. I think it was 1.2.58 before updating this morning.

However I still have no pickle files. Where should these files be created??

shissam commented 1 year ago

"ocapi-app.arlo.com" is already resolved here to 104.18.30.98 so an additional hosts entry should not do anything else. My cloudscraper is now updated to 1.2.71. I think it was 1.2.58 before updating this morning.

However I still have no pickle files. Where should these files be created??

The command you want to run to manually find those pickle files (and remove them) is:

sudo find / -mount -iname \*.pickle -type f

These files are created in a default location relative to the current working dir for the process, unless specified on the PyArlo call (see storage_dir=).
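
(A sketch of pinning storage_dir so the pickle files land somewhere predictable, and of clearing them before a fresh login; the directory and credentials are placeholders, not defaults:)

    import os
    import pyaarlo

    STORAGE = os.path.expanduser("~/.aarlo")   # example location only

    # Optionally clear any cached session before logging in again.
    for name in ("session.pickle", "aarlo.pickle"):
        path = os.path.join(STORAGE, name)
        if os.path.exists(path):
            os.remove(path)

    # storage_dir tells PyArlo where to keep its pickle files.
    arlo = pyaarlo.PyArlo(username='xxx', password='yyy', storage_dir=STORAGE)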

shissam commented 1 year ago

"ocapi-app.arlo.com" is already resolved here to 104.18.30.98 so an additional hosts entry should not do anything else. My cloudscraper is now updated to 1.2.71. I think it was 1.2.58 before updating this morning. However I still have no pickle files. Where should these files be created??

the command you want to run to manually find those pickle files (and remove them) is:

sudo find / -mount -iname \*.pickle -type f

these files are created in a default location relative to the current working dir for the process unless specified on the PyArlo call (see storage_dir=)

To be clear, remove both pickle files; mine are:

~/iot/aarlo/session.pickle
~/iot/aarlo/aarlo.pickle
m0urs commented 1 year ago

I am unable to find these files ... so I guess they are not created because Cloudflare does not let me get that far and refuses with the error before?

I did a downgrade to cloudscraper 1.2.58 but this did not change anything.

I still get this Cloudflare issue:

You do not have access to ocapi-app.arlo.com.

The site owner may have set restrictions that prevent you from accessing the site.

...

Error code: 1020

Ray ID: 7cff7a4e2b7a37d8

Country: DE

Data center: fra12

IP: 176.199.xx.yy

Timestamp: 2023-05-31 13:14:03 UTC

twrecked commented 1 year ago

Also look in /tmp/.aarlo but I suspect you are seeing other problems.

m0urs commented 1 year ago

Also look in /tmp/.aarlo but I suspect you are seeing other problems.

Thanks, that was the right location :-) I did delete both files there but still no login possible. And the files are NOT recreated ....

shissam commented 1 year ago

In my world... those files persist between logins, so I'd expect them to be around from the last successful login. For those files not to be on your system anywhere would lead me to think there was never a successful login -- but that certainly cannot be the case for you. So, "strange".

One other thought would be to move back to the mqtt backend - but perhaps that would not make a difference.

If it were me, I'd first try the mqtt backend; if that failed, I'd jump to another IP for ocapi-app.arlo.com; and if that failed, I'd remove the ocapi-app.arlo.com entry completely from my /etc/hosts.

I'll also say this - I had this same problem on 18 May, and my resolution on that day was to git pull pyaarlo, which brought me up to the current version. So far, in all cases, removing those pickle files was the common denominator in resolving my issues then and this morning.

twrecked commented 1 year ago

@shissam That's pretty much what happened to me!

The pickle files mean we can skip the whole Cloudflare authentication and continue with a previously authenticated session; they are good for up to 14 days, after which we log in again.

They work until they don't. And I think (but I don't have much evidence to back this up) that when Arlo makes server changes they invalidate the current ones and the cached sessions fail. I thought the code would recover but I'll double check the logic; it could also be that at that point we've made too many attempts and tripped something.

@m0urs If you're comfortable with an editor you can add the missing header lines and try that. Just look for x-user-device-type in the backend.py file. It appears twice (the code is in need of a tidy up).

m0urs commented 1 year ago

If you're comfortable with an editor you can add the missing header lines and try that. Just look for x-user-device-type in the backend.py file. It appears twice (the code is in need of a tidy up).

You mean I should add these headers?

sec-ch-ua: "Microsoft Edge";v="113", "Chromium";v="113", "Not-A.Brand";v="24" sec-ch-ua-mobile: ?0 sec-ch-ua-platform: "Windows" sec-fetch-dest: empty sec-fetch-mode: cors sec-fetch-site: same-site

If yes: I tried it but I got some issues because there are " and ; characters inside these header values which, I guess, need to be escaped? I did it this way:

            "x-user-device-type": "BROWSER",
            "sec-ch-ua": "\"Microsoft Edge"\;v=\"113\", \"Chromium\"\;v=\"113\", \"Not-A.Brand\"\;v=\"24\"",
            "sec-ch-ua-mobile": "?0",
            "sec-ch-ua-platform": "\"Windows\"",
            "sec-fetch-dest": "empty",
            "sec-fetch-mode": "cors",
            "sec-fetch-site": "same-site",
        }

So I tried this but I do get the following error:

May 31 16:12:29 raspberrypi02 arlo-fhem[6133]:   File "/opt/fhem/arlo-fhem/pyaarlo/backend.py", line 681
May 31 16:12:29 raspberrypi02 arlo-fhem[6133]:     "sec-ch-ua": "\"Microsoft Edge"\;v=\"113\", \"Chromium\"\;v=\"113\", \"Not-A.Brand\"\;v=\"24\"",
May 31 16:12:29 raspberrypi02 arlo-fhem[6133]:                                                                                                    ^
May 31 16:12:29 raspberrypi02 arlo-fhem[6133]: SyntaxError: unexpected character after line continuation character

If I do not escape the ; but only the " characters, then I get this error:

May 31 16:11:09 raspberrypi02 arlo-fhem[5837]:   File "/opt/fhem/arlo-fhem/pyaarlo/backend.py", line 681
May 31 16:11:09 raspberrypi02 arlo-fhem[5837]:     "sec-ch-ua": "\"Microsoft Edge";v=\"113\", \"Chromium\";v=\"113\", \"Not-A.Brand\";v=\"24\"",
May 31 16:11:09 raspberrypi02 arlo-fhem[5837]:                                    ^
May 31 16:11:09 raspberrypi02 arlo-fhem[5837]: SyntaxError: invalid syntax

What would be the correct syntax here?

twrecked commented 1 year ago

You could try single quotes on the outside.

            "sec-ch-ua": '"Microsoft Edge";v="113", "Chromium";v="113", "Not-A.Brand";v="24"',
            "sec-ch-ua-mobile": "?0",
            "sec-ch-ua-platform": '"Windows"',
            "sec-fetch-dest": "empty",
            "sec-fetch-mode": "cors",
            "sec-fetch-site": "same-site",
m0urs commented 1 year ago

Thanks, that worked.

However, still the same error even with these additional headers. I really have no idea how Cloudflare detects the difference between my Edge browser session on my desktop PC and the PyArlo connection ... I also tried another IP address for the Cloudflare target, but also with no success.

twrecked commented 1 year ago

I might have found something. I think I'm supplying the wrong device id, I'll look into it.

twrecked commented 1 year ago

@m0urs I pushed a fix, can you try b8 or the latest.

m0urs commented 1 year ago

@m0urs I pushed a fix, can you try b8 or the latest.

Just tried the b8 but unfortunately I still have the same issue. Using my original settings now. Or should I change anything?

twrecked commented 1 year ago

Use your original. Can you upload some debug and I'll take a look?

vzlgdu commented 1 year ago

Hi, it's been running without problems for a week using this configuration:

arlo = pyaarlo.PyArlo(
    username='xxxx@gmail.com', password='yyyy',
    mqtt_hostname_check=False,
    tfa_type='email', tfa_source='imap',
    synchronous_mode=True, save_state=False,
    tfa_host='imap.gmail.com', tfa_username='xxxxx', tfa_password='zzzz',
    tfa_nickname='xxxx@gmail.com')

This morning, after an internet failure, I'm receiving this error:

general-error=gaierror
Traceback (most recent call last):
  File "/usr/local/lib/python3.7/dist-packages/pyaarlo/backend.py", line 519, in _mqtt_main
    self._event_client.connect(self._arlo.cfg.mqtt_host, port=443, keepalive=60)
  File "/usr/local/lib/python3.7/dist-packages/paho/mqtt/client.py", line 937, in connect
    return self.reconnect()
  File "/usr/local/lib/python3.7/dist-packages/paho/mqtt/client.py", line 1071, in reconnect
    sock = self._create_socket_connection()
  File "/usr/local/lib/python3.7/dist-packages/paho/mqtt/client.py", line 3522, in _create_socket_connection
    return socket.create_connection(addr, source_address=source, timeout=self._keepalive)
  File "/usr/lib/python3.7/socket.py", line 707, in create_connection
    for res in getaddrinfo(host, port, 0, SOCK_STREAM):
  File "/usr/lib/python3.7/socket.py", line 748, in getaddrinfo
    for res in _socket.getaddrinfo(host, port, family, type, proto, flags):
socket.gaierror: [Errno -2] Name or service not known

Please, let me know.

Thanks

m0urs commented 1 year ago

Use your original. Can you upload some debug and I'll take a look?

browser_request-encrypted.txt debug-encrypted.txt

Besides the debug log with the failed request, I am also sending you the request headers of my (working) browser session. The only difference I see, which could be relevant, would be

content-length: 127
content-type: application/json; charset=UTF-8

which are not sent by PyArlo (or maybe I missed them).

If you need to see the code which I am using, check https://github.com/m0urs/arlo-fhem/tree/42-cannot-login-any-more and here arlo-fhem.py

ninjafella commented 1 year ago

@m0urs I pushed a fix, can you try b8 or the latest.

I've been having the same issue as m0urs. I've tried b8 and it made no difference. The weird thing is that the exact same setup and credentials running on Home Assistant with your hass-aarlo integration works perfectly. For some reason, when I use pyaarlo by itself, it runs into Cloudflare issues. Are there any differences between the pyaarlo used with your HA integration and the one here?

twrecked commented 1 year ago

@vzlgdu It looks like Arlo moved (or took offline) the mqtt servers. Can you try the sse back end?

@ninjafella b8 is ahead by one set of changes but b7 should be identical to hass-aarlo. Can you upload debug from both?

@m0urs Thanks, I'll take a look.

vzlgdu commented 1 year ago

Hi, using backend='sse' the login works fine without errors, but in a simple script like this:

    # instructions for login using backend='sse'
    base = arlo.base_stations[0]
    base.mode = 'Giorno'

the new mode was set correctly, but the script ends after some minutes generating these errors:

attempt 1: error in response when setting mode= None attempt 2: error in response when setting mode= None

attempt 3: error in response when setting mode= None Failed to set mode. <RequestsCookieJar[Cookie(version=0, name='__cf_bm', value='67v7vxgl0n9En_2UWcIEYuGHqsZ9ZIBE_L0ruHMor3w-1685645690-0-ARx4g/Zegzr5NIB8OZ23NQy6SYEhG9lUcKaWO6CuLYv+tiZrjlLKQjIX8U0scAw1tW45Wqi3ae5oeSp9QPu7ZDg=', port=None, port_specified=False, domain='.myapi.arlo.com', domain_specified=True, domain_initial_dot=True, path='/', path_specified=True, secure=True, expires=1685647490, discard=False, comment=None, comment_url=None, rest={'HttpOnly': None, 'SameSite': 'None'}, rfc2109=False), Cookie(version=0, name='JSESSIONID', value='910082F82BEA4F9DC242B68B25C3D139', port=None, port_specified=False, domain='myapi.arlo.com', domain_specified=False, domain_initial_dot=False, path='/hmsweb', path_specified=True, secure=True, expires=None, discard=True, comment=None, comment_url=None, rest={'HttpOnly': None}, rfc2109=False)]>

What do you think?

Thanks

ninjafella commented 1 year ago

@twrecked Here are my log files from both pyaarlo b7 and b8 running outside of HA, the HA logs from hass-aarlo and the setup that I am using for HA. ha_logs.txt aarlo HA setup.txt pyaarlo_b8.log pyaarlo_b7.log

Pete118 commented 1 year ago

Any news here? I have the same issue as ninjafella and m0urs.

twrecked commented 1 year ago

@ninjafella Thanks for the logs but there wasn't enough information in them, I need to see what is being put in the packets.

On HA you can add the following:

  verbose_debug: True

And on PyArlo you need to add the following to pyaarlo.PyArlo:

  verbose_debug=True

You can use the webpage talked about here to encrypt the logs so only I see them.
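
(For the standalone case, a sketch of what that looks like together with Python logging so the extra detail actually goes somewhere; everything other than verbose_debug=True is a placeholder:)

    import logging
    import pyaarlo

    # Route pyaarlo's debug output to the console.
    logging.basicConfig(level=logging.DEBUG)

    arlo = pyaarlo.PyArlo(
        username='xxx', password='yyy',   # placeholders
        verbose_debug=True,               # include packet-level detail in the logs
    )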

vzlgdu commented 1 year ago

Hi, any news about this issue?

Hi, using backend='sse' the login works fine without errors, but in a simple script like this:

    # instructions for login using backend='sse'
    base = arlo.base_stations[0]
    base.mode = 'Giorno'

the new mode was set correctly, but the script ends after some minutes generating these errors:

attempt 1: error in response when setting mode= None attempt 2: error in response when setting mode= None

attempt 3: error in response when setting mode= None Failed to set mode. <RequestsCookieJar[Cookie(version=0, name='__cf_bm', value='67v7vxgl0n9En_2UWcIEYuGHqsZ9ZIBE_L0ruHMor3w-1685645690-0-ARx4g/Zegzr5NIB8OZ23NQy6SYEhG9lUcKaWO6CuLYv+tiZrjlLKQjIX8U0scAw1tW45Wqi3ae5oeSp9QPu7ZDg=', port=None, port_specified=False, domain='.myapi.arlo.com', domain_specified=True, domain_initial_dot=True, path='/', path_specified=True, secure=True, expires=1685647490, discard=False, comment=None, comment_url=None, rest={'HttpOnly': None, 'SameSite': 'None'}, rfc2109=False), Cookie(version=0, name='JSESSIONID', value='910082F82BEA4F9DC242B68B25C3D139', port=None, port_specified=False, domain='myapi.arlo.com', domain_specified=False, domain_initial_dot=False, path='/hmsweb', path_specified=True, secure=True, expires=None, discard=True, comment=None, comment_url=None, rest={'HttpOnly': None}, rfc2109=False)]>

What do you think?

Thanks

vzlgdu commented 1 year ago

Hi, it seems that support has ended...

KoolLSL commented 1 year ago

I have had the same issues for weeks. I tried all versions from v0.7.4b12 to v0.7.4b16 and other tips, but I was still blocked by Cloudflare error 403, using 2FA by IMAP, backend: sse. The 2FA email was not even sent by Arlo.

Today I tweaked pyaarlo/backend.py, adding "Cache-Control": "no-cache" and "Pragma": "no-cache" to def _auth_headers(self):, and then it logged in again and all cameras appeared at the next HA restart!

As a test, I deleted these 2 added lines and restarted HA, and it logged in correctly again. I am not sure what exactly happened, but I suppose that at some point, if Arlo makes changes on their side, a cache needs to be cleared. Currently using v0.7.4b16. Could this cache stuff be a clue?
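
(For anyone wanting to try the same tweak, a rough sketch of what the addition looks like; the exact placement inside _auth_headers() depends on your pyaarlo version:)

    # In pyaarlo/backend.py, inside the headers dict built by _auth_headers():
    headers = {
        # ... existing auth headers stay as they are ...
        "Cache-Control": "no-cache",
        "Pragma": "no-cache",
    }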

m0urs commented 1 year ago

Unfortunately that did not work for me :-( However, I do get a different Cloudflare message now (not sure if this is because of the change or because Cloudflare changed something on their side in the last 4 weeks):

current-cloudflare-error.txt old-cloudflare-error.txt

I changed the file extension from HTML to TXT as GitHub does not accept HTML files.

m0urs commented 1 year ago

I just checked again what is sent to Arlo if I log in via browser (which is working without a problem):

Just before the POST call to "https://ocapi-app.arlo.com/api/auth" there is an OPTIONS call to the same URL as well. This call seems to return two cookies:

__cf_bm
__cfruid

There are some other cookies which are sent by the POST command, like

_gid=GA1.2.1624826369.1688290977; _gat_UA-58158251-37=1; _ga_867B4ZNXMT=GS1.1.1688290977.1.0.1688290977.0.0.0; _ga=GA1.1.1895689830.1688290977; _ga_MT2NKYTC64=GS1.1.1688290977.1.0.1688290996.0.0.0

I am wondering if pyaarlo does cookie handling as well? So that cookies received are also available when pyaarlo sends a subsequent request?

Maybe that is our issue here which would also explain this message in the error message from Cloudflare:

"Please enable cookies."

Another difference I have found: in the request headers of the browser there is a header called "Content-Length" and its value is the number of characters of the POST request body:

{"email":"xxx@yyyy.com","password":"XXXXXXXXXXX=","language":"de","EnvSource":"prod"}

@twrecked
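
(Side note: requests, which cloudscraper builds on, computes Content-Length itself whenever a body is supplied, so that header should not normally need setting by hand. A quick way to confirm, with a placeholder payload:)

    import requests

    # Prepare (but do not send) a POST with a JSON body and inspect the
    # headers that requests generates automatically.
    req = requests.Request(
        "POST",
        "https://ocapi-app.arlo.com/api/auth",
        json={"email": "xxx@example.com", "password": "yyy"},   # placeholder body
    ).prepare()
    print(req.headers.get("Content-Type"))     # application/json
    print(req.headers.get("Content-Length"))   # length of the encoded body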

twrecked commented 1 year ago

@KoolLSL I just got locked out and the cache change worked for me. I'll add it in later and I'll try to figure out why.

@m0urs I believe that stuff is handled by the cloudscraper plugin. I use it as described here. But you can add in some code to get at the information returned from Cloudflare. See here for how to view the cookies; it should be easy enough to add debug to view this. Try the debug option. I'll check in later.
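
(If it helps with debugging, a sketch of dumping whatever cookies a cloudscraper session is holding after a request - for example the __cf_bm / __cfruid cookies mentioned above; the URL is just an example:)

    import cloudscraper

    # Make a request and print the cookies Cloudflare handed back.
    session = cloudscraper.create_scraper()
    session.get("https://ocapi-app.arlo.com/")
    for cookie in session.cookies:
        print(cookie.name, cookie.value)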