Closed pechouc closed 2 years ago
Why store user headers when username / password seem to be enough?
I agree for the creds, but I think it was for safety reasons that this wasn't stored in the cache first. It's not exactly optimal to store unencrypted creds in the cache, but that might be our first step :man_shrugging:
--> I can't see a link between users and groups but I can see links between sites and groups as well as devices and owner_id (user_id ?)
Well, in the API, group association was introduced for regional management. Being admin vs. user wasn't enough: as a user you can fetch your own devices, so if you try to get ALL devices, you will only get yours. However, if other users are in the same group as you, you can read their devices as well :)
So typically, what we want is:
That will allow us to have (if needed) multiple users from the same regions sharing a similar data access authorization.
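To make the group-based access rule above concrete, here is a minimal sketch, assuming (hypothetically) that memberships map each user to a set of group ids and that devices carry an `owner_id` field, as suggested earlier in the thread:

```python
# Hypothetical sketch of the access rule described above: a user can read
# a device if they own it, or if the device's owner shares a group with them.

def accessible_devices(user_id, devices, memberships):
    """Return the subset of devices the user may read.

    user_id:     requesting user's id
    devices:     list of dicts with "id" and "owner_id" keys
    memberships: dict mapping user id -> set of group ids
    """
    user_groups = memberships.get(user_id, set())
    visible = []
    for device in devices:
        owner = device["owner_id"]
        # Own devices are always visible; otherwise the owner must share
        # at least one group with the requesting user.
        if owner == user_id or user_groups & memberships.get(owner, set()):
            visible.append(device)
    return visible
```

This is only an illustration of the intent; the real filtering happens on the API side.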
Adding coordinates and zoom for the Engie group in app/assets/data/group_correspondences.json
Let's make this dynamic later, but updating this manually for now :+1:
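For context, an entry in app/assets/data/group_correspondences.json could look something like this (the group id, coordinate and zoom values below are made up for illustration, not taken from the actual file):

```json
{
    "2": {
        "name": "Engie",
        "center_lat": 44.5,
        "center_lon": 4.7,
        "zoom": 10
    }
}
```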
Site markers shown on the map now depend on the credentials entered by the user. Typically, the Engie user can only see the corresponding sites and devices.
The only remaining issue is the expiration of the token after 1 hour. For now, we refresh the token based on super admin credentials. As a consequence, after 1 hour, all sites and alerts are shown on the platform for any user. The simplest way to move forward while dealing with this issue would be to (i) store the credentials of the user and (ii) use them to refresh tokens. Of course, this is far from ideal, but it can be implemented fairly easily.
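The (i)+(ii) proposal above could be sketched as a small token manager that keeps the user's credentials in memory and re-authenticates once the token is older than the 1-hour TTL. The `authenticate` callable stands in for the real API login call, which is not shown in this thread:

```python
import time

# Sketch of the temporary fix: store the user's credentials and use them
# (rather than super admin creds) to refresh the token on expiry.

class TokenManager:
    def __init__(self, username, password, authenticate, ttl=3600):
        self._username = username
        self._password = password          # stored unencrypted: temporary!
        self._authenticate = authenticate  # (username, password) -> token
        self._ttl = ttl
        self._token = None
        self._issued_at = 0.0

    def get_token(self, now=None):
        now = time.time() if now is None else now
        # Refreshing with the *user's* creds keeps the group-based
        # scoping intact past the 1-hour expiry.
        if self._token is None or now - self._issued_at >= self._ttl:
            self._token = self._authenticate(self._username, self._password)
            self._issued_at = now
        return self._token
```

Again, this is a stopgap: keeping plaintext credentials around is exactly the trade-off discussed above.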
Great, thanks a lot!
I would recommend implementing the user creds patch while waiting for a better solution handled by the API
I agree with this temporary solution, I can quickly implement it during the weekend if we validate it.
e.g. token refresh using the token itself instead of creds?
I am afraid that, if the token were enough to keep access to the API for an unlimited amount of time through refreshing, this would defeat the initial purpose of token expiration.
Maybe an alternative could be to condition the refresh_token action on a sort of private admin key, stored as a secret environment variable on Heroku, rather than on the credentials? This way, we could refresh the platform user's token without taking the risk that someone outside the organisation could find an API token and use it indefinitely?
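The admin-key idea above could be sketched as a small authorization check: the secret lives in an environment variable (the variable name here is hypothetical, not an existing config var), and the refresh action compares the provided key against it in constant time:

```python
import hmac
import os

# Sketch of gating the refresh_token action on a private admin key
# kept in a Heroku config var, instead of on user credentials.

ADMIN_KEY_ENV = "PLATFORM_ADMIN_KEY"  # hypothetical variable name

def is_refresh_authorized(provided_key):
    """Constant-time comparison against the secret admin key."""
    expected = os.environ.get(ADMIN_KEY_ENV)
    if not expected or not provided_key:
        return False
    # hmac.compare_digest avoids timing side channels on the comparison
    return hmac.compare_digest(provided_key, expected)
```

The actual refresh route would call this check before issuing a new token; that endpoint lives on the API side and is out of scope here.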
But this discussion may of course fall outside the scope of this PR and we may want to open another issue if we agree that edits should rather be made on the API side.
From a security point of view, we cannot issue non-expiring tokens for users :/
> From a security point of view, we cannot issue non-expiring tokens for users
I might not have been very clear, but this is not what I was proposing 😅
For this PR, would you mind attaching a screenshot illustrating the "user-only" objects that are accessed?
Well noted, I will do the following during the weekend:
Also, for testing purposes, in another PR, I'm starting to think we should move most functions to a local Python package (so that we can test them outside of Dash). That would make the app more robust and clarify the split between backend & pure frontend features
Sounds great! Do you think you could open a dedicated issue? Because I would have 1 or 2 questions to make sure I understand how this should be organised 😊
And what about having refresh_token only executable by the super admin client?
I'm not sure that makes sense, but the idea is something like:
This would keep token refresh safe, as admin creds are needed, and would keep user sessions active for platform purposes
@pechouc , @frgfm
> Sounds great! Do you think you could open a dedicated issue? Because I would have 1 or 2 questions to make sure I understand how this should be organised 😊
@pechouc will do!
> This would keep token refresh safe, as admin creds are needed, and would keep user sessions active for platform purposes
@Akilditu that means that anyone will be able to read the creds as they will be stored as instance attributes :/
Hello, I added the storage components for the user's credentials and used them to refresh the token. I tested it by leaving the platform open for an hour, and it seemed to work well!
I agree that this should be by no means definitive and thanks for opening the dedicated issue. Should we merge this for now though?
Yes!
@Akilditu has noticed an issue with the device status dashboard. The last commit ensures that a missing `last_ping` variable (which is the case of the two "Reolink" devices in the database) does not break the callback.

Thanks! I imagine you have tested the PR to make sure it works? (after an hour)
Yes, this was tested.
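For reference, the `last_ping` guard discussed above could look roughly like this; the function name, status labels and 3-hour threshold are assumptions for illustration, not the actual implementation:

```python
from datetime import datetime, timedelta

# Sketch of the guard: a device with no last_ping recorded (as for the
# two "Reolink" devices) must not break the status computation.

def device_status(device, now=None, threshold=timedelta(hours=3)):
    """Return "unknown", "ok" or "ko" for a device dict."""
    now = now or datetime.utcnow()
    last_ping = device.get("last_ping")
    if last_ping is None:
        # No ping recorded yet: report "unknown" instead of crashing
        return "unknown"
    return "ok" if now - last_ping <= threshold else "ko"
```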
I left some proposals as comments
Thanks a lot, I have either left a response here or implemented the changes. It now seems to work properly, but I only tested quickly and may have missed something.
We got an error from Sentry just after merging:
TypeError: update_live_alerts_data() missing 2 required positional arguments: 'user_headers' and 'user_credentials'
(1 additional frame(s) were not displayed)
...
File "flask/app.py", line 1525, in full_dispatch_request
rv = self.handle_user_exception(e)
File "flask/app.py", line 1523, in full_dispatch_request
rv = self.dispatch_request()
File "flask/app.py", line 1509, in dispatch_request
return self.ensure_sync(self.view_functions[rule.endpoint])(**req.view_args)
File "dash/dash.py", line 1096, in dispatch
response.set_data(func(*args, outputs_list=outputs_list))
File "dash/dash.py", line 1017, in add_context
output_value = func(*args, **kwargs) # %% callback invoked %%
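The traceback above is a plain Python signature mismatch: Dash invokes a callback with one positional argument per declared Input/State, so a function whose signature has two extra parameters with no matching declaration fails exactly like this. A minimal reproduction (names taken from the traceback; the function body is a stand-in):

```python
# Reproduce the TypeError from the Sentry report: the callback function
# declares user_headers and user_credentials, but the caller (here,
# standing in for Dash) passes only the other declared value.

def update_live_alerts_data(n_intervals, user_headers, user_credentials):
    return user_headers, user_credentials

try:
    update_live_alerts_data(1)  # missing the two extra arguments
except TypeError as err:
    message = str(err)
```

The fix on the app side is to declare matching State entries for the stored headers and credentials in the callback, so Dash passes them through.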
@frgfm how can we access the full Sentry logs?
@Akilditu sure, but you have to log in through Heroku! Strangely, the traceback isn't available at a higher level :/
It came from a Mac user on Safari
This definitely sounds like me 😅 I did run into some bugs while testing the PR locally, which these errors likely reflect. They should be solved now, but I won't have time today to fully confirm this...
In relation to #83, this PR aims at using the user's credentials, entered to log into the platform, for most subsequent interactions with the database. This would allow multiple users on the platform, which is required for the Engie install. The tasks can be summarised as:
- Fetching the ID of the user's group after login (is there an appropriate route or client method?)
- Adding coordinates and zoom for the Engie group in app/assets/data/group_correspondences.json
How the platform looks after these changes
Once logged in, users only see the sites (and the potential alerts) that are in their scope. This holds for the main page / the map, as below. Note that the map is centred on the point obtained by taking the mean geographical coordinates of the in-scope sites.
But also, for the device status dashboard as shown here:
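The map-centring rule mentioned above (mean geographical coordinates of the in-scope sites) can be sketched as a small helper; the function name is hypothetical:

```python
# Sketch: centre the map on the mean lat/lon of the user's in-scope sites.

def map_center(sites):
    """sites: list of (lat, lon) tuples; returns the mean point."""
    if not sites:
        raise ValueError("no in-scope sites to centre on")
    lats = [lat for lat, _ in sites]
    lons = [lon for _, lon in sites]
    return sum(lats) / len(sites), sum(lons) / len(sites)
```

A simple mean works for sites within one region; for widely spread sites a bounding-box fit would behave better.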