A1 – poison input is treated as standard input
These inputs are injected using standard API requests and the system behaves as it would for any standard input, e.g.:
curl --request PUT 'https://api.project-chai.org/heating/mode/?label=test_home_kevin' --header 'Authorization: Bearer 8dbb9774-970c-4f9d-8992-65f88e501d0e,b7b100ed0a9e8e54af91ece8' --header 'Content-Type: application/json' --data-raw '{"mode": "auto", "target": 30}'
Nothing more is required from the API or backend.
The attack(s) will be orchestrated and triggered by a script written by Greenwich, to be reviewed by Bristol.
A3 – poison inputs are hidden from the user
One option is that inputs are injected in the same way as A1, except for the addition of an optional Boolean field `hidden` (default `hidden=false`) in the body of the `PUT /heating/mode/` request, e.g.:
curl --request PUT 'https://api.project-chai.org/heating/mode/?label=test_home_kevin' --header 'Authorization: Bearer 8dbb9774-970c-4f9d-8992-65f88e501d0e,b7b100ed0a9e8e54af91ece8' --header 'Content-Type: application/json' --data-raw '{"mode": "auto", "target": 30, "hidden": true}'
A corresponding `hidden` column would be required in the DB, with any relevant API `GET` requests omitting entries with `hidden=true`.
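For illustration, a minimal sketch (in Python; the row structure, field names, and `visible_changes` helper are all hypothetical, not the actual CHAI schema) of how a `GET` handler might omit hidden entries:

```python
from dataclasses import dataclass

@dataclass
class SetpointChange:
    timestamp: str
    mode: str
    target: float
    hidden: bool = False  # the proposed new column, defaulting to false

def visible_changes(rows: list[SetpointChange]) -> list[SetpointChange]:
    """Return only entries the user is allowed to see (hidden=true omitted)."""
    return [row for row in rows if not row.hidden]

rows = [
    SetpointChange("2022-11-09T08:00", "auto", 21.0),
    SetpointChange("2022-11-09T08:30", "auto", 30.0, hidden=True),  # A3 poison input
]
print(visible_changes(rows))  # only the 08:00 entry is returned
```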
The trickier part is that the system should not enter `override` mode if the triggering input includes `hidden=true`. Aside from this, the system should otherwise behave as usual, e.g. the ML model should still be updated and, if the system was in `auto` mode at the time of the attack, the new prediction (from the updated ML model) should be adopted as the new setpoint.
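To make the intended branching concrete, here is a minimal Python sketch under the assumptions above; `System`, `StubModel`, and `handle_mode_request` are illustrative names, not the actual backend code:

```python
class StubModel:
    """Stand-in for the real profile model; remembers only the last input."""
    def __init__(self) -> None:
        self.last_input = 20.0

    def update(self, target: float) -> None:
        self.last_input = target

    def predict(self) -> float:
        return self.last_input


class System:
    def __init__(self) -> None:
        self.mode = "auto"
        self.setpoint = 20.0
        self.model = StubModel()


def handle_mode_request(system: System, body: dict) -> None:
    hidden = body.get("hidden", False)
    if not hidden:
        # A visible input (A1 / normal use) puts the system in override mode.
        system.mode = "override"
        system.setpoint = body["target"]
    # In all cases the input still updates the ML model.
    system.model.update(body["target"])
    if hidden and system.mode == "auto":
        # Hidden input while in auto: adopt the updated model's prediction.
        system.setpoint = system.model.predict()


s = System()
handle_mode_request(s, {"mode": "auto", "target": 30, "hidden": True})
assert s.mode == "auto" and s.setpoint == 30  # no override mode entered
```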
The attack(s) will be orchestrated and triggered by a script written by Greenwich, to be reviewed by Bristol.
A2 – prices are manipulated
Manipulated prices `x'` will be derived from a constant multiplier `m` applied to real prices `x`, i.e. `x' = x * m`. The multiplier `m` is to be decided by Greenwich.
Attacks will be triggered manually (exact mechanism to be decided), but may be terminated either manually or in an automated way (e.g. after `k` timesteps, or at a given datetime), whichever is technically easier.
Manipulated prices should permanently override the real prices for the attack period, so users will henceforth see only the manipulated prices for that period.
One option is to include a new API endpoint that accepts the multiplier and a terminating condition, but it is unclear how best to handle the actual overriding of prices.
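As a starting point for discussion, a minimal Python sketch of what such an endpoint and override might look like; the endpoint name, the `PriceAttack` holder, and the in-memory override are all assumptions:

```python
from datetime import datetime

class PriceAttack:
    def __init__(self, multiplier: float, end: datetime) -> None:
        self.multiplier = multiplier
        self.end = end

active_attack = None  # set by the attack endpoint, cleared on termination

def trigger_attack(multiplier: float, end: datetime) -> None:
    """Body of a hypothetical PUT /attacks/prices/ handler."""
    global active_attack
    active_attack = PriceAttack(multiplier, end)

def current_price(real_price: float, now: datetime) -> float:
    """Apply x' = x * m while an attack is active; real price otherwise."""
    global active_attack
    if active_attack is not None and now < active_attack.end:
        return real_price * active_attack.multiplier
    active_attack = None  # terminating condition reached (or never set)
    return real_price

trigger_attack(10, datetime(2022, 11, 9, 10, 30))
print(current_price(1.042, datetime(2022, 11, 9, 9, 30)))  # manipulated: 10.42
print(current_price(1.042, datetime(2022, 11, 9, 11, 0)))  # attack over: 1.042
```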
Comments from @georgeloukas and @eddy211821 are welcome.
This part is now ready to implement. To this end we need to fully understand what each attack can do, and what the inherent limitations are.
A1 – poison input is treated as standard input
This attack acts in lieu of the user performing the same transaction through the UI. For all intents and purposes it behaves exactly as if the user had carried out the interaction through the UI. This means that the user can see the setpoint change when they open the UI, the user can cancel it, the event is logged, the event appears in the XAI reports, and the event causes the profile model to be updated. As in normal operation, the user cannot prevent the model from being updated.
A2 – prices are manipulated
A price manipulation happens for all users at the same time. A price manipulation is triggered by Greenwich at their discretion with a given multiplier (defaults to 1) and for a given duration (defaults to 60 minutes; only multiples of 30 are accepted). Any triggered price manipulation starts at the next price change. For example, if a price manipulation is triggered at 9:13 it takes effect at 9:30, covering the prices between 9:30–10:30 if using the default 60-minute duration. Once triggered, a price manipulation cannot be cancelled but it can be overwritten, e.g. if attack 1 is launched at 9:13 and attack 2 is launched at 9:15 then only attack 2 takes place (even if its duration is shorter than attack 1's). Other than applying the attack, the database does not explicitly log that the attack has taken place. However, any indirect information, such as a log entry that mentions the price at the time, would record the manipulated price. The attack works as if electricity prices are obtained from a third party, and it is this third party, or the link to the third party, that has been attacked.
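A minimal Python sketch of the timing and overwrite rules just described (names are illustrative; this is not the actual implementation):

```python
from datetime import datetime, timedelta

def attack_start(triggered_at: datetime) -> datetime:
    """Round up to the next price change, i.e. the next :00 or :30 boundary."""
    base = triggered_at.replace(minute=0, second=0, microsecond=0)
    for offset in (30, 60):
        candidate = base + timedelta(minutes=offset)
        if candidate > triggered_at:
            return candidate

pending_attack = None  # only one attack can be pending at any time

def trigger(multiplier: float, duration_min: int, now: datetime) -> None:
    assert duration_min % 30 == 0, "only multiples of 30 accepted"
    global pending_attack
    # A later trigger simply overwrites an earlier one, whatever its duration.
    pending_attack = (attack_start(now), multiplier, duration_min)

trigger(10, 240, datetime(2022, 11, 9, 9, 13))  # attack 1
trigger(2, 60, datetime(2022, 11, 9, 9, 15))    # attack 2 replaces attack 1
print(pending_attack)  # starts at 09:30, multiplier 2, for 60 minutes
```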
A3 – poison inputs are hidden from the user
This attack works as A1, but it is fully hidden. It does not show up in the UI, it does not show up in the logs, it does not show up in the CHAI reports, and it does not change the temperature of the valve. The one and only effect of this attack is that it feeds the poisoned data into the AI, which uses it to update its model for the profile that is active at the time of the attack. As soon as the profile model is changed and these changes are propagated, the new model values go into effect.
@kimbauters Precisely what we are after. Great work.
@georgeloukas @eddy211821 just to emphasise a caveat in A3...
Setpoint updates rely on an asynchronous operation scheduled at regular intervals. This means we cannot guarantee that A3 attacks will change the setpoint immediately; instead the setpoint will change at the next scheduled update, which could be up to e.g. 15 minutes after the attack.
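To illustrate why the delay occurs, a rough Python sketch of such a periodic task (the interval, the names, and the use of asyncio are assumptions, not the actual scheduler):

```python
import asyncio

UPDATE_INTERVAL = 15 * 60  # seconds; matches the 15-minute example above

class StubModel:
    def predict(self) -> float:
        return 18.5  # stand-in for the real profile model's prediction

async def setpoint_updater(model: StubModel) -> None:
    """Periodic task: A3 poisoning only surfaces at the next tick."""
    while True:
        # Any model changes made since the last tick are picked up here,
        # so an attack can take up to one full interval to change the setpoint.
        setpoint = model.predict()
        print(f"setpoint updated to {setpoint}")
        await asyncio.sleep(UPDATE_INTERVAL)

# asyncio.run(setpoint_updater(StubModel()))  # runs forever; shown for shape only
```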
@kevinmcareavey That’s what we had assumed. So, that’s fine.
A2 attacks now implemented since abb33a5.
A3 attacks added in 988fa5b. The functionality is the same as for the `PUT /heating/` endpoint. The only difference is the addition of the `hidden=true` flag.
While both A2 and A3 are added and should work, they remain largely untested for now. I will keep this issue open until thorough testing is completed.
Thank you very much for your email and help.
Kind regards, Eddy
Has anyone had the chance to test the A2 price manipulation attacks?
I can test A2 price manipulation attacks and provide you with the test results.
@eddy211821 tested them earlier this week and they appear to be working fine:
Last time, I used the default setting which is 60 minutes. Shall I try different durations?
Yes, no harm trying. Thanks!
I changed the duration to 4 hours and used a multiplier of 10 so we can easily see the changes. The result is shown below and it is working fine.
Thanks @eddy211821 and @kevinmcareavey for testing the attacks. I will close this issue now as it has reached its goal. If there are any issues in the future, of course feel free to open a new issue.
An A3 attack (i.e. A3 – poison inputs are hidden from the user) was performed. When the participants look at the system event (i.e. event type: PROFILE_UPDATE), they could identify an attack from the message “you set the target temperature to 7°C …”, since the participants did not change the target temperature. This should also be hidden. A figure illustrating this issue is provided:
@kevinmcareavey @kimbauters Hi, is it possible to get a variation of A3 where it does change the temperature of the valve? Would something like that be easy to do? E.g. is there a parameter to set to true, or similar?
@georgeloukas: my educated guess is that this would not be straightforward to implement, but in any case @kimbauters has now reached the end of his extended contract, so going forward any backend work from us will be limited to bug fixes and technical support only.
@eddy211821 Sorry, that's an error on my end. Should be fixed with c742dcf after the API service is restarted.
@georgeloukas Not trivial; it would require a major code change to make it happen. The code is built around attacks A1–A3 as we agreed upon, and doesn't really have the flexibility for other attack types. As @kevinmcareavey mentioned, I'm also back on reduced hours so there is no scope for any major code rewrites.
@kimbauters @kevinmcareavey Ok, thank you for confirming.
@eddy211821 Kevin restarted the service. Let us know if you still see the logs so we can re-close this issue if all is fine now.
Thank you very much for your updates. No problem and I will keep you updated. Thank you.
I can still see the message “you set the target temperature to …” while an A3 attack is performed. A screenshot is provided.
@eddy211821 can you try again now? https://github.com/chai-project/chai-ai/commit/cbe296fb8426ef744b41f4302997c8e338f198bc
@eddy211821 @georgeloukas you may need to keep an eye on the behaviour of the notifications page after https://github.com/chai-project/chai-frontend/issues/68 is closed. What I expect is that an entry such as the following should appear within 5 minutes of the A3 attack:
The system set the target temperature to 18.5°C because the current price is 10.42 p/kWh and the active profile is Weekdays where the AI believes your price sensitivity is Very low and your preferred temperature (if energy were free) is 18.58°C.
Typically these entries will only appear on the hour or on the half-hour, assuming nothing else changes, so if there is no other evidence of inputs or profile updates then these entries would indicate an A3 attack if they occur at any other time.
Edit: to make this more explicit, suppose I submit an A3 attack input between 13:50 and 13:55 then I might see the following:
| # | Date | Time | Category | Description |
|---|------|------|----------|-------------|
| 1 | 29/11/2022 | 14:00 | System | The system set the target temperature to 18.5°C because the current price is 10.857 p/kWh and the active profile is Weekdays where the AI believes your price sensitivity is Very low and your preferred temperature (if energy were free) is 19°C. |
| 2 | 29/11/2022 | 13:55 | System | The system set the target temperature to 18.5°C because the current price is 10.416 p/kWh and the active profile is Weekdays where the AI believes your price sensitivity is Very low and your preferred temperature (if energy were free) is 19°C. |
| 3 | 29/11/2022 | 13:30 | System | The system set the target temperature to 18.5°C because the current price is 10.416 p/kWh and the active profile is Weekdays where the AI believes your price sensitivity is Very low and your preferred temperature (if energy were free) is 18.58°C. |
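For completeness, a rough Python sketch of this detection heuristic (purely illustrative; `off_schedule` is a hypothetical helper), which would flag entry 2 above (13:55) while leaving entries 1 and 3 alone:

```python
from datetime import datetime

def off_schedule(entries: list[datetime]) -> list[datetime]:
    """Flag entries not aligned to :00 or :30 (candidate A3 indicators)."""
    return [t for t in entries if t.minute % 30 != 0 or t.second != 0]

entries = [
    datetime(2022, 11, 29, 14, 0),   # on schedule
    datetime(2022, 11, 29, 13, 55),  # off schedule -> possible A3 attack
    datetime(2022, 11, 29, 13, 30),  # on schedule
]
print(off_schedule(entries))  # flags only the 13:55 entry
```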
Fixes are confirmed to be working so I am marking this as closed.
The API needs to be able to simulate attacks. Three types of attacks are considered:
A1 – poison input is treated as standard input
A2 – prices are manipulated
A3 – poison inputs are hidden from the user
Further input and discussion is needed to determine how each attack is logged, manifests itself, and how it is triggered.