hpi-sam / Robust-Multi-Agent-Reinforcement-Learning-for-SAS

Research project on robust multi-agent reinforcement learning (MARL) for self-adaptive systems (SAS)

MAPE-K definitions and references #52

Open christianadriano opened 2 years ago

christianadriano commented 2 years ago

Provide references and definitions for the MAPE-K architecture.

christianadriano commented 2 years ago

This paper uses MAPE-K in the context of multi-agent systems: http://www.lirmm.fr/~dony/enseig/MR/notes-etudes/Make-K-Loop.pdf

christianadriano commented 2 years ago

Jonas, here is my suggestion for the definitions:

- Monitor: keep track of the envelope size and count the number of breaches of the control envelope.
- Analyze: detect breaches (N consecutive, or a percentage over a sliding window) and set off alarms for pre-determined breach patterns (the patterns are part of the knowledge base).
- Plan: for each alarm, determine the set and type of actions.
- Execute: instantiate and execute the actions.
- Knowledge base: library of policies, breach patterns, and any prototypical model that could also be learned (future work).
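To make the breach-detection idea concrete, here is a minimal Python sketch of the Monitor and Analyze steps as described above. All names (`EnvelopeMonitor`, `BreachAnalyzer`, the thresholds) are hypothetical and only illustrate the "N consecutive or percentage over a sliding window" patterns; they are not code from this repository.

```python
from collections import deque

class EnvelopeMonitor:
    """Monitor: track the control envelope and record breaches in a sliding window."""

    def __init__(self, lower, upper, window_size=20):
        self.lower = lower
        self.upper = upper
        self.window = deque(maxlen=window_size)  # True = breach, False = inside envelope

    def observe(self, value):
        breached = not (self.lower <= value <= self.upper)
        self.window.append(breached)
        return breached


class BreachAnalyzer:
    """Analyze: raise alarms for pre-determined breach patterns
    (N consecutive breaches, or breach ratio over the sliding window)."""

    def __init__(self, max_consecutive=3, max_ratio=0.5):
        self.max_consecutive = max_consecutive
        self.max_ratio = max_ratio

    def alarms(self, window):
        found = []
        # Pattern 1: N consecutive breaches at the end of the window.
        recent = list(window)[-self.max_consecutive:]
        if len(recent) == self.max_consecutive and all(recent):
            found.append("consecutive_breaches")
        # Pattern 2: breach ratio over the sliding window exceeds the threshold.
        if window and sum(window) / len(window) > self.max_ratio:
            found.append("breach_ratio_exceeded")
        return found


# Small usage example with made-up envelope bounds and readings.
monitor = EnvelopeMonitor(lower=0.0, upper=1.0, window_size=10)
analyzer = BreachAnalyzer(max_consecutive=3, max_ratio=0.4)
for value in [0.2, 1.3, 1.5, 1.7, 0.5]:
    monitor.observe(value)
print(analyzer.alarms(monitor.window))  # ['breach_ratio_exceeded']
```

The Plan step would then map each alarm string to a set of candidate actions held in the knowledge base.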

jocodeone commented 2 years ago

Below are my thoughts about our MAPE-K loop:

- Monitor: I would extend your definition to also gather the loss values of our networks.
- Analyze: Could you explain the pre-determined patterns? Does this refer to the NN and the RidgeRegression?
- Plan: Agreed that an adaptation is triggered by ranking the next best actions.
- Execute: Agreed.
- Knowledge base: Agreed.

One question I have regarding the paper: does our setup follow the decentralized MAPE-K loop it describes? Our knowledge base is decentralized, and since the agents produce the metrics and execute the actions defined in the planning step, that part could be seen as decentralized as well. However, the metrics are collected, analyzed, planned, and executed by our centralized MultiAgentController.
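For the centralized-vs-decentralized question, here is a rough Python sketch of how a centralized controller could drive the four MAPE steps against decentralized agents, matching the division of labour described above. The class and method names (`MapeKController`, `collect_metrics`, `match_patterns`, `candidate_actions`, `expected_utility`, `apply`) are assumptions made for illustration, not the actual interface of the MultiAgentController in this repository.

```python
class MapeKController:
    """Centralized MAPE-K loop: the controller runs Monitor/Analyze/Plan/Execute,
    while metrics are produced and actions applied by the decentralized agents."""

    def __init__(self, agents, knowledge_base):
        self.agents = agents              # agents produce metrics and apply actions
        self.knowledge = knowledge_base   # policies, breach patterns, utilities

    def monitor(self):
        # Gather envelope metrics and network loss values from every agent.
        return {agent.name: agent.collect_metrics() for agent in self.agents}

    def analyze(self, metrics):
        # Match the metrics against breach patterns stored in the knowledge base.
        return [alarm
                for name, values in metrics.items()
                for alarm in self.knowledge.match_patterns(name, values)]

    def plan(self, alarms):
        # Rank the candidate adaptation actions for each alarm and keep the best one.
        return [max(self.knowledge.candidate_actions(alarm),
                    key=self.knowledge.expected_utility)
                for alarm in alarms]

    def execute(self, actions):
        # Execution happens on the agents themselves.
        for action in actions:
            action.target_agent.apply(action)

    def step(self):
        actions = self.plan(self.analyze(self.monitor()))
        self.execute(actions)
```

In this reading the loop itself is centralized, while sensing (Monitor) and actuation (Execute) are delegated to the agents, which seems consistent with the setup described above.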