alyhasansakr opened this issue 4 years ago
First of all, excuse my language/wording 🤣 Many profanities and personal ideas/prejudices will be written from now on. Besides Aly and me, who have discussed these topics, you may not understand what I am writing here, but bear with us; when the results and paper-writing phase comes, I will have a deeper understanding of all of this and will answer any questions if needed.
Main project: 1) History diagnostics. I will write about this in the comment below, because this is my main focus.
Side projects (rather optional, but it would be great if we got these two covered; I will probably do them anyway). Rather lengthy, but below is everything I need to start them later on, after I get (1) settled.
2) Apply prediction using different algorithms/formulas. A small 3-page introduction to the methods I would like to use is here: Forecasting method.docx
A similar paper that used the big complex method (the damn ARIMA) in cloud computing can be found here: Time series forecasting of cloud data center workloads for dynamic resource provisioning
The main focus/pain in the ass will probably be the ARIMA one; a small tutorial on it can be found here. The author of that book used the R language (the damn one for statistics people) for his implementation. We could shamelessly port his code from there for our usage, but I would rather write it myself, since I know nothing about R 🤣
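To get a feel for what that involves before porting anything, here is a minimal sketch (not a real ARIMA implementation): an ARIMA(1,1,0)-style forecast that differences the series once, estimates the AR(1) coefficient from the lag-1 autocorrelation, and predicts one step ahead. The class name and the sample workload are made up, and a proper implementation would also select (p, d, q) and fit by maximum likelihood.

```java
import java.util.Arrays;

/** Rough ARIMA(1,1,0)-style sketch; names and data are placeholders. */
public class Arima110Sketch {

    /** One-step-ahead forecast for a univariate series y (needs length >= 3). */
    public static double forecastNext(double[] y) {
        // First difference: d[t] = y[t+1] - y[t]
        double[] d = new double[y.length - 1];
        for (int t = 0; t < d.length; t++) {
            d[t] = y[t + 1] - y[t];
        }
        double mean = Arrays.stream(d).average().orElse(0.0);

        // Lag-1 autocorrelation of the differences ~ AR(1) coefficient phi
        double num = 0.0;
        double den = 0.0;
        for (int t = 0; t < d.length; t++) {
            double dev = d[t] - mean;
            den += dev * dev;
            if (t > 0) {
                num += dev * (d[t - 1] - mean);
            }
        }
        double phi = den == 0.0 ? 0.0 : num / den;

        // Forecast the next difference, then undo the differencing
        double nextDiff = mean + phi * (d[d.length - 1] - mean);
        return y[y.length - 1] + nextDiff;
    }

    public static void main(String[] args) {
        // Made-up CPU-load history (percent), just for illustration
        double[] cpuLoadHistory = {40, 42, 45, 43, 47, 50, 52, 51, 55, 58};
        System.out.printf("Next predicted load: %.2f%n", forecastNext(cpuLoadHistory));
    }
}
```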
3) Implementation of "A greedy algorithm for task offloading in mobile edge computing system" to prove that their algorithm is not practical enough (or, put another way, to tell people that using energy consumed as a comparison criterion is stupid). I may be wrong, but we will find out if I end up doing this:
energy saved = energy local - energy transfer to the cloud.
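Just to make that criterion concrete (all numbers here are made up, only for illustration):

```java
/** Hypothetical numbers, only to make the paper's criterion concrete. */
public class EnergySavedExample {
    public static void main(String[] args) {
        double energyLocal = 2.4;     // energy (J) to run the task on the device itself
        double energyTransfer = 0.9;  // energy (J) to send the task to the cloud/edge
        double energySaved = energyLocal - energyTransfer;
        System.out.println("Energy saved by offloading: " + energySaved + " J"); // 1.5 J
    }
}
```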
Use cases (not sure whether that's the proper wording for this, but meh). Please chime in with more ideas/usage/expectations for a diagnostic from history and what to do with it; I will add them here along the way.
02.02.2021
Oh lol, took me long enough to start committing. Bad zero, bad bad. Details about the commit above (91af246):
Added a new class called TimeData so I can put the object in a Multimap. Things would be different if we started implementing a DB (SQL, Mongo), but I will keep things local for now. This class seems to be a recreation of LocalDateTime, but I think we will have different use cases for it. 🤞
Still have not come up with new ideas about how the history diagnostic would work. I will just focus on making a framework/platform to easily collect history data first. This should be a separate module.
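A minimal sketch of what that module could look like, assuming a Guava Multimap and keeping everything in memory for now; the field names (timestamp, cpuLoad) and class names are placeholders, not the actual TimeData from commit 91af246.

```java
import com.google.common.collect.ArrayListMultimap;
import com.google.common.collect.Multimap;
import java.time.LocalDateTime;

/** Sketch of an in-memory history module keyed by node id. */
public class HistoryStoreSketch {

    /** Stand-in for the TimeData idea: one timestamped measurement. */
    static class TimeData {
        final LocalDateTime timestamp;
        final double cpuLoad;

        TimeData(LocalDateTime timestamp, double cpuLoad) {
            this.timestamp = timestamp;
            this.cpuLoad = cpuLoad;
        }
    }

    // One node id -> many samples, kept locally for now (no SQL/Mongo yet)
    private final Multimap<String, TimeData> history = ArrayListMultimap.create();

    public void record(String nodeId, double cpuLoad) {
        history.put(nodeId, new TimeData(LocalDateTime.now(), cpuLoad));
    }

    public Iterable<TimeData> samplesFor(String nodeId) {
        return history.get(nodeId);
    }
}
```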
Zenith: Utility-Aware Resource Allocation for Edge Computing https://ieeexplore.ieee.org/document/8029256/
Their idea: (rough summary)
Useful insights learned from them:
Proposed ideas:
@zero212 In essence, they deal with a problem we don't deal with yet: when the service provider (SP) is different from (decoupled from) the infrastructure provider (ECIP). Basically, the SP makes a profit by selling a service to a customer but has to buy infrastructure from the ECIP to do so.
Their definition of "utility" depends on the entity: for the SP, it is exactly the same as our definition of "Quality of Service", with a hard limit on latency; for the ECIP, it is the absolute utilization of the infrastructure (how much is occupied), regardless of the gain in the SP's utility (however, they assume there is always a gain for the SP as well).
The "Cloud" part that they mention is similar to our "Local Execution" solution; both are an alternative to the edge that is assumed to provide lower quality than the edge.
If you look at your proposed ideas and consider my points above, the problem reduces to the naïve approach, which we have already studied.
What do you think?
> If you look at your proposed ideas and consider my points above, the problem reduces to the naïve approach, which we have already studied.
@alyhasansakr
Can you explain the 1st proposal a bit more clearly? I understand that they deal with the "selling resources" bit; my idea was simply to ignore that part and straight-up use the "gain in utility" as a criterion. But my preference leans toward the 2nd proposal, to be fair.
My 2nd proposal is a bit different from the original naive one, only a bit: that bit is the utilization part. I want to focus on the utilization of nodes, where we may skip the closest node and go for the node that will have the higher utilization once the client is assigned, as in the sketch below.
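A rough sketch of that utilization-first selection, assuming a simple capacity/used resource model; the class and field names are illustrative, not the framework's real model.

```java
import java.util.Comparator;
import java.util.List;
import java.util.Optional;

/** Sketch: pick the node that ends up most utilized, not the closest one. */
public class UtilizationMatcherSketch {

    record Node(String id, double capacity, double used) {
        boolean fits(double demand) {
            return used + demand <= capacity;
        }
        double utilizationWith(double demand) {
            return (used + demand) / capacity;
        }
    }

    /** Pick the node with the highest post-assignment utilization that still fits the client. */
    static Optional<Node> match(List<Node> nodes, double clientDemand) {
        return nodes.stream()
                .filter(n -> n.fits(clientDemand))
                .max(Comparator.comparingDouble(n -> n.utilizationWith(clientDemand)));
    }
}
```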
https://ieeexplore.ieee.org/document/8473379 Their idea:
The system model consists of several UAVs and several edge servers with VMs for handling tasks. All tasks are data-partition-oriented, which means execution can be parallelized into processes that run simultaneously.
A system orchestrator is in charge of how those tasks are split and which edge servers take which parts.
The problem they want to solve is to find an optimal wireless data rate and an optimal task allocation scheme that minimize energy consumption on the UAV side while satisfying a maximum latency. The data rate is proportional to the consumed power: the higher the data rate, the higher the power consumption.
They split the problem into latency-priority and energy-priority cases and use optimization methods such as simulated annealing (SA) combined with particle swarm optimization (PSO) to find the resource allocation with minimal latency.
My insight from their paper:
https://ieeexplore.ieee.org/abstract/document/9295762/
Their work's idea:
They mention that most current research does not consider small-cell networks (SCN) for task offloading/resource allocation.
A small-cell network is defined as "a series of small low-powered antennas - sometimes called nodes - that provide coverage and capacity in a similar way to a tower, with a few important distinctions. ..."
Their research model consists of 1 macro base station (MBS) and multiple small base stations (SBSs); both have MEC servers to provide computation.
Assuming that base stations have a limited number of channels, they consider the cross-tier interference when devices (MCU, SCU) try to offload tasks to the MEC server.
They calculate computation resources, power consumption, and subchannel resource allocation for all situations, and then do matchmaking. Their conclusion:
Their algorithm (EERA - energy-efficient resource allocation) is better than random resource allocation in terms of energy saving. (No shit, Sherlock.)
My insight from their paper:
Proposed idea:
https://ieeexplore.ieee.org/document/9277773
Their scenario:
Useful insights learned from them:
Proposed idea:
https://ieeexplore.ieee.org/document/8567678
Their idea:
Insight from them:
Previous proposal: 2 different methods. Whether to allow preemption or not is a big factor; it needs discussion.
Note: for the sake of easy understanding, clients/nodes labelled with a letter (client A, node B) are entities that have already been matched; clients/nodes labelled with a number (client 1, node 2) are entities that have not yet been matched or are about to be matched.
Main idea: each client/application is assigned to one of 9 classes, class 1 to 9, where class 9 is the lowest and class 1 the highest priority. Fairly simple, nothing new or groundbreaking.
Scenario: a situation where preemption happens: clients 1, 2, 3 want to match, but no node has enough resources. Whether preemption happens depends on the client's priority; more info in the algorithm part.
Use case (or benefit, don't mind the wording): accommodate situations where we need to prioritize the importance/urgency of a task. Without this, the algorithm will always execute small clients to keep the utilization number high.
Algorithm: Client 1 has the lowest priority (9) -> no match, no preemption, do nothing, return a fail message, ignore this client.
Client 2 has a higher priority (2-8).
Client 3 has the highest priority (1).
How to decide which node to match:
Proposals
Both proposals 1 and 2 go through the same flow, as below:
Attempt: match with the highest-scoring node (node A). A rough sketch of the whole preemption flow is below.
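A very rough sketch of the preemption idea, under the assumption that a client may only evict strictly lower-priority clients and that class 9 never preempts; the single-resource model, all names, and the exact handling of classes 2-8 (e.g. how victims get re-queued) are placeholders that still need discussion.

```java
import java.util.ArrayList;
import java.util.List;

/** Sketch of priority-based matchmaking with preemption (classes 1..9, 1 = highest). */
public class PreemptionSketch {

    record Client(String id, int priorityClass, double demand) {}

    static class Node {
        final String id;
        final double capacity;
        final List<Client> hosted = new ArrayList<>();

        Node(String id, double capacity) {
            this.id = id;
            this.capacity = capacity;
        }

        double free() {
            return capacity - hosted.stream().mapToDouble(Client::demand).sum();
        }
    }

    /** Returns the node the client was placed on, or null if the match failed. */
    static Node match(List<Node> nodes, Client client) {
        // 1) Normal attempt: any node with enough free resources
        for (Node n : nodes) {
            if (n.free() >= client.demand()) {
                n.hosted.add(client);
                return n;
            }
        }
        // 2) Lowest class (9) never preempts anyone: just fail
        if (client.priorityClass() == 9) {
            return null;
        }
        // 3) Preemption attempt: evict strictly lower-priority clients on one node
        for (Node n : nodes) {
            List<Client> victims = new ArrayList<>();
            double freed = n.free();
            for (Client hosted : n.hosted) {
                if (hosted.priorityClass() > client.priorityClass()) {
                    victims.add(hosted);
                    freed += hosted.demand();
                    if (freed >= client.demand()) {
                        break;
                    }
                }
            }
            if (freed >= client.demand()) {
                n.hosted.removeAll(victims); // in practice the victims would be re-queued
                n.hosted.add(client);
                return n;
            }
        }
        return null;
    }
}
```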
Still researching this; I have been reading papers on related ideas but it is not going anywhere. Need discussion @alyhasansakr
Main idea: adding another factor P (priority) when calculating the score (a strawman sketch follows after the headings below).
Scenario / use case:
Algorithm:
How to decide which node to assign:
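The scenario and algorithm details above are still to be filled in; as a strawman, and assuming the classes 1-9 from before, the priority factor P could simply scale the existing node score. The weighting and names here are made up, not the framework's real scoring.

```java
/** Strawman: fold a priority factor P into the existing score. */
public class PriorityScoreSketch {
    static double scoreWithPriority(double baseScore, int priorityClass) {
        double p = 1.0 / priorityClass; // class 1 -> 1.0, class 9 -> ~0.11
        return baseScore * p;
    }
}
```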
Main idea: a client's execution depends on other clients' completion.
Old algorithm: use the total of the whole group as the required resources and do matchmaking. New algorithm: split the group and matchmake each element of the group to nodes.
Definition: client 1 and its dependencies (clients 1a, 1b, 1c) will be called G_client1 (group client 1).
Scenario where this can happen:
Client 1 needs clients 1a, 1b, and 1c to be complete before execution. Local execution won't be able to pull it off; the resources are too expensive. E.g. an AR application that depends on several aspects: rendering, tracking, etc.
Algorithm: when G_client1 wants to do matchmaking:
Access client 1's dependencies (1a, 1b, 1c, ...) and try to do matchmaking for each of them (not simultaneously, but one after another). G_client1 will only be matched if all of its elements are successfully matched.
Comparing factors/things: compare the old algorithm with this dependency-aware one to see how often the G_clients are matched; a sketch of the all-or-nothing flow is below.
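A sketch of that all-or-nothing group matchmaking, assuming a trivial first-fit per member and a rollback when any member fails; the Node/Client shapes and names are placeholders, not the framework's real classes.

```java
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

/** Sketch of all-or-nothing matchmaking for a dependency group (G_client1). */
public class GroupMatchSketch {

    record Client(String id, double demand) {}

    static class Node {
        final String id;
        double free;

        Node(String id, double free) {
            this.id = id;
            this.free = free;
        }
    }

    /** Returns member -> node if the whole group fits, otherwise an empty map. */
    static Map<Client, Node> matchGroup(List<Node> nodes, List<Client> group) {
        Map<Client, Node> assignment = new LinkedHashMap<>();
        for (Client member : group) {
            Node chosen = nodes.stream()
                    .filter(n -> n.free >= member.demand())
                    .findFirst()
                    .orElse(null);
            if (chosen == null) {
                // One member failed -> undo everything, the whole G_client fails
                assignment.forEach((c, n) -> n.free += c.demand());
                return Map.of();
            }
            chosen.free -= member.demand();
            assignment.put(member, chosen);
        }
        return assignment;
    }
}
```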
Either extend the score-based algorithm (#63) or develop a new algorithm altogether.
Whichever way you choose, you must explain it in an abstract way first.
Write everything you try in this issue, even if it sounds stupid.
Then we discuss and decide which way to go, then you can start implementation.