Closed jmcook1186 closed 1 year ago
Here are descriptions of the new models required by each of the case studies:
e-mem-tdp

msft-eshoppen uses an e-mem model similar to Dow-MSFT, but using a TDP specific to RAM rather than the CPU TDP curve. We can call this model e-mem-tdp. It calculates an e value in kWh for memory usage:

e-mem-tdp = n-hours * n-chips * tdp-mem * tdp-coeff / 1000
e.g.

n-hours = 1
n-chips = 1
tdp-mem = 12.16
tdp-coeff = 0.12

e-mem-tdp = (1 * 1 * 12.16 * 0.12) / 1000 = 0.00146 kWh
e-cpu

They also use their own TDP curve for e-cpu:

e-cpu = n-hours * n-chips * tdp * tdp-coeff
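Both TDP-based formulas above share the same shape, so they can be sketched with one helper. This is a minimal sketch, not the actual implementation: the function name `tdp_energy` is mine, and I've assumed the same divide-by-1000 watts-to-kilowatts conversion shown in the e-mem-tdp worked example also applies to e-cpu.

```python
def tdp_energy(n_hours: float, n_chips: int, tdp: float, tdp_coeff: float) -> float:
    """Generic TDP-based energy model, returning energy in kWh.

    Covers both e-mem-tdp (tdp = tdp-mem) and e-cpu (tdp = CPU TDP).
    The / 1000 converts watts to kilowatts, as in the worked example.
    """
    return n_hours * n_chips * tdp * tdp_coeff / 1000

# e-mem-tdp with the example values: 1 * 1 * 12.16 * 0.12 / 1000
print(tdp_energy(1, 1, 12.16, 0.12))  # ~0.00146 kWh
```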
e-net

The energy used by network traffic, in kWh:

e-net = (data-in + data-out) * net-energy
e.g.

data-in = 14.3 (GB)
data-out = 1.16 (GB)
net-energy = 0.001 (kWh/GB)

e-net = (14.3 + 1.16) * 0.001 = 0.01546 kWh
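A one-line sketch of e-net with the example values (the function name is mine, not from the impl):

```python
def e_net(data_in_gb: float, data_out_gb: float, net_energy_kwh_per_gb: float) -> float:
    """Network traffic energy in kWh: total GB transferred times kWh/GB."""
    return (data_in_gb + data_out_gb) * net_energy_kwh_per_gb

print(e_net(14.3, 1.16, 0.001))  # ~0.01546 kWh
```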
sci-accenture

sci = sci_total * 1.05

where sci_total is the total from all the other components (i.e. the app-gateway is assumed to be 5% of the total from the other components, so multiply the sum of the other components by 1.05 to get the total). The result is in gCO2.
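A sketch of the 5% uplift (the component values below are hypothetical, just to show the arithmetic):

```python
def sci_accenture(component_scis: list[float]) -> float:
    """Total SCI in gCO2: sum the other components, then add 5% for the app-gateway."""
    return sum(component_scis) * 1.05

# hypothetical per-component SCI values in gCO2
print(sci_accenture([12.0, 8.0]))  # ~21.0 gCO2
```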
sci-aveva

Their e values come from a bespoke model, not Teads. It is as follows:

e = (PL - PB) * t

where:

PL = average of measured loaded system power consumption (W)
PB = average of measured baseline system power consumption (W)
t = annual usage (hours)
e.g. for PL = 16.009 W, PB = 11.335 W and t = 8322 hr:

e = (16.009 - 11.335) * 8322 / 1000 ≈ 39 kWh
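A sketch of the bespoke Aveva model, assuming the / 1000 watts-to-kilowatts conversion is implied (the original example quotes the result directly in kWh):

```python
def e_aveva(pl_watts: float, pb_watts: float, t_hours: float) -> float:
    """Energy in kWh: (loaded - baseline) power times annual usage hours, / 1000."""
    return (pl_watts - pb_watts) * t_hours / 1000

print(e_aveva(16.009, 11.335, 8322))  # ~38.9 kWh, i.e. roughly 39
```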
switch

NB the case study does not provide enough data to actually run this model - they just provide the final E value that this model would generate. Therefore, for the demo we can just echo back the e value provided in the impl for each switch. This is a two-stage calculation to find e in kWh for a switch. The first stage finds the max power per port (p-max-per-port) and the second stage finds e-per-port (in kWh).

p-max-per-port = p-baseline / 1000 * n-hours / n-ports
e.g. for

p-baseline = 92.5
n-hours = 1
n-ports = 24

p-max-per-port = 92.5 / 1000 * 1 / 24 = 3.85e-3 kWh/port
then

e-per-port = p-max-per-port * median(5-min-input-rate + 5-min-output-rate) / duplex / link-speed
e.g. for

p-max-per-port = 3.85e-3
5-min-input-rate = 100
5-min-output-rate = 100
duplex = 2
link-speed = 1000000000

e-per-port = 3.85e-3 * median(100 + 100) / 2 / 1000000000 = 3.85e-10 kWh
finally, e-sum is the sum of e-per-port over all available ports:

e = sum(e-per-port)
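The two-stage switch calculation can be sketched as follows. I've assumed the median is taken over a list of summed 5-minute (input + output) rate samples; with a single sample it just returns that value, which matches the worked example above.

```python
from statistics import median

def p_max_per_port(p_baseline: float, n_hours: float, n_ports: int) -> float:
    """Stage 1: max power per port in kWh/port."""
    return p_baseline / 1000 * n_hours / n_ports

def e_per_port(p_max: float, rate_samples: list[float], duplex: int,
               link_speed: float) -> float:
    """Stage 2: energy per port in kWh from 5-minute (input + output) rate samples."""
    return p_max * median(rate_samples) / duplex / link_speed

p_max = p_max_per_port(92.5, 1, 24)               # ~3.85e-3 kWh/port
e_port = e_per_port(p_max, [100 + 100], 2, 1e9)   # ~3.85e-10 kWh
e_switch = sum([e_port] * 24)                     # e = sum of e-per-port over all ports
```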
server

NB the case study does not provide enough data to actually run this model - they just provide the final E value that this model would generate. Therefore, for the demo we can just echo back the e value provided in the impl for each server.

e = (average of instantaneous power readings / 1000) * n-hours * pue
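A sketch of the server model, assuming the readings are instantaneous power in watts (the reading values below are hypothetical):

```python
def e_server(power_readings_watts: list[float], n_hours: float, pue: float) -> float:
    """Energy in kWh: mean instantaneous power (W) / 1000, scaled by hours and PUE."""
    avg_watts = sum(power_readings_watts) / len(power_readings_watts)
    return avg_watts / 1000 * n_hours * pue

print(e_server([120.0, 140.0, 130.0], n_hours=1, pue=1.5))  # ~0.195 kWh
```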
@jmcook1186 Can you provide a detailed YAML which has to be processed by these models?
Hi @gnanakeethan, yes the impls for each case study are in the PR here https://github.com/Green-Software-Foundation/ief/pull/80
@jmcook1186 can this be closed now, please close if so.
As discussed on IEF core meeting this is complete as per this PR: https://github.com/Green-Software-Foundation/ief/pull/80
Hi @narekhovhannisyan @gnanakeethan
On Thursday we are aiming to demonstrate how rimpl can be used to calculate SCI values for several impls derived from case studies here: https://github.com/Green-Software-Foundation/sci-guide/tree/dev/use-case-submissions
Here are the impls themselves: https://github.com/Green-Software-Foundation/ief/pull/80
There is quite a lot to do in the next few days to meet this deadline. This issue can be a sensible place to keep track of progress. Here is what I think is a good set of milestones for Monday - Thursday:
This is going to be challenging because we don't have access to the model itself - it is a black box to us and is only available to Paz, so iteration will be slow. Therefore, we need to get onto this as soon as possible. Here's the current state:
We need to get the rimpl PR merged, then make sure that when rimpl receives an impl with a model of `type==plugin` and `path !== ''`, it calls the shell function and incorporates the returned data into the ompl as expected. The Intel model is end-to-end; no other models are required in the pipeline - it just needs to be called once and the return data dumped into the ompl. I can provide the example impl Paz has been using to test his model, but we won't be able to run the model ourselves. Let's aim to get the rimpl PR merged and a sync call with Paz done on Monday morning, then focus on getting the Intel model running reliably by Tuesday. I expect a substantial amount of time to be spent waiting for Paz to experiment and return results, so we can run the following tasks in parallel...

This is the best first target for the builtin impls because it is a standard
sci-e -> sci-m -> sci-o -> sci
pipeline without anything too unusual happening along the way, so it should be straightforward to implement, and the models are already available. It's also a good initial case because there are only two components to account for. If we can get this to work, we have the basic foundational model pipeline, which is a good win. Should be doable by end of Monday or early Tuesday.

This is a good second target because it builds upon the foundations required for e-shoppen. The Dow-MSFT study requires one additional model. It is a very simple model that calculates E due to memory usage by simply multiplying the allocated RAM in GB by 0.38 and then dividing by 1000 to give E in kWh. This is then added to the Teads E-cpu to give the total E that is passed to the sci-o model. This model will also be used in other case studies.
The values for M are calculated using the sci-m model - the relevant values are all in the impl. The complication with this is that there are 4 components to account for, but if we implemented e-shoppen correctly this should just work....
Aim to complete on Tuesday.
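The simple Dow-MSFT memory model described above can be sketched in one line (the function name is mine; the 0.38 coefficient and the / 1000 conversion are taken from the description):

```python
def e_mem(allocated_ram_gb: float) -> float:
    """Dow-MSFT memory energy model: allocated RAM (GB) * 0.38, / 1000 -> E in kWh."""
    return allocated_ram_gb * 0.38 / 1000

print(e_mem(8))  # ~0.00304 kWh for 8 GB; add to the Teads E-cpu for the total E
```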
Next, the Accenture model. This is relatively straightforward, using the same models as Dow-MSFT but with one small twist - there is an "app-gateway" component whose impact is calculated as 5% of the sum of all the other components, so it is applied after all the other SCI calculations are done. In the data they provided they give E directly, so we have the option to skip the Teads calculation and go directly to sci-o with their provided E, M and I values. However, it would be better to compute the whole pipeline - I'm waiting for some additional values to enable this.
This should be doable on Tuesday/early Wednesday if e-shoppen goes smoothly as there is only a small amount of additional logic to implement.
The remaining case studies all have some unusual elements that require specific models to be built. The Aveva study will require a custom model that takes in min/max/mean watts and returns E instead of using Teads.
Aim for Weds afternoon/Thurs morning.
The Farm Insights study simply provides high level SCI values and all the pipeline needs to do is convert to the functional unit. This should be very quick to implement - Weds afternoon?
The NTT on-premise model is awkward because it requires multiple models to be built to account for energy use by several types of switches and servers, in addition to the embodied carbon models. I'd leave this until last. Let's go for this if we still have time available on Thursday morning.
We should skip the microsoft-green-ai, gsf-website and azure-yassine impls as they are all currently missing crucial data.