Plug4Green / Plug4Green

FIT4Green repository

OptimizerGlobalTest#testGlobalTooMuchVMs #12

Closed fhermeni closed 9 years ago

fhermeni commented 9 years ago

Expected 4 moves, got 12. Choco proved the solution's optimality. Does this mean the power model does not effectively take the impact of migrations into account?

cdupont commented 9 years ago

I have:

- move VM id120000 from id100000 to id200000
- move VM id120001 from id100000 to id300000
- move VM id120003 from id100000 to id300000
- move VM id120002 from id100000 to id300000
- move VM id220003 from id100000 to id200000
- move VM id320000 from id300000 to id200000
- move VM id220000 from id100000 to id200000
- move VM id220001 from id100000 to id300000
- move VM id220002 from id100000 to id200000
- move VM id320002 from id300000 to id200000
- move VM id320001 from id300000 to id200000
- move VM id320003 from id300000 to id200000

fhermeni commented 9 years ago

There are two problems:

- There is no overbook constraint in the resulting BtrPlace model, so an overbooking factor of 1 is assumed.
- There is no notion of cores either, only the views "CPU" and "RAM". For CPU, does it denote the consumption in terms of MIPS or something similar?

Nodes provide 400 CPU units each and the VMs consume 50 each. Nothing is wrong on my side yet.

cdupont commented 9 years ago

> There is no overbook constraint in the resulting BtrPlace model, so an overbooking factor of 1 is assumed.

Right.

> For CPU, does it denote the consumption in terms of MIPS or something similar?

The CPU view corresponds to the percentage of cores available: 2 CPUs with 4 cores each = 800%.

> Nodes provide 400 CPU units each and the VMs consume 50 each. Nothing is wrong on my side yet.

That's right. I fixed the test so that each VM's consumption is 100%. Now the test doesn't seem to find a solution (the BtrPlace model seems to be correct).
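The CPU-view convention discussed above (node capacity expressed as a percentage of cores) boils down to simple arithmetic. A minimal sketch, with illustrative names that are not part of the Plug4Green or BtrPlace API:

```java
// Sketch of the CPU-view convention: each core contributes 100%,
// so a node's capacity is cpus * coresPerCpu * 100.
public class CpuView {
    static int nodeCapacityPercent(int cpus, int coresPerCpu) {
        return cpus * coresPerCpu * 100;
    }

    public static void main(String[] args) {
        // 2 CPUs with 4 cores each = 800%, as stated in the thread.
        System.out.println(nodeCapacityPercent(2, 4)); // prints 800
    }
}
```

Under this convention, the test's nodes with 400 CPU units each correspond to 4 cores per node.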

fhermeni commented 9 years ago

After 2669ms of search (terminated): 34649 opened search node(s), 69299 backtrack(s), 0 solution(s).

BtrPlace proved there is no solution. So the CSP model (on the BtrPlace or P4G side) is incorrect if you expected one.

cdupont commented 9 years ago

What does it mean that there is no solution? That there is not enough space on the servers to host the VMs? Indeed, when the problem starts, one server is overloaded (too many VMs), but this can be solved:

Mapping:
node#2: vm#11 vm#10 vm#9 vm#8
node#1: -
node#0: vm#7 vm#6 vm#5 vm#4 vm#3 vm#2 vm#1 vm#0

ShareableResource.ShareableResourceCPU: rc:ShareableResourceCPU:
<node node#0,400>, <node node#1,400>, <node node#2,400>
<VM vm#0,100>, <VM vm#1,100>, <VM vm#2,100>, <VM vm#3,100>, <VM vm#4,100>, <VM vm#5,100>, <VM vm#6,100>, <VM vm#7,100>, <VM vm#8,100>, <VM vm#9,100>, <VM vm#10,100>, <VM vm#11,100>

ShareableResource.ShareableResourceRAM: rc:ShareableResourceRAM:
<node node#0,51200>, <node node#1,51200>, <node node#2,51200>
<VM vm#0,128>, <VM vm#1,128>, <VM vm#2,128>, <VM vm#3,128>, <VM vm#4,128>, <VM vm#5,128>, <VM vm#6,128>, <VM vm#7,128>, <VM vm#8,128>, <VM vm#9,128>, <VM vm#10,128>, <VM vm#11,128>
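The capacity arithmetic in this dump can be checked by hand. A minimal sketch (illustrative code, not the BtrPlace API) that computes each node's load from the mapping above, assuming 400 CPU units per node and 100 per VM:

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Per-node load check for the dumped model: three nodes with 400 CPU
// units each, twelve VMs consuming 100 each.
public class MappingCheck {
    static int loadPercent(int vmCount, int vmCpu, int nodeCpu) {
        return 100 * vmCount * vmCpu / nodeCpu;
    }

    public static void main(String[] args) {
        Map<String, Integer> vmsPerNode = new LinkedHashMap<>();
        vmsPerNode.put("node#0", 8); // vm#0 .. vm#7
        vmsPerNode.put("node#1", 0);
        vmsPerNode.put("node#2", 4); // vm#8 .. vm#11

        // Total demand (12 * 100 = 1200) exactly matches total capacity
        // (3 * 400 = 1200), yet node#0 starts with 800 of its 400 CPU
        // units consumed: the initial model itself is not viable.
        for (Map.Entry<String, Integer> e : vmsPerNode.entrySet()) {
            System.out.println(e.getKey() + ": "
                    + loadPercent(e.getValue(), 100, 400) + "% load");
        }
    }
}
```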

fhermeni commented 9 years ago

It means no variable assignment can lead to the satisfaction of all the constraints. You found a solution according to the constraints you have in mind, which means the model you have in mind differs from the one that is implemented. Remove parts of the model to isolate the fault: the power model, for example.

fhermeni commented 9 years ago

Removing the power stuff does not change the issue. Going down to 3 VMs per server leads to a solution. Possibly a bug inside BtrPlace.

fhermeni commented 9 years ago

Gotcha

The problem is that the initial model is not viable: it states that node#0 is already overloaded. In reality, no server can run at a 200% load. So the initial model should be revised to be viable, while constraints state the resource demand of the VMs.

cdupont commented 9 years ago

What is the constraint that should be put? Anyway, I will probably close the case and put @Ignore on the test, because the test is not relevant any more.

fhermeni commented 9 years ago

Preserve is the constraint to use to ask for resources.
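The idea behind Preserve, as described in this thread, is that a VM's demand becomes a constraint the scheduler must satisfy, rather than being encoded as the VM's current consumption (which is what made the initial model non-viable above). A minimal sketch of those semantics in plain Java; this is illustrative only, not the actual BtrPlace API:

```java
import java.util.Map;

// Sketch of Preserve-like semantics: a constraint requiring that a VM be
// allocated at least `amount` units of `resource` in the target allocation.
public class PreserveSketch {
    record Preserve(String vm, String resource, int amount) {
        boolean isSatisfied(Map<String, Integer> allocation) {
            return allocation.getOrDefault(vm, 0) >= amount;
        }
    }

    public static void main(String[] args) {
        Preserve p = new Preserve("vm#0", "CPU", 100);
        // Viable initial model: vm#0 currently uses only 50 CPU units,
        // so no node is overloaded at the start.
        System.out.println(p.isSatisfied(Map.of("vm#0", 50)));  // prints false
        // The scheduler's target allocation must grant the demanded 100 units.
        System.out.println(p.isSatisfied(Map.of("vm#0", 100))); // prints true
    }
}
```

This way the initial mapping stays viable and the solver, not the input model, is responsible for reaching an allocation that meets the demand.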