abhilekhsingh / gc3pie

Automatically exported from code.google.com/p/gc3pie

Please review how Quantities are used #432

Closed GoogleCodeExporter closed 9 years ago

GoogleCodeExporter commented 9 years ago
The openstack.py backend maps flavor.ram to gc3libs.quantity.MiB (line 95)
and compares it with job.requested_memory (which could be expressed in any
unit, MB for example),

while the shellcmd backend (which gets into the game when the openstack
backend is used, to access the created subresources) maps total_memory,
available_memory and max_memory_per_core to gc3libs.quantity.MB.

So, for example, a flavor might be considered as fitting an Application's
requirements, while job submission to the corresponding resource might fail
(not enough memory) simply because of this different way of mapping resource
quantities.

Please make sure all backends use the same mapping schema (either MB or MiB)

Example:
Flavor m1.xlarge
flv.ram is 15360

(Pdb) job.requested_memory
Memory(16, unit=GB)

(Pdb) flv.ram*MiB < job.requested_memory
True

# So in this case the flavor with 15GB of memory is selected as fitting, but then

mismatch of value `max_memory_per_core` on resource 
eed7ee87-8178-4194-8ad8-5af653470529@uzh.geo.space: configuration file says 
`15360MiB` while it's actually `15773MB`. Updating current value.

and then 
Ignoring error while submit to resource ...: Resource ... does not have enough 
available memory: 16000MB requested, but only 15773MB available..
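
For reference, here is the arithmetic behind the figures quoted above (a
plain-Python sketch; the byte values of MB and MiB are the standard SI/IEC
definitions):

```python
MiB = 2**20   # 1,048,576 bytes (IEC mebibyte)
MB = 10**6    # 1,000,000 bytes (SI megabyte)

flavor_ram = 15360 * MiB   # flv.ram, mapped to MiB by the openstack backend
requested = 16000 * MB     # job.requested_memory of 16 GB, i.e. 16000 MB
available = 15773 * MB     # what the running VM actually reports

print(flavor_ram // MB)    # ~16106 MB: the flavor nominally covers 16000 MB
print(available // MB)     # 15773 MB: less than requested, so submission fails
```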

I changed all references in the openstack backend from MiB to MB,
as it seems to be the only module using MiB instead of MB.
This seems to work (the m1.xlarge flavor is discarded),

but we probably need a proper guideline/agreement on which unit we should use
to map resources that need to be compared.

S.

Original issue reported on code.google.com by sergio.m...@gmail.com on 11 Apr 2014 at 12:27

GoogleCodeExporter commented 9 years ago
> Please make sure all backends use the same mapping schema (either MB or MiB)

No, no, no! That's exactly why `gc3libs.Quantity` exists in the first
place: so one should not care whether a memory quantity is expressed
in MiB, GB, kB, etc.

Any memory quantity can be compared with any other memory quantity and
arithmetic of memory quantities works as expected.  If it doesn't,
it's a bug in `gc3libs.quantity.Memory`.
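
As a minimal sketch of that point (assuming the MiB and GB unit objects
exported by gc3libs.quantity, as used elsewhere in this thread):

```python
from gc3libs.quantity import GB, MiB

requested = 16 * GB        # like job.requested_memory
flavor_ram = 15360 * MiB   # like flv.ram * MiB in the openstack backend

# The comparison is meaningful even though the units differ; backend code
# does not need to convert everything to one common unit first.
if flavor_ram >= requested:
    print("flavor satisfies the memory request")
else:
    print("flavor is too small for the memory request")
```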

> Example:
> Flavor m1.xlarge
> flv.ram is 15360
>
> (Pdb) job.requested_memory
> Memory(16, unit=GB)
>
> (Pdb) flv.ram*MiB < job.requested_memory
> True
>
> # So in this case the flavor with 15GB of memory is selected as fitting, but then
>
> mismatch of value `max_memory_per_core` on resource 
eed7ee87-8178-4194-8ad8-5af653470529@uzh.geo.space: configuration file says 
`15360MiB` while it's actually `15773MB`. Updating current value.
>
> and then
> Ignoring error while submit to resource ...: Resource ... does not have 
enough available memory: 16000MB requested, but only 15773MB available..

Which seems correct to me: **the `max_memory_per_core` field was
updated:** it's now 15773MB, which is less than 16000MB...

Original comment by riccardo.murri@gmail.com on 14 Apr 2014 at 8:36

GoogleCodeExporter commented 9 years ago
> I changed all references in openstack backend from MiB to MB
> as it seems the only module using MiB instead of MB.

We need to check whether this is correct: what units does the
OpenStack API use for reporting?  MBs or MiBs?

Again, we need not aim for uniformity in our backends, as that would
be incorrect.  We need to make sure each backend uses the same units
as the API or command it talks to.
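
In other words, the unit is chosen at the point where a raw number from the
API enters GC3Pie. A hedged sketch of what that looks like (the helper name is
made up; whether the OpenStack API reports MB or MiB is exactly the open
question above):

```python
from gc3libs.quantity import MB, MiB

def flavor_memory(flavor, api_reports_mib=True):
    """Turn the raw integer reported by the cloud API into a Memory quantity.

    Which unit to attach depends on what the API actually reports; that is
    what needs checking, not which unit the other backends happen to use.
    """
    unit = MiB if api_reports_mib else MB
    return flavor.ram * unit
```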

Original comment by riccardo.murri@gmail.com on 14 Apr 2014 at 8:38

GoogleCodeExporter commented 9 years ago
> Which seems correct to me: **the `max_memory_per_core` field was
> updated:** it's now 15773MB, which is less than 16000MB...

This is correct for the scheduler (it indeed discards the resource as not
fitting the memory requirement), but clearly there is a discrepancy introduced
by the different units used (MiB vs MB): the openstack backend treats
quantities in MiB while shellcmd treats them in MB. How could you expect these
to be consistent when comparing them?

Those 15773 should probably be MiB, or the 15360 should be MB.

I understand your comment on the purpose of Quantity, but this seems to be a
mismatch introduced by the GC3Pie backends.

S.

Original comment by sergio.m...@gmail.com on 14 Apr 2014 at 8:49

GoogleCodeExporter commented 9 years ago
> the openstack backend treats quantities in MiB while shellcmd treats them
> in MB. How could you expect these to be consistent when comparing them?

Again: this is exactly why we have the Quantity object: so we can
compare MBs with MiBs.

(Internally, `gc3libs.quantity.Memory` converts everything to bytes,
so the comparison and arithmetic are always correct; units like "MB"
or "MiB" are only used when printing the quantity value.)

Original comment by riccardo.murri@gmail.com on 14 Apr 2014 at 8:57

GoogleCodeExporter commented 9 years ago
This was fixed by introducing the `vm_os_overhead` configuration value.

Original comment by riccardo.murri@gmail.com on 30 Jun 2014 at 2:41
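
For reference, a hypothetical configuration snippet showing where such a value
would go; the resource section name, the overhead value and the exact syntax
are assumptions to be checked against the GC3Pie documentation:

```ini
# gc3pie.conf (hypothetical excerpt)
[resource/mycloud]
type = openstack+shellcmd
# ... other resource settings ...
# Reserve some of the flavor's nominal RAM for the guest OS, so that
# scheduling decisions match what the running VM actually exposes.
vm_os_overhead = 512 MiB
```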