Closed · naceee closed this 1 week ago
It looks like the problem is that the C code doesn't handle dominated points correctly. Running it only with the non-dominated points for the above example, it computes the same hypervolume as we do (0.37037902191204).
```python
nondom_points = [
    [0.6394267984578837, 0.025010755222666936, 0.27502931836911926, 0.22321073814882275],
    [0.4219218196852704, 0.029797219438070344, 0.21863797480360336, 0.5053552881033624],
    [0.026535969683863625, 0.1988376506866485, 0.6498844377795232, 0.5449414806032167],
    [0.2204406220406967, 0.5892656838759087, 0.8094304566778266, 0.006498759678061017],
    [0.8058192518328079, 0.6981393949882269, 0.3402505165179919, 0.15547949981178155],
    [0.9572130722067812, 0.33659454511262676, 0.09274584338014791, 0.09671637683346401]
]
```
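For reference, something like the following pairwise check is enough to extract such a non-dominated subset (a minimal sketch, assuming minimization; the helper names `is_dominated` and `nondominated` are just illustrative, not part of either implementation):

```python
def is_dominated(p, q):
    """True if q dominates p (minimization): q is no worse in every
    objective and strictly better in at least one."""
    return all(qj <= pj for qj, pj in zip(q, p)) and any(qj < pj for qj, pj in zip(q, p))

def nondominated(points):
    """Keep only the points that are not dominated by any other point."""
    return [p for p in points if not any(is_dominated(p, q) for q in points if q is not p)]
```

Under that minimization assumption, calling `nondominated` on the full point set should reproduce the `nondom_points` list above.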
I also tried running the other algorithm for computing hypervolume in 4D (they propose two algorithms; this one is selected with the -R flag), which confirms the results we obtained and computes the same hypervolume on both sets.
| | all points | non-dominated points |
|---|---|---|
| C implementation | 0.371995494511654 | 0.37037902191204 |
| C implementation with flag -R | 0.37037902191204 | 0.37037902191204 |
| our implementation | 0.37037902191204 | 0.37037902191204 |
I will check the paper to see if I can find more information about this (or maybe @MilaNedic remembers something about it?) - maybe it's a feature and not a bug :). Otherwise it would probably be good to open an issue on their GitHub, just to point it out, even though the repository hasn't been active for a long time now.
The original C code and our Python implementation don't always produce the same result (but sometimes they do):
For the following points we get a hypervolume of 0.37037902191204, while the original implementation gives 0.371995494511654.
In the attached example with 100 points (points_4d_100.txt), the calculated hypervolumes differ even more (ours: 0.6664533136930482, original: 0.790316084994994).
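As a rough sanity check on which value is closer to the truth, a Monte Carlo estimate of the dominated volume can be used (a minimal sketch, assuming minimization with reference point (1, 1, 1, 1) and points in [0, 1]^4, which is what the values above suggest; `hypervolume_mc` is just an illustrative helper, not part of either implementation):

```python
import random

def hypervolume_mc(points, ref=(1.0, 1.0, 1.0, 1.0), n_samples=1_000_000, seed=0):
    """Monte Carlo estimate of the hypervolume dominated by `points` w.r.t. `ref`
    (minimization): the fraction of uniform samples in the box [0, ref] that are
    weakly dominated by at least one point, times the volume of that box."""
    rng = random.Random(seed)
    dim = len(ref)
    hits = 0
    for _ in range(n_samples):
        s = [rng.uniform(0.0, r) for r in ref]
        if any(all(p[j] <= s[j] for j in range(dim)) for p in points):
            hits += 1
    box_volume = 1.0
    for r in ref:
        box_volume *= r
    return box_volume * hits / n_samples
```

This is slow and only approximate, but with enough samples it should make it clear which of the two reported values the true hypervolume is closer to.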