shakethatweight-simon opened 3 years ago
I've been looking further into this; we're trying to find ways to optimise the calculation of boxes containing large numbers of items. Currently we send each product with its quantity to the packer. Would your calculations be better optimised if we grouped the items by size instead of by product name?
Example
Product | Quantity | Size
---|---|---
A | 10 | 1x10x2
B | 7 | 1x10x2
C | 9 | 1x10x2
D | 6 | 2x7x5
E | 12 | 2x7x5
F | 3 | 2x7x5
Would this be better submitted to your Packer as just 2 items with the combined quantities?
Product | Quantity | Size
---|---|---
A/B/C | 26 | 1x10x2
D/E/F | 21 | 2x7x5
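Something like this is the kind of grouping I mean (a rough sketch only; the `$cartLines` shape and the dimension key are illustrative, not our real code):

```php
// Rough sketch of the grouping: key cart lines by their dimensions and sum the
// quantities, so the packer sees 2 distinct sizes instead of 6 products.
$grouped = [];
foreach ($cartLines as $line) {
    $key = "{$line['width']}x{$line['length']}x{$line['depth']}";
    if (!isset($grouped[$key])) {
        $grouped[$key] = ['width' => $line['width'], 'length' => $line['length'], 'depth' => $line['depth'], 'qty' => 0];
    }
    $grouped[$key]['qty'] += $line['qty'];
}
// e.g. $grouped now has one entry with qty 26 (1x10x2) and one with qty 21 (2x7x5)
```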
I tried looking through OrientatedItemFactory::getBestOrientation and LayerPacker::packLayer. I can see that when we add items with quantities greater than 1, they are listed individually in the ItemList, so the example order above would be an array of 47 items. When you look for the next available item to fit in the remaining space, I think you just pop the next item off the array, which could have the same dimensions as the one you previously dismissed as too big?
Example: we already have AAA packed across, but there's no space for a 4th A. Are you still checking all of the further 6 A products, and then individually checking all the B's, etc.? (I know in reality you sort by size, which I haven't considered here.)
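In other words, the kind of short-circuit I'm imagining looks something like this (purely illustrative; the `$fits` callable is a made-up stand-in for the real fitting check, not your API):

```php
// Illustrative only: remember dimensions that already failed to fit in the
// current gap, and skip any later item with identical dimensions.
function nextItemThatFits(iterable $remainingItems, callable $fits): ?object
{
    $rejected = [];
    foreach ($remainingItems as $item) {
        $key = $item->getWidth() . 'x' . $item->getLength() . 'x' . $item->getDepth();
        if (isset($rejected[$key])) {
            continue; // an identical size was already dismissed as too big
        }
        if ($fits($item)) {
            return $item;
        }
        $rejected[$key] = true;
    }
    return null; // nothing left fits
}
```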
We sell products in different flavours, and it's not unusual for people to order 10/20/30 at a time, so an order could contain 100 items of a specific size spread across different flavours.
Hi
Off the top of my head, the following performance tips might apply:
1) `$packer->setMaxBoxesToBalanceWeight(0)`, which I believe you're already doing
2) Xdebug really slows things down, if you're benchmarking in dev then be aware that dev->prod performance will be drastically different. If you're running something like New Relic in prod, then it might be worth benchmarking with and without (it's a sad fact of life that performance monitoring incurs overhead that reduces performance)
3) Opcache - it's off by default for CLI scripts, try enabling it if you haven't already
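A quick way to check what a cron/CLI run actually sees (plain `ini_get`, nothing BoxPacker-specific):

```php
// Both values must be truthy for opcache to apply to CLI scripts;
// opcache.enable_cli defaults to off.
var_dump(ini_get('opcache.enable'), ini_get('opcache.enable_cli'));
// One-off enable for a single run: php -d opcache.enable_cli=1 script.php
```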
4) Reduce the number of function calls made by your interface implementations, e.g.

```php
public function getInnerWidth(): int
{
    return $this->getWidth(); // common
}
```

```php
public function getInnerWidth(): int
{
    return $this->width; // but this is faster
}
```
Also, reuse the same object when dealing with quantities > 1:
```php
// Avoid
for ($i = 0; $i < $qty; $i++) {
    $item = new Item($width, $length, $depth, ...);
    $itemList->insert($item);
}
```

```php
// Better
$item = new Item($width, $length, $depth, ...);
for ($i = 0; $i < $qty; $i++) {
    $itemList->insert($item);
}
```
If your cart is not an item → qty mapping but a simple array where a quantity > 1 means multiple entries, it's possible that ORM code might accidentally produce something like the "avoid" version in effect.
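If so, collapsing it back down before inserting is straightforward, e.g. (a sketch only; the row shape and dimension key are illustrative, and the constructor args are elided as above):

```php
// Sketch: make identical rows share one Item instance instead of constructing
// a fresh object per unit in the cart.
$shared = [];
foreach ($cartRows as $row) {
    $key = "{$row['width']}x{$row['length']}x{$row['depth']}";
    if (!isset($shared[$key])) {
        $shared[$key] = new Item($row['width'], $row['length'], $row['depth'], ...);
    }
    $itemList->insert($shared[$key]);
}
```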
How many different box sizes do you have? I'd start there (culling use of the smaller ones for large orders) before trying to group items. For reference, on my home machine (with a puny laptop-class CPU) https://github.com/dvdoug/BoxPacker/blob/3.x/tests/InfalliblePackerTest.php#L54 runs in ~2 seconds and https://github.com/dvdoug/BoxPacker/blob/3.x/tests/VolumePackerTest.php#L374 in ~3.5 seconds.
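To illustrate the culling idea (not a library feature, just pre-filtering before `addBox()`; the variables and threshold are illustrative):

```php
// Sketch: for big orders, don't offer the packer boxes that are tiny relative
// to the total volume being packed; they mostly add search overhead.
$totalItemVolume = 0;
foreach ($itemsWithQty as [$item, $qty]) {
    $totalItemVolume += $item->getWidth() * $item->getLength() * $item->getDepth() * $qty;
}
foreach ($allBoxes as $box) {
    $innerVolume = $box->getInnerWidth() * $box->getInnerLength() * $box->getInnerDepth();
    if ($innerVolume * 50 >= $totalItemVolume) { // 50 is an arbitrary cut-off, tune it
        $packer->addBox($box);
    }
}
```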
Hi @shakethatweight-simon
Just going through old issues to figure out what's actionable and what isn't - this one's quite old, I hope you managed to resolve it?
We have some orders where the user has selected a large number of items, and it happens often enough that we need a solution. The script runs via cron, so it probably doesn't have any timeouts and can sit there for a long time trying to process the list of boxes required. Is there any way of sacrificing accuracy for speed so that we get a quicker result for these large orders?
These are typically orders with hundreds of small items, but we do occasionally receive orders with thousands of products.
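For example, one approximation we could live with would be packing very large orders in fixed-size chunks (a rough sketch using the public `Packer` API; `$itemsWithQty`, `$availableBoxes` and the chunk size are illustrative):

```php
use DVDoug\BoxPacker\Packer;

// Rough sketch: pack a huge order in chunks rather than all at once. Each chunk
// is packed independently, which is faster but can waste space at the chunk
// boundaries - the accuracy-for-speed trade-off. 200 is an arbitrary size.
$allPackedBoxes = [];
foreach (array_chunk($itemsWithQty, 200) as $chunk) {
    $packer = new Packer();
    foreach ($availableBoxes as $box) {
        $packer->addBox($box);
    }
    foreach ($chunk as [$item, $qty]) {
        $packer->addItem($item, $qty);
    }
    foreach ($packer->pack() as $packedBox) {
        $allPackedBoxes[] = $packedBox;
    }
}
```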