Closed · julesgilson closed this issue 2 years ago
Hi @julesgilson. I have a similar issue, and I guess the high memory usage is due to the resulting models collection.
If you were keeping references to the resulting models, then naturally that would consume memory. But in my example I am only calling the import class; there is no assignment, so no reference to the models it produces is kept. The task is purely to import the data into the DB.
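For illustration only, the call is along these lines (the ProductsImport class name and the file path are placeholders, not the code from this report):

```php
use Maatwebsite\Excel\Facades\Excel;

// The return value of import() is discarded, so the calling code
// keeps no reference to the models created during the import.
Excel::import(new ProductsImport(), 'imports/products.xlsx');
```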
From the docs
The returned model will be saved for you
So I would assume that the package unsets the models it created after each batch is saved, otherwise batching would be somewhat pointless. That is my question: does it clear up these references on each batch?
This bug report has been automatically closed because it has not had recent activity. If this is still an active bug, please comment to reopen. Thank you for your contributions.
Is the bug applicable and reproducible in the latest version of the package, and hasn't it been reported before?
What version of Laravel Excel are you using?
3.1.33
What version of Laravel are you using?
8.76.2
What version of PHP are you using?
7.4.13
Describe your issue
I have a reasonably sized XLSX file that I am importing with the ToModel concern. I have been testing the impact on memory by importing truncated versions of the file to see at what number of rows server memory becomes an issue. At around 3000 rows I reached a level of memory usage that still leaves some headroom and that I am comfortable with.
So I implemented the WithChunkReading concern, but I do not see any change in the memory consumed. The image below shows memory usage in the different tests: with a 3000-row version of the file, the amount of memory used is the same whether or not I use the chunking concern, and also the same whether the chunk size is set to 500 or 1000.
I removed all other logic from the method to make sure none of that was causing the issue, so the method is reduced to a bare minimum.
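As an illustrative sketch only (the ProductsImport class, Product model, and column mapping are assumptions, not the reporter's actual code), a stripped-down import using ToModel and WithChunkReading looks like this:

```php
<?php

namespace App\Imports;

use App\Models\Product;
use Maatwebsite\Excel\Concerns\ToModel;
use Maatwebsite\Excel\Concerns\WithChunkReading;

class ProductsImport implements ToModel, WithChunkReading
{
    // Return one new model per spreadsheet row; the package saves it.
    public function model(array $row)
    {
        return new Product([
            'name'  => $row[0],
            'price' => $row[1],
        ]);
    }

    // Read the spreadsheet in chunks of 1000 rows.
    public function chunkSize(): int
    {
        return 1000;
    }
}
```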
Is there something I am missing? Does it need to be a long-running process for the GC to collect, or does the package free the memory explicitly?
I did read a couple of other issues with similar problems, but they were not resolved so I didn't find an answer.
Thanks
How can the issue be reproduced?
Using the method shown in the description, run the import on an XLSX file truncated to 3000 rows and 46 columns (file size: 770 KB).
What should be the expected behaviour?
My understanding was that the package would not read the entire file into memory if chunking is used, so I would expect lower memory consumption.