Closed: pheyvaer closed this issue 1 year ago
This is a known issue ... it happens with larger datasets. We have been exploring ways to overcome this, but still need to implement them. https://gitlab.ilabt.imec.be/svrstich/ldes-in-solid-semantic-observations-replay/-/issues/1
Ok, this is not a deal breaker for me then. Do you have a link to a dataset that should not have this issue in most cases?
That works! Is it ok to mention this dataset in the README and use it for the whole flow?
Of course, this is only a slimmed-down selection of the entire participant 1 dataset.
Hello, I am wondering if this issue was resolved because I still face the issue when loading up heavier datasets with the engine. Can you help me with this? @svrstich
Hi Kush, I'll be in iGent tomorrow. Have you tried the latest version? The replay of the remaining observations should now use a batch-style approach, so the OoM error should no longer occur.
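For context, the batch-style approach mentioned above can be sketched roughly as follows. This is only an illustrative sketch: `Observation`, `BATCH_SIZE`, `replayBatch`, and `replayAll` are hypothetical names, not the engine's actual API. The core idea is that peak memory stays bounded by the batch size instead of growing with the dataset.

```typescript
// Hypothetical sketch: process observations in fixed-size chunks
// instead of loading the whole dataset into memory at once.

type Observation = { id: string; value: number };

const BATCH_SIZE = 1000; // assumed tunable chunk size

// Generic helper: lazily split any iterable into arrays of `size` items.
function* batches<T>(items: Iterable<T>, size: number): Generator<T[]> {
  let batch: T[] = [];
  for (const item of items) {
    batch.push(item);
    if (batch.length === size) {
      yield batch;
      batch = [];
    }
  }
  if (batch.length > 0) yield batch; // flush the final partial batch
}

// Hypothetical per-batch replay call; the real engine would submit
// each chunk (e.g. to the LDES in the Solid pod) before reading more.
async function replayBatch(batch: Observation[]): Promise<void> {
  console.log(`replayed ${batch.length} observations`);
}

// Replay observations chunk by chunk so memory use stays bounded.
async function replayAll(observations: Iterable<Observation>): Promise<void> {
  for (const batch of batches(observations, BATCH_SIZE)) {
    await replayBatch(batch);
  }
}
```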
Sure, I tried the latest version, but I will try it again. Otherwise, I'll see you tomorrow :)
When submitting the remaining observations via the web app, I get the following error in the engine:
Do you expect the engine to need so much memory?