persistence.js is an asynchronous JavaScript database mapper library. You can use it in the browser as well as on the server (and you can share data models between them).
When loading and displaying large datasets, I have noticed that something within the session, which I assume is trackedObjects, grows almost continuously. Even after objects are no longer needed and have left scope, they remain in trackedObjects.
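To make this concrete, here is a minimal sketch of the kind of read-only load I mean (the `Record` entity and `renderRow` helper are made up for illustration, not from my real code):

```javascript
// Illustrative only: a simple entity and a display-only query pass.
var Record = persistence.define('Record', {
  payload: "TEXT"
});

persistence.schemaSync(function() {
  Record.all().list(null, function(records) {
    records.forEach(renderRow); // hypothetical display helper

    // My code no longer references these objects after rendering,
    // but every one of them is still registered in the session
    // (persistence.trackedObjects, from reading the source), so the
    // count below only ever goes up as more data is paged in.
    console.log(Object.keys(persistence.trackedObjects).length);
  });
});
```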
For large datasets this causes serious performance issues: manipulating roughly 100MB of data uses almost 500MB of memory, which as far as I can tell can only be reclaimed by calling persistence.clean(). Calling persistence.clean() does work, but it kills all tracking, which could break other areas of the codebase that expect to be able to persist data normally.
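For reference, the workaround I'm using today looks roughly like this; it reclaims the memory, but at the cost just described:

```javascript
// Flush any pending changes first, then drop the whole session.
// After clean(), entity instances still held elsewhere in the app
// are no longer tracked and won't be persisted on later flushes.
persistence.flush(function() {
  persistence.clean(); // empties trackedObjects
});
```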
From what I understand, I am using persistencejs correctly, and the growth of trackedObjects is a natural consequence of the mechanism persistencejs uses to keep everything tracked and synchronized. If that is the case, should there perhaps be some sort of noTrack() or readOnly() filter so that large datasets can be loaded and displayed without permanently residing in memory?
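Purely as a sketch of what I have in mind (this API does not exist today):

```javascript
// Hypothetical noTrack() filter: hydrate plain, untracked objects so
// the GC can reclaim them once the view releases its references.
Record.all().noTrack().list(null, function(records) {
  records.forEach(renderRow); // read-only display; never flushed
});
```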
Has anyone else encountered similar issues with 100MB+ datasets? It's possible I'm using the library wrong, but it's definitely something in the persistencejs session that's eating up the memory.