mrfrase3 opened 4 years ago
:grimacing: so you are using the debugger on a remote server? I am not sure I understand:
To support large payloads/long recordings, this will require something more serious than nedb,
which is not maintained anyway. I can think about it, but it's like using the tool as a dashboard.
At the moment the extension is storing ctx.data
(but it's still not displayed; I need to update the UI). A quick fix would be to let the user disable storing ctx.data
and to cap the nedb collection at N records.
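Something like this minimal sketch of that quick fix, with an in-memory buffer standing in for the nedb collection (the option names `storeData` and `maxRecords` are made up for illustration, not the extension's actual API):

```javascript
// Hypothetical recorder: optionally skips ctx.data and keeps at most
// `maxRecords` entries, dropping the oldest when the cap is exceeded.
function createRecorder({ storeData = true, maxRecords = 500 } = {}) {
  const records = [];
  return {
    record(ctx) {
      const entry = { path: ctx.path, method: ctx.method, time: Date.now() };
      if (storeData) entry.data = ctx.data; // omitted entirely when disabled
      records.push(entry);
      if (records.length > maxRecords) records.shift(); // enforce the cap
      return entry;
    },
    all: () => records.slice(),
  };
}
```

The same cap could be enforced against the real nedb collection by deleting the oldest documents after each insert.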
Sorry, I should clarify a little further: we have a remote development server that contains realistic dummy data that the front-end developers can use. (Them being able to easily debug their queries would make my life easier.)
It seems that right before the server stopped responding, there was a spike in outbound network traffic. My theory is that it was death by a thousand slightly-larger-than-usual papercuts; unfortunately, I cannot recover the db file to find out.
There are probably other optimisations to be made, like the ability to exclude certain fields (we have a field called summary on some services that contains a lot of semi-redundant data). Alternatively, detecting if a field on a data/result is too large and replacing it with a 'too large for the debugger' string. Also, if the result from a find is too large (> 10 records), the debugger could store only a sample.
I know from experience that mongo doesn't take kindly to large documents, taking up to minutes to return some if they are that big. I can imagine there will be similar limitations with other databases, so optimising the data being stored would need to be done anyway.
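All three ideas above could live in one sanitizer pass over whatever gets persisted. A rough sketch, where the excluded field list and size limits are placeholder values, not a proposed API:

```javascript
const EXCLUDED = ['summary'];       // fields to drop outright (example only)
const MAX_FIELD_CHARS = 10 * 1024;  // per-field size limit, assumed value
const MAX_SAMPLE = 10;              // cap on stored find() results

function sanitize(value) {
  if (Array.isArray(value)) {
    // Store only a sample of large result sets
    return value.slice(0, MAX_SAMPLE).map(sanitize);
  }
  if (value && typeof value === 'object') {
    const out = {};
    for (const [key, val] of Object.entries(value)) {
      if (EXCLUDED.includes(key)) continue; // excluded fields are dropped
      const json = JSON.stringify(val);
      out[key] = json && json.length > MAX_FIELD_CHARS
        ? 'too large for the debugger' // oversized fields get a marker
        : sanitize(val);
    }
    return out;
  }
  return value;
}
```

Running `ctx.data` and `ctx.result` through something like this before they hit nedb would bound both document size and result-set size.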
If you want help implementing anything, let me know, I don't wanna cowboy in a bunch of stuff without considering it. 🤠
I like the idea; we can start the PR and work on those changes. I do not have a lot of experience with large payloads on a node server. It's actually impressive how it handles 1 GB of text (I think using streams is a better option for this scenario, but I'm not sure if it can be implemented in feathers directly).
If you are interested I can add you as contributor, PRs would be gladly accepted!
Another thing that comes to mind: @daffl -- I think this has the opportunity to become a feathers dashboard, or a dashboard-as-a-service which could monetize the framework.
Cool, I'll throw something together when I have a free minute.
I've actually been contemplating an admin dashboard/UI for feathers for a while, something like Keystone, Strapi, Drupal, etc. I think it's the missing link to making feathers a serious competitor. I actually built somewhat of a prototype at the company I'm at. Although this is probably a conversation for elsewhere, maybe slack.
Woke up this morning to find that our dev server was in a crash loop; logs indicate that nedb hit a data-size limit.
Running parseInt on 0x3fffffe7 reveals 1,073,741,799, which, assuming we are dealing with mostly ASCII characters, is about a gigabyte.
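The arithmetic checks out (that this constant is V8's maximum string length is my assumption; the hex value itself comes from the crash logs):

```javascript
// 0x3fffffe7 from the nedb error, converted to decimal
const limit = parseInt('0x3fffffe7', 16);
console.log(limit); // 1073741799

// At one byte per character, that's within a rounding error of 1 GiB
console.log((limit / 2 ** 30).toFixed(3)); // "1.000"
```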
The update to our dev server was pushed before I finished yesterday, AWS monitoring suggests that it was fine until someone did a large request in the morning. Also, I was collecting all the data with a 15-minute expiry. Maybe implementing a max number of records would prevent this sort of thing from happening?
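Combining the existing 15-minute expiry with a hard cap might look something like this pruning pass (both constants are assumed values, and `time` is whatever timestamp field the records carry):

```javascript
const TTL_MS = 15 * 60 * 1000; // the 15-minute expiry already in use
const MAX_RECORDS = 1000;      // hypothetical hard cap on record count

function prune(records, now = Date.now()) {
  // First drop anything past the TTL
  const fresh = records.filter((r) => now - r.time <= TTL_MS);
  // If a burst of traffic still overflows the cap, keep only the newest
  return fresh.length > MAX_RECORDS ? fresh.slice(-MAX_RECORDS) : fresh;
}
```

The point being that a TTL alone doesn't bound storage: one large request inside the window can still blow past the string-length limit, whereas a record cap puts a ceiling on it.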
I've disabled tracing on the server for now as we don't have much use for it until the extension displays event data.