Closed: rmccue closed this issue 4 years ago
I wonder, could we have XHProf handle that itself?
The code notes: "Send a XRay trace document to AWS using the HTTP API. This is slower than using the XRay Daemon, but more convenient."
This code is no longer used; we use the daemon now. The issue at the moment is that the XHProf trace can be too large, but we should technically be able to chunk it.
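As a rough illustration of chunking, X-Ray lets a subsegment be sent as a standalone document when it carries `"type": "subsegment"` plus the parent's `trace_id` and `parent_id`, so a large XHProf profile could be split across several smaller documents instead of one oversized segment. The sketch below is a minimal Python illustration (the project itself is PHP); the size budget, the `calls` shape, and the id generation are all assumptions, not taken from the actual code.

```python
import json

# Assumed per-datagram budget for the local daemon; the real safe limit
# depends on UDP and daemon buffer sizes.
MAX_DOC_BYTES = 60 * 1024

def _make_doc(trace_id, parent_id, seq, batch):
    # An independent subsegment document: "type": "subsegment" plus the
    # parent's trace_id/parent_id lets the daemon stitch it back onto the
    # root segment.
    return {
        "name": "xhprof",
        "id": format(seq, "016x"),  # placeholder 16-hex-digit subsegment id
        "trace_id": trace_id,
        "parent_id": parent_id,
        "type": "subsegment",
        "start_time": batch[0]["start_time"],
        "end_time": batch[-1]["end_time"],
        "subsegments": batch,
    }

def chunk_trace(trace_id, parent_id, calls):
    """Split profiler call data into independent subsegment documents.

    `calls` is a hypothetical list of {"name", "start_time", "end_time"}
    dicts derived from the XHProf output.
    """
    docs, batch = [], []
    for call in calls:
        batch.append(call)
        doc = _make_doc(trace_id, parent_id, len(docs) + 1, batch)
        if len(json.dumps(doc)) >= MAX_DOC_BYTES:
            docs.append(doc)
            batch = []
    if batch:
        docs.append(_make_doc(trace_id, parent_id, len(docs) + 1, batch))
    return docs
```

Each returned document stays under the budget, so each can be pushed to the daemon as its own UDP packet as the request progresses.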
As I think is detailed in #26, we can't split the data further unless we create faux segments just to carry it. For example, we could create a segment called 'superglobals' that contains the superglobals, so we don't need to send that data with the main transaction. We could do the same for errors, but IIRC there are unique benefits to having errors on the root segment.
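The faux-segment idea could look roughly like the following: pop the superglobals off the root segment's metadata and wrap them in a zero-length 'superglobals' subsegment that is sent as its own document. This is a hedged Python sketch; the field layout of the in-progress segment dict and the placeholder id are assumptions.

```python
def split_out_superglobals(segment):
    """Move superglobal metadata off the root segment into a faux subsegment.

    `segment` is a hypothetical in-progress X-Ray segment dict whose
    metadata carries the request superglobals ($_GET, $_POST, ...).
    Returns the slimmed root segment plus a standalone 'superglobals'
    subsegment document that can be sent to the daemon separately.
    """
    superglobals = segment.get("metadata", {}).pop("superglobals", {})
    faux = {
        "name": "superglobals",
        "id": "0000000000000001",           # placeholder subsegment id
        "trace_id": segment["trace_id"],
        "parent_id": segment["id"],
        "type": "subsegment",               # sent as an independent document
        "start_time": segment["start_time"],
        "end_time": segment["start_time"],  # zero-length marker subsegment
        "metadata": {"superglobals": superglobals},
    }
    return segment, faux
```

The trade-off mentioned above applies: errors could be split out the same way, but they would then no longer live on the root segment.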
Let's close this out.
Right now, we hit a few problems with X-Ray, typically related to the size of the data. #14 is an example of this, as is https://github.com/humanmade/hm-platform/pull/81.
Rather than sending data in a big bang at the end of a request, it would be very useful if we could send it more regularly. This would solve the out-of-memory issues, and would presumably make the shutdown callback faster too (not that it matters greatly).
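One way to make the sending more regular is a small bounded buffer that hands documents to a send callback whenever it fills up, rather than holding everything until shutdown. A minimal sketch, assuming a hypothetical `send` callback (e.g. a UDP write to the daemon) and an illustrative batch threshold:

```python
class TraceBuffer:
    """Buffer trace documents and flush them in small batches.

    Memory stays bounded because at most `max_pending` documents are
    held at once; the shutdown handler only needs to flush the tail.
    Both the threshold and the callback are illustrative assumptions.
    """

    def __init__(self, send, max_pending=20):
        self.send = send              # callable invoked with each document
        self.max_pending = max_pending
        self.pending = []

    def add(self, doc):
        self.pending.append(doc)
        if len(self.pending) >= self.max_pending:
            self.flush()

    def flush(self):
        # Called automatically on overflow, and once more at shutdown.
        for doc in self.pending:
            self.send(doc)
        self.pending = []
```

In PHP this `add` call could be driven from a tick handler, which is the "perhaps using ticks" idea below.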
The code notes: "Send a XRay trace document to AWS using the HTTP API. This is slower than using the XRay Daemon, but more convenient."
Can we swap out the current system for the daemon and stream data to it? What is the overhead of sending data via UDP to the daemon? Can we periodically call this (perhaps using ticks)?
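For reference, the daemon's wire format is simple: each UDP packet (default endpoint 127.0.0.1:2000) starts with the header `{"format": "json", "version": 1}`, then a newline, then the JSON segment or subsegment document. UDP is fire-and-forget, so the per-send overhead is roughly one `sendto()` syscall with no connection setup or ACK, which is why periodic sends (e.g. from ticks) should be cheap. A sketch in Python for illustration:

```python
import json
import socket

# Default local X-Ray daemon UDP endpoint.
DAEMON_ADDR = ("127.0.0.1", 2000)

def daemon_datagram(document):
    """Frame a segment/subsegment document for the X-Ray daemon.

    The daemon expects a JSON header line identifying the format,
    followed by a newline and the document itself.
    """
    header = json.dumps({"format": "json", "version": 1})
    return (header + "\n" + json.dumps(document)).encode("utf-8")

def send_to_daemon(document, addr=DAEMON_ADDR):
    # Fire-and-forget: no connect, no response to wait for.
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
        sock.sendto(daemon_datagram(document), addr)
```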