Hmm, this is fairly new. #930 was fixed at that time. I'm guessing our queued streamer refactorings are behind it.
I think it has been slow for longer than that... but maybe it became even slower. Or maybe the (tremendous! ;) ) growth of the database, or other such factors, are involved?
That can be the cause too. :)
It could be a lot of things. We need to profile a bit.
My bet is that it has a lot to do with the sequence of requests we make to get from page loaded to annotations displayed.
1 and 2 can happen simultaneously; we should check that they do. 3 and 4 can also happen simultaneously, since our websocket connection is session-authenticated, but they don't right now.
We could cache 1 and 2. The first we should definitely cache; we should even be able to do that just with caching headers. Caching 2 introduces some complexity: if the app thinks our session is live but the session was lost on the server (a Redis restart, for instance), then we need to recover later. Making 3 and 4 simultaneous should be easy.
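For concreteness, here's a minimal sketch (not our actual client code) of what firing the independent requests in parallel could look like. The /app and /api paths are the routes discussed here; the websocket URL and the response handling are placeholders:

```ts
// Sketch only: start the independent bootstrap requests at once so their
// latencies overlap instead of adding up.
async function bootstrap() {
  // Neither HTTP fetch depends on the other, so kick both off immediately.
  const appConfigPromise = fetch("/app").then(r => r.json());
  const apiRootPromise = fetch("/api").then(r => r.json());

  // If the websocket is session-authenticated, it can be opened in parallel
  // as well, rather than waiting for either HTTP response.
  const socket = new WebSocket("wss://example.invalid/ws"); // placeholder URL

  const [appConfig, apiRoot] = await Promise.all([appConfigPromise, apiRootPromise]);
  return { appConfig, apiRoot, socket };
}
```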
Now that your API stuff has been merged and the store supports analysis on the search endpoint we should also do an initial fetch via /api/search rather than the websocket. I don't know the websocket protocol that well, but I suspect it's more than one round trip for the handshake. So there's a hidden RT in there.
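As a rough sketch of that shape (the query parameter and message format here are assumptions, not the real protocol), the initial load becomes one HTTP round trip and the websocket only carries updates:

```ts
// Sketch: fetch existing annotations via /api/search, then listen on the
// websocket only for newly created or updated annotations.
async function loadAnnotations(documentUri: string, onUpdate: (message: unknown) => void) {
  // One round trip for the initial annotation set.
  const response = await fetch(`/api/search?uri=${encodeURIComponent(documentUri)}`);
  const initial = await response.json();

  // The websocket no longer gates first paint; it just streams changes.
  const socket = new WebSocket("wss://example.invalid/ws"); // placeholder URL
  socket.onmessage = event => onUpdate(JSON.parse(event.data));

  return initial;
}
```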
So, the tl;dr is that it's probably 5 or 6 round trips, at least 4 or 5 of which are sequential, and you're doing this from Europe while our servers are in northern California.
To get a rough idea, here's some backbone latencies: https://www.dotcom-tools.com/internet-backbone-latency.aspx
Let's assume Colorado <-> Amsterdam is a close enough approximation. The latency on the backbone is 120ms by itself. So let's call it 150ms, add some wiggle room for processing and protocol overhead, and call it 200ms. Multiply that by 5 and you've got a full second. The first request also incurs an SSL setup overhead, which could be another 200-400ms. So we're looking at maybe almost a second and a half, assuming things are going well, before the data even gets to your browser, and that's mostly ignoring the application server and the database.
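Back-of-the-envelope, that estimate works out like this (same rough numbers as above; guesses, not measurements):

```ts
// Rough latency budget for the serialized requests described above.
const perRoundTripMs = 200;       // ~120ms backbone latency plus processing/protocol overhead
const sequentialRoundTrips = 5;   // the sequential requests in the startup sequence
const sslSetupMs = 300;           // first-connection SSL setup, somewhere in the 200-400ms range
const totalMs = perRoundTripMs * sequentialRoundTrips + sslSetupMs;
console.log(`${totalMs}ms before any data reaches the browser`); // ~1300ms
```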
Add in the overhead of SSL on the first connection setup, which may be a few hundred ms. If you're seeing annotations within 2s things are going pretty well.
We can try to cut this down, but we should do so by measuring rather than guessing. Crack open those dev tools and check out your network timeline. The above should get you started.
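If you want the same numbers programmatically, the browser's Resource Timing API will give them to you from the console; something like this (the URL filter is just illustrative):

```ts
// Log start time and duration for the requests we care about.
const entries = performance.getEntriesByType("resource") as PerformanceResourceTiming[];
for (const entry of entries) {
  if (/\/(app|api)|ws/.test(entry.name)) { // crude filter on the request URL
    console.log(entry.name, `start ${Math.round(entry.startTime)}ms`, `duration ${Math.round(entry.duration)}ms`);
  }
}
```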
I misspoke, we should definitely cache /api. Caching /app is less easy and obvious.
I also counted SSL in there twice. Oops.
Then again, I see about 1-2s here. It's not much shorter if I perform a search after the page loads (which should only be one RT over the websocket). Which means that, unsurprisingly, there's probably a lot of time spent compiling and linking the DOM and rendering. That part I did make several times faster when I refactored our stream view. I profiled it heavily for reflows and repaints and put in several hours of work to make sure we were batching digests well. There are probably still other huge wins to be made by isolating some directives better.
So, it looks like /app and /api are parallel. That's good. But the websocket doesn't actually need to wait for /api, since the websocket route is hard coded. /api could be hard coded too, but I like that it's not, and we could make it a no-op 99% of the time simply with caching headers.
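To illustrate the caching-header idea (a generic sketch, not our Pyramid code, and the header values are guesses): once /api is served with cache validators, a repeat visit is either a local cache hit or a cheap 304 instead of a full round trip.

```ts
// Minimal Node sketch; only the response headers matter here.
import { createServer } from "node:http";

createServer((req, res) => {
  if (req.url === "/api") {
    res.setHeader("Cache-Control", "public, max-age=300"); // lifetime is a guess
    res.setHeader("ETag", '"api-root-v1"');                // bump when the document changes
    if (req.headers["if-none-match"] === '"api-root-v1"') {
      res.statusCode = 304; // revalidation: no body, almost no work
      return res.end();
    }
    res.setHeader("Content-Type", "application/json");
    return res.end(JSON.stringify({ links: {} })); // placeholder body
  }
  res.statusCode = 404;
  res.end();
}).listen(8080);
```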
Switching to normal websockets might work better, too. I think more recent versions of sockjs don't have the extra /info request that delays the real websocket connection but pyramid_sockjs seems pretty unmaintained. I have a branch nearly ready to go that switches to ws4py, which would eliminate that RT by using the underlying websockets directly.
I was just making a similar screenshot. Mediocre minds think alike. :) Everything is a bit slower here (but it's also over a 3G connection).
So caching of API tokens and using a /api/search request rather than a websocket might decrease the network time by half a second here, and maybe a full second or more where you are.
Here's a shot of the timeline if I do a search and then benchmark what happens when I clear it. The highlighted section of time is from after the click event that clears the search until things are fully painted.
The "Timer fired" there is Angular performing a digest after we've loaded the thread. The pie chart on the right shows what it's doing: mostly, it's parsing the HTML for the thread templates and instantiating their controllers. Very little time is spent rendering, because linking the annotation data is deferred by the threads to animation frames, giving us the responsive feel of having the annotations cascade in. We used to have higher throughput, but the interface would totally freeze while this was happening. So, while it takes 1.5-2s to render all the annotations, it's only about 0.5s before they start rendering, whereas it used to be more like a full second or more.
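Reduced to a sketch (not the actual directive code), the batching idea is: render the thread list in small chunks, one chunk per animation frame, so the page stays responsive while the annotations cascade in.

```ts
// Render items a few at a time, yielding to the browser between chunks.
function renderInFrames<T>(items: T[], renderOne: (item: T) => void, chunkSize = 5) {
  let index = 0;
  const renderChunk = () => {
    const end = Math.min(index + chunkSize, items.length);
    for (; index < end; index++) {
      renderOne(items[index]); // e.g. link one annotation thread into the DOM
    }
    if (index < items.length) {
      requestAnimationFrame(renderChunk); // keep the UI responsive between chunks
    }
  };
  requestAnimationFrame(renderChunk);
}
```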
Also, that's 1.5s-2s to render 50 annotations when only about 5 are on screen. We could load fewer with our first request and things would seem much snappier.
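A rough sketch of what that could look like, assuming the search endpoint accepts limit/offset parameters and returns its hits in a rows array (both of which are assumptions here):

```ts
// Fetch a small first page so something is on screen quickly, then pull the
// rest in the background while the user is already reading.
async function loadInTwoSteps(documentUri: string, render: (batch: unknown[]) => void) {
  const base = `/api/search?uri=${encodeURIComponent(documentUri)}`;

  const firstPage = await (await fetch(`${base}&limit=10`)).json();
  render(firstPage.rows);

  const remainder = await (await fetch(`${base}&limit=200&offset=10`)).json();
  render(remainder.rows);
}
```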
I agree, let's load only a few the first time.
For clarification: after the sequence of network activity shown in my screenshot, it takes about four more seconds before the annotations show up. The websocket activity is not shown in the diagram, but I suppose it is active during those ~four seconds (Wireshark suggests so, but it's hard to make sense of encrypted packets...).
> using a /api/search request rather than a websocket
Would this mean doing the same thing we do in the sidebar (if I'm correct): using api/search to retrieve existing annotations, and opening a stream to listen for newly created additions?
> Would this mean doing the same thing we do in the sidebar (if I'm correct): using api/search to retrieve existing annotations, and opening a stream to listen for newly created additions?
Exactly.
Updated this ticket to reflect that it contains some performance enhancement ideas, but doesn't really constitute an issue, per se.
Could maybe close as an issue, but tag (h_repo performance) for future reference.
I would prefer not to close this unless we open targeted issues for some of the recommendations I made above.
OK. Will revisit once the dust has settled from the anchoring rewrite.
Hi there! I'm going to close this as part of a clean-up of all issues currently open on this repo that represent ideas or features rather than reports of bugs or technical chores.
I want to be clear that this isn't intended to say anything at all about the content of this issue—it certainly doesn't mean we're no longer planning to do the work discussed here—just that we don't want to use GitHub issues to track feature requests or ideas, because the threads can get long and somewhat unwieldy.
If you're interested in what we are working on at the moment, you can check out our Trello board and, for a longer-term view, our roadmap.
And, if you're interested in following up on this issue, please do continue the discussion on our developer community mailing list. You might also want to check out our contributing guide.
Came up in #930. It can take several seconds before annotations show up. What would be the bottleneck here?