Closed: Andrzej-W closed this issue 4 years ago
Blazor sends the JavaScript library a stream of changes to apply to the UI, plus some other values such as disposed objects and events. In server mode the stream is encoded by the server, sent over the network and decoded by the Blazor JavaScript layer in the same way as if it were in shared memory. That stream transports only the necessary data, but:
24289 bytes: update for 4 components
A) component id 1 (DIV.SIDEBAR) -> no changes (but still uses bytes in the stream; it is here because StateHasChanged is fired by the event)
B) component id 2 (DIV.SIDEBAR) -> no changes
C) component id 3 (DIV.SIDEBAR) -> 6 steps ->
1. STEP IN to DIV.MAIN
2. STEP IN to div.content.px-4
3. REMOVE the first child (delete last content)
4. PREPEND a component (12, is the FetchData.cshtml component)
5. STEP OUT
6. STEP OUT
D) component id 12 (just added) -> many steps ->
1. PREPEND markup (weather forecast)
2. PREPEND text '\n'
3. PREPEND markup (this component demonstrates etc...)
4. PREPEND text '\n'
5. PREPEND spaces ' '
6. INSERT table (with 760 frames)
7. PREPEND text '\n'
8. INSERT MARKUP (up to ...</td></tr></thead>)
9. PREPEND space and cr
10. INSERT TBODY
11. PREPEND space and cr (more times)
12. INSERT TR
13. PREPEND space and cr (more times)
14. INSERT TD
15. PREPEND VALUE (e.g. 28/07/2018)
16. PREPEND space and cr (more times)
17. INSERT TD
18. PREPEND VALUE (e.g. 1)
19. PREPEND space and cr (more times)
20. PREPEND space and cr (more times)
21. INSERT TR
etc... for 50 times
22. INSERT space and cr (more times)
23. INSERT p
24. INSERT a (previous, next, spaces, etc...)
....
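To make the edit stream concrete, here is a sketch of an interpreter for ops like the ones above, run against a plain object tree instead of the real DOM. The op names mirror the steps in the trace but are illustrative, not Blazor's actual wire format.

```javascript
// Hypothetical interpreter for a render-batch edit stream, applied to a
// plain object tree for illustration (the real target is the browser DOM).
function applyEdits(root, edits) {
  const stack = [root];                       // current insertion context
  const top = () => stack[stack.length - 1];
  for (const { op, index, markup } of edits) {
    switch (op) {
      case 'stepIn':           stack.push(top().children[index]); break;
      case 'stepOut':          stack.pop(); break;
      case 'removeFirstChild': top().children.shift(); break;
      case 'prepend':          top().children.unshift({ markup, children: [] }); break;
    }
  }
  return root;
}

// Mirrors steps C1-C6 above: step into DIV.MAIN, into div.content.px-4,
// remove the old content, prepend the new FetchData component, step out twice.
const tree = { markup: 'DIV.SIDEBAR', children: [
  { markup: 'DIV.MAIN', children: [
    { markup: 'div.content.px-4', children: [ { markup: 'old content', children: [] } ] }
  ] }
] };
applyEdits(tree, [
  { op: 'stepIn', index: 0 },
  { op: 'stepIn', index: 0 },
  { op: 'removeFirstChild' },
  { op: 'prepend', markup: 'FetchData' },
  { op: 'stepOut' },
  { op: 'stepOut' },
]);
```

After applying the edits, the old child of div.content.px-4 has been replaced by the new FetchData node.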
So, 26 KB could be reduced by:
1. Having frames of non-fixed length. But for now this involves too many changes, and the size reduction may be too small.
2. Removing unnecessary statements (like C5 and C6).
3. Killing interstitial spaces and CRs. With those removed, the stream would be 13 KB.
4. Caching strings by sending the same string only once (so that the same string with the same reference is sent only once). This is possible in RenderBatchWriter precisely because strings are collected by reference: the stream then becomes 10 KB.
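Suggestion 4 could look roughly like this. It is a simplified, value-based sketch with illustrative names; the real RenderBatchWriter deduplicates by reference.

```javascript
// Simplified string table: each distinct string is written to the batch once;
// later occurrences carry only an index into the table. (Illustrative only;
// RenderBatchWriter keys on object reference rather than string value.)
function encodeStrings(frames) {
  const table = new Map();   // string -> index in the output table
  const strings = [];        // strings in first-seen order
  const encoded = frames.map(s => {
    if (!table.has(s)) {
      table.set(s, strings.length);
      strings.push(s);
    }
    return table.get(s);     // the frame now carries just an index
  });
  return { strings, encoded };
}

// 'td' appears three times but is transmitted once.
const { strings, encoded } = encodeStrings(['td', '28/07/2018', 'td', 'tr', 'td']);
```

The decoder on the JavaScript side simply reads the table first and then resolves each index back to its string.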
A different story for client mode, where the logic remains virtually identical but nothing is transmitted over the network because everything is in shared memory. Even there some optimization is possible, since the frames are aligned to 64 bits while mono runs at 32 bits, but I think that is intentional.
> INSERT table (with 760 frames)
As this is an insert of an entirely new element, could it just be:
```javascript
var t = document.createElement('table');
// add any attributes here
t.innerHTML = markupblob; // markup blob sent by the server
parent.appendChild(t);
```
Is there any reason to create all the inner nodes individually?
> Is there any reason to create all the inner nodes individually?
It's the smarter way to quickly detect changes and stream only those to the client. React works the same way; performance is not a problem since it's completely JavaScript (on my PC it took 21 milliseconds to update the table and inner nodes).
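The point about streaming only what changed can be shown with a toy diff. This compares old and new flat frame lists and emits only changed positions; the real Blazor/React diff is tree-based and keyed, so this is only an illustration of the principle.

```javascript
// Toy frame diff: emit only the positions where the new list differs from
// the old one, so only deltas need to be streamed to the client.
function diffFrames(oldFrames, newFrames) {
  const edits = [];
  const len = Math.max(oldFrames.length, newFrames.length);
  for (let i = 0; i < len; i++) {
    if (oldFrames[i] !== newFrames[i]) {
      edits.push({ index: i, value: newFrames[i] });
    }
  }
  return edits;
}

// 500 identical rows with one changed cell: only one edit goes to the client.
const before = Array.from({ length: 500 }, (_, i) => 'row ' + i);
const after = before.slice();
after[42] = 'row 42 (updated)';
```

For the initial render everything differs from an empty tree, which is why the first Fetch data update is so large.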
Thank you @uazo for the explanation. I had forgotten that we have to send some instructions to remove old HTML elements. I haven't looked into the source code (I don't know where to look), but removing unnecessary white space is probably easy and it will make a big difference. Someone may say that we can enable compression on the server to remove this overhead, but there are known attacks on compressed encrypted streams, so it is not recommended.
> Cache string by sending same strings only 1 time
This is very interesting, but what is a "string" in this case? Let's assume I have a table with 50 rows and some rows look like this:

```html
<tr class="bg-danger">...</tr>
```

Can we cache individual tags like `<tr class="bg-danger">`? In forms we can have a lot of `<div class="form-group">` elements.
What about this:
```html
<input type="text" class="form-control" @bind="FirstName">
<input type="text" class="form-control" @bind="LastName">
```
A lot of duplication but different binding expressions, and probably ids, placeholders, etc. Maybe individual attributes are strings in this case?
I have just updated the published applications to demonstrate bug aspnet/Blazor#1223. It now returns 500 forecasts. Test it and you will see for yourself that the server version is much faster than the client version. It is visible in Firefox and Edge, but in Chrome the client version is simply unacceptable.
I think, if it is possible, the server should send the data separately and the commands describing what to do with that data separately. For example, forecast data should be transferred as the response to a regular async HTTP call, and the UI-changes part should wait for the data to finish downloading and then inject it once it is actually processed by JavaScript to update the UI. In this case we win some time/size, and it is also a better solution for large, multiple and different changes (I mean, like AJAX: different parts of the document update at different times and in answer to different user actions). I would expect a cleaner architecture and better monitoring and debugging. In this case the only difference between client-side and server-side Blazor is the change-describing commands, because the data has to be received in all cases.
Also, when we say that transferring data from server to client is a problem because of size and time, I think we should remember virtualization, which we use even in native software. How many objects can we have at one time in a browser window? How much can change, even if it changes completely? If we have 1 million rows of data we are not using all of them, yes? Only the currently visible part, using paging, scrolling or another virtualization technique. It cannot be of arbitrary size; it is already limited by the screen dimensions, so it isn't a big problem for server-side Blazor. Sure, we should try to reduce any transfer, but I don't think it is a big issue or will make a big difference compared to client-side.
As I mentioned above, UI changes should be split into small parts and processed in parallel. For example, a user click changed something, a timer changed some live data from the server, etc.
This is all not a strong position whose advantages I can prove; it's just my opinion.
@Andrzej-W
> but removing unnecessary white spaces is probably easy and it will make a big difference.

Yes, it is; better also for the client side.

> A lot of duplication but different binding expressions, and probably ids, placeholders, etc. Maybe individual attributes are strings in this case?

Yes, tag names, attribute names, and also values can all be deduplicated. See https://github.com/aspnet/Blazor/blob/master/src/Microsoft.AspNetCore.Blazor.Server/Circuits/RenderBatchWriter.cs#L210

@Lupusa87

> I mean like ajax: different parts of the document update at different times and in answer to different user actions

Yes, Blazor works just like that, without AJAX and text/html.

> As I mentioned above UI changes should be split into small parts

It's already like that.

> and processed in parallel.

But the DOM is updated via JS, which has only one thread.
Thanks @uazo for the detailed investigation!
One further thing we can do to make a dramatic difference is to enable compression for the websocket frames. We did that in an earlier prototype and it cut the traffic by ~80%. This implicitly takes care of your suggestion 4, and makes the cost of repeating frames (e.g., to insert whitespace or newlines) far less.
Additionally, re-enabling the markup frame type that we disabled in 0.5.1 will make insertion of fixed markup blocks much cheaper (even more so if compression is also enabled).
I'm going to put this in the 0.6.0 milestone to make sure we do a further optimisation pass over this.
This is now being tracked with more info in aspnet/AspNetCore#5580.
The default Blazor application is relatively simple, but for those interested I have just published two versions:
http://blazorserwer.azurewebsites.net/ (the blazorserver name is already taken)
http://blazorclient.azurewebsites.net/
Both applications are on the same S1 plan in the Western Europe data centre. Blazor version 0.5.1.
What is worth testing?
1. `Home` and `Counter` menu. In the Blazor client version there is no internet transmission at all. In the server version the click is transmitted to the server and the server responds with the updated page. About 1.5 - 2 KB.
2. `Click me` button on the `Counter` page. Yet again no transmission in the client version, and about 600 bytes in the server version on every click.
3. `Fetch data`. In the client version the data is transmitted from the server in JSON format - about 5 KB. In the server version we receive 26 KB. I have increased the number of items to 50. UPDATE 2018-07-28: the current version returns 500 forecasts.

In my opinion the server version looks responsive. It is normal that some transmission is required in experiments 1 and 2. But I think that in experiment 3 the server version transmits too much data. If I understand correctly, the server has to send the whole table in HTML format. This table is very simple: a single table row has about 65-70 characters. Multiply this by 50, add ~200 bytes for the page header and first paragraph, and we are well below 5 KB. Why do you need 26 KB to send it?
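The estimate above as a quick calculation (numbers taken from the paragraph, purely back-of-the-envelope):

```javascript
// Rough size estimate for the plain-HTML table described above.
const rows = 50;
const bytesPerRow = 70;     // ~65-70 characters per table row
const headerBytes = 200;    // page header and first paragraph
const plainHtmlEstimate = rows * bytesPerRow + headerBytes;
// 3700 bytes, well below 5 KB, versus the 26 KB actually observed
```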