matthieu-fesselier opened this issue 4 years ago
3s is long indeed. However, are we sure it is an LDflex problem? Because the above involves (independently of whether you are using LDflex or not) fetching 70 resources over HTTP. I suspect those are the source of the 10–90ms delay.
A solution would be to parallelize the `for` loop.
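A minimal sketch of that idea, using a hypothetical `fetchResource` stand-in for one HTTP lookup (not LDflex's API):

```javascript
// Sketch: parallelizing a sequential awaited loop with Promise.all.
// `fetchResource` is a hypothetical stand-in for one HTTP request.
const fetchResource = (id) =>
  new Promise((resolve) => setTimeout(() => resolve(`client-${id}`), 10));

// Sequential: each request waits for the previous one (~10ms × n total).
async function sequential(ids) {
  const results = [];
  for (const id of ids) {
    results.push(await fetchResource(id));
  }
  return results;
}

// Parallel: all requests start at once (~10ms total, plus overhead).
function parallel(ids) {
  return Promise.all(ids.map(fetchResource));
}

parallel([1, 2, 3]).then((r) => console.log(r.join(','))); // → client-1,client-2,client-3
```

`Promise.all` preserves input order, so the results line up with the ids regardless of which request finishes first.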
When I inspect the network requests, since all the resources are in the first graph, only one request to https://apiprod.happy-dev.fr/clients/ is made.
I see; the `img` is local indeed. Will investigate!
Ah ah, cool!!! We were all hoping you'd say that. The team will celebrate that response :D
Hi @RubenVerborgh,
Any news here?
This one is a priority for us since it is turning our apps into lazy slugs :) Don't hesitate to tell us if you think we can help.
> Don't hesitate to tell us if you think we can help.
Any profiling or insight into where time is being spent would help me get this solved faster. I've booked time for this early next week.
I don't know if it helps but here are some observations:
```js
await solid.data.context.extend(base_context);
const resource = solid.data["https://apiprod.happy-dev.fr/clients/1/"];

console.time();
await resource.name;
console.timeEnd(); // first call, needs to request the server: ~150-200ms

console.time();
await resource.img;
console.timeEnd(); // no need to request the server: ~10ms

console.time();
await resource['http://happy-dev.fr/owl/#img']; // with the full path, same time to resolve: ~10ms
console.timeEnd();

// same observations with this syntax
console.time();
const name = await solid.data["https://apiprod.happy-dev.fr/clients/2/"].name;
console.log(name.toString());
console.timeEnd(); // ~200ms

console.time();
const img = await solid.data["https://apiprod.happy-dev.fr/clients/2/"].img;
console.log(img.toString());
console.timeEnd(); // ~10ms

// and same without extending our context
const res = solid.data["https://apiprod.happy-dev.fr/clients/3/"];

console.time();
console.log((await res['rdfs:label']).toString());
console.timeEnd(); // ~200ms

console.time();
console.log((await res['http://happy-dev.fr/owl/#img']).toString());
console.timeEnd(); // ~10ms
```
This is interesting indeed; it looks like we're chasing some specific requests in particular. Thanks.
When testing the "without extending our context" example, I get consistently good performance in Node.js 12.6.0:
```
https://apiprod.happy-dev.fr/clients/1/: 2.918ms
https://apiprod.happy-dev.fr/clients/2/: 1.596ms
https://apiprod.happy-dev.fr/clients/3/: 1.518ms
https://apiprod.happy-dev.fr/clients/4/: 1.481ms
https://apiprod.happy-dev.fr/clients/5/: 1.806ms
https://apiprod.happy-dev.fr/clients/6/: 1.760ms
https://apiprod.happy-dev.fr/clients/7/: 1.465ms
https://apiprod.happy-dev.fr/clients/8/: 1.302ms
https://apiprod.happy-dev.fr/clients/9/: 1.230ms
https://apiprod.happy-dev.fr/clients/10/: 1.301ms
https://apiprod.happy-dev.fr/clients/11/: 1.234ms
https://apiprod.happy-dev.fr/clients/12/: 1.222ms
https://apiprod.happy-dev.fr/clients/13/: 1.556ms
https://apiprod.happy-dev.fr/clients/14/: 1.379ms
https://apiprod.happy-dev.fr/clients/15/: 3.331ms
https://apiprod.happy-dev.fr/clients/16/: 1.756ms
https://apiprod.happy-dev.fr/clients/17/: 1.573ms
https://apiprod.happy-dev.fr/clients/18/: 1.355ms
https://apiprod.happy-dev.fr/clients/19/: 1.262ms
https://apiprod.happy-dev.fr/clients/20/: 1.478ms
https://apiprod.happy-dev.fr/clients/21/: 1.408ms
https://apiprod.happy-dev.fr/clients/22/: 1.378ms
https://apiprod.happy-dev.fr/clients/23/: 1.483ms
https://apiprod.happy-dev.fr/clients/24/: 1.281ms
https://apiprod.happy-dev.fr/clients/25/: 1.314ms
https://apiprod.happy-dev.fr/clients/26/: 2.273ms
https://apiprod.happy-dev.fr/clients/27/: 1.299ms
https://apiprod.happy-dev.fr/clients/28/: 1.477ms
https://apiprod.happy-dev.fr/clients/29/: 1.282ms
https://apiprod.happy-dev.fr/clients/30/: 1.580ms
https://apiprod.happy-dev.fr/clients/31/: 1.331ms
https://apiprod.happy-dev.fr/clients/32/: 3.948ms
https://apiprod.happy-dev.fr/clients/33/: 1.317ms
https://apiprod.happy-dev.fr/clients/34/: 1.268ms
https://apiprod.happy-dev.fr/clients/35/: 1.332ms
https://apiprod.happy-dev.fr/clients/36/: 1.316ms
https://apiprod.happy-dev.fr/clients/37/: 1.244ms
https://apiprod.happy-dev.fr/clients/38/: 1.394ms
https://apiprod.happy-dev.fr/clients/39/: 1.330ms
https://apiprod.happy-dev.fr/clients/40/: 1.471ms
https://apiprod.happy-dev.fr/clients/41/: 1.430ms
https://apiprod.happy-dev.fr/clients/42/: 1.130ms
https://apiprod.happy-dev.fr/clients/43/: 1.132ms
https://apiprod.happy-dev.fr/clients/44/: 1.092ms
https://apiprod.happy-dev.fr/clients/46/: 1.167ms
https://apiprod.happy-dev.fr/clients/47/: 1.102ms
https://apiprod.happy-dev.fr/clients/48/: 1.127ms
https://apiprod.happy-dev.fr/clients/49/: 1.108ms
https://apiprod.happy-dev.fr/clients/50/: 1.078ms
https://apiprod.happy-dev.fr/clients/51/: 1.099ms
https://apiprod.happy-dev.fr/clients/52/: 1.130ms
https://apiprod.happy-dev.fr/clients/53/: 1.099ms
https://apiprod.happy-dev.fr/clients/54/: 1.086ms
https://apiprod.happy-dev.fr/clients/57/: 1.298ms
https://apiprod.happy-dev.fr/clients/58/: 1.237ms
https://apiprod.happy-dev.fr/clients/59/: 1.181ms
https://apiprod.happy-dev.fr/clients/61/: 1.307ms
https://apiprod.happy-dev.fr/clients/63/: 1.446ms
https://apiprod.happy-dev.fr/clients/64/: 2.788ms
https://apiprod.happy-dev.fr/clients/65/: 1.138ms
https://apiprod.happy-dev.fr/clients/66/: 1.084ms
https://apiprod.happy-dev.fr/clients/67/: 1.031ms
https://apiprod.happy-dev.fr/clients/68/: 1.433ms
https://apiprod.happy-dev.fr/clients/69/: 1.071ms
https://apiprod.happy-dev.fr/clients/70/: 0.975ms
https://apiprod.happy-dev.fr/clients/72/: 0.947ms
https://apiprod.happy-dev.fr/clients/73/: 0.937ms
https://apiprod.happy-dev.fr/clients/75/: 0.936ms
https://apiprod.happy-dev.fr/clients/76/: 1.142ms
https://apiprod.happy-dev.fr/clients/77/: 0.993ms
https://apiprod.happy-dev.fr/clients/78/: 1.033ms
https://apiprod.happy-dev.fr/clients/79/: 1.311ms
https://apiprod.happy-dev.fr/clients/: 445.242ms
```
So two questions:
I am testing in a browser (Chrome 78). Without extending our context, with this code:
```js
const container = solid.data["https://apiprod.happy-dev.fr/clients/"];
console.time(container.toString());
for await (const r of container.ldp_contains) {
  console.time(r.toString());
  await r['rdfs:label'];
  console.timeEnd(r.toString());
}
console.timeEnd(container.toString());
```
I get different results than yours:
Here is the context I used:
```js
const base_context = {
  '@vocab': 'http://happy-dev.fr/owl/#',
  rdf: 'http://www.w3.org/1999/02/22-rdf-syntax-ns#',
  rdfs: 'http://www.w3.org/2000/01/rdf-schema#',
  ldp: 'http://www.w3.org/ns/ldp#',
  foaf: 'http://xmlns.com/foaf/0.1/',
  name: 'rdfs:label',
  acl: 'http://www.w3.org/ns/auth/acl#',
  permissions: 'acl:accessControl',
  mode: 'acl:mode',
  email: 'http://happy-dev.fr/owl/#email',
};
```
Thanks, will test on Chrome! Although it's the same V8 engine, so I wonder what goes wrong.
Slower timings in Chrome and Firefox confirmed.
Added comparative tests to the branch https://github.com/solid/query-ldflex/tree/test/45-performance. Run in Node.js with `node test.js`; in the browser with `npm start demo`.
I've traced the problem back to Comunica: it is the `this._engine.query(sparql, { sources })` call into Comunica in LDflex-Comunica that is slower in both Firefox and Chrome compared to Node.js. Assigning to the Comunica people so they can trace it on their end.
My first suspect was transpiling: perhaps the transpiled version of asynchronous iteration is slower than the native version. However, the same transpiled version is used for browsers and Node.js (I expect this to be the case all the way down Comunica as well). So I suspect the problem is rather in a shim.
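One way to test the transpiling hypothesis is to run the same async-iteration micro-benchmark in both environments (once natively in Node.js, once through the transpiled browser bundle) and compare the printed timings. A hypothetical sketch:

```javascript
// Hypothetical micro-benchmark for async-iteration overhead.
// Run this file in Node.js and in a transpiled browser build,
// then compare the reported durations.
async function* numbers(n) {
  for (let i = 0; i < n; i++) yield i;
}

async function consume(n) {
  let sum = 0;
  const start = Date.now();
  // The `for await` loop is what transpilers rewrite into helper calls.
  for await (const value of numbers(n)) sum += value;
  return { sum, ms: Date.now() - start };
}

consume(100000).then(({ sum, ms }) => console.log(`sum=${sum} in ${ms}ms`));
```

If the transpiled version takes noticeably longer for the same iteration count, the generator/iteration helpers are the suspect; if not, the cause lies elsewhere (e.g., a shim).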
Thanks @RubenVerborgh for your time here.
@rubensworks, @joachimvh: Let us know if we can help you find out why the `query` function is being slow in browsers. Do we polyfill anything in it?
@rubensworks, @joachimvh: Any good news you would like to share with us? :-)
> Let us know if we can help you finding out why the `query` function is being slow in browsers.
I think it is a shim or polyfill indeed. If you could identify the different code being called on the command line versus in the browser, that would definitely help. One possible example could be running a loop of 1000 `await solid.data["https://apiprod.happy-dev.fr/clients/2/"].img` calls with a profiler, and comparing the results.
I just did the above; I see quite some time spent in `setImmediate`. Perhaps we are using a non-optimal shim for that one.
@matthieu-fesselier: If you have any further insight that might help...
Indeed, the function is not supported by either Chrome or Firefox: https://developer.mozilla.org/en-US/docs/Web/API/Window/setImmediate
I'll dig deeper tomorrow afternoon.
Our code relies heavily on `setImmediate`, which is fast in Node.js, but I suspect insufficiently so in browser shims. Tried replacing it with `requestAnimationFrame`, but that didn't go great either.
For simple calls such as `await solid.data["https://apiprod.happy-dev.fr/clients/2/"].img`, we apparently use around 60 `setImmediate` calls (in `asynciterator` alone). That's definitely more than the browser shim can handle. We should either improve the browser shim, or (not much room for improvement) reduce that number.
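To make the shim question concrete, here is a rough illustration of the kind of fallback choice involved. This is a sketch only, not the actual shim LDflex or any bundler uses:

```javascript
// Illustrative scheduling fallback — NOT the actual shim used by LDflex.
// Node.js has a fast native setImmediate; browsers do not, so bundlers
// substitute something else (postMessage, MessageChannel, setTimeout, ...).
// A microtask-based stand-in looks like this:
const defer =
  typeof setImmediate === 'function'
    ? setImmediate // native macrotask scheduling, fast in Node.js
    : (cb) => Promise.resolve().then(cb); // microtask fallback

const order = [];
defer(() => order.push('deferred'));
order.push('sync');
// After the current tick, `order` is ['sync', 'deferred'].
```

Note that the semantics differ: microtasks run before I/O and rendering, so a microtask-based stand-in can starve the event loop, which is one reason no single browser alternative fully matches `setImmediate`.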
In the meantime, @happy-dev, could you check how far `asyncMap` with concurrency (8–128) gets you (instead of `for await`)? There's no need to execute those in sequence anyway; I've seen some gains from parallelism.
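For readers unfamiliar with the pattern, a minimal concurrency-bounded `asyncMap` could look like the sketch below. The actual `asyncMap` referenced in this thread may differ in signature and behavior:

```javascript
// Sketch of a concurrency-bounded async map (the real asyncMap linked in
// this thread may differ). Maps `fn` over the items of a sync or async
// iterable with at most `concurrency` calls in flight at once.
async function asyncMap(concurrency, fn, iterable) {
  const iterator =
    iterable[Symbol.asyncIterator]?.() ?? iterable[Symbol.iterator]();
  const results = [];
  let index = 0;

  // Each worker pulls the next item as soon as it finishes its current one.
  async function worker() {
    for (;;) {
      const { value, done } = await iterator.next();
      if (done) return;
      const i = index++; // remember the input position for ordered results
      results[i] = await fn(value);
    }
  }

  await Promise.all(Array.from({ length: concurrency }, worker));
  return results;
}

asyncMap(4, async (n) => n * 2, [1, 2, 3, 4, 5]).then((r) =>
  console.log(r.join(','))); // → 2,4,6,8,10
```

With concurrency 1 this degenerates to a sequential loop; higher values let the waiting times of independent lookups overlap.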
@matthieu-fesselier could you please check that?
@RubenVerborgh any idea how long a fix could take? Sorry about the pressure, but it's blocking all of our applications. :)
@sylvainlb No budget is currently assigned to LDflex, so it's best effort. We are looking into ways to fund a developer on Comunica and related projects, such that they do not primarily depend on efforts by researchers. Any help or suggestions there are definitely welcome.
I think the parallel road is definitely a good one for your applications. Even if the duration of individual calls is reduced, it still makes sense to perform multiple operations in parallel such that the longer ones do not block the shorter ones.
As for the duration of a fix, in addition to the above budget constraints, the main Comunica developer is currently on holidays. Given that my suspect is the `setImmediate` mechanism, which is at the heart of the stream communication, I'm afraid that this will take some serious effort. The use case that uncovered this performance bottleneck involves a) a larger sequence of simple queries b) in the browser, whereas our performance testing has focused on complex queries, and we benchmark in server environments. This explains why the additional couple of milliseconds per request (which of course easily adds up) was not caught before.
I plan to perform some testing tonight in order to create a more specific issue in the appropriate repository.
Thanks @RubenVerborgh for the feedback. We totally understand the situation. Can we help with a resource? We could ask someone to look into it.
Could you elaborate on what you mean by "parallel road"? I'm not sure to follow you.
Thank you for your efforts. Let us know how we can help. We're definitely willing to put some effort on this issue, so if you know where our efforts would be best used, we'll do as much as we can.
Just fyi, apparently setImmediate is deprecated and will not make it into the standards. It's probably a good idea to get rid of it if we can.
MDN suggests replacements: https://developer.mozilla.org/en-US/docs/Web/API/Window/setImmediate#Notes
> Can we help with a resource? We could ask someone to look into it.
What we really want to find out is: what's happening differently in the browser? The aforementioned `setImmediate` is one difference that popped up, but further experiments have made me think that something else is contributing more significantly. Given the very small timescale (it's literally a couple of milliseconds difference per query), it's hard to find out what that is. My main frustration is that I currently see close to 50% idle time in a performance test. Any help with that is appreciated.
I am writing up an issue for Comunica; will link it into this thread.
> Just fyi, apparently setImmediate is deprecated and will not make it into the standards.
`setImmediate` will indeed not make it into browsers; but it is part of Node.js, for which we write Comunica and where it is well supported. The replacements suggested by MDN are actually those that are currently used in the browser version of LDflex, where webpack picks the right alternative (typically `postMessage`).
I have experimented with replacing `setImmediate` with alternative implementations to measure its effect. However, when I use the same alternatives in both Node.js and the browser, Node.js is still faster, which is why I suspect that `setImmediate` is not the primary cause (although we definitely want to reduce the number of calls to it in general).
> Could you elaborate on what you mean by "parallel road"? I'm not sure to follow you.
The current code is `for await`ing through every `r` of `container.ldp_contains` sequentially. So they all have to wait for each other. Whereas if you'd do them in parallel (for instance with the `asyncMap` linked above), then some of the waiting time overlaps.
In the above case specifically, you could also just do `for await (const r of container.ldp_contains.img)` (but I expect that this code is part of a larger block, where this might not be possible).
I think it's already clear to you, but we use LDflex in the browser without webpack. I don't know if that changes anything in what you just said.
> The current code is for … awaiting through every r of container.ldp_contains sequentially. So they all have to wait for each other. Whereas if you'd do them in parallel (for instance with asyncMap linked above), then some of the waiting time overlaps.
@matthieu-fesselier what do you think?
> I think it's already clear for you, but we use ldflex in the browser without WebPack.
You're likely using some kind of packaging, or it wouldn't work at all 🙂 What is your packaging method?
Regardless, the problem occurs with webpack as well.
Another possibility is that we build a mechanism for preloading certain properties. Something like `container.ldp_contains.children({ img: true })` or so, which would load them all in batch. However, I wanted to keep the individual `.img` calls as light as possible (and that works in Node.js) to avoid the need for such a call.
I let @matthieu-fesselier answer this one, he'll be way more specific.
@matthieu-fesselier: My understanding is that we should try replacing that one https://git.happy-dev.fr/startinblox/framework/sib-core/blob/master/src/libs/store/store.ts#L47 with `asyncMap` and see how much this helps.
I am referring to this comment: https://github.com/solid/query-ldflex/issues/45#issuecomment-549597585
Based on the new performance data we get with that change, we might be able to dig deeper more easily.
@happy-dev this is not the main performance issue, because in this loop we just do a `toString()` on each resource to get its `@id`, which resolves instantaneously.
The whole loop, for a list of 60 clients, takes ~450ms, which is not so bad.
@RubenVerborgh from what I've seen, using `asyncMap` to get one property from each resource improves performance a bit. From ~1300ms to display each client name, we go down to ~800ms. I don't see a big difference when changing the `concurrency` between 8 and 128.
The challenge is now to find out how to include this in the framework.
About our packaging method: we only use Babel to transpile the TypeScript and polyfill some functions. Then we use it directly as modules in browsers.
> Another possibility is that we build a mechanism for preloading certain properties. Something like `container.ldp_contains.children({ img: true })` or so, which would load them all in batch. However, I wanted to keep the individual `.img` calls as light as possible (and that works in Node.js) to avoid the need of such a call.
You mean to implement this in LDFlex?
@matthieu-fesselier the point was not that the loop was taking too much time, but rather that we could parallelize the calls to LDFlex to avoid summing up the times. Let me know if you need a call about it.
@matthieu-fesselier regarding the packaging, wouldn't it be possible to configure babel to replace the setImmediate method?
> @matthieu-fesselier regarding the packaging, wouldn't it be possible to configure babel to replace the setImmediate method?
What do you mean by "replace" here? It is already the case, I believe, as the function wouldn't work in Chrome or Firefox otherwise. My understanding is that it is already being polyfilled.
@RubenVerborgh we've been doing some testing, but the results are not really positive. Would it be possible to have a call tomorrow, for example, to see how we could solve the problem? Thanks
> @RubenVerborgh for what I've seen, using `asyncMap` to get one property from each resource improves a bit the performances. From ~1300ms to display each client name, we go down to ~800ms. I don't see a big difference by changing the `concurrency` between 8 and 128.
OK, that's a good start. Parallel is the way to go in any case.
> About our packaging method, we only use babel to transpile the typescript and polyfill some functions. Then we use it directly as modules in browsers.
Transpilation might affect `for await` etc. performance.
It depends on the compatibility you need, but here are my settings, which do not lead to polyfilled generators: https://github.com/solid/query-ldflex/blob/d883519265cf3436c3327e44dcc2e3eac73f0795/.browserslistrc
> > Another possibility is that we build a mechanism for preloading certain properties.
>
> You mean to implement this in LDFlex?
Yes. It would probably be similar to this mechanism in ActiveRecord: https://blog.bigbinary.com/2013/07/01/preload-vs-eager-load-vs-joins-vs-includes.html (ActiveRecord and LDflex have very similar constraints.)
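As a toy illustration of the eager-loading idea, where one batch lookup replaces a query per property access. The names and API here are invented, not LDflex's:

```javascript
// Toy eager-loading sketch — invented API, not LDflex's.
// Instead of issuing one query per property access, preload() batches
// the lookups once, and later reads hit the cache.
const store = {
  'clients/1': { img: 'a.png', name: 'Alice' },
  'clients/2': { img: 'b.png', name: 'Bob' },
};

// Pretend this is a single batch query returning `property` for every id.
async function preload(ids, property) {
  const cache = new Map();
  for (const id of ids) cache.set(id, store[id][property]);
  return cache;
}

(async () => {
  const imgs = await preload(['clients/1', 'clients/2'], 'img');
  // Subsequent reads hit the cache instead of issuing a query each.
  console.log(imgs.get('clients/1')); // → a.png
})();
```

This mirrors the ActiveRecord `includes`/eager-load trade-off linked above: fewer round trips at the cost of a heavier upfront call.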
> wouldn't it be possible to configure babel to replace the setImmediate method?
Webpack (and others) do that for you; the problem is not replacing `setImmediate`, but rather that no single alternative is as good as `setImmediate`. So our best course of action is to reduce the number of calls to `setImmediate` (see https://github.com/comunica/comunica/issues/561).
> @RubenVerborgh we've been doing some testings but they're not really positive.
In what sense?
> Would it be possible to have a call tomorrow for example to see how we could solve the problem?
The situation is that the primary cause is likely not `setImmediate` (although we should reduce our usage of it for other reasons). So I'm happy to talk, but I don't think that will help us get past 2.i.
@RubenVerborgh: Thanks for your answer.
When would "later this month" be exactly?
Would you or someone else be able to guide someone of our team beyond 2.i tomorrow?
What options do we have to get this issue fixed within days?
> When would "later this month" be exactly?
>
> Would you or someone else be able to guide someone of our team beyond 2.i tomorrow?
The problem is that both LDflex and Comunica are non-funded projects at the moment. I.e., we do them in our spare time, or as part of other projects. It's not simply a matter of finding a budget for this one issue, but rather that other paid projects are scheduled for the next weeks. If we find a long-term solution to budget LDflex/Comunica, then we can hire a dedicated developer who can spend all their time on it, and whenever issues like this one pop up, they can prioritize them as they do not have other projects and duties.
Don't get me wrong, this issue is important to me. However, we are talking here about a problem in the order of milliseconds; it might take several days for it to be diagnosed, and a couple of more to find a sustainable fix. Even if I had a budget for that—suppose for a moment I pay for it out of pocket—then I'd still have the problem that other projects prevent me from clearing up our researchers' agendas for the coming two weeks.
> What options do we have to get this issue fixed within days?
For this specific issue:
- change `for await (const r of container.ldp_contains)` to `for await (const r of container.ldp_contains.img)` or similar
- an `.img` that fetches the specific needed triple rather than executing a SPARQL query with Comunica

For other issues like this in the future:
I will try to spend some more hours on this in the coming days, as I already have in the past days, but I am also constrained by my daytime job.
> > @RubenVerborgh we've been doing some testings but they're not really positive.
>
> In what sense?
I realise that we already fetch the values in parallel in our framework. Basically, for each property we want to display, an element is inserted in the DOM. At the end, all the elements are responsible for fetching their own value in parallel. For my tests, I display:
It seems that the more calls we have in parallel, the more the browser struggles to resolve them.
I made some other tests "from scratch", outside of the framework (only JavaScript, no transpilation). I tried to display 3 properties for each user from a container of 60. Here are the timings for each loop to resolve entirely:
```
// for await : 3200ms
// asyncMap, concurrency 1 : 3700ms
// asyncMap, concurrency 2 : 3100ms
// asyncMap, concurrency 4 : 3000ms
// asyncMap, concurrency 16 : 3000ms
// asyncMap, concurrency 32 : 2900ms
// asyncMap, concurrency 64 : 2900ms
// asyncMap, concurrency 128 : 2900ms
```
Here is the code I used:
```js
// test 1
const container = solid.data["https://api.alpha.happy-dev.fr/users/"];
console.time(container.toString());
for await (const r of container.ldp_contains) {
  await r['rdfs:label'];
  await r['http://happy-dev.fr/owl/#username'];
  await r['http://happy-dev.fr/owl/#email'];
}
console.timeEnd(container.toString());
```

```js
// test 2
const container = solid.data["https://api.alpha.happy-dev.fr/users/"];
const mapResources = await asyncMap(
  128,
  async r => ({
    name: (await r['rdfs:label']).toString(),
    username: (await r['http://happy-dev.fr/owl/#username']).toString(),
    email: (await r['http://happy-dev.fr/owl/#email']).toString(),
  }),
  container.ldp_contains
);
console.time(container.toString());
for await (let r of mapResources) {
  console.log(r);
}
console.timeEnd(container.toString());
```
> It seems that the more calls we have in parallel, the more the browser struggles to resolve them.
Yeah, at some point you want to introduce throttling; there is a cut-off point for parallel gains for any task in the browser. Here's where I am doing that for the React components: https://github.com/solid/react-components/blob/6ea5e5b975a91b9734f2faea5f05e013718ed773/src/ExpressionEvaluator.js#L27
> I made some other tests "from scratch", outside of the framework (only javascript, no transpilation).
But in Node.js then? Or the browser? Because my Node.js timings have been good so far.
Yes, in the browser too for these tests.
Node.js timing is around 145ms on my machine with concurrency; so again a magnitude of difference.
> Yes in the browser too for these tests
Yeah, but then the code behind it is still transpiled.
@happy-dev
> What options do we have to get this issue fixed within days?
I have sketched a possible solution here: https://github.com/RubenVerborgh/LDflex/issues/44
While this does not address the root cause (= queries have an overhead in browsers), it will reduce the number of queries overall. We have not hit the threshold of a problematic number of queries in Node.js yet (I think 145ms for 180 SPARQL queries is quite acceptable), but we will eventually hit that point if more queries are needed (as do similar solutions like ActiveRecord).
In parallel, we can look at what is slowing down browsers.
Testing progress: the branch https://github.com/solid/query-ldflex/tree/test/45-performance now has a `./run-test` script that executes the same code on the Node version and the browser version.
The timings are currently:
So we can test things without a browser now.
> Don't get me wrong, this issue is important to me.
I can tell, and that is totally clear to me; even though you are right to say it, things are often better understood when said :-)
> Help us find sustainable funding for Comunica, so we can hire a dedicated developer.
I'd be happy to dig into that; even though it won't help us here, I am totally up for catching that ball.
We'll reach back to you tomorrow regarding our progress towards a short-term solution.
In any case, we're definitely spending way too much time on SPARQL parsing (https://github.com/RubenVerborgh/SPARQL.js/issues/94). It's a major contributor to the total time.
Here's the top time spent in the Node version:
```
2428 57.1% t __ZN2v88internal12_GLOBAL__N_115InstallFunctionEPNS0_7IsolateENS0_6HandleINS0_8JSObjectEEENS4_INS0_6StringEEENS0_12InstanceTypeEiiNS4_INS0_10HeapObjectEEENS0_8Builtins4NameE
1037 42.7% t __ZN2v88internal12_GLOBAL__N_115InstallFunctionEPNS0_7IsolateENS0_6HandleINS0_8JSObjectEEENS4_INS0_6StringEEENS0_12InstanceTypeEiiNS4_INS0_10HeapObjectEEENS0_8Builtins4NameE
229 22.1% LazyCompile: *end /Users/ruben/Documents/JavaScript/LDflex-Comunica/node_modules/asynciterator/asynciterator.js:263:13
229 100.0% LazyCompile: *processImmediate internal/timers.js:396:28
729 17.1% T _fcntl$NOCANCEL
```
compared to the top time in the browser version (unshimmed `setImmediate`):

```
58709 68.8% T _fcntl$NOCANCEL
16884 19.8% t __ZN2v88internal12_GLOBAL__N_115InstallFunctionEPNS0_7IsolateENS0_6HandleINS0_8JSObjectEEENS4_INS0_6StringEEENS0_12InstanceTypeEiiNS4_INS0_10HeapObjectEEENS0_8Builtins4NameE
8693 51.5% t __ZN2v88internal12_GLOBAL__N_115InstallFunctionEPNS0_7IsolateENS0_6HandleINS0_8JSObjectEEENS4_INS0_6StringEEENS0_12InstanceTypeEiiNS4_INS0_10HeapObjectEEENS0_8Builtins4NameE
3602 41.4% LazyCompile: *processPromiseRejections internal/process/promises.js:163:34
```
So somewhat more time in promise rejections, apparently.
But close to 70% of the time is spent in `_fcntl$NOCANCEL`, and it is currently unclear where that comes from.
I can confirm that the problem is in the browser-supplied version of `stream`, which is `stream-browserify`. When swapping it out for the native Node.js `stream` implementation, the performance between the browser and Node.js versions becomes identical.
And the root issue is likely this one: https://github.com/defunctzombie/node-process/issues/86
I face performance issues when accessing data on a container. It makes all our apps unusable with LDflex for now. For example, with the following code:

It takes ~3000ms to display the `img` property of a resource 70 times. For each resource, it takes between 10ms and 90ms to display its `img`. Is there something we can do to improve this?