Closed: ozjerry closed this issue 5 years ago
That looks suspicious indeed! Thanks for the investigation and the detailed comparison! I haven't really started on performance yet, as I've been focusing so far on functionality-related bugs and missing libraries. We need some sort of perf testing added to the code base, and some baselines. In the meantime, some profiling should point out the suspect. Performance should scale linearly, since it is just a .NET dictionary under the hood. I suspect it is just a side effect of some calculation that blows up on large arrays.
Thanks @OmarTawfik. If you need someone to do real world performance testing later on then let me know. I kind of geek out on that stuff.
Awesome! Please go ahead if you'd like :) If you want pointers, I'd start by writing a unit test like the one below, then profiling the run to see what is taking so long on larger arrays. It is probably one of two things:
1. You'll find that the tests themselves slow down, which means part of the execution engine is misbehaving; it is probably a silly side effect somewhere that we need to fix.
2. The runs will scale linearly as expected, in which case it might be something related to Blazor (just the UI slowing down for some reason), and we need to look into that instead.
@OmarTawfik, we just jumped a few steps ahead of my level of expertise :). I was thinking more along the lines of performance testing in the browser...
But I'm game for a challenge, so I'll definitely give it a try. Give me a day or two to get Visual Studio installed and then I'll get started.
@ozjerry oh, no worries at all! :) I thought you already looked into that. I'll try to dedicate some time tomorrow to look at it as well and update you with what I find.
So, I think I found what's wrong. Here are the steps I took, in case you're curious and want to try it out:
```csharp
[Fact]
public Task TestPerf()
{
    return new SmallBasicCompilation(@"
n = 1000000
start = Clock.ElapsedMilliseconds
for i = 1 to n
  a[i] = i
EndFor
end = Clock.ElapsedMilliseconds
TextWindow.WriteLine(end - start + ""ms"")
").VerifyLoggingRuntime();
}
```
Record a CPU profile while it is running, and use Break All to stop it. Most of the time is spent in `ExecuteStoreArrayElementInstruction` and the constructor of `ArrayValue`. A quick look at it reveals the culprit: we copy the whole memory on every array assignment. It was a workaround I added during development to make debugging easier, but I forgot to create an issue to fix it before we go live 😏 I'll look into fixing this soon! Thanks so much for the info and the analysis.
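The copy-on-every-assignment effect can be sketched in a few lines of Python. This is purely illustrative (the function names and the use of a Python dict are my own, not the actual C# implementation): copying the whole backing store before each write turns n writes into roughly n²/2 element copies, while mutating in place stays linear.

```python
import time

def fill_copying(n):
    """Simulates the buggy behaviour: copy the whole backing
    dictionary on every element assignment."""
    a = {}
    for i in range(n):
        a = dict(a)   # full copy per write -> O(n^2) overall
        a[i] = i
    return a

def fill_in_place(n):
    """Simulates a plain mutable dictionary: write in place."""
    a = {}
    for i in range(n):
        a[i] = i      # amortised O(1) per write -> O(n) overall
    return a

for n in (1_000, 2_000, 4_000):
    t0 = time.perf_counter(); fill_copying(n);  t_copy = time.perf_counter() - t0
    t0 = time.perf_counter(); fill_in_place(n); t_mut  = time.perf_counter() - t0
    print(f"n={n}: copying {t_copy:.4f}s, in-place {t_mut:.6f}s")
```

Doubling n roughly quadruples the copying time but only doubles the in-place time, which matches the profile above.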
In V0.91, arrays are reasonably efficient and the time taken to iterate over them increases in line with size. But in V1, as the array gets larger the time to iterate increases by several orders of magnitude.
Take the following simple program:
In V0.91 it performs like so:

- n = 10: ~1ms
- n = 100: ~7ms
- n = 1000: ~70ms
- n = 10000: ~750ms

But in V1 it performs like this:

- n = 10: ~15ms
- n = 100: ~200ms
- n = 1000: ~12000ms
- n = 10000: never finishes
So for V0.91, as the array gets bigger, the iteration time increases fairly linearly. But in V1, the iteration time is growing much faster than linearly, roughly quadratically.
The V1 times are noticeably slower overall, but V1 is slower than V0.91 across the board, so I assume that is just a lack of optimisation while you get the features sorted out.