Running the containerized version of the server on my beefy desktop I see:
using ... - 6 sec
On Heroku:
using ... - 13 sec
I think what I said earlier about spinning it up every time was not correct, or at least it's not now that we are on the hobby plan. I don't think preboot would help if I'm understanding it correctly. I'll keep thinking about it.
Yeah, from the explanation here, I understand that preboot won't help.
I set up the major sections to instead report elapsed times. It was pretty stable at:
Diff equ: begin # none, line 2:
dx = (a * x - b * x * y) - 2x # none, line 3:
dy = -c * y + d * x * y
end
Params: Expr[:(a => 1.5),:(b => 1),:(c = 3),:(d = 1)]
tspan: (0.0,10.0)
u0: [1.0,1.0]
vars: Symbol[:x,:y] type: Array{Symbol,1}
Setup time: 0.114283197
Solve time: 1.322976144
Interp time: 0.073161357
Plot time: 0.243408674
Done, took this long: 1.75515146
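For reference, these per-stage numbers come from wrapping each major section in @elapsed. A minimal sketch of that harness, where build_problem and make_plot are hypothetical stand-ins for the server's actual stages:

```julia
# Minimal sketch of the timing harness; build_problem/make_plot are
# hypothetical stand-ins for the server's actual stages.
total_time = @elapsed begin
    setup_time = @elapsed prob = build_problem(request)
    solve_time = @elapsed sol  = solve(prob)
    plot_time  = @elapsed plt  = make_plot(sol)
    println("Setup time: ", setup_time)
    println("Solve time: ", solve_time)
    println("Plot time: ", plot_time)
end
println("Done, took this long: ", total_time)
```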
If the same command was run twice, it was much faster, so this was clearly JIT time. The JIT cost is incurred every time because in Julia, Function is an abstract type and every function gets its own concrete subtype, so the solver's code is compiled specifically for each new function it receives. For longer simulations, this is a good strategy. For our purposes, it's likely not. So I set up a new problem and solution type that isn't concretely typed to the function, to stop the JIT, and added a new optional argument to turn off specialization of the Integrator.
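A minimal sketch of the idea, with hypothetical type and field names (not the actual DiffEqWebBase definitions):

```julia
# Storing the user's function behind the abstract `Function` type keeps the
# problem's concrete type identical across requests, so solver code compiled
# once is reused instead of being re-JITted for each anonymous function.
struct WebODEProblem
    f::Function                    # abstract field: no per-function specialization
    u0::Vector{Float64}
    tspan::Tuple{Float64,Float64}
end

prob1 = WebODEProblem((u, t) -> -u, [1.0, 1.0], (0.0, 10.0))
prob2 = WebODEProblem((u, t) -> 2u, [1.0, 1.0], (0.0, 10.0))
typeof(prob1) === typeof(prob2)    # true: same concrete type for new functions
```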
Then I went into the solve command itself to find out what was taking all of the time:
Setup time: 0.102605476
"init dt time" = "init dt time"
0.094342 seconds (53.33 k allocations: 1.911 MB, 14.58% gc time)
"opts time" = "opts time"
0.000018 seconds (24 allocations: 544 bytes)
"cache time" = "cache time"
0.000099 seconds (158 allocations: 8.573 KB)
"interp time" = "interp time"
0.005830 seconds (9.47 k allocations: 407.344 KB)
"sol time" = "sol time"
0.006739 seconds (13.38 k allocations: 557.391 KB)
"integrator time" = "integrator time"
0.021466 seconds (54.90 k allocations: 2.199 MB)
0.133771 seconds (131.72 k allocations: 5.082 MB, 10.28% gc time)
0.000562 seconds (1.01 k allocations: 47.000 KB)
Solve time: 0.134676033
Interp time: 0.098140461
Plot time: 0.279194113
Done, took this long: 0.615901738
This is what it ends up being. What this shows is that the solve time is fairly minimal now. The 0.09 seconds of init dt time was then chipped away by inlining the initdt calculation (since otherwise it sits behind a function call, causing JIT lag).
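The change amounts to writing the heuristic's few lines directly in solve's body rather than calling out to a separate initdt routine. A simplified stand-in for that heuristic (not the actual OrdinaryDiffEq code):

```julia
using LinearAlgebra: norm

# Simplified sketch of a first-step heuristic. In the real change these
# lines live inline in solve's body, so no extra function gets JIT-compiled
# for each new user function type.
function solve_sketch(f, u0, tspan)
    t0, tend = tspan
    f0 = f(u0, t0)                        # one RHS evaluation at the start
    d0 = norm(u0) / max(norm(f0), eps())  # scale of u relative to u'
    dt = min(0.01 * d0, tend - t0)        # conservative first-step guess
    # ... the stepping loop would follow here, using dt ...
    return dt
end

solve_sketch((u, t) -> -u, [1.0, 1.0], (0.0, 10.0))
```

With that change, the timings became: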
Diff equ: begin # none, line 2:
dx = a * x - b * x * y # none, line 3:
dy = -c * y + d * x * y
end
Params: Expr[:(a => 1.5),:(b => 1),:(c = 3),:(d = 1)]
tspan: (0.0,10.0)
u0: [1.0,1.0]
vars: Symbol[:x,:y] type: Array{Symbol,1}
Setup time: 0.096056555
Init dt time: 0.007433241
"init dt time" = "init dt time"
"opts time" = "opts time"
0.000010 seconds (24 allocations: 544 bytes)
"cache time" = "cache time"
0.000122 seconds (158 allocations: 8.573 KB)
"interp time" = "interp time"
0.007405 seconds (9.47 k allocations: 407.203 KB)
"sol time" = "sol time"
0.008374 seconds (13.38 k allocations: 557.250 KB)
"integrator time" = "integrator time"
0.037656 seconds (54.90 k allocations: 2.199 MB, 29.93% gc time)
0.054905 seconds (78.25 k allocations: 3.163 MB, 20.53% gc time)
0.000799 seconds (1.48 k allocations: 68.969 KB)
Solve time: 0.055960593
Interp time: 0.099727999
Plot time: 0.28205398
Done, took this long: 0.542838202
Essentially, what we're left with then is plot and server transfer time. The plotting was recently refactored to make it easy to get the vectors that it would plot. I'll next try to compute those directly from the solution and go back to plotting with Plotly in the user's browser.
Requires using OrdinaryDiffEq.jl master and DiffEqWebBase.jl
Got more things precompiling and stopped the double interpolation. Same computer:
Diff equ: begin # none, line 2:
dx = a * x - b * x * y # none, line 3:
dy = -c * y
end
Params: Expr[:(a => 1.5),:(b => 1),:(c = 3),:(d = 1)]
tspan: (0.0,10.0)
u0: [1.0,1.0]
vars: Symbol[:x,:y] type: Array{Symbol,1}
Setup time: 0.100075413
Init dt time: 0.007034807
Solve time: 0.018159045
length(sol) = 29
Plot time: 0.106762212
Done, took this long: 0.233386267
I refactored DiffEqBase's plot recipe a little bit and then I tested what happened when the plot was deconstructed:
Diff equ: begin # none, line 2:
dx = a * x - b * x * y # none, line 3:
dy = -c * y
end
Params: Expr[:(a => 1.5),:(b => 1),:(c = 3),:(d = 1)]
tspan: (0.0,10.0)
u0: [1.0,1.0]
vars: Symbol[:x,:y] type: Array{Symbol,1}
Setup time: 0.084397555
Init dt time: 0.006362853
Solve time: 0.016455056
Solution handling time: 0.099741817
Plot time: 0.0012479
Done, took this long: 0.209583435
It turns out that generating the plot itself takes almost no time; almost all of the time is in the solution handling. So if we don't want to send back the (newt, newu) values, we can go back to using plot(sol) and there's essentially no difference in time. But if we keep the deconstructed form of the plot, then the solution handling isn't done twice and we gain another 0.1 seconds (see the sketch below). In case we end up changing our minds, I made a branch deconstruct_plot which holds how to do this, but I'm keeping the deconstructed form in for now.
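A minimal sketch of what the deconstructed form amounts to, using today's API (the 200-point grid is an arbitrary choice here):

```julia
using OrdinaryDiffEq, Plots

prob = ODEProblem((u, p, t) -> [1.5u[1] - u[1] * u[2], -3u[2] + u[1] * u[2]],
                  [1.0, 1.0], (0.0, 10.0))
sol = solve(prob, Tsit5())

# Densify once through the solution's interpolation and keep the raw vectors,
# instead of letting plot(sol) redo the same interpolation inside the recipe.
plott = range(sol.t[1], sol.t[end], length = 200)
plot_vecs = [[sol(t)[i] for t in plott] for i in 1:length(sol.u[1])]
plot(plott, plot_vecs)   # plotting the precomputed vectors is nearly free
```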
Final damage:
Requires OrdinaryDiffEq.jl master, DiffEqBase.jl master, and the new DiffEqWebBase (for the non-specializing problem / solution). Timing result:
Diff equ: begin # none, line 2:
dx = a * x - b * x * y # none, line 3:
dy = -c * y
end
Params: Expr[:(a => 1.5),:(b => 1),:(c = 3),:(d = 1)]
tspan: (0.0,10.0)
u0: [1.0,1.0]
vars: (:x,:y) type: Tuple{Symbol,Symbol}
Setup time: 0.097958538
Init dt time: 0.006935318
Solve time: 0.018173828
Solution handling time: 0.105786954
Plot time: 0.001061532
Done, took this long: 0.231002539
I didn't test whether the main cost in the setup time is the ParameterizedFunctions.jl ODE parsing, but I assume it is, and it would take quite a bit of work to get that down (I'm sure it can go down since it uses a lot of Dicts, but that's a project for rewriting the parser, and there's a lot more I want to do there). Even then, that's at most ~0.1 seconds we can gain. The other part is solution handling, i.e. interpolating the output at more points to get a good plot. That's unavoidable, and is ~0.1 seconds. That leaves us at 0.2 seconds. Given that Heroku is twice as slow, that's ~0.4 seconds.
Everything else seems negligible, unless there's "server transfer time". In that case, given this setup, we could just send back the plot_vecs (since the plot recipe is deconstructed) along with the other plotting variables (they're all right there in the plot command) and have the user plot them on their own computer. But since plotting is less than 1% of the compute time, this would only be worthwhile if it reduces the server transfer time, so that's something we'd have to measure.
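If we did go that route, the transfer itself could be as simple as serializing the deconstructed vectors; a sketch assuming JSON.jl and the plott/plot_vecs from the earlier sketch (key names are arbitrary):

```julia
using JSON

# Ship the deconstructed plot data to the browser so Plotly can render it
# client-side; assumes plott/plot_vecs from the earlier sketch.
payload = JSON.json(Dict(
    "t"      => collect(plott),
    "series" => plot_vecs,
    "labels" => ["x", "y"],
))
# The HTTP response would carry `payload`; the page hands it to Plotly.
```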
One little extra note: I made it throw an error if maxiters is hit. Otherwise, it would stall for a while trying to make a huge plot. We may want to lower what we are using for maxiters a bit to constrain the user a bit more.
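The check itself is a one-liner in the stepping loop; a hedged sketch of the behavior change (not the actual OrdinaryDiffEq loop):

```julia
# Fail fast at maxiters instead of returning an enormous partial solution
# that then stalls the plotting step. `step` is a stand-in stepper.
function step_until(step, t0, tend; maxiters = 10^4)
    t, iters = t0, 0
    while t < tend
        iters += 1
        iters > maxiters && error("maxiters ($maxiters) reached; aborting solve")
        t = step(t)
    end
    return t
end

step_until(t -> t + 0.1, 0.0, 10.0)        # completes normally
# step_until(t -> t + 1e-9, 0.0, 10.0)     # would now error instead of stalling
```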
I'll try to figure out how to deploy this, and see what the response times are like.
Getting about the same results in the containerized image:
Diff equ: begin # none, line 2:
dx = a * x - b * x * y # none, line 3:
dy = -c * y + 2x
end
Params: Expr[:(a => 1.5),:(b => 1),:(c = 3),:(d = 1)]
tspan: (0.0,10.0)
u0: [1.0,1.0]
vars: Symbol[:x,:y] type: Array{Symbol,1}
Setup time: 0.103284423
Init dt time: 0.007445684
Solve time: 0.016999836
Solution handling time: 0.103298754
Plot time: 0.00134516
Done, took this long: 0.233630466
The response time is very reasonable now on the deployed version.
It may be useful to check in the future whether all of this JIT avoidance is still necessary after changes to the compiler.
What are some ways we can get the response time down? Would getting the Standard plan with preboot help? I would be willing to pay for the Standard dyno if it would help; just let me know. The other things along that path would be to put ParameterizedFunctions, DiffEqBase, and OrdinaryDiffEq in the sysimg (sketched below), and to try out precompilation in ParameterizedFunctions (it may actually work already? It should).
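For the sysimg route, a sketch with today's tooling; PackageCompiler.jl postdates this issue, so at the time this would have gone through a custom userimg.jl instead:

```julia
using PackageCompiler

# Bake the DiffEq stack into a custom system image so the server process
# starts with these packages already compiled.
create_sysimage([:ParameterizedFunctions, :DiffEqBase, :OrdinaryDiffEq];
                sysimage_path = "diffeq_sysimg.so")
# Then launch the server with: julia --sysimage diffeq_sysimg.so ...
```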