I'm afraid that this problem may be inherent to the approach used by Threepenny-GUI. Essentially, building or modifying HTML elements with Threepenny is done by sending JavaScript expressions to the browser over a WebSocket connection, which are then evaluated with eval.
There are several bottlenecks:
1. Each FFI call is sent to the browser as a separate WebSocket message, so every call pays the network latency.
2. Some calls require the browser to send a result back to the server before the server can continue, i.e. a full round trip.
3. The browser evaluates the JavaScript with the comparatively slow eval function.
Looking at your code, it appears that bottleneck 2 may be relevant. In particular, whenever you use the getElementById function, the server has to wait for the browser to send, well, a representation of the element (essentially a UID) before it can continue. It would be more efficient if you did not rely on element IDs, but instead used ordinary Haskell variables to hold the elements to be modified later. This way, the server does not need to wait for answers from the client when building the page.
To quickly test whether this does indeed improve performance: the current GitHub version of Threepenny includes a function timestamp that prints timestamp information in the browser console and can be used to debug performance bottlenecks. (You may need to activate a cookie in the browser by running Haskell.log.enable(true) in the JS console.)
getElementById is only used once to get the place to insert, and then to register the event handlers. I also thought this might be the bottleneck, as it clearly requires a full round trip. But that suspicion did not turn out to be true: removing them has negligible impact on the performance.
If we only leave the DOM building code active, I can still see the elements appear slowly, one by one, in the browser. It can take >100ms for each iteration of the loop, making the construction of the page take seconds.
It should be possible to generate the entire page with essentially a single client/server communication, as the entire HTML could be built on the server, sent over in one go and put into the DOM with a single assignment to innerHTML. For instance, I could use blaze-html to generate everything, dump it into a file and embed it with an iframe into my site. Doing it this way would complete orders of magnitude faster, so I can't imagine that there is some inherent problem why a couple of dozen basically static DOM elements would require seconds to be created in the browser.
I did some more experiments to see if there is any fundamental reason for things to be this slow and if I could work around those limitations. If I stop using threepenny's HTML combinator library and simply generate the page myself, then submit it using the FFI in one go, like this...
runFunction $ ffi "document.getElementById('content').innerHTML = \"<Content>\""
...the page loading / updating time goes from seconds to milliseconds. The addition of event handlers introduces some noticeable delays again, though. It's unfortunately not obvious how to batch submit those without digging deep into the library. In any case, it seems threepenny does hundreds / thousands of completely redundant FFI calls, submitting elements and attributes one-by-one. This results in page construction times that feel like browsing the web on a 90s 56k modem. There really needs to be some kind of batching with the UI monad / HTML combinators, otherwise this entire part of the library seems completely unusable for anything but very simple GUIs. This is especially bad in combination with bug #130, which requires frequent page reloads on mobile clients.
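As an aside, the hand-built string in a call like the one above breaks as soon as the HTML itself contains double quotes or newlines; threepenny's %1 placeholder substitution sidesteps this, but if you do assemble the statement yourself, the required escaping can be sketched roughly like this (helper names are hypothetical, not threepenny API):

```haskell
-- Minimal sketch: escaping HTML for inclusion in a hand-built
-- JavaScript string literal. Helper names are hypothetical, not
-- threepenny API; passing the HTML via ffi's %1 placeholder avoids
-- the problem entirely.

escapeJS :: String -> String
escapeJS = concatMap esc
  where
    esc '\\' = "\\\\"
    esc '"'  = "\\\""
    esc '\n' = "\\n"
    esc '\r' = "\\r"
    esc c    = [c]

-- The full statement handed to runFunction in one go.
setInnerHTML :: String -> String -> String
setInnerHTML elemId html =
  "document.getElementById('" ++ elemId ++ "').innerHTML = \""
    ++ escapeJS html ++ "\""

main :: IO ()
main = putStrLn (setInnerHTML "content" "<p class=\"big\">Hi</p>")
```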
Sorry if all this seems overly critical. I really like the threepenny concept, would love to keep using all of its components and it's very unfortunate that this entire part of the library is basically unusable with the performance of the current implementation. I'll just hack around this for now!
As indicated, the possible bottlenecks are the ones listed above. I actually have some code that has the potential to address bottleneck 3, that is, the relative slowness of JavaScript's eval function. However, judging from your experience, this does not seem to be the limiting factor.
In any case, it seems threepenny does hundreds / thousands of completely redundant FFI calls, submitting elements and attributes one-by-one.
I probably don't want to change the fact that everything is an FFI call, as changing this would go against the spirit of the library. However, if it's too slow, then something ought to be done.
It turns out that the bottleneck is actually 2! Whenever Threepenny creates a new HTML Element, it uses callFunction to create the element, because it has to get a stable pointer for the foreign object. This is the main bottleneck. I will look into this.
Sorry if all this seems overly critical. I really like the threepenny concept, [...]
That's alright, if it doesn't work, then it doesn't work; no point denying it. :smile: But I'm happy that you like it.
You're right, given that the performance difference between localhost & LAN is so severe, it must be some form of back- and forth communication, a latency issue.
I've rewritten my application yesterday to do the generation with blaze-html and then submit the page with a single FFI call. It now loads perceptually as fast as static HTML from the local disk. I still have to register ~75 event handlers after that, which unfortunately can take a good second or two over LAN. Each handler needs a getElementById followed by the actual on call, four trips over the websocket, I guess.
Couldn't you batch the FFI calls somehow? For instance, I don't actually run the page building in the UI monad; I just build up a list of UI actions to register event handlers and submit them all later once the DOM is there. Couldn't you just build up all the required actions inside your RWS stack and then have a submit :: UI a -> UI a combinator that clears the built-up actions and does everything in a single, efficient websocket message + eval() call? I hope something in this spirit could be done without changing the API.
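For what it's worth, the kind of batching being proposed here can be modeled in a few lines with a plain IORef; none of these names are threepenny API, it is just a sketch of the idea:

```haskell
import Data.IORef

-- Hypothetical model of FFI call batching (not threepenny API).
-- Instead of sending each JavaScript snippet over the websocket
-- immediately, queue it and flush everything later in a single
-- message + eval() call.

type CallBuffer = IORef [String]

newCallBuffer :: IO CallBuffer
newCallBuffer = newIORef []

-- Queue a JS statement instead of sending it right away.
bufferCall :: CallBuffer -> String -> IO ()
bufferCall buf js = modifyIORef buf (js :)

-- Combine queued statements (stored in reverse) into one script.
batchScript :: [String] -> String
batchScript = unlines . reverse

-- Hand the combined script to the real send function in one go.
flushCallBuffer :: CallBuffer -> (String -> IO ()) -> IO ()
flushCallBuffer buf send = do
  queued <- atomicModifyIORef buf (\xs -> ([], xs))
  send (batchScript queued)

main :: IO ()
main = do
  buf <- newCallBuffer
  bufferCall buf "console.log(1);"
  bufferCall buf "console.log(2);"
  flushCallBuffer buf putStr  -- putStr stands in for the websocket send
```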
Couldn't you batch the FFI calls somehow?
That doesn't help with bottleneck 2, only with bottleneck 1. Whenever an FFI call returns a result, the client has to communicate that result to the server, because the server could, in principle, decide to perform an IO action or something else depending on the value of that result. Put differently, the UI monad is not a "closed world": it allows arbitrary IO, so there is no way around having the client send results to the server.
Bottleneck 1 can be addressed by batching FFI calls, at least those that use runFunction.
Pulling back from looking at the actual monad and FFI APIs, the HTML combinators seem declarative in nature and don't return anything. Of course, if you mix building up DOM elements with some FFI calls you could end up in a situation where you add elements, then eval() JS that depends on these elements being in place already, making deferring / batching break the API.
I'm just looking for a way to add maybe a few hundred elements to the DOM without it taking seconds. My solution so far is to build my own HTML with blaze-html and insert it on the client with a single threepenny FFI call. Can't the speed of this solution be somehow replicated with threepenny's HTML combinators as well?
Right now, after changing the HTML construction to blaze + 1xFFI, the slowest part of page building is registering events. I'm basically doing this ~75x:
getElementById window "button" >>= \button ->
on UI.click button $ \_ -> do
...
This does a number of roundtrips over the websocket, meaning those ~75x events can take up to a second to be registered. This could, theoretically, all be done in a tiny fraction of that time with a single one-way message containing some JS to be eval()'ed in the spirit of this:
getElementById('button').onclick = function() { sendWSClickEvent('button'); }
(another 74x this)
I'm not sure if that can be expressed with the current API. Just spitballing here.
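The repeated registration JS could itself be generated by a small pure helper, something like the following (a hypothetical sketch, assuming a client-side sendWSClickEvent function as in the snippet above):

```haskell
-- Hypothetical helper: generate one JS blob that registers a click
-- handler for many element IDs at once, so that a single message +
-- eval() replaces ~75 round trips. `sendWSClickEvent` is assumed
-- to exist on the client, as in the snippet above.
registerClicks :: [String] -> String
registerClicks = concatMap oneHandler
  where
    oneHandler i =
      "document.getElementById('" ++ i ++ "').onclick = "
        ++ "function() { sendWSClickEvent('" ++ i ++ "'); };\n"

main :: IO ()
main = putStr (registerClicks ["button1", "button2"])
```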
The latest commit 3e9c4e193227d5920bf9fdec2957bba4676b4828 addresses bottleneck 2. Whenever Threepenny creates a new HTML Element, it now uses the equivalent of runFunction to create the element; the browser no longer has to send a response.
Could you test whether this helps with your original code?
(This change is a prerequisite for batching FFI calls: in the current API (UI monad), it is not possible to batch calls where the server awaits a response.)
I hope I didn't make a mistake, but performance does not seem to change perceptibly (didn't measure) from Hackage 0.6.0.6. I built with my last version that used threepenny HTML elements (a980d0c) and made sure Stack pulls in the commit you just made from GitHub.
Ok. Assuming that the old Threepenny code was not in the compilation cache, then my commit didn't help as much as I had hoped for.
Say, would it be possible for you to make a branch in your hue-dashboard project that does not depend on the Hue API and instead displays a collection of mock lights? (Obviously, I don't have a collection of smart LED lights.) This way, I could directly test performance using your code and wouldn't have to ask you again and again if I made a change. :smile:
Stack rebuilt threepenny and it worked fine when testing out the %-escape fix, so I'd assume it built fine :/
Having an 'offline' mode has been on my TODO list for sure. I'd implement that right now, but the problem is that the code which actually uses the threepenny HTML combinators is quite a few revisions behind. So it wouldn't help much if I implemented it on top of HEAD, I guess. Maybe I could do so and create a branch where I do some cherry picking and rebasing and whatever to backport these changes. There also seem to be external tools to mock a RESTful API. I'd have simple requirements, basically just a few static responses to a handful of queries. If you happen to have a favorite web service mocking tool, I'd be happy to just use that and provide you with the dataset to run my program. Otherwise, I'll have a look at implementing & backporting an offline mode.
If you happen to have a favorite web service mocking tool, I'd be happy to just use that and provide you with the dataset to run my program. Otherwise, I'll have a look at implementing & backporting an offline mode.
For the purpose of debugging this issue, I think there is a simpler way. Looking at your code, why not simply print the contents of the _aePC and lights variables to the console, and embed the result in a static .hs file? Since I only need a setup function that creates many HTML elements, it's enough if the data types for these variables are in scope. Code relating to interactivity and so on can be removed for the purpose of creating a minimal test case.
You're completely right, I'm way overthinking this. I'll make a branch with what you suggest, but it might take a few days.
Sounds good!
There's now a branch called 'threepenny-offline', hacked up from the latest revision that used the threepenny HTML combinators for building the initial page.
Right now, in HEAD with blaze for the actual page building, this is the slowest part for me:
getElementById window "button" >>= \button ->
on UI.click button $ \_ -> do
...
I'm a bit worried this will be too slow if somebody has 30-40 lights. It seems the speed of the handler registration is also dependent on the client. Desktop browsers register the events much faster than mobile ones. Server speed is irrelevant, moving from a Mac to a Raspberry Pi had no impact. Latency also matters, of course, localhost is faster than LAN.
Thanks for the threepenny-offline branch! I'm looking into this, but setting up the build environment takes some time: downloading GHC 7.10.3, etc. Will try again later.
Great! Setup should all happen automatically with stack. I recently did a build in a fresh VM and it worked without any manual intervention etc.
I managed to try out the threepenny-offline branch (very cool app, by the way!), and conducted a few tests. It appears that batching FFI calls is indeed the way to go.
Thanks, good to hear! ;-)
The current development branch v1.1 adds quite a lot of functionality over the offline branch you just built. It works without any Hue hardware out-of-the-box, so it's a good test case for the second issue here. In that branch all the HTML generation has been replaced by blaze-html + 1x threepenny FFI call, but there's enough events being registered now to cause an initial delay of a couple of seconds when loading up the app on my iPhone 6. The page displays a 'spinner' symbol at the top that starts as soon as the HTML is ready and stops when all UI events are registered, so you can quite easily see the delay. You can make the delay more prolonged by creating a few more scenes / schedules (+ button), as any 'tile' added to the UI adds a few more event handlers to be registered.
but there's enough events being registered now to cause an initial delay of a couple of seconds
I'm confident that this can be fixed, too. There are two ways to obtain an Element:
1. newElement, or
2. getElementById.
When registering events, your code uses the latter approach, which is probably more familiar from JavaScript. However, the former approach has the advantage that the browser never has to send anything back to the server (thanks to commit 3e9c4e193227d5920bf9fdec2957bba4676b4828), which allows us to batch this call. Once batching is in place, the first approach will be much faster. It is also more natural from the Haskell point of view: elements are referenced by variables, not by "id" strings.
That's good news, in principle. Currently, I of course need to rely on getElementById because I can't use threepenny's Element for performance reasons. I never have an Element, it's all blaze-html. Is there still a way to optimize this? I'd settle for getElementsById (multiple elements, single roundtrip, less latency?). If DOM construction with threepenny's Element were fixed (3e9c4e1 didn't seem to make a difference for me) to be as fast as blaze-html, I could of course switch back to using the native HTML combinators. To be honest, I'd rather avoid rewriting all of my HTML generation again, though ;-)
Is there any progress / ETA on this? I don't need a nice or elegant solution, just anything at all to work around this bug. I'd be perfectly happy to just submit everything in one batch for better performance. The DOM building stuff isn't even important as there is at least a viable workaround, but the event handling is more tricky to fix from outside the library. It's difficult to provide a good user experience (especially with #130 also being a factor) if the page load times are several seconds even after not using any HTML combinators, and it's not really possible to do much work if any UI handler adds another ~50ms or so to the load time.
I know what I need to do in order to fix both the DOM building and the event handling (batch everything), it's just a matter of finding free time right now… It's the next programming task I do, but it may be another couple of weeks before enough free time has accumulated.
That would be great! ;-)
I just pushed a basic benchmarking commit to the v1.1 branch (works offline out-of-the-box), with some interesting results; it prints something like this:
FF localhost:
Page building: 0.11s / Event registration: 0.67s
FF LAN, faster machine:
Page building: 0.07s / Event registration: 0.65s
Safari, localhost:
Page building: 0.00s / Event registration: 0.45s
iPhone 6 Safari LAN:
Page building: 0.01s / Event registration: 3.23s
This is assuming the FFI calls are blocking. It's interesting to see that page building now seems to be largely independent of the network and server machine. LAN vs localhost makes no difference, but e.g. Safari seems to be a lot faster than Firefox, and faster client machines result in faster times. For page building there are 3 FFI calls (one to get the user ID cookie, one to get the client's time and the final one to submit / insert all the HTML). In any case, since it's at worst ~100ms, all should be fine, and the limiting factor appears to be how fast the client can parse / insert the HTML.
Event registration is generally slow, but only really unacceptably slow on iOS. It fluctuates wildly between 2s and 3.5s on an iPhone 6, and is considerably slower on an iPad Air. This is of course the worst problem, as it delays the readiness of the application by a second or so for every few dozen events.
Why do you think the event registration takes so long on mobile devices? It can't be that they're just generally slow, the entire page building process seems to be very fast, the website comes up very quickly. It also can't be the network, as other machines on the LAN do much better. I just want to make sure we're actually looking at the correct issue, not just improving something that doesn't make much of a difference.
Why do you think the event registration takes so long on mobile devices?
No idea. The only thing I can think of is that the implementation of the WebSockets protocol may differ, or other differences with event registration (touch events?).
Note that currently, the code for registering an event expects a response from the client. Fortunately, I think this can be eliminated as well. Then, batching should yield good results.
Speaking of batching. I don't have a mobile device at hand. If you want, you can run the TestSpeed.hs example in the performance branch and tell me the timing results to confirm that batching helps on mobile as well. But it's not strictly necessary; I'm pretty sure that it will fix one important problem (and probably also the one we care about).
I can build and run TestSpeed.hs, but where is it supposed to write the results? I don't see anything in the developer console of Firefox, that's where it's supposed to go, isn't it?
Threepenny uses a cookie to decide whether to log to the console or not. Execute Haskell.log.enable(true) in the console to enable that cookie.
Thanks, that seems to work fine, not sure if that helps with iOS devices, though. I can't just open up the JS console on an iPhone. There are some tools available, it seems (I never did any web development), but it looks like they're not available on my version of OS X (still on 10.10).
What I can tell at least is that on localhost with FF it takes several seconds for the DOM to be built, while on iOS everything appears instantly.
Oh, false alarm, that's only with the debug console open, same speed if it's closed. Sorry, I have very little experience with the browser developer tools.
It's christmas time. :christmas_tree: I have removed the unnecessary client response when registering event handlers. Also, commit 42203ef28d9a48b5c4f119ab5cdb398bc166927b implements batching for FFI calls. The only major bottleneck now is getElementById. Does this improve matters for you?
Oh, much sooner than expected ;-)
So I gave it a shot with the v1.1 branch, which does something like 50-100x (depending on what the user has set up) getElementById followed by on. It unfortunately does not seem to make a difference; the event registration time on iPhone 6 still jumps around between 2-4s. Desktop browsers on localhost don't seem to show any improvement either.
I tried to see if the problem is getElementById or the handler registration by replacing the get call with UI.button, which should not block, just to see how the timings change. This actually makes things slower on some desktop browsers, while others don't show any change. For the iPhone 6 the event registration time drops to something around 1.5s, indicating that getElementById makes up for about half of the time spent and the 2-4s time is split evenly between getting elements and registering events.
Just re-running the benchmarks from my post above with your changes:
FF localhost:
Page building: 0.10s / Event registration: 0.66s
FF LAN, faster machine:
Page building: 0.08s / Event registration: 0.64s
Safari, localhost:
Page building: 0.00s / Event registration: 0.48s
iPhone 6 Safari LAN:
Page building: 0.02s / Event registration: 3.46s
and when replacing getElementById with UI.button:
FF localhost:
Page building: 0.14s / Event registration: 1.00s
FF LAN, faster machine:
Page building: 0.06s / Event registration: 0.50s
Safari, localhost:
Page building: 0.00s / Event registration: 0.46s
iPhone 6 Safari LAN:
Page building: 0.01s / Event registration: 1.92s
Ok, this is interesting... I just had a large interference spike on the WiFi, and event registration time jumped up to a max of 75 seconds (!!!) for mobile devices. I tried putting my laptop on the WiFi, and I get equally poor performance. Should've tried that earlier / done some network diagnostics. Apologies. It seems that the main difference causing the slower times of the iPhone is the different network speed (wired vs WiFi).
In summary:
I'm not familiar enough with your v1.1 branch to be able to make a qualified comment.
But I have just taken a look at the threepenny-offline branch, and replaced almost all calls to getElementById with a direct reference to the corresponding Element. Using threepenny in the form of commit 42203ef28d9a48b5c4f119ab5cdb398bc166927b, the complete page building time (HTML elements + event registration) went down from ~460ms (with getElementById) to about ~150ms (without getElementById). That looks like a success to me?
I will draw up a pull request shortly.
v1.1 submits all HTML in one go and the only thing taking any time is getElementById + on, pretty evenly split between them. Nothing really special to understand or dig into; it's literally just that taking up the time. It's really just what I explained.
Like I said, I already replaced all the DOM building code with blaze-html, so that's all fine. I also think it's actually preferable to keep it that way, as it's likely unbeatable in terms of performance and has other advantages, like being able to debug the output by running it through the blaze pretty printer, caching the HTML, etc. This also means I never have an Element I could pass directly to on. This would be the same if a user wanted to register events for some static HTML from the root .html file, DOM elements created client-side, etc.
The offline branch is really old / not meaningful anymore and much simpler than the current state of the application, so even 150ms means that with the current code and a more complex application state with more lights/scenes/schedules it would take a substantial amount of time to get the page ready. And I assume you measured on localhost, which is <0.1ms, vs LAN <1.0ms, vs WiFi <5.0ms (or much worse if you turn on the microwave oven...).
I think the issue is really that registering events should either not require a network round trip, or that there should be a way to batch all the events together so they can be registered in a single round trip. If there's no way to automatically fix this in the library, I'd be perfectly happy with either onElementID or GetElementPluralSById or anything like that. Just anything that allows me to submit a bunch of event handlers without having the cost of a network round trip for each one. Without that, I don't see this problem going away.
I think the issue is really that registering events should either not require a network round trip or that there should be a way to batch all the events together so they can be registered on a single round trip.
Hm, that's the part I don't quite understand yet. As of commit 77bfa45e151a240288dc06f2b19669ace0e5c07a, registering an event to an existing Element does not take a round trip anymore. The only thing that induces round trips is getElementById, so I'm confused.
(It is, however, currently true that if you call getElementById, then the server will send a message with all previous API calls, and then send a second message with just the call to getElementById. This may actually explain why each part still takes half the time: each part is a separate message, and I did not combine the messages yet. Perhaps that is what is going on.)
Like I said, I already replaced all the DOM building code with blaze-html, so that's all fine. I also think it's actually preferable to keep it that way, [..] This also means I never have an Element I could pass directly to on.
I guess what I want to say is that I have optimized the case where Threepenny is used for DOM building. Your approach goes a bit beyond my original vision for Threepenny, but then again, I don't see a reason to not include support for your use case.
I'd be perfectly happy with either onElementID or GetElementPluralSById or anything like that.
That seems to be the only way if you want to stick with HTML IDs as opposed to Element references. I will think about it. The function onElementID seems to be preferable to the function GetElementPluralSById.
I'm probably 'abusing' threepenny a bit at this point. I'm also new to the world of web application development, so thanks for your patience ;-)
You're completely right, currently I only have the round trip because of the getElementById. I understand that you intended the native HTML combinators and Elements to be used, but I think there's also a decent case to be made for supporting an alternative approach with a faster getElementById + on implementation:
- innerHTML + runFunction works really fast. It's probably the quickest way of getting a complex page delivered to the client. If the client is fast enough, generating + submitting + inserting takes <10ms for complex cases for me, even using the slowest String-based blaze-html renderer.
- The same applies to static HTML in static/index.html that just needs to be wired up.
- Bulk DOM updates can go through runFunction. If one wants to check a thousand checkboxes and Elements are too slow, runFunction and some jQuery can fix this.
- I don't see a clever workaround to speed up event registration from outside the library.
I think it would be super useful to have some way of registering events on plain element IDs without the current networking overhead.
Hi @HeinrichApfelmus, I tested your batch branch and observed huge speedups in load times of pages with many elements, and near zero problems. The only problem I found was that run call messages were retained when a server push occurs. My solution was to add a polling flush thread to the call buffer. Is there anything pending for merging to master? If you want the code, I can send a pull request.
@massudaw
the only problem i found was that run call messages were retained when a server push occurs
Could you be more specific? The library user has to call flushCallBuffer every now and then in order to make sure that JavaScript functions are actually executed. But you seem to have found an additional problem? What do you mean by "server push"?
@HeinrichApfelmus might be this
The library user has to call flushCallBuffer every now and then in order to make sure that JavaScript functions are actually executed.
I never call flushCallBuffer in the client. But in my application I have problems only when the server generates messages without any client event triggering (what I call server push). All the other client calls work fine because of the flush after the event handler in the Foreign.JavaScript exportHandler function.
btw, is there anything here I ought to test? Maybe I did not follow the discussion / commits correctly, but is there any branch / commit etc. that contains a fix for this issue? Those from Sep 16th?
@blitzcode No, you didn't miss anything. You have already seen the commits from Sep 16th; they are simply a rebase of the older performance branch.
I still need to write up (and test) the workaround…
@blitzcode Finally! I have written up the workaround that you are probably looking for. 😄
As of commit a32b849d0884605b1233cb25663ab1f726327a26, you can now use ffiExport to create an event handler without waiting for a response from the browser window. Then, you can use another call to runFunction to register it with the element whose ID you know. The code in samples/WorkaroundFastEvent.hs demonstrates how to do that.
As far as I can tell, you only use on UI.click in your hue-dashboard project. You can copy & paste the onElementId function from the workaround example and use it in your code. Does that speed up your program? If not, does using the batch-ffi-calls branch in addition to this workaround help?
EDIT: (If you use the batch-ffi-calls branch, keep in mind that you have to add flushCallBuffer to the end of the event handler in the onElementId function.)
@HeinrichApfelmus Sweet! I'll have time over the next week to try this out in detail, I'll report back!
Update: I have just pushed the batching / buffering stuff into master. However, for reasons of backwards compatibility, the default mode is still "no buffering". You have to use
UI.setCallBufferMode BufferRun
in order to enable the new buffering mode. It should speed up page loading considerably!
See the Chat.hs file for an example.
So, I had a shot trying out your changes. Looks really neat! I think it's a very helpful addition and quite elegant. I like that I can turn the new behavior on & off, so I can register my event handlers and then go back to the simpler, synchronous mode, i.e.
setCallBufferMode BufferRun
sequence_ . reverse $ page ^. pgUIActions
flushCallBuffer
setCallBufferMode NoBuffering
Very cool. Seems to work as designed as well, in my perf stats the event registration time is down quite a bit.
Two questions:
1. Can ffiExport also export functions that take arguments?
2. In the documentation for ffiExport I noticed 'FIXME: At the moment, the function is not garbage collected.'. Is this per-connection or for the runtime of the application? My software generally runs for weeks and months, so I'm very wary of anything leaking memory ;-)
I noticed 'FIXME: At the moment, the function is not garbage collected.'. Is this per-connection or for the runtime of the application? My software generally runs for weeks and months, so I'm very wary of anything leaking memory ;-)Seems to work as designed as well, in my perf stats the event registration time is down quite a bit.
Great! :smile:
Two questions:
Yes, absolutely. You can use ffiExport to export functions with multiple arguments, as long as they are members of the FromJS class. Example:
let myhandler :: JSObject -> Int -> Int -> IO ()
myhandler = ...
myfun <- ffiExport myhandler
runFunction $ ffi "(%1)({ test: 2}, 3, 5)" myfun
I have tried to clarify the documentation in the latest commit. The JSObject representing the exported function can be garbage collected as soon as the browser Window is garbage collected (e.g. closed) (unless you hold a reference to it yourself, of course), but it will not be collected before that. It should be perfectly fine for your use case; this is only relevant if the HTML elements change a lot and new event handlers are created again and again within a single window.
Ok, thanks for clarifying!
I was able to duplicate the behavior of UI.mousedown like this:
onElementIDMouseDown :: String -> (Int -> Int -> UI void) -> UI ()
onElementIDMouseDown elementID handler = do
window <- askWindow
exported <- ffiExport (\mx my -> runUI window (handler mx my) >> return ())
runFunction $ ffi
( "$(%1).on('mousedown', function(e) " ++
"{ var offs = $(this).offset(); %2(e.pageX - offs.left, e.pageY - offs.top); })"
)
("#" ++ elementID)
exported
A bit messy, but I guess that is what you do?
One strange thing I noticed is that some events were failing. After some debugging, I noticed some of my element IDs had spaces in them. That actually worked fine with getElementById + on, but fails with the new method here. The right answer is probably to just not have spaces in element IDs, but still a bit confusing. I'd have expected that the string escaping etc. done by ffi would result in the same behavior in any case. Hm.
One strange thing I noticed is that some events were failing. After some debugging, I noticed some of my element IDs had spaces in them.
Actually, it turns out that spaces are not allowed in DOM element IDs. The jQuery idiom $(..) will make use of this fact and interpret the input string differently from what you wanted. See a StackOverflow answer.
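For what it's worth, this failure mode can be guarded against with a tiny validity check before building the selector; the helper below is hypothetical (not threepenny API), and deliberately stricter than the HTML5 rules, which only forbid space characters:

```haskell
import Data.Char (isAlphaNum)

-- "#foo bar" is a descendant selector (elements named `bar` inside
-- the element with id `foo`), not a lookup of the id "foo bar".
-- A defensive check before building the jQuery selector; this is
-- stricter than HTML5, which only forbids space characters in IDs.
isValidDomId :: String -> Bool
isValidDomId s = not (null s) && all ok s
  where ok c = isAlphaNum c || c `elem` "-_"

toSelector :: String -> Either String String
toSelector i
  | isValidDomId i = Right ('#' : i)
  | otherwise      = Left ("invalid DOM id: " ++ show i)

main :: IO ()
main = print (toSelector "my button")
```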
A bit messy, but I guess that is what you do?
Looks fine to me! :smile: I do essentially the same thing, except that instead of declaring a new function, I use a JavaScript function from an external file. Code. The difference should be negligible, however.
Sure, I wasn't complaining about the spaces issue, more of a 'how did that ever work?' type reaction. Fixed on my side.
So, thanks again for these fixes! Not only did you improve all the built-in combinators, you also implemented a new way to do event registration and a batching mode for runFunction. Great!
Are you doing a new Hackage release with all the various fixes? If I could have one more xmas gift in the next release, it would be a fix for #130. The only workaround I could come up with for that one has some very unfortunate side-effects, like being incompatible with the special HTML-app modes that iOS/Android have. With these you could add a threepenny application to the home screen, have it load as its own process, no browser controls etc. It would go a long way in making threepenny based software feel more slick and native on mobile devices. I'd love to support that, especially now that the loading times are so fast...
So, thanks again for these fixes! Not only did you improve all the build in combinators, you also implemented a new way to do event registration and a batching mode for runFunction. Great!
You're welcome! I'm glad I could help and I'm happy that you appreciate it. :smile:
If I could have one more xmas gift in the next release
Well, I can certainly look into it. :wink: But that's probably best discussed in the other issue ticket. For this issue, am I right in assuming that the original problem, slow DOM building, has been solved to your satisfaction and this issue can be closed?
Building up even just a few dozens of DOM elements can take a very long time. Example:
https://github.com/blitzcode/hue-dashboard/blob/8a9e5ac466aadd8b6f8fa4557fe2d931a89d03d2/WebUI.hs#L124
On localhost it can easily take a second for the content to arrive in the browser's DOM, on a mobile device over LAN it can take several seconds. I suspect this is because there's back & forth between threepenny and the browser for every single element added? A workaround could be to just build up some HTML locally with something like blaze-html, but I'd of course think it would be better if threepenny just did the batching by itself.
The performance right now makes it difficult to generate even a moderately complex page in time, dynamically building things like a modal dialog etc. seems impractical.