Open doofin opened 7 years ago
I am new to Haste and totally fascinated by its approach, but there doesn't seem to be any performance test, and the haste package does not depend on existing server implementations like wai or warp. Does it contain its own server-side implementation? And can I use warp as a backend?

Right now, the performance is most likely pretty bad. The transport protocol is JSON, and the server uses the built-in websockets WS server. Even worse, a new WS connection is opened and closed for each server call, which is incredibly wasteful.

Previously, the server used warp's WS implementation, used a binary encoding, and reused connections whenever possible, but that was all taken out to simplify the implementation for the paper. The plan is to put all of those things back in, but I haven't gotten around to it yet.

Of course, you can still use warp (or any other web server) to serve up the client parts, since Haste doesn't deal with that part at all unless you're using something like haste-standalone, which also hasn't been ported to the latest version yet.
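For the "serve up the client parts with warp" bit, a minimal sketch (not part of Haste itself; the port and directory name are just placeholders) could look like this, using wai-app-static on top of warp:

```haskell
import Network.Wai.Handler.Warp (run)
import Network.Wai.Application.Static (staticApp, defaultFileServerSettings)

-- Serve the directory containing the Haste-compiled JS and the HTML page.
-- "client" and port 8080 are arbitrary choices for the example.
main :: IO ()
main = run 8080 (staticApp (defaultFileServerSettings "client"))
```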
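And for the connection-reuse / warp-WS point: this is only a rough sketch of the general idea (not how Haste.App does or will do it), showing many calls handled over a single websocket mounted on warp via wai-websockets, with binary frames and the same static fallback as above. The echo handler stands in for whatever call dispatch a real server would do:

```haskell
import Control.Monad (forever)
import Data.ByteString.Lazy (ByteString)
import Network.Wai.Handler.Warp (run)
import Network.Wai.Handler.WebSockets (websocketsOr)
import Network.Wai.Application.Static (staticApp, defaultFileServerSettings)
import qualified Network.WebSockets as WS

-- Accept one websocket connection and keep serving calls over it,
-- instead of opening and closing a connection per call.
serverApp :: WS.ServerApp
serverApp pending = do
  conn <- WS.acceptRequest pending
  forever $ do
    msg <- WS.receiveData conn :: IO ByteString  -- binary frame in
    WS.sendBinaryData conn msg                   -- echo back; a real server
                                                 -- would decode and dispatch a call

-- Non-websocket requests fall through to the static file app.
main :: IO ()
main = run 8080 $
  websocketsOr WS.defaultConnectionOptions serverApp
               (staticApp (defaultFileServerSettings "client"))
```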