Closed zudov closed 8 years ago
Right now we are sharing the connection amongst multiple requests within a single thread. It seems that our current backend (http-streams) has a similar withManager-style function, called withConnection, that we could use to do exactly the same thing:
withConnection :: IO Connection -> (Connection -> IO γ) -> IO γ
Does using withConnection allow us to use one connection concurrently, without worrying about properly locking it, etc.? Right?
It seems like issuing concurrent requests is doing some strange stuff. I made an issue here: https://github.com/afcowie/http-streams/issues/74. Depending on its resolution we'll discuss the backend.

Adding an extra "with" suffix to the end of the whole API kind of sucks :-/ It would be cool if we could do something like the following:
[a,b] <- hackerNewsConcurrently [getUser $ UserId "blah", getUser $ UserId "blah"]
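For what it's worth, here is a hedged sketch of how such a hackerNewsConcurrently could be implemented, assuming the requests run in a ReaderT Options IO stack (the HackerNews monad discussed further down); nothing in this sketch exists in the library yet.

import Control.Concurrent.Async (mapConcurrently)
import Control.Monad.Trans.Reader (ReaderT (..))
import Network.Wreq (Options)

type HackerNews a = ReaderT Options IO a

-- Run every request against the same shared Options/Manager, concurrently.
hackerNewsConcurrently :: [HackerNews a] -> HackerNews [a]
hackerNewsConcurrently actions =
  ReaderT $ \opts -> mapConcurrently (\act -> runReaderT act opts) actions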
The documentation says this:
but it's up to you to ensure you match the number of requests sent to the number of responses read, and to process those responses in order
Isn't that what causes an issue?
Adding an extra "with" suffix to the end of the whole API kind of sucks :-/
I agree. I'll change it.
Well, at the moment you can do it this way:
[a,b] <- mapConcurrently (getUserWith opts . UserId) ["blah", "blah1"]
Once I rename getUserWith to getUser, it will be a bit shorter.
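After that rename the call above would read like this (assuming the Options argument otherwise stays the same):

[a,b] <- mapConcurrently (getUser opts . UserId) ["blah", "blah1"]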
Did you take a look at the examples? https://github.com/zudov/hackernews/blob/wreq/examples/Examples.hs
I'm going to add the EitherT stuff before we make a decision on the backend.
OK.
I'll update this branch now.
Hmm. At the moment the HackerNews monad is not used in that branch; I didn't use it as there was no particular need for it. However, now I think it's not a bad idea to store Options in the Reader part of the HackerNews stack. What do you think?
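Something like this, as a rough sketch; the transformer order and the error type are assumptions rather than a final design, and EitherT is the one from the either package mentioned above.

import Control.Monad.Trans.Either (EitherT)
import Control.Monad.Trans.Reader (ReaderT)
import Network.Wreq (Options)

-- Hypothetical error type, just for the sketch.
data HackerNewsError = ConnectionError String | ParseError String

type HackerNews a = ReaderT Options (EitherT HackerNewsError IO) a

-- Request functions could then pull the shared Options (and the Manager
-- inside it) out of the environment with ask/asks instead of taking it
-- as an explicit argument.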
Yea I think that's a good idea. Would be nice to keep everything inside the hacker news do block.
After thinking about it, I think it's cool if we use the wreq backend and lenses. So, after we add all that and some more tests, I think we'll be ready for a 1.0 release. Or do you think 1.0 is premature? Snap isn't even 1.0 yet :)
I think it's cool if we use the wreq backend and lenses
Great. In that case it makes sense to add a lens interface for API types.
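For instance, a hedged sketch of what such a lens interface could look like; the Story fields below are illustrative stand-ins, not the library's actual record.

{-# LANGUAGE TemplateHaskell #-}
import Control.Lens (makeLenses)
import Data.Text (Text)

data Story = Story
  { _storyId    :: Int
  , _storyTitle :: Text
  , _storyScore :: Int
  } deriving Show

makeLenses ''Story

-- Callers can then write `s ^. storyTitle` or `s & storyScore +~ 1`
-- instead of touching the underscore-prefixed selectors directly.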
I think we can release 1.0 when the API is more or less stable, or at least when we're reasonably sure we won't have breaking changes as big as these type changes.
I am still scratching my head about an improvement to the types (I mean that 'superclass' Item).
We could go back to the snake type... but make everything Maybe a, to avoid partial functions. Then parse them all differently depending on the type. Let me know if that doesn't make sense, I can explain further / better.
Well, if we use the snake type with Maybe a fields, then it doesn't make much sense to provide distinct constructors for different Items, so there would be only two type constructors with optional (Maybe) fields:
data Snake = User { userId :: Text
                  , karma  :: Int
                  ...
                  }
           | Item { itemId  :: Int
                  , deleted :: Bool
                  , time    :: UnixTime
                  , url     :: Maybe Text
                  , title   :: Maybe Text
                  ...
                  }
But I don't really like this, as it brings some ambiguity. For example, users who simply want to fetch a Story would have to deal with these Maybes, unpacking them by hand.
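To make that concrete, here is a small self-contained illustration of the unpacking burden, using a cut-down version of the Snake sketch above (field set abbreviated):

import Data.Text (Text, unpack)

data Snake
  = User { userId :: Text, karma :: Int }
  | Item { itemId :: Int, url :: Maybe Text, title :: Maybe Text }

-- Even a caller who only cares about stories has to match on the
-- constructor *and* unwrap the Maybe before getting at the title.
printTitle :: Snake -> IO ()
printTitle (Item { title = Just t }) = putStrLn (unpack t)
printTitle _                         = putStrLn "no title"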
I was thinking about making a type ItemProps which would contain the fields that are present in every item, and embedding it in each item type. Something like:
data ItemProps = ItemProps { itemId  :: Int
                           , deleted :: Bool
                           , time    :: UnixTime
                           , by      :: UserId
                           ...
                           }

data Story = Story { props :: ItemProps
                   , story :: Text
                   ...
                   }
I think we should close this in favor of a servant-only backend.
I've reimplemented the backend code with wreq. For simplicity I removed everything that was there before, but I want to discuss the implementation of multiple backends if you think it's necessary.
The biggest advantage of the wreq implementation is the simplicity of sharing the connection. Connections are managed by a Manager, which can be safely shared between concurrent threads (see the examples), so server applications can create a manager on startup and then use it for the whole application runtime (Yesod users usually already have a manager for other needs, and they can reuse it here).
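For illustration, a minimal sketch of that setup with plain wreq calls, assuming http-client-tls for the TLS manager; the raw HN endpoint URL here just stands in for the library's own wrappers.

import Control.Lens ((&), (.~), (^.))
import Network.HTTP.Client (newManager)
import Network.HTTP.Client.TLS (tlsManagerSettings)
import Network.Wreq (defaults, getWith, manager, responseBody)

main :: IO ()
main = do
  mgr <- newManager tlsManagerSettings          -- created once at startup
  let opts = defaults & manager .~ Right mgr    -- every request reuses it
  r <- getWith opts "https://hacker-news.firebaseio.com/v0/item/1.json"
  print (r ^. responseBody)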
The disadvantage is that wreq has a heavy dependency: lens. However, this could be overcome by using http-client (a refactored fork of http-conduit), which wreq is based on. Wreq is essentially a high-level interface to http-client with some additional features. That idea just struck me, and it seems appealing, because depending on something as huge as lens for no big reason is not great.

I haven't yet thought about a good way to support multiple backends; the current wreq implementation is done in a way that would be ideal if wreq were the only backend, so take a look.

By the way, the API now has functions like get* and get*With. The latter takes Options (with a manager inside) as an argument; the former uses default options and therefore creates a new manager on each request. I'm considering removing the variants that use default options, since they only make sense when a single request is made over the application's lifetime, or when requests are made at long intervals; such use cases are rare enough that there is no need for distinct shortcuts. At the same time, removing them would discourage the wrong usage of the API where a manager is created on each request.
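Roughly, a hedged sketch of that get* / get*With pattern; the UserId newtype, the simplified Value return type, and the exact URL are illustrative, not the library's actual signatures.

import Control.Lens ((^.))
import Data.Aeson (Value)
import Data.Text (Text, unpack)
import Network.Wreq (Options, asJSON, defaults, getWith, responseBody)

newtype UserId = UserId Text

-- Explicit-Options variant: the caller controls the Manager and can share it.
getUserWith :: Options -> UserId -> IO Value
getUserWith opts (UserId uid) = do
  r <- asJSON =<< getWith opts url
  return (r ^. responseBody)
  where
    url = "https://hacker-news.firebaseio.com/v0/user/" ++ unpack uid ++ ".json"

-- Default-Options variant: a fresh Manager is created on every call, which is
-- exactly the usage the paragraph above wants to discourage.
getUser :: UserId -> IO Value
getUser = getUserWith defaults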